\section{Weakly complementary and weakly degradable channels}~\label{s:uno} In quantum mechanics, quantum channels describe the evolution of an open system $A$ interacting with external degrees of freedom. In the Schr\"{o}dinger picture these transformations are described by completely positive trace preserving (CPT) linear maps $\Phi$ acting on the set ${\cal D}({\mathcal H_a})$ of the density matrices $\rho_a$ of the system. It is well known (see e.g. \cite{HPPI}, \cite{LINDBLAD}) that $\Phi$ can be described by a unitary coupling between the system $A$ in input state $\rho_a$ with an external ancillary system $B$ (describing the {\em environment}) prepared in some fixed {\em pure} state. This follows from the Stinespring dilation~\cite{STINE} of the map, which is unique up to a partial isometry. More generally, one can describe $\Phi$ as a coupling with an environment prepared in some {\em mixed} state $\rho_b$, i.e. \begin{eqnarray} \Phi(\rho_a) = \mbox{Tr}_b[ U_{ab} (\rho_a\otimes \rho_b) U_{ab}^\dag] \;, \label{HGCuno} \end{eqnarray} where $\mbox{Tr}_b [ ... ]$ is the partial trace over the environment $B$, and $U_{ab}$ is a unitary operator in the composite Hilbert space ${\cal H}_a\otimes {\cal H}_b$. We call Eq.~(\ref{HGCuno}) a ``physical representation'' of $\Phi$ to distinguish it from the Stinespring dilation, and to stress its connection with the physical picture of the noisy evolution represented by $\Phi$. Any Stinespring dilation gives rise to a physical representation. Moreover, from any physical representation~(\ref{HGCuno}) one can construct a Stinespring dilation by purifying $\rho_b$ with an external ancillary system $C$, and by replacing $U_{ab}$ with the unitary coupling $U_{abc} = U_{ab}\otimes \openone_{c}$. Equation~(\ref{HGCuno}) motivates the following~\cite{CG} \begin{defn} For any physical representation {\em (\ref{HGCuno})} of the quantum channel $\Phi$ we define its {\em weakly complementary} channel as the map ${\tilde{\Phi}}:{\cal D}({\cal H}_a) \rightarrow {\cal D}({\cal H}_b)$ which takes the input state $\rho_a$ into the state of the environment $B$ after the interaction with $A$, i.e. \begin{eqnarray} \tilde{\Phi}(\rho_a) = \mbox{\em Tr}_a[ U_{ab} (\rho_a\otimes \rho_b) U_{ab}^\dag] \;. \label{duedue} \end{eqnarray} \end{defn} The transformation~(\ref{duedue}) is CPT, and it describes a quantum channel connecting systems $A$ and $B$. It is a generalization of the {\em complementary (conjugate) channel} $\Phi_{\text{com}}$ defined in Refs.~\cite{DEVSHOR,CONJ0,CONJ}. In particular, if Eq.~(\ref{HGCuno}) arises from a Stinespring dilation (i.e. if $\rho_b$ of Eq.~(\ref{duedue}) is pure) the map $\tilde{\Phi}$ coincides with $\Phi_{\text{com}}$. Hence the latter is a particular instance of a weakly complementary channel of $\Phi$. On the other hand, by using the above purification procedure, we can always represent a weakly complementary map as a composition \begin{equation}\label{comp} \tilde{\Phi}=T\circ\Phi_{\text{com}}, \end{equation} where $T$ is the partial trace over the purifying system (here {\em `` $\; \circ$ ''} denotes the composition of channels). As we will see, the properties of weakly complementary and complementary maps in general differ.
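To make the two partial traces in Eqs.~(\ref{HGCuno}) and (\ref{duedue}) concrete, the following minimal numerical sketch (Python with NumPy; the qubit dimensions, the specific states and the random unitary are illustrative assumptions, not part of the construction above) computes $\Phi(\rho_a)$ and $\tilde{\Phi}(\rho_a)$ from one and the same physical representation:
\begin{verbatim}
import numpy as np

def ptrace(R, da, db, keep):
    # Partial trace of a bipartite operator; keep = 0 (system A) or 1 (environment B)
    r = R.reshape(da, db, da, db)
    return np.einsum('ijkj->ik', r) if keep == 0 else np.einsum('ijik->jk', r)

rng = np.random.default_rng(0)
da = db = 2                                  # toy qubit system and environment
G = rng.normal(size=(da*db, da*db)) + 1j * rng.normal(size=(da*db, da*db))
U, _ = np.linalg.qr(G)                       # a random unitary coupling U_ab
rho_a = np.array([[0.7, 0.2], [0.2, 0.3]])   # input state of A
rho_b = np.diag([0.6, 0.4])                  # mixed environment state

out = U @ np.kron(rho_a, rho_b) @ U.conj().T
Phi_out   = ptrace(out, da, db, keep=0)      # Eq. (HGCuno): the channel output
Phi_tilde = ptrace(out, da, db, keep=1)      # Eq. (duedue): weakly complementary
print(np.trace(Phi_out).real, np.trace(Phi_tilde).real)  # 1.0 1.0 (both CPT)
\end{verbatim}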
\begin{defn} Let $\Phi, \tilde{\Phi}$ be a pair of mutually weakly-complementary channels such that \begin{eqnarray} ({\Psi}\circ {\Phi})(\rho_a) = \tilde{\Phi}(\rho_a) \;,\label{deg} \end{eqnarray} for some channel $ \Psi : {\cal D}({\cal H}_a) \rightarrow {\cal D}({\cal H}_b)$ and all density matrices $\rho_a \in {\cal D}({\cal H}_a)$. Then $\Phi$ is called {\em weakly-degradable} while $\tilde{\Phi}$ -- {\em anti-degradable} (cf.~\cite{CG}). \end{defn} Similarly if \begin{eqnarray} (\overline{\Psi}\circ \tilde{\Phi})(\rho_a) = {\Phi}(\rho_a) \;,\label{antideg} \end{eqnarray} for some channel $ \overline{\Psi} : {\cal D}({\cal H}_b) \rightarrow {\cal D}({\cal H}_a)$ and all density matrices $\rho_a \in {\cal D}({\cal H}_a)$, then $\Phi$ is {\em anti-degradable} while $\tilde{\Phi}$ is {\em weakly-degradable}. In Ref.~\cite{DEVSHOR} the channel $\Phi$ is called {\em degradable} if in Eq.~(\ref{deg}) we replace $\tilde{\Phi}$ with a complementary map $\Phi_{\text{com}}$ of $\Phi$. Clearly any degradable channel~\cite{DEVSHOR} is weakly degradable, but the converse is not necessarily true. Notice, however, that due to Eq.~(\ref{comp}), in the definition of anti-degradable channel we can always replace weakly complementary with complementary (for this reason there is no point in introducing the notion of weakly anti-degradable channel). This allows us to verify that if $\Phi$ is anti-degradable~(\ref{antideg}) then its complementary channel $\Phi_{\text{com}}$ is degradable~\cite{DEVSHOR} and vice-versa. It is also worth pointing out that channels which are unitarily equivalent to a channel $\Phi$ which is weakly degradable (anti-degradable) are also weakly degradable (anti-degradable). Finally, an important property of anti-degradable channels is the fact that their quantum capacity~\cite{SETH} is null. As discussed in~\cite{CG} this is a consequence of the no-cloning theorem~\cite{NOCLONING} (more precisely, of the impossibility of cloning with arbitrarily high fidelity~\cite{NOCLONING2}). It is also useful to reformulate our definitions in the Heisenberg picture. Here the states of the system are kept fixed and the transformation induced on the system by the channel is described by means of a linear map $\Phi_H$ acting on the algebra ${\cal B}({{\cal H}_a})$ of all bounded operators of $A$, so that \begin{eqnarray} \mbox{Tr}_a [ \Phi(\rho_a) \; \Theta_a ] = \mbox{Tr}_a [ \rho_a \; \Phi_H (\Theta_a)] \label{identity}\;, \end{eqnarray} for all $\rho_a\in {\cal D}({\cal H}_a)$ and for all $\Theta_a \in {\cal B}({\cal H}_a)$. From this it follows that the Heisenberg picture counterpart of the physical representation~(\ref{HGCuno}) is given by the unital channel \begin{eqnarray} \Phi_H(\Theta_a) &=& \mbox{Tr}_b \big[ \; U_{ab}^\dag\; (\Theta_a \otimes \openone_b) \; U_{ab} \; (\openone_a \otimes \rho_b ) \; \big]\;. \label{physicalHEIS} \end{eqnarray} Similarly, from~(\ref{duedue}) it follows that in the Heisenberg picture the weakly complementary channel is described by the completely positive unital map \begin{eqnarray} \tilde{\Phi}_H(\Theta_b) = \mbox{Tr}_b \; \big[ \; U_{ab}^\dag (\openone_a \otimes \Theta_b) \; U_{ab} \; (\openone_a \otimes \rho_b) \; \big] \label{wconjHeis} \;, \end{eqnarray} which takes bounded operators in ${\cal H}_b$ into bounded operators in ${\cal H}_a$.
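As a sanity check of the duality~(\ref{identity}), the short sketch below (same toy finite-dimensional setting as before; all numerical values are illustrative assumptions) builds $\Phi_H(\Theta_a)$ from Eq.~(\ref{physicalHEIS}) and verifies that it reproduces $\mbox{Tr}_a [ \Phi(\rho_a) \Theta_a ]$:
\begin{verbatim}
import numpy as np

def ptrace_b(R, da, db):
    # Partial trace over the second tensor factor of a bipartite operator
    return np.einsum('ijkj->ik', R.reshape(da, db, da, db))

rng = np.random.default_rng(0)
da = db = 2
G = rng.normal(size=(da*db, da*db)) + 1j * rng.normal(size=(da*db, da*db))
U, _ = np.linalg.qr(G)                       # random unitary coupling U_ab
rho_a = np.array([[0.7, 0.2], [0.2, 0.3]])   # input state
rho_b = np.diag([0.6, 0.4])                  # mixed environment state
Theta = rng.normal(size=(da, da))
Theta = Theta + Theta.T                      # a bounded observable on A

# Schroedinger picture, Eq. (HGCuno)
Phi_rho = ptrace_b(U @ np.kron(rho_a, rho_b) @ U.conj().T, da, db)
# Heisenberg picture, Eq. (physicalHEIS)
Phi_H = ptrace_b(U.conj().T @ np.kron(Theta, np.eye(db)) @ U
                 @ np.kron(np.eye(da), rho_b), da, db)
print(np.isclose(np.trace(Phi_rho @ Theta), np.trace(rho_a @ Phi_H)))  # True
\end{verbatim}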
Within this framework the weak-degradability property~(\ref{deg}) of the channel $\Phi_H$ requires the existence of a channel $\Psi_H$ taking bounded operators of ${\cal H}_b$ into bounded operators of ${\cal H}_a$, such that \begin{eqnarray} (\Phi_H \circ \Psi_H)(\Theta_b) = \tilde{\Phi}_H(\Theta_b) \;,\label{wwwweak} \end{eqnarray} for all $\Theta_b \in {\cal B}({\cal H}_b)$. Similarly we say that a quantum channel $\Phi_H$ is anti-degradable if there exists a channel $\overline{\Psi}_H$ from ${\cal B}({\cal H}_a)$ to ${\cal B}({\cal H}_b)$, such that \begin{eqnarray} (\tilde{\Phi}_H \circ \overline{\Psi}_H) (\Theta_a) ={\Phi}_H(\Theta_a) \;, \label{wwwanti} \end{eqnarray} for all $\Theta_a \in {\cal B}({\cal H}_a)$. \section{One-mode Bosonic Gaussian channels}\label{s:gaus} Gaussian channels arise from the linear dynamics of open Bosonic systems interacting with Gaussian environments via quadratic Hamiltonians. Loosely speaking, they can be characterized as CPT maps that transform Gaussian states into Gaussian states~\cite{HW,REV,REV1}. Here we focus on one-mode Bosonic Gaussian channels, which act on the density matrices of a single Bosonic mode $A$. A classification of such maps obtained recently in Ref.~\cite{HOLEVOREP} allows us to simplify the analysis of the weak-degradability property. In the following we start by reviewing the result of Ref.~\cite{HOLEVOREP}, clarifying the connection with the analysis of Ref.~\cite{CG} (cf. also Ref.~\cite{SEW}). Then we pass to the weak-degradability analysis of these channels, showing that, with an important exception, they are either weakly degradable or anti-degradable. \subsection{General properties} Consider a single Bosonic mode characterized by canonical observables $Q_a, P_a$ obeying the canonical commutation relation $[Q_a,P_a]=i$. A consistent description of the system can be given in terms of the unitary Weyl operators $V_a(z)= \exp \,[ i(Q_a, P_a)\cdot z ]$, with $z= (x,y)^T$ being a column vector of ${\mathbb R}^2$. In this framework the canonical commutation relation is written as \begin{equation*} V_a(z)\;V_a(z^{\prime })=\exp [\frac{i}{2}\Delta (z,z^{\prime })]\; V_a(z+z^{\prime })\;, \label{weyl} \end{equation*} where $\Delta (z,z^{\prime })$ is the symplectic form \begin{equation} \Delta (z,z^{\prime })= -i\; z^T \cdot \sigma_2 \cdot z^\prime =x^{\prime }y-xy^{\prime }\;, \label{sympl-form} \end{equation} with $\sigma_2$ being the second Pauli matrix. Moreover the density operators $\rho_a$ of the system can be expressed in terms of an integral over $z$ of the $V_a(z)$'s, i.e. \begin{eqnarray} \rho_a = \int \frac{d^2 z}{2\pi} \; \phi(\rho_a; z) \; V_a(-z) \;, \label{decomposition} \end{eqnarray} with \begin{eqnarray} \phi( \rho_a; z) = \mbox{Tr}_a [ \rho_a \; V_a(z) ]\;, \label{characte} \end{eqnarray} being the characteristic function of $\rho_a$~\footnote{ In fact, an analogous decomposition~(\ref{decomposition}) holds for all trace-class operators of $A$~\cite{HOLEVOBOOK}.}. Consequently a complete description of a quantum channel on $A$ is obtained by specifying its action on the operators $V_a(z)$, or, equivalently, by specifying how to construct the characteristic function $\phi( \Phi(\rho_a); z)$ of the evolved states.
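As a concrete illustration of Eq.~(\ref{characte}), the following sketch (Python with NumPy/SciPy; the Fock-space truncation and the choice of a thermal state with mean photon number $N$ are illustrative assumptions) evaluates $\mbox{Tr}[\rho\,V(z)]$ numerically and compares it with the Gaussian expression $\exp[-(N+1/2)|z|^2/2]$ that will appear below in Eq.~(\ref{sigmab}):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

d = 60                                      # Fock-space truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, d)), 1)    # annihilation operator
Q = (a + a.conj().T) / np.sqrt(2)
P = (a - a.conj().T) / (1j * np.sqrt(2))    # so that [Q, P] = i (up to truncation)

N = 0.5                                     # mean photon number of the thermal state
n = np.arange(d)
p = (N / (1 + N)) ** n / (1 + N)
rho = np.diag(p / p.sum())                  # thermal state, renormalized

for x, y in [(0.3, 0.0), (0.5, 0.4)]:
    V = expm(1j * (x * Q + y * P))          # Weyl operator V(z), z = (x, y)
    lhs = np.trace(rho @ V)
    rhs = np.exp(-(N + 0.5) * (x**2 + y**2) / 2)
    print(abs(lhs - rhs))                   # tiny: matches the Gaussian form
\end{verbatim}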
In the case of Gaussian channels $\Phi$ this is done by assigning a mapping of the Weyl operators \begin{equation} \Phi_H(V_a(z))=V_a(K\cdot z )\; \exp[ -\frac{1}{2}\; z^T \cdot \alpha \cdot z + i \; m^T \cdot z] \label{linbos}\;, \end{equation} in the Heisenberg picture, or the transformation of the characteristic functions \begin{eqnarray} \phi(\Phi(\rho_a);z) =\phi(\rho_a; K \cdot z)\; \exp[ -\frac{1}{2}\; z^T \cdot \alpha \cdot z + i \; m^T \cdot z ], \label{linbos1} \end{eqnarray} in the Schr\"odinger picture. Here $m$ is a vector, while $K$ and $\alpha$ are real matrices (the latter being symmetric and nonnegative). Equation~(\ref{linbos1}) guarantees that any input Gaussian characteristic function will remain Gaussian under the action of the map. A useful property of Gaussian channels is the fact that the composition of two of them (say $\Phi^{\prime}$ and $\Phi^{\prime\prime}$) is still a Gaussian channel. Indeed one can easily verify that the composite map $\Phi^{\prime\prime}\circ \Phi^{\prime}$ is of the form~(\ref{linbos1}) with $m$, $K$ and $\alpha$ given by \begin{eqnarray} m &=& (K^{\prime\prime})^T \cdot m^{\prime} + m^{\prime\prime} \nonumber \\ K &=& K^{\prime} \; K^{\prime\prime} \label{composition} \\ \alpha &=& (K^{\prime\prime})^T \; \alpha^\prime \; K^{\prime\prime} + \alpha^{\prime\prime}\;. \nonumber \end{eqnarray} Here $m^\prime$, $K^\prime$, and $\alpha^\prime$ refer to $\Phi^\prime$ while $m^{\prime\prime}$, $K^{\prime\prime}$, and $\alpha^{\prime\prime}$ refer to $\Phi^{\prime\prime}$. Not all possible choices of $K$, $\alpha$ correspond to transformations $\Phi$ which are completely positive. A necessary and sufficient condition for this last property (adapted to the case of one mode) is provided by the nonnegative definiteness of the following $2\times 2$ Hermitian matrix~\cite{HW,HOLEVOREP} \begin{equation}\label{positive} 2 \; \alpha - \sigma_2 +K^T \; \sigma_2 \; K \;. \end{equation} This matrix reduces to $2 \alpha + (\mbox{Det}[K] -1) \; \sigma_2$, and its nonnegative definiteness reduces to the inequality \begin{eqnarray} \mbox{Det}[\alpha] \geqslant \left(\frac{\mbox{Det}[K]-1}{2}\right)^2 \;. \label{inequality} \end{eqnarray} Within the limits imposed by Eq.~(\ref{inequality}) we can use Eq.~(\ref{linbos1}) to describe the whole set of one-mode Gaussian channels. \subsection{Channels with single-mode physical representation}\label{s:single} An important subset of one-mode Gaussian channels is given by the maps $\Phi$ which possess a physical representation~(\ref{HGCuno}) with $\rho_b$ being a Gaussian state of a {\em single} external Bosonic mode $B$ and with $U_{ab}$ being a canonical transformation of $Q_a$, $P_a$, $Q_b$ and $P_b$ (the latter being the canonical observables of the mode $B$). In particular let $\rho_b$ be a thermal state of average photon number $N$, i.e. \begin{eqnarray} \phi(\rho_b;z) = \mbox{Tr}_b [ \rho_b \;V_b(z) ] = \exp[ - (N+1/2) |z|^2/2] \; \label{sigmab}, \end{eqnarray} and let $U_{ab}$ be such that \begin{eqnarray} U_{ab}^\dag \;(Q_a , P_a, Q_b, P_b) \; U_{ab} = (Q_a, P_a, Q_b, P_b) \cdot M \;, \label{couplingN} \end{eqnarray} with $M$ being a $4\times 4$ symplectic matrix of block form \begin{eqnarray} M \equiv \left( \begin{array}{ccc} m_{11}&|& m_{21} \\ \hline m_{12}&|& m_{22} \end{array} \right)\;.
\label{matricem} \end{eqnarray} This yields the following evolution for the characteristic function $\phi(\rho_a;z)$, \begin{eqnarray} \phi(\Phi(\rho_a); z) &=& \mbox{Tr}_a[ \Phi(\rho_a) \; V_a(z) ] = \mbox{Tr}_a [ \rho_a \;\Phi_H(V_a(z))] \nonumber \\ &=& \mbox{Tr}_{ab} \left[ U_{ab}^\dag \; ( V_a(z) \otimes \openone ) U_{ab} \; ( \rho_a \otimes \rho_b) \right] \nonumber \\ &=& \mbox{Tr}_{ab} \left[\big( V_a(m_{11} \cdot z) \otimes V_b(m_{12}\cdot z) \big) \; ( \rho_a \otimes \rho_b) \right] \nonumber \\ &=& \phi(\rho_a; m_{11} \cdot z) \; \exp[ - (N+1/2) | m_{12}\cdot z|^2 /2]\;, \label{output} \end{eqnarray} which is of the form~(\ref{linbos1}) by choosing $m=0$, $K=m_{11}$ and $\alpha= (N+1/2) \; m_{12}^T \cdot m_{12}$. It is worth stressing that in the case of Eq.~(\ref{output}) the inequality~(\ref{inequality}) is guaranteed by the symplectic nature of the matrix $M$, i.e. by the fact that Eq.~(\ref{couplingN}) preserves the commutation relations among the canonical operators. Indeed we have \begin{eqnarray} \mbox{Det}[\alpha] &=& (N+1/2)^2 \; \mbox{Det}[m_{12}]^2 = (N+1/2)^2 \; (\mbox{Det}[m_{11}]-1)^2\nonumber \\ &\geqslant& (\mbox{Det}[K]-1)^2/4 \label{ineq1}\;, \end{eqnarray} where in the second identity the condition~(\ref{simpcond}) was used. As we shall see, with a certain important exception, one-mode Gaussian channels~(\ref{linbos}) are unitarily equivalent to transformations which admit a physical representation with $\rho_b$ and $U_{ab}$ as in Eqs.~(\ref{sigmab}) and (\ref{couplingN}).
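The chain of identities~(\ref{ineq1}) can be checked numerically. The sketch below (assuming a beam-splitter-like symplectic coupling; the values of $\kappa$ and $N$ are illustrative) builds a block matrix $M$ as in Eq.~(\ref{matricem}), verifies that it preserves the commutation relations, and confirms that the resulting $K$ and $\alpha$ satisfy the inequality~(\ref{inequality}):
\begin{verbatim}
import numpy as np

kappa, N = 0.7, 1.3                   # illustrative attenuator-like parameters
s = np.sqrt(1 - kappa**2)
I2 = np.eye(2)

# Blocks of M in the layout of Eq. (matricem): [[m11, m21], [m12, m22]]
m11, m12 = kappa * I2, s * I2
m21, m22 = s * I2, -kappa * I2
M = np.block([[m11, m21], [m12, m22]])

# Symplectic form for the ordering (Q_a, P_a, Q_b, P_b)
j = np.array([[0., 1.], [-1., 0.]])
J = np.block([[j, np.zeros((2, 2))], [np.zeros((2, 2)), j]])
print(np.allclose(M.T @ J @ M, J))                            # True: symplectic
print(np.isclose(np.linalg.det(m11) + np.linalg.det(m12), 1)) # Eq. (simpcond)

# Channel matrices of Eq. (output) and the CP condition, Eq. (inequality)
K, alpha = m11, (N + 0.5) * m12.T @ m12
print(np.linalg.det(alpha) >= ((np.linalg.det(K) - 1) / 2)**2)  # True
\end{verbatim}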
\subsection{Canonical form}\label{s:canonical} Following Ref.~\cite{HOLEVOREP}, any Gaussian channel~(\ref{linbos1}) can be transformed (through unitary equivalence) into a simple canonical form. Namely, given a channel $\Phi$ characterized by the vector $m$ and the matrices $K$, $\alpha$ of Eq.~(\ref{linbos1}), one can find unitary operators $U_a $ and $W_a$ such that the channel defined by the mapping \begin{eqnarray} \rho_a \longrightarrow \Phi^{(\text{can})} ( \rho_a) = W_a \; \Phi( U_a \; \rho_a \; U_a^\dag) \; W_a^\dag \qquad \quad \mbox{for all $\rho_a$,} \label{equivalent} \end{eqnarray} is of the form~(\ref{linbos1}) with $m=0$ and with $K$, $\alpha$ replaced, respectively, by the matrices $K_{\text{can}}$, $\alpha_{\text{can}}$ of Table~\ref{t:table}, i.e. \begin{eqnarray} \phi(\Phi^{(\text{can})}(\rho_a);z) =\phi(\rho_a; K_{\text{can}} \cdot z)\; \exp[ -\frac{1}{2}\; z^T \cdot \alpha_{\text{can}} \cdot z ]\;. \label{linbos2} \end{eqnarray} An important consequence of Eq.~(\ref{linbos2}) is that, to analyze the weak-degradability properties of a one-mode Gaussian channel, it is sufficient to focus on the canonical map $\Phi^{(\text{can})}$ which is unitarily equivalent to it (see the remark at the end of Sec.~\ref{s:uno}). Here we will not enter into the details of the derivation of Eqs.~(\ref{equivalent}) and (\ref{linbos2}); see Ref.~\cite{HOLEVOREP}. \begin{table}[t] \begin{tabular} {cl|c|cc} Channel $\Phi$ & &Class & Canonical form $\Phi^{(\text{can})}$ & \\ $\mbox{Det} [K]$ & & & $K_{\text{can}}$ & $\alpha_{\text{can}}$ \\ \hline \hline $0$& $\mbox{rank} [K] = 0$ & $A_1$ & $0$ & $(N_0 +{1}/{2})\; \openone $ \\ $0$& $\mbox{rank} [K] = 1$& $A_2$ & $(\openone + \sigma_3) /2$ & $(N_0 +{1}/{2})\; \openone $ \\ \hline $1$&$\mbox{rank}[\alpha] =1$ & $B_1$ & $\openone$ & $(\openone - \sigma_3) /4$ \\ $1$ & $\mbox{rank}[\alpha] \neq 1$& $B_2$ & $\openone$ & $N_0 \; \openone $ \\ \hline $\kappa^2 \;\;\;(\kappa \neq 0,1) $ && $C$ &$ \kappa \; \openone $ & $|\kappa^2-1| ( N_0 + 1/2) \; \openone$ \\ \hline $-\kappa^2 \;\; (\kappa \neq 0)$ && $D$ &$ \kappa \; \sigma_3 $ & $(\kappa^2+1) ( N_0 + 1/2) \; \openone$ \end{tabular} \caption{Canonical form for one-mode Gaussian Bosonic channels. The first columns report the properties of $K$ and $\alpha$ of the map $\Phi$. The last two columns give the matrices $K_{\text{can}}$ and $\alpha_{\text{can}}$ of the canonical form $\Phi^{(\text{can})}$ associated with $\Phi$ --- see Eqs.~(\ref{equivalent}) and (\ref{linbos2}). In these expressions $\sigma_3$ is the third Pauli matrix, $N_0$ is a non-negative constant and $\kappa$ is a positive constant. Notice that the constraint~(\ref{inequality}) is always satisfied. In $B_1$ the free parameter $N_c$ has been set equal to $1/2$ --- see the discussion below Eq.~(\ref{ALPHACAN}). \label{t:table}} \end{table} The dependence of the matrix $K_{\text{can}}$ of $\Phi^{(\text{can})}$ upon the parameters of $\Phi$ can be summarized as follows, \begin{eqnarray} K_{\text{can}} = \left\{ \begin{array}{cll} \left\{ \begin{array}{lll} \sqrt{\mbox{Det}[K]} \; \openone && \mbox{Det}[K]\geqslant 0 \\ \sqrt{|\mbox{Det}[K]|} \; \sigma_3 && \mbox{Det}[K]<0 \end{array} \right. & &\mbox{rank}[K] \neq 1 \\ \\ (\openone + \sigma_3)/2 & &\mbox{rank}[K] = 1 \;, \end{array} \right.\label{KAPPACAN} \end{eqnarray} with $\sigma_3$ being the third Pauli matrix. Analogously for $\alpha_{\text{can}}$ we have \begin{eqnarray} \alpha_{\text{can}} = \left\{ \begin{array}{cll} \sqrt{\mbox{Det}[\alpha]} \; \openone & &\mbox{rank}[\alpha] \neq 1 \\ \\ \; N_c \; (\openone - \sigma_3)/2 & &\mbox{rank}[\alpha] = 1 \;. \end{array} \right. \label{ALPHACAN} \end{eqnarray} The quantity $N_c$ is a free parameter which can be set to any positive value upon properly calibrating the unitaries $U_a$ and $W_a$ of Eq.~(\ref{equivalent}). Following Ref.~\cite{HOLEVOREP} we will assume $N_c=1/2$. Notice also that, from Eq.~(\ref{inequality}), $\mbox{rank}[\alpha]=1$ is only possible for $\mbox{Det}[K]=1$. Equations~(\ref{KAPPACAN}) and (\ref{ALPHACAN}) show that only the determinant and the rank of $K$ and $\alpha$ are relevant for defining $K_{\text{can}}$ and $\alpha_{\text{can}}$. Indeed one can verify that $K_{\text{can}}$ and $\alpha_{\text{can}}$ maintain the same determinant and rank as the original matrices $K$ and $\alpha$, respectively. This is a consequence of the fact that $\Phi$ and $\Phi^{(\text{can})}$ are connected through a symplectic transformation for which $\mbox{Det}[K]$, $\mbox{Det}[\alpha]$, $\mbox{rank}[K]$, and $\mbox{rank}[\alpha]$ are invariant quantities. [In particular $\mbox{Det}[ K]$ is directly related to the invariant quantity $q$ analyzed in Ref.~\cite{CG}.] \begin{figure}[t] \centerline{\psfig{file=scheme1.eps,width= 10 cm}} \caption{Pictorial representation of the classification in terms of canonical forms of Table~\ref{t:table}.
Depending on the values of $\mbox{Det}[K]$, $\mbox{rank}[K]$ and $\mbox{rank}[\alpha]$, any one-mode Gaussian channel can be transformed to one of the channels of the scheme through unitary transformations as in Eq.~(\ref{equivalent}). The points on the thick oriented line with $\mbox{Det}[K]<0$ represent the maps of class $D$; those with $\mbox{Det}[K]>0$ and $\mbox{Det}[K]\neq 1$ represent class $C$. The classes $A_{1,2}$ and $B_{1,2}$ are represented by the four colored points of the graph. Notice that the channels $B_2$ and $A_1$ can be obtained as limiting cases of $C$ and $D$. The dotted arrows connect channels which are weakly complementary~(\ref{duedue}) of each other with respect to the physical representations introduced in Sec.~\ref{sec:single}. For instance the weakly complementary of $B_1$ is a channel of class $A_2$ (and vice-versa) --- see Sec.~\ref{s:weakc} and Table~\ref{t:table10} for details. Notice that the weakly complementary channel of $A_1$ belongs to $B_2$. However, not all the channels of $B_2$ have weakly complementary channels which are in $A_1$ --- see Sec.~\ref{sec:b2}.} \label{f:scheme} \end{figure} The six inequivalent canonical forms of Table~\ref{t:table} follow by parametrizing the value of $\sqrt{\mbox{Det}[\alpha]}$ to account for the constraints imposed by the inequality~(\ref{inequality}). It should be noticed that, to determine which class a certain channel belongs to, it is only necessary to know whether $\mbox{Det}[K]$ is zero, equal to $1$, negative, or positive ($\neq 1$). If $\mbox{Det}[K]=0$ the class is determined by the rank of $K$. If $\mbox{Det}[K]=1$ the class is determined by the rank of $\alpha$ (see Fig.~\ref{f:scheme}). Within the various classes, the specific expression of the canonical form then depends upon the actual values of $\mbox{Det}[K]$ and $\mbox{Det}[\alpha]$. We observe also that the class $A_1$ can be obtained as a limiting case (for $\kappa\rightarrow 0$) of the maps of class $C$ or $D$. Analogously the class $B_2$ can be obtained as a limiting case of the maps of class $C$. Indeed consider the channel with $K_{\text{can}} = \kappa \openone$ and $\alpha_{\text{can}} =|\kappa^2 -1| (N_0^\prime +1/2) \openone$ with $N_0^\prime={N_0}/({|\kappa^2 -1|}) -1/2$, with $N_0$ and $\kappa$ positive ($\kappa \neq 0,1$). For $\kappa$ sufficiently close to $1$, $N_0^\prime$ is positive and the map belongs to the class $C$ of Table~\ref{t:table}. Moreover, in the limit $\kappa\rightarrow 1$ this channel yields the map $B_2$. Finally it is interesting to study how the canonical forms of Table~\ref{t:table} compose under the product~(\ref{composition}). A simple calculation shows that the following rules apply \begin{eqnarray} \label{compo1} \begin{array}{c|cccccc} \circ & A_1 & A_2 & B_1 & B_2 & C & D \\ \hline A_1 & A_1 & A_1 & A_1 & A_1 & A_1 & A_1 \\ A_2 & A_1 & A_2 & A_2 & A_2 & A_2 & A_2 \\ B_1 & A_1 & A_2 & B_1 & B_{1}/B_{2} & C & D \\ B_2 & A_1 & A_2 & B_{1}/B_{2} & B_{2} & C & D \\ C & A_1 & A_2 & C & C & B_2/C & D \\ D & A_1 & A_2 & D & D & D & C \end{array} \end{eqnarray} In this table, for instance, the element in row 2 and column 3 represents the class (i.e. $A_2$) associated with the product $\Phi^{\prime\prime}\circ \Phi^{\prime}$ of a channel $\Phi^{\prime}$ of $B_1$ and a channel $\Phi^{\prime\prime}$ of $A_2$. Notice that the canonical form of the products $B_1\circ B_2$, $B_2\circ B_1$ and $C\circ C$ is not uniquely defined.
In the first case, in fact, even though the determinant of the matrix $K$ of Eq.~(\ref{composition}) is one, the rank of the corresponding $\alpha$ might be one or different from one depending on the parameters of the two ``factor'' channels: consequently the products $B_1\circ B_2$ and $B_2\circ B_1$ might belong either to $B_1$ or to $B_2$. In the case of $C\circ C$, instead, it is possible that the resulting channel will have $\mbox{Det}[K]=1$, making it a $B_2$ map. Typically, however, $C\circ C$ will be a map of $C$. Composition rules analogous to those reported here have been extensively analyzed in Refs.~\cite{CG,ENTROPY,GL}. \subsection{Single-mode physical representation of the canonical forms} \label{sec:single} Apart from the case $B_2$, which will be treated separately (see next section), all canonical transformations of Table~\ref{t:table} can be expressed as in Eq.~(\ref{output}), i.e. through a physical representation~(\ref{HGCuno}) with $\rho_b$ being a thermal state~(\ref{sigmab}) of a single external Bosonic mode $B$ and $U_{ab}$ being a linear transformation~(\ref{couplingN})\footnote{The exceptional role of $B_2$ corresponds to the fact that any one-mode Bosonic Gaussian channel can be represented as a unitary coupling with a single-mode environment plus an additive classical noise (see next section and Ref.~\cite{REV}).}. To show this it is sufficient to verify that, for each of the classes of Table~\ref{t:table} but $B_2$, there exist a non-negative number $N$ and a symplectic matrix $M$ such that Eq.~(\ref{output}) gives the mapping~(\ref{linbos2}). This yields the conditions \begin{eqnarray} m_{11} &=& K_{\text{can}}\;, \label{m11}\\ m_{12} &=& O \;\sqrt{ \frac{\alpha_{\text{can}}}{N+1/2}}\;, \label{m12} \end{eqnarray} with $O^T=O^{-1}$ being an orthogonal $2\times 2$ matrix to be determined through the symplectic condition \begin{eqnarray} \mbox{Det}[m_{11}] + \mbox{Det}[m_{12}] =1 \label{simpcond} \;, \end{eqnarray} which guarantees that $U_{ab}^\dag Q_a U_{ab}$ and $U_{ab}^\dag P_a U_{ab}$ satisfy canonical commutation relations. It is worth noticing that once $m_{11}$ and $m_{12}$ are determined within the constraint~(\ref{simpcond}), the remaining blocks (i.e. $m_{21}$ and $m_{22}$) can always be found in order to satisfy the remaining symplectic conditions of $M$. An explicit example will be provided in a few paragraphs. For the classes $A_1$, $A_2$, $B_1$, $D$, and $C$ with $\kappa<1$, Eqs.~(\ref{m12}) and (\ref{simpcond}) can be solved by choosing $O = \openone$ and $N=N_0$. Indeed for $B_1$ the latter setting is not necessary: any non-negative number will do, thus we choose $N=0$, making the density matrix $\rho_b$ of Eq.~(\ref{sigmab}) the vacuum state of $B$. For $C$ with $\kappa>1$, instead, a solution is obtained by choosing $O = \sigma_3$ and again $N=N_0$. The corresponding transformations~(\ref{couplingN}) for $Q_a$ and $P_a$ (together with the choice for $N$) are summarized below.
\begin{eqnarray} \begin{array}{cc|c|ccc} \mbox{Class} & & \rho_b &U_{ab}^\dag \; Q_a \; U_{ab} && U_{ab}^\dag \; P_a \; U_{ab} \\ \hline A_1 && \mbox{thermal} (N=N_0) & Q_b && P_b \\ A_2 && \mbox{thermal} (N=N_0) & Q_a + Q_b && P_b \\ B_1 && \mbox{vacuum} (N=0)& Q_a && P_a+ P_b \\ C & \kappa<1 & \mbox{thermal} (N=N_0)&\kappa \; Q_a + \sqrt{1-\kappa^2} \; Q_b && \kappa \; P_a + \sqrt{1-\kappa^2} \; P_b \\ C & \kappa> 1 & \mbox{thermal} (N=N_0) &\kappa \; Q_a + \sqrt{\kappa^2-1} \; Q_b && \kappa \; P_a - \sqrt{\kappa^2-1} \; P_b\\ D & & \mbox{thermal} (N=N_0) & {\kappa} \; Q_a + \sqrt{\kappa^2 +1} \; Q_b && - {\kappa} \; P_a + \sqrt{\kappa^2+1} \; P_b \;. \nonumber \end{array} \end{eqnarray} To complete the definition of the unitary operators $U_{ab}$ we also need to provide the transformations of $Q_b$ and $P_b$. This corresponds to fixing the blocks $m_{21}$ and $m_{22}$ of $M$ and cannot be done uniquely: one possible choice is presented in the following table \begin{eqnarray} \begin{array}{cc|ccc} \mbox{Class} & &U_{ab}^\dag \; Q_b \; U_{ab} &&U_{ab}^\dag \; P_b \; U_{ab} \\ \hline A_1 && Q_a && P_a \\ A_2 && Q_a && P_a - P_b \\ B_1 && Q_a - Q_b && -P_b \\ C & \kappa<1 & \sqrt{1-\kappa^2} \; Q_a- \kappa \; Q_b && \sqrt{1-\kappa^2} \; P_a- \kappa \; P_b \\ C & \kappa> 1 & \sqrt{\kappa^2-1} \; Q_a+ \kappa \; Q_b && - \sqrt{\kappa^2-1} \; P_a+ \kappa \; P_b \\ D & & \sqrt{\kappa^2 +1} \; Q_a + {\kappa} \; Q_b && \; \sqrt{\kappa^2+1} \; P_a - {\kappa} \; P_b \;. \end{array} \nonumber \end{eqnarray} The above definitions make explicit the fact that the canonical form $C$ represents the attenuator ($\kappa<1$) and amplifier ($\kappa>1$) channels~\cite{HW}. We will see in the following sections that the class $D$ is formed by the weakly complementary channels of the amplifiers of class $C$. For the sake of clarity, the explicit expressions for the matrices $M$ of the various classes have been reported in App.~\ref{appendiceM}. Finally it is important to notice that the above physical representations are equivalent to Stinespring representations only when the average photon number $N$ of $\rho_b$ vanishes. In this case the environment $B$ is represented by a pure input state (i.e. the vacuum). According to our definitions this is always the case for the canonical form $B_1$, while for the canonical forms $A_1$, $A_2$, $C$ and $D$ it happens for $N_0=0$. \subsection{The class $B_2$: additive classical noise channel}\label{sec:b2} As mentioned in the previous section, the class $B_2$ of Table~\ref{t:table} must be treated separately. The map $B_2$ corresponds\footnote{This can be seen for instance by evaluating the characteristic function of the state~(\ref{additive}) and comparing it with Eq.~(\ref{linbos2}).} to the additive classical noise channel~\cite{HW} defined by \begin{eqnarray} \Phi(\rho_a) = \int d^2 z \; p(z) \; V_a(z) \; \rho_a \; V_a(-z) \label{additive}\;, \end{eqnarray} with $p(z) = (2 \pi N_0)^{-1} \; \exp[-|z|^2/(2N_0) ]$ which, in the Heisenberg picture, can be seen as a random shift of the annihilation operator $a$. These channels admit a natural physical representation which involves two environmental modes in a pure state (see Ref.~\cite{HOLEVOREP} for details), but they do not have a physical representation~(\ref{HGCuno}) involving a single environmental mode.
This can be verified by noticing that in this case, from Eqs.~(\ref{m11}) and (\ref{m12}), we get \begin{eqnarray} m_{11} &=& \openone \label{m11B2} \\ m_{12} &=& \sqrt{N_0/(N+1/2)} \; O\;, \label{m12B2} \end{eqnarray} which yields \begin{eqnarray} \mbox{Det}[m_{11}] + \mbox{Det}[m_{12}] = 1 \pm N_0/(N+1/2) \;, \end{eqnarray} independently of the choice of the orthogonal matrix $O$\footnote{This follows from the fact that $\mbox{Det}[O]=\pm 1$ since $O^T =O^{-1}$.}. Therefore, apart from the trivial case $N_0=0$, the only solution to the constraint~(\ref{simpcond}) is obtained by taking the limit $N\rightarrow \infty$. This would correspond to representing the channel $B_2$ in terms of a linear coupling with a single-mode thermal state $\rho_b$ of ``infinite'' temperature. Unfortunately this is not a well defined object. However we can use the ``asymptotic'' representation described at the end of Sec.~\ref{s:canonical}, where it was shown how to obtain $B_2$ as a limiting case of class $C$ maps, to claim at least that there exists a one-parameter family of one-mode Gaussian channels which admit a single-mode physical representation and which converge to $B_2$. \begin{table}[t!] \begin{tabular} {cc|cc|c} Class of ${\Phi}$ & & Weak complementary channel $\tilde{\Phi}$ & & Class of $\tilde{\Phi}$ \\ && $K$ &$\alpha$ & \\ \hline \hline $A_1$ && $\openone$ & $0$ & $B_2$ \\ \hline $A_2$ && $\openone$ & $(N_0+1/2) \; (\openone -\sigma_3)/2$ & $B_1$ \\ \hline $B_1$ && $(\openone+\sigma_3)/2$ & $\openone/2$ & $A_2$ \\ \hline $C$ & $\kappa<1$ & $ \sqrt{1-\kappa^2} \; \openone$ & $ \kappa^2 (N_0+1/2)\; \openone $ & $C\;\; (\kappa<1)$ \\ $C$ & $\kappa> 1$ & $ \sqrt{\kappa^2-1} \; \sigma_3$ & $\kappa^2 (N_0+1/2) \; \openone $ & $D$\\ \hline $D$ & & $ \sqrt{\kappa^2 +1} \; \openone$ & $ \kappa^2 (N_0+1/2)\; \openone $ & $C \;\; (\kappa>1)$ \end{tabular} \caption{Description of the weakly complementary channels~(\ref{duedue}) of the canonical forms $A_1$, $A_2$, $B_1$, $C$ and $D$ of Table~\ref{t:table}, constructed from the physical representations~(\ref{HGCuno}) given in Sec.~\ref{sec:single}. The first column indicates the class of $\Phi$. The central columns give a description of $\tilde{\Phi}$ in terms of the representation~(\ref{linbos1}). The last column reports the canonical form corresponding to the map $\tilde{\Phi}$. In all cases the identification is immediate: for instance the canonical form of the map $\tilde{\Phi}_{A_1}$ belongs to the class $B_2$, while the canonical form of the map $\tilde{\Phi}_{D}$ is the class $C$ with $\mbox{Det}[K_{\text{can}}]>1$. In the case of $\tilde{\Phi}_{A_2}$ the identification with the class $B_1$ was done by exploiting the possibility of freely varying $N_c$ of Eq.~(\ref{ALPHACAN}) --- see Ref.~\cite{HOLEVOREP}. A pictorial representation of the above weak-complementarity connections is given in Fig.~\ref{f:scheme}. \label{t:table10}} \end{table} \section{Weak-degradability of one-mode Gaussian channels}\label{s:DEGRADABILE} In the previous section we have seen that all one-mode Gaussian channels are unitarily equivalent to one of the canonical forms of Table~\ref{t:table}. Moreover we verified that, with the exception of the class $B_2$, all the canonical forms admit a physical representation~(\ref{HGCuno}) with $\rho_b$ being a thermal state of a single environmental mode and $U_{ab}$ being a linear coupling.
Here we will use such representations to construct the weakly complementary channels~(\ref{duedue}) of these channels and to study their weak-degradability properties. \subsection{Weakly complementary channels}\label{s:weakc} In this section we construct the weakly complementary channels $\tilde{\Phi}$ of the classes $A_1$, $A_2$, $B_1$, $C$ and $D$, starting from their single-mode physical representations~(\ref{HGCuno}) of Sec.~\ref{sec:single}. Because of the linearity of $U_{ab}$ and the fact that $\rho_b$ is Gaussian, the channels $\tilde{\Phi}$ are Gaussian. This can be seen, for instance, by computing the characteristic function~(\ref{characte}) of the output state $\tilde{\Phi}(\rho_a)$ \begin{eqnarray} \phi(\tilde{\Phi}(\rho_a); z) &=& \mbox{Tr}_b[ \tilde{\Phi}(\rho_a) \; V_b(z) ] = \mbox{Tr}_{b} [ \rho_a \; \tilde{\Phi}_H(V_b(z)) ] \nonumber \\ &=& \phi( \rho_a; m_{21} \cdot z) \; \exp[ - \frac{1}{2} ( N+ 1/2) \; | m_{22} \cdot z|^2 ] \label{charN} \;, \end{eqnarray} where $m_{21}$, $m_{22}$ are the blocks of the matrix $M$ of Eq.~(\ref{matricem}) associated with the transformation $U_{ab}$, and $N$ is the average photon number of $\rho_b$ (the values of these quantities are given in the tables of Sec.~\ref{sec:single} --- see also App.~\ref{appendiceM}). By setting $m=0$, $K= m_{21}$ and $\alpha= (N+1/2) \; m_{22}^T \; m_{22}$, Eq.~(\ref{charN}) has the same structure~(\ref{linbos1}) as a one-mode Gaussian channel on $A$. Therefore, by cascading $\tilde{\Phi}$ with an isometry which exchanges $A$ with $B$ (see Refs.~\cite{SARO,CG}), we can treat $\tilde{\Phi}$ as a one-mode Gaussian channel operating on $A$ (this is possible because both $A$ and $B$ are Bosonic one-mode systems). With the help of Table~\ref{t:table} we can then determine which classes can be associated with the transformation~(\ref{charN}). This is summarized in Table~\ref{t:table10}. \subsection{Weak-degradability properties}\label{sec:wdp} Using the composition rules of Eqs.~(\ref{composition}) and (\ref{compo1}) it is easy to verify that the canonical forms $A_1$, $A_2$, $D$ and $C$ with $\kappa\leqslant \sqrt{1/2}$ are anti-degradable~(\ref{wwwanti}). Conversely, one can verify that the canonical forms $B_1$ and $C$ with $\kappa\geqslant \sqrt{1/2}$ are weakly degradable~(\ref{wwwweak}) --- for $C$, $D$ and $A_1$ these results have been proved in Ref.~\cite{CG}. Through unitary equivalence this can be summarized by saying that all one-mode Gaussian channels~(\ref{linbos1}) having $\mbox{Det}[K]\leqslant 1/2$ are anti-degradable, while the others (with the exception of the channels belonging to $B_2$) are weakly degradable (see Fig.~\ref{f:scheme2}). \begin{figure}[t] \centerline{\psfig{file=scheme2.eps,width= 10 cm}} \caption{Pictorial representation of the weak-degradability regions for one-mode Gaussian channels. All canonical forms with $\mbox{Det}[K]\leqslant 1/2$ are anti-degradable: this includes the classes $A_{1}$, $A_2$, $D$ and part of class $C$. The remaining ones (with the exception of $B_2$) are instead weakly degradable. Moreover $B_1$ is also degradable in the sense of Ref.~\cite{DEVSHOR}. The same holds for channels of canonical form $C$ with $N_0=0$: the exact expression for the quantum capacity of these channels has been given in Ref.~\cite{WOLF}.} \label{f:scheme2} \end{figure} In the following we verify the above relations by explicitly constructing the connecting channels $\Psi$ and $\overline{\Psi}$ of Eqs.~(\ref{wwwweak}) and (\ref{wwwanti}) for each of the mentioned canonical forms.
Explicitly one has: \begin{itemize} \item For a channel $\Phi$ of canonical form $A_1$ or $A_2$, anti-degradability can be shown by simply taking $\overline{\Psi}$ of Eq.~(\ref{wwwanti}) coincident with the channel $\Phi$ itself. The result immediately follows from the composition rule~(\ref{composition}). \item For a channel $\Phi$ of $B_1$, weak-degradability follows by taking the map $\Psi$ equal to the weakly complementary channel $\tilde{\Phi}$ of $\Phi$ (see Table~\ref{t:table10}). As pointed out in Ref.~\cite{HOLEVOREP}, this also implies the degradability of $\Phi$ in the sense of Ref.~\cite{DEVSHOR}. Let us recall that for $B_1$ the physical representation given in Sec.~\ref{sec:single} was constructed with an environmental state $\rho_b$ initially prepared in the vacuum state, which is pure. Therefore in this case our representation gives rise to a Stinespring dilation. \item For a channel $\Phi$ of the class $C$ with $K_{\text{can}}= \kappa \; \openone$ and $\alpha_{\text{can}} =|\kappa^2 -1| (N_0+1/2) \openone$ we have the following three possibilities: \begin{itemize} \item If $\kappa \leqslant \sqrt{1/2}$ the channel is anti-degradable and the connecting map $\overline{\Psi}$ is a channel of $C$ characterized by $K_{\text{can}}= \kappa^\prime \; \openone$ and $\alpha_{\text{can}} =(1-(\kappa^\prime)^2) (N_0+1/2) \openone$ with $\kappa^\prime = \kappa/\sqrt{1-\kappa^2} \leqslant 1$. \item If $\kappa \in [\sqrt{1/2},1[$ the channel is weakly degradable and the connecting map ${\Psi}$ is again a channel of $C$ defined as in the previous case but with $\kappa^\prime = \sqrt{1-\kappa^2}/\kappa <1$. For $N_0=0$ the channel is also degradable~\cite{DEVSHOR}, since our physical representation is equivalent to a Stinespring representation. \item If $\kappa >1$ the channel is weakly degradable and the connecting map ${\Psi}$ is a channel of $D$ with $K_{\text{can}}= \kappa^\prime \; \sigma_3$ and $\alpha_{\text{can}} =((\kappa^\prime)^2+1) (N_0+1/2) \openone$ with $\kappa^\prime =\sqrt{\kappa^2-1}/\kappa$. As in the previous case, for $N_0=0$ the channel is also degradable~\cite{DEVSHOR}. \end{itemize} \item For a channel $\Phi$ of $D$ with $K_{\text{can}} = \kappa \; \sigma_3$ and $\alpha_{\text{can}} = (\kappa^2 +1) (N_0+1/2) \openone$ ($\kappa>0$ and $N_0\geqslant0$), we can prove anti-degradability by choosing $\overline{\Psi}$ of Eq.~(\ref{wwwanti}) to be yet another map of $D$ with $K_{\text{can}} =\kappa^\prime \; \sigma_3$ and $\alpha_{\text{can}} = ((\kappa^\prime)^2 +1) (N_0+1/2) \openone$, where $\kappa^\prime=\kappa/\sqrt{\kappa^2 +1}$. From Eq.~(\ref{composition}) and Table~\ref{t:table10} it then follows that $\overline{\Psi} \circ \tilde{\Phi}$ is indeed equal to $\Phi$. \end{itemize} Concerning the case $B_2$, it was shown in Ref.~\cite{HOLEVOREP} that the channel is neither anti-degradable nor degradable in the sense of~\cite{DEVSHOR} (apart from the trivial case $N_0=0$, which corresponds to the identity map). On the other hand, one can use the continuity argument given in Sec.~\ref{sec:b2} to claim that the channel $B_2$ can be arbitrarily well approximated with maps which are weakly degradable (those belonging to $C$, for instance).
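These constructions can be verified directly at the level of the matrices $K$ and $\alpha$ by means of the composition rule~(\ref{composition}). The minimal sketch below (Python/NumPy; the values of $\kappa$ and $N_0$ are illustrative assumptions) checks the attenuator case $\kappa\in[\sqrt{1/2},1[$: composing $\Phi$ with the connecting map $\Psi$ given above reproduces the weakly complementary channel of Table~\ref{t:table10}:
\begin{verbatim}
import numpy as np

def compose(K1, a1, K2, a2):
    # (K, alpha) of Phi_2 composed after Phi_1, via Eq. (composition) with m = 0
    return K1 @ K2, K2.T @ a1 @ K2 + a2

I2 = np.eye(2)
kappa, N0 = 0.8, 0.4                         # attenuator with kappa in [sqrt(1/2), 1)

# Phi (class C) and its weakly complementary channel (Table t:table10)
K_phi, a_phi = kappa * I2, (1 - kappa**2) * (N0 + .5) * I2
K_wc,  a_wc  = np.sqrt(1 - kappa**2) * I2, kappa**2 * (N0 + .5) * I2

# Connecting map Psi: again class C, with kappa' = sqrt(1 - kappa^2)/kappa < 1
kp = np.sqrt(1 - kappa**2) / kappa
K_psi, a_psi = kp * I2, (1 - kp**2) * (N0 + .5) * I2

K, a = compose(K_phi, a_phi, K_psi, a_psi)   # Psi composed after Phi
print(np.allclose(K, K_wc), np.allclose(a, a_wc))   # True True
\end{verbatim}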
\section{One-mode Gaussian channels with $\mbox{Det}[K]>1/2$ and having null quantum capacity}\label{sec:null} In the previous section we saw that all channels~(\ref{linbos1}) with $\mbox{Det}[K]\leqslant 1/2$ are anti-degradable. Consequently these channels must have null quantum capacity~\cite{CG,SARO}. Here we go a little further, showing that the set of maps~(\ref{linbos1}) which can be proved to have null quantum capacity also includes some maps with $\mbox{Det}[K] >1/2$. To do this we will use the following simple fact: {\em Let $\Phi_1$ be a quantum channel with null quantum capacity and let $\Phi_2$ be some quantum channel. Then the composite channels $\Phi_1\circ \Phi_2$ and $\Phi_2 \circ \Phi_1$ have null quantum capacity.} The proof of this property follows by interpreting $\Phi_2$ as a quantum operation performed either at the decoding or at the encoding stage of the channel $\Phi_1$. This shows that the quantum capacities of $\Phi_1\circ \Phi_2$ and $\Phi_2 \circ \Phi_1$ cannot be greater than the capacity of $\Phi_1$ (which is null). In the following we will present two cases where the above property turns out to provide some nontrivial results. \subsection{Composition of two class $D$ channels}\label{sec:2D} \begin{figure}[t] \centerline{\psfig{file=plot.eps,width= 10 cm}} \caption{The dark-grey area of the plot is the region of the parameters $N_0$ and $\mbox{Det}[K]=\kappa^2$ where a channel with canonical form $C$ can have nonzero quantum capacity. For $\mbox{Det}[K]<1/2$ the channel is anti-degradable. In the remaining white area the quantum capacity is null, since these maps can be obtained by a composition of channels one of which is anti-degradable. The black curve refers to the bound of Eq.~(\ref{enneN}). The contour of the dark-grey area is instead given by Eq.~(\ref{enneNN}).} \label{f:plot} \end{figure} We observe that, according to the composition rule~(\ref{compo1}), the combination of any two channels $\Phi_1$ and $\Phi_2$ of $D$ produces a map $\Phi_{21} \equiv \Phi_2\circ \Phi_1$ which is in the class $C$. Since the class $D$ is anti-degradable, the resulting channel must have null quantum capacity. Let then $\kappa_j \sigma_3$ and $(\kappa^2_j +1) (N_j+1/2) \openone$ be the matrices $K_{\text{can}}$ and $\alpha_{\text{can}}$ of the channels $\Phi_j$, for $j=1,2$. From Eq.~(\ref{composition}) one can then verify that $\Phi_{21}$ has the canonical form $C$ with parameters \begin{eqnarray} \kappa &=& \kappa_1 \kappa_2 \;,\label{kappa} \\ N_0 &=& \frac{(\kappa_2^2+1) N_2 + \kappa_2^2 (\kappa_1^2+1) N_1 }{|\kappa_1^2\kappa_2^2 -1|} + \frac{1}{2} \left( \frac{\kappa^2_1\kappa_2^2 + 2 \kappa_2^2 +1}{ |\kappa_1^2\kappa_2^2 -1|} -1 \right) \;. \label{enne} \end{eqnarray} Equation~(\ref{kappa}) shows that, by varying $\kappa_j$, $\kappa$ can take any positive value: in particular it can be greater than $\sqrt{1/2}$, transforming $\Phi_{21}$ into a channel which does not belong to the anti-degradability area of Fig.~\ref{f:scheme2}. On the other hand, by varying the $N_j$ and $\kappa_2$, but keeping the product $\kappa_1\kappa_2$ fixed, the parameter $N_0$ can assume any value satisfying the inequality \begin{eqnarray} N_{0} &\geqslant& \frac{1}{2} \left( \frac{\kappa^2 +1}{ |\kappa^2 -1|} -1 \right) \;. \label{enneN} \end{eqnarray} We can therefore conclude that all channels $C$ with $\kappa$ and $N_0$ as in Eq.~(\ref{enneN}) have null quantum capacity --- see Fig.~\ref{f:plot}. A similar bound was found in a completely different way in Ref.~\cite{HW}.
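The parameters~(\ref{kappa}) and (\ref{enne}) and the resulting bound~(\ref{enneN}) can be reproduced numerically with the same composition rule used above (the values of $\kappa_j$ and $N_j$ in this sketch are illustrative assumptions):
\begin{verbatim}
import numpy as np

def compose(K1, a1, K2, a2):
    # (K, alpha) of Phi_2 composed after Phi_1, via Eq. (composition) with m = 0
    return K1 @ K2, K2.T @ a1 @ K2 + a2

I2, s3 = np.eye(2), np.diag([1., -1.])
k1, N1, k2, N2 = 1.4, 0.3, 1.1, 0.7          # two class-D channels (assumed values)

K, a = compose(k1 * s3, (k1**2 + 1) * (N1 + .5) * I2,
               k2 * s3, (k2**2 + 1) * (N2 + .5) * I2)

kappa = k1 * k2                              # Eq. (kappa): the composite is class C
N0 = np.sqrt(np.linalg.det(a)) / abs(kappa**2 - 1) - 0.5   # from alpha_can of C
print(np.allclose(K, kappa * I2))                               # True
print(N0 >= 0.5 * ((kappa**2 + 1) / abs(kappa**2 - 1) - 1))     # Eq. (enneN) holds
\end{verbatim}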
\subsection{Composition of two class $C$ channels}\label{sec:2C} Consider now the composition of two class $C$ channels, $\Phi_{1}$ and $\Phi_2$, with one of them (say $\Phi_2$) being anti-degradable. Here, the canonical forms of $\Phi_1$ and $\Phi_2$ have matrices $K_{\text{can}}$ and $\alpha_{\text{can}}$ given by $K_j=\kappa_j \openone$ and $\alpha_j= |\kappa^2_j -1| (N_j+1/2) \openone$, where, for $j=1,2$, $N_j$ and $\kappa_j$ are positive numbers, with $\kappa_{1}\neq 0,1$ and with $\kappa_2 \in ]0,\sqrt{1/2}]$ (to ensure anti-degradability). From Eq.~(\ref{composition}) it then follows that the composite map $\Phi_{21}= \Phi_2 \circ \Phi_1$ still has a canonical form of class $C$, with parameters \begin{eqnarray} \kappa &=& \kappa_1 \kappa_2 \;,\label{kappa1} \\ N_0 &=& \frac{|\kappa_2^2-1| N_2 + \kappa_2^2 |\kappa_1^2-1| N_1 }{|\kappa_1^2\kappa_2^2 -1|} + \frac{1}{2} \left( \frac{\kappa^2_2|\kappa_1^2-1| + | \kappa_2^2 -1|}{ |\kappa_1^2\kappa_2^2 -1|} -1 \right) \;. \label{enne111} \end{eqnarray} As in the previous example, $\kappa$ can assume any positive value. Vice-versa, keeping $\kappa$ fixed and varying $\kappa_1 >1$ and $N_{1,2}$, it follows that $N_0$ can take any value satisfying the inequality \begin{eqnarray} N_{0} &\geqslant& \frac{1}{2} \left( \frac{\kappa^2}{ |\kappa^2 -1|} -1 \right) \;. \label{enneNN} \end{eqnarray} We can then conclude that all maps $C$ with $\kappa$ and $N_0$ as above must possess null quantum capacity. The result has been plotted in Fig.~\ref{f:plot}. Notice that the constraint~(\ref{enneNN}) is an improvement with respect to the constraint of Eq.~(\ref{enneN}). \section{Conclusion}\label{sec:con} In this paper we provide a full weak-degradability classification of one-mode Gaussian channels by exploiting the canonical form decomposition of Ref.~\cite{HOLEVOREP}. Within this context we identify those channels which are anti-degradable. By exploiting composition rules of Gaussian maps, this allows us to strengthen the bound characterizing the one-mode Gaussian channels which may have nonzero quantum capacity. F.C. and V.G. thank the Quantum Information research program of Centro di Ricerca Matematica Ennio De Giorgi of Scuola Normale Superiore for financial support. A. H. acknowledges the hospitality of the Centre for Quantum Computation, Department of Applied Mathematics and Theoretical Physics, Cambridge University.
\section{Update in 2021} This manuscript was written in January 2014 and was submitted to ICML 2014. Unfortunately, we did not continue this line of research and did not publish this article either. Today we have decided to publish it, expecting that the idea and empirical results can be helpful to those who would like to understand and investigate problems such as why deep learning works. Since 2014, several related works have emerged, among which the closest are the information bottleneck principle \cite{tishby2015deep} and the maximum coding rate reduction principle \cite{chan2021redunet}. Both of these principles are based on supervised learning, which assumes the labels of the data are known, while the principle of this paper investigates structure learning through unsupervised learning. We assume the optimal structure of neural networks can be derived from the input features even without labels. Furthermore, let $x$, $z$, $y$ denote the input data, the learned representation and the labels respectively. The information bottleneck principle \cite{tishby2015deep} aims to maximize the mutual information between $z$ and $y$ while minimizing the mutual information between $x$ and $z$. In contrast, the information maximization principle in this paper aims to maximize the mutual information between $x$ and $z$. The maximum coding rate reduction principle \cite{chan2021redunet}, on the one hand, tries to maximize the mutual information between $x$ and $z$, which is essentially the same as the information maximization principle. On the other hand, the $MCR^{2}$ principle also leverages the labels of the data and minimizes the volume of $z$ within each class. In the linear model, the information maximization principle leads to PCA (principal component analysis), while maximum coding rate reduction leads to LDA (linear discriminant analysis). \section{Introduction} In recent years, we have witnessed a resurgence of neural networks in the machine learning community. Indeed, systems built on deep neural network (DNN) techniques demonstrate remarkable empirical performance in a wide range of applications. For example, convolutional neural networks keep the records in the ImageNet challenges ILSVRC 2012 \cite{krizhevsky_2012} and ILSVRC 2013 \cite{zeiler_2013}. The core of state-of-the-art speech recognition systems is also based on DNN techniques \cite{mohamed_2009,deng_2011}. In applications to natural language processing, neural networks are making steady progress too \cite{collobert_2011,mikolov_2013}. Empirical evidence for understanding why deep neural networks work so well has also accumulated. Besides techniques such as dropout, local normalization, and data augmentation, the evidence suggests that the architecture or structure of neural networks plays a significant role in their success. For example, an early result reported by \cite{jarrett_2009} finds that a two-layer architecture is always better than a single-layer one. More surprisingly, the paper observes that, given an appropriate structure, even random assignment of network parameters can yield decent performance. In addition, in state-of-the-art systems such as \cite{krizhevsky_2012, zeiler_2013,mohamed_2009,deng_2011}, the network structure, in particular the inter-layer connections, the number of nodes in each layer, and the depth of the network, are all designed by human experts in a very careful and probably painful way.
This requires in-depth domain knowledge (e.g., the structure of convolutional neural networks \cite{lecun_1998} largely originates from the inspiration of the biological nervous system \cite{hubel_1962,fukushima_1982}, and the network structure in \cite{deng_2011} heavily depends on domain knowledge in speech recognition) or hundreds of rounds of trial-and-error \cite{jarrett_2009}. Given this situation, a natural question arises: can we learn a good network structure for DNN from scratch in a fully automatic fashion? What is the principled way to achieve this? To the best of our knowledge, the studies on these important questions are still very limited. Only \cite{chen_2013} shows the possibility of learning the number of nodes in each layer with a nonparametric Bayesian approach. However, there is still no attempt at the automatic learning of the inter-layer connections and the depth of DNN. These are exactly the focus of our paper. For this purpose, we borrow an important principle, called the efficient coding principle, from the domain of biological nervous systems, an area in which there exists tremendous research on understanding the structure of human brains. The principle basically says that a good structure (brain structure in their case and the structure of DNN in our case) forms an efficient internal representation of external environments \cite{barlow_1961,linsker_1988}. Rephrased in more familiar language, the principle suggests that the structure of a good network should match the statistical structure of the input signals. In particular, it should maximize the mutual information between the inputs and outputs, or equivalently maximize the entropy of the output signals under mild assumptions. While the principle seems intuitive and a little informal, we show that it has a solid theoretical foundation in terms of Bayesian optimal classification, and thus has a strong connection with the optimality of neural networks from the machine learning perspective. In particular, we first show that the principle suggests maximizing the independence between the output signals. Then we notice that the top layer of any neural network is a \emph{softmax} linear classifier, and the independence between the nodes in the top hidden layer is a sufficient condition for the \emph{softmax} linear classifier to be the Bayesian optimal classifier. This theoretical foundation also provides us a clear way to determine the depth of deep neural networks: if after multiple layers of non-linear transformations (learned under the guidelines of the efficient coding principle) the hidden nodes become statistically independent of each other, then there is no need to add another hidden layer (i.e., the depth of the network is finalized), since we are already optimal in terms of the classification error. We then investigate how to design a structure learning algorithm based on the principle of efficient coding. We show that sparse coding can implement the principle under the assumption of zero-peaked and heavy-tailed prior distributions. Based on this discovery, we design an effective structure learning algorithm based on \emph{global group sparse coding}. When customized for image analysis, we discuss how the proposed algorithm can learn inter-layer connections, handle invariance, and determine the depth. We conduct a set of experiments on a widely used dataset for image classification. We have several interesting findings.
First, although we have not imposed any prior knowledge onto the structure learning process, the DNN with our automatically learned structure can provide a very competitive classification accuracy, which is very close to that of the well-tuned CNN model designed by human experts. Second, our algorithm can automatically discover the local connection structure, simply by matching the statistical structure of the input signals. Third, we notice that the pooling operation specifically designed in CNNs can also be automatically implemented by our learning algorithm based on group sparse coding. All these results demonstrate the power of automatic structure learning based on the efficient coding principle. While our work is just a preliminary step towards automatic structure learning, we have seen very positive signs suggesting that structure learning could be an important direction to better understand DNN, to further improve the performance of DNN, and to generalize the application scope of DNN-based learning algorithms. \section{The Principle for Structure Learning} The key to unsupervised structure learning for DNN is to adopt an appropriate principle to guide the procedure of structure learning. In this section, we describe the principle we use and discuss its advantages for structure learning. \begin{figure} \centering \subfigure[]{ \includegraphics[bb = 60 5 580 467,width=0.22\textwidth]{efficient_coding} \label{efficient_coding:sub1} } \hspace{0.03\textwidth} \subfigure[]{ \includegraphics[bb = 95 325 439 739, width=0.18\textwidth]{efficient_coding2} \label{efficient_coding:sub2} } \caption{Fig. (a) shows a pipeline of linear and nonlinear transformations; Fig. (b) shows an example of the cumulative distribution function (CDF) transformation, which can map an arbitrary distribution into a uniform distribution on $[0,1]$. } \label{efficient_coding} \end{figure} To guide the structure learning for deep neural networks, we borrow a principle from computational neuroscience. In fact, various hypotheses have been proposed in the literature to understand the magic structure of the biological nervous system. The core problems under investigation include: what is the goal of sensory coding, or what type of neuronal representation is optimal in an ecological sense? Of all the attempts at answering this question, the principles rooted in information theory have proved to be successful. For ease of illustration of these principles, we first introduce some notation. Figure \ref{efficient_coding:sub1} shows how the data go through one layer of a typical neural network. The input $\mathbf{X}$ is first processed by a linear transformation $\mathbf{W}$, followed by a component-wise transformation $\mathbf{\sigma}$, in which each component is usually a bounded, invertible nonlinear function. That is, \begin{equation} Z_{i}=\sigma_{i}(U_{i})=\sigma_{i}(\mathbf{W}_{i}^T{\mathbf{X}}) \end{equation} Without loss of generality, the range of $\sigma_{i}$ is usually assumed to be $[0,1]$. $\mathbf{Z}$ is the neuronal representation, or a coding, of the input $\mathbf{X}$. Among the information-theoretic principles proposed in the literature, the \emph{redundancy reduction principle} developed by Barlow \cite{barlow_1961} has been at the origin of many theoretical and experimental studies. Let $I(\mathbf{Z})$ denote the mutual information between output units. Barlow's theory proposes that, in an optimal neural coding, the outputs of the units should be statistically independent of each other.
That is, the objective function is the minimization of $I(\mathbf{Z})$. Another principle, developed by Linsker \cite{linsker_1988}, advocates that the system should maximize the amount of information that the output conveys about the input signal, that is, maximize $I(\mathbf{X};\mathbf{Z})$. As shown in the following theorem, the aforementioned principles are actually equivalent to each other under certain conditions. \begin{theorem} \label{theorem:mmi_me} Let the component-wise nonlinear transfer function $\mathbf{\sigma}_{i}$ be the cumulative distribution function (CDF) of $U_{i}$. Then minimizing $I(\mathbf{Z})$ is equivalent to maximizing $I(\mathbf{X};\mathbf{Z})$. \end{theorem} A sketch of the proof is given in the supplementary material. The theorem indicates that for bounded-output neural networks, minimizing the mutual information between outputs is equivalent to maximizing the mutual information between inputs and outputs. Due to this equivalence, we will not distinguish the two principles, and will uniformly refer to them as ``efficient coding principles''. Since the efficient coding principle is rooted in computational neuroscience and information theory, one may wonder whether it really ensures optimal neural network structures from the machine learning perspective. Through the following theorem we show that this principle has a strong theoretical connection with pattern classification tasks. \begin{theorem} \cite{minsky_1961} With (conditionally) independent features, a linear classifier is optimal in the sense of minimum Bayesian error. \end{theorem} Actually, this theorem has a particular implication in the context of structure learning for deep neural networks. As we know, no matter how a deep neural network is structured, its top layer is always a \emph{softmax} linear classifier. Then according to the above theorem, if we can achieve independence between the nodes in the top hidden layer by means of structure learning, it will ensure (as a sufficient condition) that the \emph{softmax} linear classifier is the Bayesian optimal classifier. In other words, there is no need to adopt more complicated non-linear classifiers at all. In this sense, this theoretical foundation also provides us a clear way to determine the depth of deep neural networks: if after multiple layers of non-linear transformations (learned under the guidelines of the efficient coding principle) the hidden nodes become statistically independent of each other, then we should stop growing the depth of the neural network, since we are already optimal in terms of the best classification error we could ever achieve. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{entropy_accuracy} \caption{The correlation between feature entropy and classification accuracy.} \label{entropy_accuracy} \end{figure}
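As a quick illustration of the mechanism behind Theorem~\ref{theorem:mmi_me}, the sketch below (Python/NumPy; the Laplace-distributed pre-activations are an illustrative assumption) passes samples of $U_{i}$ through their empirical CDF and checks that the resulting $Z_{i}$ is nearly uniform on $[0,1]$, i.e. has close to maximal entropy for a bounded output:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
u = rng.laplace(size=100_000)        # heavy-tailed pre-activations U_i (assumption)

# sigma_i = empirical CDF of U_i, used as the component-wise nonlinearity
z = np.searchsorted(np.sort(u), u) / u.size

# Z_i is nearly uniform on [0, 1]: every bin is equally filled, so the
# bounded output has (close to) maximal entropy
hist, _ = np.histogram(z, bins=20, range=(0, 1))
print(hist / z.size)                 # each entry close to 1/20 = 0.05
\end{verbatim}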
However, after extracting an edge direction at each location and representing the location by that direction, two locations are no longer even seemingly independent. This observation implies that new redundancies emerge with more abstract representations.} \label{spatial_mi} \end{figure*} \section{Empirical Observations} In this section, we present some empirical studies of the efficient coding principle. For this purpose, we need to compute the coding efficiency (or information gain) provided by a layer of a neural network (in terms of the change of the mutual information). Please refer to the Appendix for the method we used. \subsection{Coding Efficiency and Classification Accuracy} \label{chapter:entropy_accuracy} We generate a set of random structures and use them to extract features from the whitened CIFAR-10 data set. The coding efficiency of the extracted features and the corresponding classification accuracy on the training set are evaluated \footnote{We deliberately use the training set because we want to know the relation between fitting quality and coding efficiency.}. Figure \ref{entropy_accuracy} demonstrates the relation between feature entropy and the classification accuracy of a softmax classifier. A positive correlation between entropy and classification accuracy can be observed. Given that the efficient coding principle is an unsupervised objective, this positive correlation is somewhat surprising. \subsection{Spatial Redundancy of Images} Figure \ref{spatial_mi} shows the redundancy properties of natural images. The result is obtained on whitened CIFAR-10 data. The large mutual information between nearby elements indicates redundancy of information. In both figures, we can observe the decay of mutual information with increasing spatial distance. This suggests that elements sufficiently far away from each other are nearly independent. This is not surprising, as the phenomenon is already widely captured by the Markov assumption in various probabilistic models. Interestingly, Figure \ref{spatial_mi:edge} shows that, after feature extraction by an edge detector, the redundancies between nearby pixels are removed; however, new dependencies among edges emerge. The dependencies between edges spread much more broadly than those between pixels. This suggests that a single layer of transformation is not sufficient for the purpose of redundancy reduction. We need another transformation to remove the redundancies between edge representations. \subsection{Multi-layer Redundancy Reduction} Figure \ref{entropy_depth} illustrates how entropy increases as more layers of transformation are added. We train several layers of Gaussian-binary RBM on contrast-normalized CIFAR-10 and several layers of sparse coding on whitened CIFAR-10 \footnote{The Gaussian-binary RBM assumes a Gaussian distribution of the visible data, while sparse coding, as an implementation of ICA, assumes non-Gaussian data.}. We can observe that: (1) for both RBM and sparse coding, each additional layer brings further entropy gain, though the marginal gain vanishes as the number of layers grows; (2) sparse coding produces features with much higher entropy than RBM, because sparse coding is more dedicated to the goal of redundancy reduction. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{entropy_depth} \caption{With RBM and sparse coding, adding more layers always helps remove redundancy further. However, the gain of redundancy reduction gradually becomes marginal.
} \label{entropy_depth} \end{figure} \section{Algorithms} \label{section:algorithm} In this section, we propose algorithms for structure learning based on the efficient coding principle. \subsection{Implementing the Principle by Sparse Coding} The transformation in Figure \ref{efficient_coding:sub1} shows how to get $\mathbf{U}$ from $\mathbf{X}$ through $\mathbf{U}=\mathbf{W}^{T}\mathbf{X}$. Conversely, $\mathbf{X}$ can be viewed as being generated by a probabilistic process governed by $\mathbf{U}$, such as \begin{equation} \mathbf{X}=\mathbf{D}^{T}\mathbf{U}+\mathbf{N} \end{equation}where $\mathbf{N}$ is a vector of independent and identically distributed Gaussian noise, and $\mathbf{D}$ describes how each source signal generates an observation vector. If the dimensions of $\mathbf{X}$ and $\mathbf{U}$ are equal and the transformation matrix $\mathbf{W}$ is full rank, a trivial relation between $\mathbf{D}$ and $\mathbf{W}$ is $\mathbf{W}^{T}\mathbf{D}=\mathbf{I}$, where $\mathbf{I}$ is the identity matrix. In this way, learning the optimal data transformation matrix $\mathbf{W}$ is equivalent to inferring the optimal data generating matrix $\mathbf{D}$. Usually, to infer the simplest possible signal formation process, additional assumptions on the prior distribution of $\mathbf{U}$ are made. As the efficient coding principle indicates, independent or factorial codes are preferred, that is, we assume \begin{equation} p(\mathbf{U})=\prod_{i}p(U_{i}). \end{equation} When $p(U_{i})$ is a distribution peaked at zero with heavy tails, it can be shown that the model leads to so-called sparse coding \cite{field_1994}. Therefore our structure learning algorithm described in the next subsection is based on sparse coding\footnote{We note that both ICA and sparse coding can implement the efficient coding principle. The former makes assumptions on the CDF of $U_{i}$, while the latter makes assumptions on the probability density function of $U_{i}$. However, ICA is difficult to extend to the over-complete case, so we do not use it in this work. }. \begin{figure} \centering \subfigure[]{ \includegraphics[width=0.15\textwidth]{block9} \label{block_toy:sub1} } \subfigure[]{ \includegraphics[width=0.25\textwidth]{block_toy} \label{block_toy:sub2} } \caption{(a) An example of the synthesized images: each image is composed of 9 blocks, and each block is sampled from a random patch of a random image. (b) 36 bases learned on the synthesized image set by sparse coding. Each basis focuses on only one block of the input image. } \label{block_toy} \end{figure} We construct a synthesized data set from whitened CIFAR-10. Each image consists of $3 \times 3$ blocks, each of which is sampled from a random location of a random image. We can assume that pixels from different blocks are independent of each other, while pixels in the same block possess the statistical properties of natural images. An ideal algorithm should discover and converge to the correct structure, that is, each basis should be connected only to pixels in a single block. We evaluated RBM, sparse auto-encoders, and sparse coding. Sparse coding (see Fig. \ref{block_toy}) is the only one that can perfectly recover the block structure. \subsection{The Proposed Algorithms} Algorithm \ref{alg:structure_learning} shows how to learn the structure from unlabeled data. As shown in the algorithm, we learn the structure layer by layer in a bottom-up manner.
We put the raw features at layer $1$ and learn the connection between layer $1$ and layer $2$ given the predefined number\footnote{As mentioned above, we focus on inter-layer connection learning and depth learning in this work and assume the number of nodes in each layer is given in advance. One can also leverage the technique proposed in \cite{chen_2013} to automatically learn the number of nodes in each layer.} of nodes in layer $2$. Specifically, the inter-layer connection is initialized with full connections. When trained on unlabeled data, due to the ICA properties of sparse coding, the inter-layer connections converge to a sparsely connected network. In other words, the weights of most of the edges converge to zero during the learning process. After the connection between layers $k$ and $k+1$ has been learned, we can estimate the entropy of layer $k+1$ according to Equation \ref{equation:ee_knn} (see the sketch below) and compare it with that of layer $k$. If the entropy gain between the two layers is smaller than a threshold $\epsilon$, we terminate the learning process; otherwise we add layer $k+2$ and continue the learning process. In other words, the depth of the network is determined according to the cues of entropy gain. Algorithm \ref{alg:bp_structure_priming} shows how to learn a better DNN based on the structure output by Algorithm \ref{alg:structure_learning} and supervised data. First, the learned sparse connection and the weights are used to initialize a multi-layer feed-forward network. The inter-layer connection inherits from the structure mask learned in Algorithm \ref{alg:structure_learning}, and the connection weights are initialized with the weights learned by sparse coding. Then, via training on labeled data, the weights\footnote{The structure remains fixed.} are fine-tuned further. While the two algorithms look quite similar to several existing algorithms at first glance, we would like to highlight and elaborate several differences and some implementation details as follows.
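The estimator behind Equation \ref{equation:ee_knn} is specified in the Appendix, which is not reproduced in this excerpt. As an illustrative stand-in, the following is a minimal sketch of the standard Kozachenko--Leonenko $k$-NN estimator of differential entropy, a common choice for this kind of estimate; the function name and the default $k=3$ are our own assumptions rather than details of the original method.
\begin{verbatim}
# Minimal sketch: Kozachenko-Leonenko k-NN differential
# entropy estimator (hypothetical stand-in for the
# Appendix estimator referenced as Equation (ee_knn)).
import numpy as np
from scipy.special import digamma, gammaln
from scipy.spatial import cKDTree

def knn_entropy(Z, k=3):
    """Estimate H(Z) in nats from samples Z of shape (n, d)."""
    n, d = Z.shape
    tree = cKDTree(Z)
    # Distance to the k-th nearest neighbor, excluding the
    # query point itself (column 0 of the result).
    eps, _ = tree.query(Z, k=k + 1)
    eps = eps[:, -1]
    # Log-volume of the d-dimensional unit ball.
    log_cd = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
    return (digamma(n) - digamma(k) + log_cd
            + d * np.mean(np.log(eps + 1e-12)))
\end{verbatim}
In Algorithm \ref{alg:structure_learning}, an estimator of this kind would be applied to the transformed feature maps $\mathbf{Z}_{k}$ of successive layers, and the relative entropy gain compared against the threshold $\epsilon$.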
\begin{algorithm}[tb] \caption{Structure Learning with Sparse Coding} \label{alg:structure_learning} \begin{algorithmic}[1] \STATE {\bfseries Input:} $\mathbf{X}$, each column is an example $\mathbf{X}_{i}\in \mathcal{R}^{d}$ \STATE {\bfseries Output:} dictionary $\mathbf{D}_{k}$, structure mask $\mathbf{M}_{k}$, depth $K$ \STATE Initialize $k=0$, $\mathbf{U}_{k}=\mathbf{X}$ \REPEAT \STATE Whiten $\mathbf{U}_{k}$ \STATE $\mathbf{D}_{k},\!\mathbf{U}_{k+1}\!\!=\!\!\underset{\mathbf{D}_{k},\mathbf{U}_{k+1}}{\argmin} \|\mathbf{U}_{k}-\mathbf{D}_{k}^{T}\mathbf{U}_{k+1}\|_{2}\!+\!\lambda \|\mathbf{U}_{k+1}\|_{1}$ \STATE Calculate $\mathbf{M}_{k}$ by thresholding $\mathbf{D}_{k}$ \STATE $\mathbf{Z}_{k+1}=\sigma(\mathbf{U}_{k+1})$, where $\sigma$ is the feature-wise CDF \STATE Estimate $H(\mathbf{Z}_{k+1})$ \STATE $k=k+1$ \UNTIL{$|\frac{H(\mathbf{Z}_{k})-H(\mathbf{Z}_{k-1})}{H(\mathbf{Z}_{k-1})}| < \epsilon$} \STATE $K = k+1$ \end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{Back-propagation with Structure Priming} \label{alg:bp_structure_priming} \begin{algorithmic}[1] \STATE {\bfseries Input:} $\mathbf{X}$, $\mathbf{D}_{k}$, $\mathbf{M}_{k}$, $k=1,\ldots,K$ \STATE {\bfseries Output:} $\mathbf{W}_{k}$, $k=1,\ldots,K$ \STATE $\mathbf{U}_{0}=\mathbf{X}$ \STATE $\mathbf{W}_{k}=\mathbf{M}_{k}\circ\mathbf{D}_{k}$ for $k=1,\cdots,K$ \STATE /* Forward pass */ \FOR {$k=1$ {\bfseries to} $K$} \STATE $\mathbf{U}_{k}=\mathbf{W}_{k}^{T}\mathbf{U}_{k-1}$ \STATE $[\mathbf{U}_{k}]_{i}=\mathrm{sign}([\mathbf{U}_{k}]_{i})(|[\mathbf{U}_{k}]_{i}|-\lambda)_{+}$ \ENDFOR \STATE /* Back-propagation */ \STATE $\delta \mathbf{U}_{K}=\frac{\partial \mathrm{Loss}}{\partial \mathbf{U}_{K}}$ \FOR {$k=K$ {\bfseries down to} $1$} \STATE $[\delta \mathbf{U}_{k}]_{i} = 0$ if $[\mathbf{U}_{k}]_{i} = 0$ \STATE $\delta \mathbf{W}_{k} = \mathbf{U}_{k-1}\,\delta\mathbf{U}_{k}^{T}$ \STATE $\delta \mathbf{W}_{k} = \mathbf{M}_{k} \circ \delta\mathbf{W}_{k}$ \STATE $\delta \mathbf{U}_{k-1} = \mathbf{W}_{k}\,\delta\mathbf{U}_{k}$ \STATE $\mathbf{W}_{k} = \mathbf{W}_{k} - \gamma\, \delta \mathbf{W}_{k}$ \ENDFOR \end{algorithmic} \end{algorithm} \paragraph{Training sparse coding in global range} The dramatic difference between the usage of sparse coding in this paper and that in existing work is that sparse coding is trained over the global range. Taking images as an example, we train sparse coding on whole images instead of the traditional way of training on small patches \cite{yang_2009}. Therefore, we do not need to predefine the inter-layer connections, such as the spatial range of local connections in convolutional networks. Instead, the algorithm itself is able to learn the optimal inter-layer connections. As we will see in Section \ref{section:experiments}, the inter-layer connections learned on images happen to resemble the local connection structure of CNNs. We note that the deconvolutional network proposed in \cite{zeiler_2010} also trains sparse coding on whole images; however, like patch-based sparse coding, it needs a pre-determined spatial range for the convolutional filters. \paragraph{Sparsifying inter-layer connection} Once $\mathbf{D}_{k}$ is obtained by Algorithm \ref{alg:structure_learning}, we know the strength of each connection between adjacent-layer neurons. Intuitively, weak connections can be removed without significantly affecting the behavior of the network.
Concretely, a binary mask matrix $\mathbf{M}_{k}$ is calculated by thresholding $\mathbf{D}_{k}$: \begin{equation} [\mathbf{M}_{k}]_{i}=\left\{\begin{array}{rl} 1, & |[\mathbf{D}_{k}]_i| \geq t\\ 0, & \mathrm{otherwise} \end{array} \right. \end{equation} The parameter $t$ can be chosen according to the desired density of the mask matrix $\mathbf{M}_{k}$. For example, if we want to keep $10\%$ of the connections in the network, we can calculate the histogram of the absolute values in $\mathbf{D}_{k}$ and choose $t$ such that $10\%$ of the entries exceed it (i.e., the $90\%$ quantile). The resulting $\mathbf{M}_{k}$ acts as structure priming for the feed-forward network in Algorithm \ref{alg:bp_structure_priming}. \paragraph{Handling invariance} In tasks such as image classification, invariance based on pooling is crucial for practical performance. The outputs of neurons that have similar receptive fields are aggregated through an OR operation to obtain shift invariance. To endow the algorithm with this capability, the neurons should first be separated into overlapping or non-overlapping groups according to their selectivities. Since Algorithm \ref{alg:structure_learning} is based on the sparse coding framework, it can easily be modified to handle invariance using the group lasso\footnote{ Similar ideas appear in \cite{hyvarinen_tica_2001,le_icml_2012}.} \cite{yuan_2006}. With the group lasso, the dictionary learning becomes \begin{equation} \mathbf{D}_{k}=\underset{\mathbf{D}_{k}}{\argmin} \|\mathbf{U}_{k}\!-\!\mathbf{D}_{k}^{T}\mathbf{A}\|_{2}\!+\!\lambda \sum_{n}\!\sum_{g} \|\mathbf{A}_{g}^{n}\|_{2} \end{equation} where $\mathbf{A}_{g}^{n}$ denotes the reconstruction coefficients of the $n$-th example in the $g$-th group. Also, the shrinkage operation in the FISTA algorithm \cite{beck_2009} changes accordingly. \paragraph{Back-propagation with structure priming} Both $\mathbf{M}_{k}$ and $\mathbf{D}_{k}$ provide structure priming for the feed-forward network. The weights $\mathbf{W}_{k}$ are initialized as $\mathbf{M}_{k}\circ\mathbf{D}_{k}$ in the feed-forward pass, and $\mathbf{M}_{k}$ also masks the gradient $\delta \mathbf{W}_{k}$ in back-propagation via \begin{equation} \delta \mathbf{W}_{k} = \mathbf{M}_{k} \circ \delta \mathbf{W}_{k} \end{equation} where $\circ$ denotes the Hadamard product. This implementation is similar to the DropConnect approach in \cite{wan_2013}; however, in DropConnect, the mask matrix is randomly generated, and it is used only in fully connected layers. \paragraph{One step ISTA approximation} Besides using the dictionary $\mathbf{D}_{k}$ to initialize $\mathbf{W}_{k}$, we would also like the feature map to inherit the sparsity properties of sparse coding. Therefore, we implement the feed-forward transformation as a one-step ISTA approximation to the solution of the lasso. The idea comes from \cite{gregor-2010}, which uses several such steps to obtain an efficient approximation to the lasso. The shrinkage operations for standard sparse coding and group sparse coding are \setlength{\arraycolsep}{2pt} \begin{eqnarray} [\mathbf{U}_{k+1}]_{i}&=&\mathrm{sign}([\mathbf{U}_{k+1}]_{i})(|[\mathbf{U}_{k+1}]_{i}|-\lambda)_{+}\\ {}[\mathbf{U}_{k+1}]_{i}&=&[\mathbf{U}_{k+1}]_{i}\Big(1-\frac{\lambda}{\|[\mathbf{U}_{k+1}]_{g}^{i}\|_{2}}\Big)_{+} \end{eqnarray} \setlength{\arraycolsep}{5pt}where $\|[\mathbf{U}_{k+1}]_{g}^{i}\|_{2}$ denotes the $\ell_2$ norm of the group that $[\mathbf{U}_{k+1}]_{i}$ belongs to; both operators are sketched below.
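For concreteness, the following is a minimal NumPy sketch of the two shrinkage operators above; the function names are our own, and the group version assumes disjoint groups supplied as index arrays.
\begin{verbatim}
# Minimal sketch of the two shrinkage (proximal) operators.
import numpy as np

def shrink_l1(u, lam):
    """Component-wise soft-thresholding (standard sparse coding)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def shrink_group(u, groups, lam):
    """Group soft-thresholding: each group g is scaled by
    (1 - lam / ||u_g||_2)_+, zeroing weak groups entirely."""
    out = np.zeros_like(u)
    for g in groups:  # g: integer index array of one group
        norm = np.linalg.norm(u[g])
        if norm > lam:
            out[g] = u[g] * (1.0 - lam / norm)
    return out
\end{verbatim}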
Note that the nonlinear transfer function used in the feed-forward network is different from the transfer function used for estimating the entropy $H(\mathbf{Z}_{k})$, where the transfer function is the CDF of the feature map. \paragraph{CUDA implementation of sparse coding} Training a sparse coding model on whole images instead of patches is much more demanding of computational resources. In our experiments, even the highly optimized sparse modeling package SPAMS \cite{marial_2010} requires several days to converge. Since GPGPU has become a common option for deep learning algorithms, we implemented a CUDA sparse coding solver based on the FISTA algorithm. As noted in \cite{gregor-2010}, coordinate descent (CD) may be the fastest algorithm for sparse inference; this is true for CPU implementations. However, the CD algorithm does not lend itself to parallel implementation due to its sequential nature. Our CUDA implementation of online sparse coding based on FISTA speeds up the process by a factor of $7\!\sim\!10$ over SPAMS. \section{Experiments} \label{section:experiments} \begin{table}[t] \caption{Classification accuracies of baseline convolutional neural network.} \label{baseline} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcr} \hline \abovespace\belowspace ID & Configuration & Test accuracy \\ \hline \abovespace cudaconv & Adaptive LR & $81.0\%$ \\ standard & Fixed LR & $77.5\%$ \\ nodropout & - dropout & $75.7\%$\\ nopadding & - padding & $73.3\%$ \\ nonorm & - normalization & $73.1\%$\\ nopooling & - pooling & $65.4\%$\\ 2conv & nonorm with 2 conv & $72.5\%$\\ 1conv & nonorm with 1 conv & $60.4\%$\\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} \subsection{Baseline and experimental protocols} All the experiments are carried out on the well-known CIFAR-10 data set. It contains 10 classes, with 50000 training and 10000 testing $32 \times 32$ color images. As a baseline, we reproduce the cuda-convnet experiments on CIFAR-10 \cite{krizhevsky_2012}. In our implementation, we do not use data augmentation techniques such as image translation and horizontal reflection, except that the mean image is subtracted from all the images, as in the cuda-convnet setting. With an adaptive learning rate, we obtain an $81\%$ test accuracy with a single model. Since we are focusing on network structure, such as inter-layer connection and depth, we evaluate the contribution of techniques such as the adaptive learning rate, dropout, padding, and local normalization to the baseline system. These results are reported in Table \ref{baseline}. Note that in the table, the configuration in each line is modified from the line above unless otherwise stated. Therefore, NONORM is a setting without the adaptive learning rate, dropout, padding, or local normalization; this setting reflects the contribution of the network architecture alone. If we further remove the pooling layer from NONORM, the performance drops to $65.4\%$, which implies that pooling plays a significant role in this task. Removing the $3$rd convolutional layer from NONORM, the performance drops slightly to $72.5\%$, which implies that the $3$rd convolutional layer contributes marginally. Further removing the $2$nd convolutional layer from 2CONV, the accuracy drops dramatically to $60.4\%$, which indicates that two convolutional layers are essential. Unlike the input to the convolutional network, the input images to sparse coding are all whitened.
Empirically, we find whitening is crucial for sparse coding to arrive at meaningful structure \footnote{This is consistent with the observation that whitening is essential for independent component analysis (ICA) \cite{hyvarinen_2001}.}. In both CNN and our algorithm, the learning rate is fixed to $0.001$. Unless otherwise stated, the network includes a 10-output softmax layer. The sparse coding dictionaries in all layers have 2048 dimensions, and the group size is 4. All the experiments are carried out on a Tesla K20 GPU. \subsection{Overall performance} \begin{table}[t] \caption{Classification accuracies of structure learning.} \label{structure} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcr} \hline \abovespace\belowspace ID & Configuration & Test accuracy \\ \hline \abovespace 1layer & 512 & $63.0\%$ \\ 2layer & 512/512 & $68.0\%$ \\ 3layer & 512/512/512 & $69.8\%$\\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} From Table \ref{structure}, we observe that the single-layer network with learned structure achieves a test accuracy of $63.0\%$, which outperforms the $60.4\%$ of the single-convolutional-layer setting in Table \ref{baseline}. The two-layer architecture achieves $68.0\%$, which outperforms the one-layer model but is inferior to the $72.5\%$ produced by the CNN with two convolutional layers. \subsection{Evaluating inter-layer connection density} \begin{table}[t] \caption{Evaluating the role of inter-layer connections.} \label{table:density} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccr} \hline \abovespace\belowspace Density & random & RBM & sparse coding\\ \hline \abovespace $0.1\%$ & $30.2\%$ & $23.2\%$ & $40.7\%$\\ $0.25\%$ & $31.6\%$ & $28.4\%$ & $40.1\%$\\ $0.5\%$ & $30.3\%$ & $31.3\%$ & $51.2\%$\\ $10\%$ & $48.2\%$ & $39.7\%$ & $57.2\%$\\ $30\%$ & $54.7\%$ & $39.6\%$ & $56.2\%$\\ $70\%$ & $56.2\%$ & $39.4\%$ & $56.6\%$\\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} To demonstrate the role of inter-layer connections, we compare three types of structures in a single-layer network: (1) randomly generated structures, (2) structures obtained by sparsifying restricted Boltzmann machines (RBM), and (3) structures learned by sparse coding. We define the connection density as the ratio between the number of active connections in the structure mask $\mathbf{M}_{k}$ and the number of full connections (see the sketch below). As shown in Table \ref{table:density}, we evaluate the three settings at several connection density levels. Basically, we observe that denser connections bring performance gains. However, the performance of random structures saturates even when only $30\%$ of the connections are kept. The structures generated by sparse coding outperform the random structures. Surprisingly, structures generated by RBM are even inferior to random structures. Among the learned structures, at the same density level, sparse coding always outperforms RBM. For sparse coding, keeping $10\%$ of the connections is sufficient. Theoretically, a fully connected network can emulate any sparsely connected one simply by setting the disconnected weights to zero. If a sparsely connected network is known to be optimal, ideally a fully connected network with appropriate weights can yield exactly the same behavior. However, the BP algorithm usually cannot converge to these optimal weights, due to local optima.
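To make the density levels in Table \ref{table:density} concrete, the following is a minimal sketch (function name ours) of deriving a structure mask of a target density from a learned dictionary by quantile thresholding, mirroring the thresholding rule of Section \ref{section:algorithm}.
\begin{verbatim}
# Minimal sketch: quantile-thresholded structure mask.
import numpy as np

def density_mask(D, keep=0.10):
    """Binary mask keeping the `keep` fraction of the
    largest-magnitude connections in dictionary D."""
    t = np.quantile(np.abs(D), 1.0 - keep)
    M = (np.abs(D) >= t).astype(D.dtype)
    return M, M.mean()  # mask and achieved density
\end{verbatim}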
\subsection{Evaluating the role of structure mask} \begin{table}[t] \caption{Evaluating the role of structure mask.} \label{table:mask} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcr} \hline \abovespace\belowspace Setting & One layer & Two layer\\ \hline \abovespace BP & $57.9\%$ & $50.2\%$ \\ Weight & $57.6\%$ & $57.4\%$ \\ Weight+BP & $63.7\%$ & $62.4\%$ \\ Mask+BP & $58.8\%$ & $58.1\%$ \\ Weight+Mask+BP & $64.0\%$ & $68.6\%$\\ \hline \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} To investigate the role of the learned structure mask, we carry out the following experiments: (1) randomly initialized BP; (2) initializing the network parameters with the pre-trained dictionary and using them without fine-tuning; (3) initializing the network parameters with the pre-trained dictionary and fine-tuning with BP; (4) restricting the network structure with the learned mask, randomly initializing the parameters, and fine-tuning with BP; (5) restricting the network structure with the mask, initializing the connection parameters with the pre-trained dictionary, and fine-tuning with BP. The results are reported in Table \ref{table:mask}. We observe that even using the pre-trained dictionary as a fixed feature extractor matches one-layer BP and significantly outperforms two-layer BP with random initialization. Fine-tuning with BP always brings performance gains. That Mask+BP outperforms BP indicates that the structure prior provided by sparse coding is very useful. Finally, the strategy combining Weight+Mask+BP outperforms all the others. \subsection{Evaluating network depth} Table \ref{structure} shows the performance of networks with different depths. We observe that adding more layers is helpful; however, the marginal performance gain diminishes as the depth increases. Interestingly, Figure \ref{entropy_depth} shows a similar behavior of the coding efficiency. This empirically justifies determining the depth by coding efficiency. \section{Conclusions} In this work, we have studied the problem of structure learning for DNNs. We have proposed to use the efficient coding principle for unsupervised structure learning and have designed effective structure learning algorithms based on global sparse coding, which achieve performance close to that of the best human-designed structure (i.e., convolutional neural networks). In future work, we will investigate the following aspects. First, we have empirically shown that redundancy reduction is positively correlated with accuracy improvement; we will explore the theoretical connections between the principle of redundancy reduction and the performance of DNNs. Second, we will extend and apply the proposed algorithms to other applications, including speech recognition and natural language processing. Third, we will study the structure learning problem in the supervised setting.
\section{Introduction \& Motivation} The near-ultraviolet (NUV) spectral range ($\lambda=2000-4000$ \AA) in stellar flares is critical for understanding the underlying physics of the impulsive release of magnetic energy in flares, the explosive hydrodynamic response of the stellar atmosphere, and the effects on ozone chemistry and surface biology of (potentially) habitable worlds around M dwarfs. However, the spectral characteristics of the NUV wavelength range are a critical missing piece in the observational picture of stellar flares. Stellar flares are often observed in the NUV with $U$-band ($3200-4000$ \AA) photometry, but spectral observations at $\lambda < 3500$ \AA\ are rare. While part of the NUV was sampled during the Great Flare of AD Leo \citep{HP91}, these observations only covered part of the gradual decay phase\footnote{The International Ultraviolet Explorer (IUE) LWP spectra covering $1900-3100$ \AA\ during the Great Flare of AD Leo started at 1200 s in Figure 1 of \citet{HP91}.}. Though nearly half of white-light flare radiation is emitted in the NUV where the continuum distribution is thought to peak \citep[][]{HP91}, there is a lack of time-resolved impulsive phase flare spectra in the NUV. \citet{Robinson1993, Robinson1995} present the only flux-calibrated spectrum (also from IUE) of a flare impulsive phase, though this spectrum has a 20-minute integration time and includes a significant amount of gradual decay phase radiation. With accurately flux-calibrated spectra, the flare continuum peak ($\lambda_{\rm{flare,peak}}$) in the NUV ostensibly provides a high-cadence characterization of the temperature evolution in the lower atmosphere through Wien's Law. The temperature evolution is difficult to constrain from optical spectra\footnote{In the optical, the systematic errors on color temperature measurements are as large as 2000 K for low-amplitude flares and in the gradual decay phase \citep{Kowalski2013}. } at $\lambda > 4000$ \AA\ where there is much lower contrast against the non-flaring red photosphere of M dwarfs. Time-resolved impulsive phase spectra in the NUV have much larger flare contrast and would robustly constrain $\lambda_{\rm{flare,peak}}$ and its time evolution without being affected by uncertainties from the subtraction of the pre-flare spectrum. Broadband photometry in the FUV, NUV, and the Johnson $U, B, V$, and $R$ filters during the Great Flare on AD Leo \citep{HP91} suggests that the continuum flux distribution (with specific flux density units of \AA$^{-1}$) exhibits a value of $\lambda_{\rm{flare,peak}}$ within the $U$ band and decreases toward FUV wavelengths \citep{HF92} like a $T\sim9000-10000$ K blackbody. Other moderate-amplitude flares on AD Leo also exhibit the general distribution of a $T\sim9000$ K blackbody \citep{Hawley2003}. However, a giant flare observed in the FUV and NUV GALEX bandpasses has suggested much higher temperatures, with $T>50,000$ K \citep{Robinson2005}. The FUV evolves faster than the NUV \citep{Hawley2003, Welsh2006}, which may indicate that a homogeneous flare source does not explain all of the FUV and NUV radiation \citep[e.g., see Appendix of ][]{K18, Kowalski2017B}. Thus, detailed spectra of the NUV will help distinguish between lines and continua and constrain the broadband spectral shape.
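As a rough guide to the blackbody temperatures quoted above (a simple application of Wien's displacement law to a blackbody specific flux density, not a fit to any particular flare), the wavelength of peak $F_{\lambda}$ is \begin{equation} \lambda_{\rm{peak}} \approx \frac{2.898\times10^{7}\ \mbox{\AA\ K}}{T}, \end{equation} so $T\sim9000-10000$ K gives $\lambda_{\rm{peak}}\approx2900-3200$ \AA, near the blue edge of the $U$ band, whereas $T>50,000$ K would place the peak below $\sim580$ \AA, far into the FUV/EUV.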
Furthermore, broadband photometry can result in degeneracies among emission models \citep{Allred2006}, so spectral characterization at high time-cadence using a combination of space-based and ground-based observations that cover $\lambda=2000-4800$ \AA\ is important for a comprehensive understanding of the heating at high column mass in stellar flares. Spectral observations from the ground in the $U$-band indicate that blackbody radiation alone does not explain white-light radiation. A Balmer continuum component is necessary to explain the jump in flux \citep{Kowalski2010}, but the jump is much smaller than expected from a source emitting hydrogen recombination at $T\sim10,000$ K over low continuum optical depth \citep[][hereafter, K13]{Kowalski2013}. The spectral shape in the impulsive phase at $\lambda=3400-3650$ \AA\ roughly exhibits the same color temperature of $\sim10,000-12,000$ K as inferred in the blue-optical ($4000-4800$ \AA) wavelength regime (K13). There are several spectral observations of flares below the atmospheric cutoff, but they are not readily comparable to models. The flux calibration accuracy of the echelle flare spectra of YZ CMi from HST/STIS \citep{Hawley2007} has not been assessed (STIS ISR 1998-18) for the purpose of constraining the continuum shape and comparing to radiative-hydrodynamic (RHD) flare model predictions. Moreover, no ground-based observations of the Balmer jump wavelength region ($3600-4000$ \AA) were available for these flares; the Balmer jump region and blue-optical wavelength region are essential for a correct interpretation of the NUV continuum spectrum as either optically-thick (hot blackbody-like in the optical) or optically thin Balmer continuum radiation (cool blackbody-like in the optical). \citet{Wargelin2017} present Swift UV grism spectra during a flare on Proxima Centauri, but the data have not been flux-calibrated, and the contribution from second-order light has not been assessed for this purpose \citep{Kuin2015}. A blackbody-like spectrum with a color temperature of $T \sim10,000$ K in the impulsive phase of some dMe flares means that there is significant heating at high column mass \citep[$m\gtrsim 0.01$ g cm$^{-2}$;][]{K18}, which cannot be reproduced in RHD simulations with low-to-moderately-high-flux electron beam energy deposition rates \citep{Allred2006}. RHD models with very high beam energy deposition rates \citep{Kowalski2015} suggest that the broadband appearance of a hot blackbody can be explained by hydrogen recombination emissivity that escapes over regions of the atmosphere at $T\sim10,000$ K with wavelength-dependent continuum optical depth $\tau_{\lambda, \rm{continuum}}$ between $\sim0.4 - 5$. These models also predict evolution of $\lambda_{\rm{flare,peak}}$ due to the evolving continuum optical depths in a chromospheric condensation that cools from $T\gtrsim50,000$ K to $T\sim10,000-12,000$ K over several seconds \citep[][hereafter, K16]{Kowalski2015, Kowalski2016}. Thus, the location of $\lambda_{\rm{flare,peak}}$ may be more sensitive to continuum optical depth than to flare temperature. Most models with electron beam heating produce 10,000 K material, but 10,000 K material with large continuum optical depth requires extreme heating scenarios \citep[see also][]{Cram1982, Houdebine1992,Christian2003} that challenge how the standard solar flare paradigm applies to dMe flares, which can be much more energetic.
Accurately flux-calibrated measurements of $\lambda_{\rm{peak}}$ in low-to-moderate-energy dMe flares would provide unambiguous comparisons between the heating of high column mass in solar and dMe flares; however, these observations are the most difficult to obtain due to the lack of accessibility to the $\lambda \sim 2000-3500$ \AA\ wavelength region except with the largest ground-based telescopes or from space. K13 classified a sample of dMe flares with NUV and optical spectra ($\lambda=3420 - 7000$ \AA) and $U$-band photometry according to the impulsiveness, which is the $U$-band peak flux enhancement (minus 1) divided by the full-width-at-half-maximum ($t_{1/2}$; in minutes) of the light curve. The impulsive flare (IF) events are those with fast evolution to a bright peak amplitude, whereas the gradual flare (GF) events are those that exhibited longer rise times to the peaks, which sometimes result from a superposition of several relatively low-amplitude fast flares (e.g., the GF1 event in K13) and/or a gradually rising continuum flux (e.g., the GF1 event in K13; the GF2 event in K16). The IF-type events are distinguished by their small Balmer jump ratios and small ratios of the H$\gamma$ line-integrated flux divided by the 4170 \AA\ (blue) continuum flux. GF-type events exhibit larger Balmer jump ratios and H$\gamma$-to-blue continuum ratios. Between the IF and GF events are events with intermediate values of the Balmer jump ratio and line-to-continuum ratio: these are the ``hybrid'' flare (HF) events, which also show striking evidence (cf. Figure 8 of K13) for hot ($9000-12,000$ K) blackbody-like radiation at $\lambda>4000$ \AA\ like most IF-type events in their spectroscopic sample\footnote{K16 analyzed another large sample of dMe flares with narrowband continuum photometry and found that some IF-type events, which make up most of the events in a sample that is limited by signal-to-noise at flare peak, exhibit larger Balmer jump ratios.}, but the amplitudes are rather low and require further confirmation for the presence of this spectral phenomenon. The physical underpinning of the IF/HF/GF type classification (which, like spectral types, is a continuous classification scheme) of dMe flares and the relationship between the continuum color temperature and line-to-continuum ratios have not yet been explained, but NUV spectra at $\lambda<3500$ \AA\ for each type of flare would help establish which types show the dominant $T \sim 10^4$ K component that is so difficult to reproduce in RHD flare simulations. In this paper, we describe the first time-resolved, accurately flux-calibrated flare spectra during two HF/GF-type dMe flare events in the NUV using spectral data from the Hubble Space Telescope (HST)/Cosmic Origins Spectrograph (COS) with simultaneous spectra at the Balmer jump and optical photometry. This is the first paper in a series and focuses on the presentation of the data and a comparison to existing radiative-hydrodynamic flare models.
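Written out explicitly, the impulsiveness index of K13 is \begin{equation} \mathcal{I} \equiv \frac{I_f}{t_{1/2}}, \end{equation} where $I_f + 1$ is the peak $U$-band flux enhancement and $t_{1/2}$ is the full-width-at-half-maximum of the light curve in minutes (the symbol $\mathcal{I}$ is introduced here only for convenience). As a worked example with values derived in Section \ref{sec:photanalysis}, the HST-2 event has $I_f = 2.7$ and $t_{1/2} = 120$~s $= 2$~min, giving $\mathcal{I} \approx 1.4$.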
In Section \ref{sec:data}, we describe the data reduction of the ground-based and HST observations; in Section \ref{sec:photanalysis}, we describe the properties derived from the broadband flare light curves; in Section \ref{sec:bj}, we present the Balmer jump properties and optical flare spectra; in Section \ref{sec:discussion1}, we discuss the flare colors in the context of the events in K13 and K16; in Section \ref{sec:cosanalysis}, we present the HST/COS spectral characteristics and the time-evolution of the continuum and emission lines; in Section \ref{sec:combined}, we combine the HST data with the Balmer jump spectra to compare to radiative-hydrodynamic models in the literature; in Section \ref{sec:discussion2}, we discuss the implications for heating at high column mass; in Section \ref{sec:discussion3}, we discuss the significance of these new flare observations for photochemical modeling of planets in dM habitable zones; and in Section \ref{sec:conclusions}, we present a summary of what we have learned about dMe flares from this study. In Paper II of this series, we will present new radiative-hydrodynamic modeling avenues of heating at high column mass that have been motivated by our new NUV constraints on the continuum radiation. \section{Data} \label{sec:data} On 2014 Sep 01, we monitored the dM4e star GJ 1243 with nine ground-based telescopes and the HST/COS for eight orbits (HST GO 13323, Dataset LCD201010) over an elapsed time of 11 hr and 15 min. The datasets are summarized in the observing log in Table \ref{table:obslog}. We chose this relatively faint ($V=12.8$) dMe star because at that time conservative bright object limits were in place for COS and STIS that were not employed for previous studies of brighter flare stars (e.g., AD Leo, $V=9.5$). After Servicing Mission 4 there was increased concern about stochastically occurring large-amplitude stellar flares, which could cause detector shutdown and potentially harm the photon-counting detectors of STIS and COS. This necessitated a careful target selection for the proposed science. Since 2014, the bright limits for observing flare stars with HST have been set according to ISR STIS 2017-02 and ISR COS 2017-01. GJ 1243 has been monitored extensively by the \emph{Kepler} satellite, providing a robust white-light flare rate over eleven months of one-minute cadence data \citep{Hawley2014, Davenport2014, Silverberg2016}. Based on the robust flare rate from \emph{Kepler}, we were able to justify to the HST TAC that two moderate-sized flares and many smaller events were virtually guaranteed to occur in the 5.6 hours of monitoring with COS. In this paper, we describe the data and analysis of the two moderate-amplitude flares with $\Delta U \sim -1.5$ mag (at peak) that occurred towards the end of the target visibility in the second (LCD201ADQ) and sixth (LCD201DAQ) orbits of HST. \begin{deluxetable}{lccc} \rotate \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{Observation Log} \tablehead{ \colhead{Telescope} & \colhead{Instrument} & \colhead{UT} & \colhead{Wavelength range [\AA] / Filters}} \startdata HST & COS & 2014-08-31 23:14 - 2014-09-01 10:29 & 2444 - 2841 \\ ARC 3.5m at APO & DIS & 2:03 - 10:54 & 3400 - 7500 \\ 4.2m WHT at La Palma & ISIS & 2014-08-31 21:27 - 2014-09-01 4:05 & 3400 - 8000 \\ Keck 10m & LRIS & 5:06 - 10:20 & 3120 - 5390 \\ Harlan J.
Smith 2.7m at McDonald & VIRUS-P & 2:11 - 9:35 & 3400 - 6850 \\ \hline Otto Struve 2.1m at McDonald & 2-channel P45J photometer & 2:22 - 8:24 & $U$ \\ Aristarchos 2.3m at Helmos Observatory & RISE2 & 2014-08-31 22:05:37 - 2014-09-01 1:20 & $V+R$ \\ INT 2.5m at La Palma & WFC & 2014-08-31 21:54 - 2014-09-01 4:16 & Str\"omgren $u$ \\ 0.8m at McDonald & PFC & 5:24 - 7:59 & $BVRI$ \\ ARCSAT 0.5m at APO & Flarecam & 2:21 - 10:30 & SDSS $gri$ \\ \enddata \tablecomments{Unless noted, UT times correspond to times on 2014-09-01. } \end{deluxetable}\label{table:obslog} \subsection{Hubble Space Telescope/COS Spectra} \label{sec:cosdata} Existing flare literature describes $U$-band measurements as NUV, whereas space observatories reserve the term NUV for shorter wavelengths not visible from the ground. In this paper, we will use ``$U$ band'' to describe ground-based (photometry or spectral) measurements at wavelengths longer than the atmospheric cutoff and ``NUV" to describe space-based (HST/COS) measurements at wavelengths shorter than the atmospheric cutoff. We employed the G230L grating ($\lambda_{\rm{cen}}=2635$ \AA) with COS on HST, resulting in wavelength coverage in the NUV from $\lambda \sim 2444$ \AA\ to $\lambda \sim 2841$ \AA\ (the NUVB stripe) and a dispersion of 0.39 \AA\ pixel$^{-1}$ ($42 - 47$ km s$^{-1}$ pixel$^{-1}$). The target was acquired and centered in the Primary Science Aperture using the ACQ/PEAKXD and ACQ/PEAKD modes (with $\lambda_{\rm{cen}}=3360$ \AA\ for the acquisition). We used the standard Space Telescope Science Institute Python data extraction and reduction routines (\emph{splittag} and \emph{x1dcorr}). We extracted NUVB/TIME-TAG spectra at 5~s intervals and integrated over this wavelength range of COS/G230L for broadband light curve analysis in Section \ref{sec:photanalysis}. At $2444-2509$ \AA, there is vignetting, making this spectral range unsuitable for continuum shape characterization. We extracted TIME-TAG spectra for the $2510-2841$ \AA\ spectral range for the 60~s exposure times corresponding to the spectra from the William Herschel Telescope (WHT; Section \ref{sec:wht}) and from the Apache Point Observatory (APO; Section \ref{sec:apodata}) for detailed spectral analysis in Section \ref{sec:cosanalysis}. We employed the standard reference files for wavelength and flux calibration of the HST/COS data, but we used an aperture of $y_{\rm{cen}} \pm 15$ pixels without an aperture correction (the standard aperture size is $\pm 57$ pixels). This reduced aperture results in improved S/N for faint objects like M dwarfs in the NUV (ISR COS 2017-03). However, the smaller aperture decreases the HST/COS fluxes of GJ 1243 by $10-15$\% in non-flaring and flaring times (consistent with the findings of ISR COS 2017-03); this minor loss of light does not affect our calculation of the continuum flux ratios (Section \ref{sec:cos_peak_analy}) within the COS range by more than $\sim2$\%. Since the flare spectra exhibit a low count rate, the smaller aperture with slightly better S/N is important for identifying continuum regions that are free of faint emission lines and for calculating the line fluxes of the relatively faint Fe II emission lines. In Section \ref{sec:combined}, we apply a wavelength-independent aperture correction of 14\% to the flare spectra for comparison to the absolute flux calibration of the Balmer jump and optical spectra from the ground-based observatories.
The wavelength solution from the pipeline was manually adjusted by $+2.4$ \AA; this shift is consistent with the quoted accuracy of COS/G230L. We also discovered an unexplained jump in the wavelength solution by $\sim1$ pixel approximately halfway through each orbit; this was not obviously related to any stellar activity or other known instrumental effects\footnote{STScI help desk, private communication.}. Our results are not affected by this wavelength calibration uncertainty. The NUV light curve of GJ 1243 at 5~s time-binning is shown in Figure \ref{fig:hstlc}. Two moderate-sized flares occurred in the HST monitoring at the ends of the second and sixth orbits, and many smaller events also occurred. The flare peaking at 00:46 UT on 2014 Sep 01 (hereafter, HST-1) and the flare peaking at 7:07 UT on 2014 Sep 01 (hereafter, HST-2) are analyzed in detail with the coordinated ground-based observations from Mauna Kea (Keck I/LRIS), the Apache Point Observatory (ARC 3.5-m/DIS, ARCSAT 0.5-m/Flarecam), the McDonald Observatory (Harlan J. Smith 2.7-m/VIRUS-P, Otto Struve 2.1-m/photometer, 0.8-m/PFC), the Isaac Newton Group of Telescopes at La Palma (4.2-m WHT/ISIS, 2.5-m INT/WFC), and the Helmos Observatory (Aristarchos 2.3-m/RISE2). \clearpage \begin{figure} \includegraphics[scale=0.45]{allorbit_5s.eps} \caption{Light curve of GJ 1243 for eight orbits of the HST/COS, showing the G230L specific flux density integrated over wavelength from $2444-2841$ \AA\ in 5 second time bins. Two moderate-sized flares are indicated as HST-1 and HST-2 and are discussed throughout the text. The target visibility in each orbit has a duration of 2800~s, and the spectral observations in the first orbit are shorter due to the target acquisition. } \label{fig:hstlc} \end{figure} \clearpage \subsection{Spectra from the ARC 3.5m at the APO} \label{sec:apodata} We employed a 5\arcsec\ slit oriented at the parallactic angle for observations with the Dual-Imaging Spectrograph (DIS) on the ARC 3.5-m at APO. The spectral resolution, measured\footnote{Through a wide slit that is much larger than the seeing, the actual spectral resolution from a point source can be significantly higher than indicated by the arc lamp lines.} from the quiescent hydrogen Balmer $\delta$ and $\alpha$ lines, is $R\sim590$ in the blue and $R\sim780$ in the red. The exposure times are 60~s, and the spectrophotometric standard stars Feige 110 and PG 1708$+$602 were used to flux-calibrate the data. The data reduction and flux calibration were performed using IRAF, and the specific procedures applied for low-resolution flare spectroscopy are described in K13 and K16. The quiescent spectra were scaled to the $B$-band mag (14.47) and the $V$-band mag (12.83) of GJ 1243 from \citet{Reid2004}, using the bandpass zeropoints from \citet{Bessel2013}. From all ground-based spectra (and also for the spectra in Section \ref{sec:wht}), we calculate the quantities from K13 and K16 from the flare-only (pre-flare subtracted excess) specific flux density at Earth ([erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$]), which is hereafter just denoted as ``flare-only flux'' with a prime symbol (e.g., $F_{\lambda}^{\prime}$, following \citet{Kowalski2012SoPh}; note that prime symbols were not used in the notation of K13 or K16).
Specifically, the calculated quantities are the following: the average flare-only flux in spectral windows that contain mostly continuum radiation (C3615$^{\prime}$, C4170$^{\prime}$, C6010$^{\prime}$), the color temperature at blue optical wavelengths ($\lambda=4000-4800$ \AA; $T_{\rm{BB}}$), the Balmer jump ratio (C3615$^{\prime}$/C4170$^{\prime}$), the blue-to-red continuum flux ratio (C4170$^{\prime}$/C6010$^{\prime}$), the emission line fluxes (continuum subtracted) integrated over the line wavelengths\footnote{Corresponding to the 1.5\arcsec\ slit line windows in Table 3 of K13.}, and the line-to-continuum ratio of the H$\gamma$ line-integrated flux to C4170$^{\prime}$\ (H$\gamma$/C4170$^{\prime}$). The HST-2 event occurred during these spectral observations. \subsection{Spectra from the WHT 4.2m} \label{sec:wht} Data were obtained in the red and blue arms of the Intermediate dispersion Spectrograph and Imaging System (ISIS) at the WHT with gratings R158B and R316R. The seeing varied from 1.1-1.8\arcsec\ through the night, and a 2\arcsec\ slit was employed. The slit was repositioned to the parallactic angle several times throughout the night (between HST orbital visibility periods). Exposure times were 10~s for the red and 60~s for the blue, with a readout of about 6~s. Standard IRAF procedures were used to reduce the data. Observations of GJ 1243 were obtained from 2014-08-31 21:27 UT to 2014-09-01 4:05 UT; the HST-1 event occurred at 00:46 UT through an airmass of sec $z = 1.2$, where $z$ is the zenith angle. The wavelength solution was obtained with a CuNe$+$CuAr lamp at the beginning of the night using a 0.7\arcsec\ slit. The dispersion is 3.25 \AA\ in the blue and 1.8 \AA\ in the red. The spectral resolution (measured from the quiescent-spectrum Balmer $\delta$ and $\alpha$ emission lines) is $R\sim480$ in the blue and $R\sim1340$ in the red. We used the spectrophotometric standard star PG 1708$+$602, observed at an airmass of 1.2 at 2014-Aug-31 20:32 UT, to flux-calibrate the data. Several more observations of standard stars were obtained through the night to assess the residual extinction compared to the standard extinction curve from King (1985); large flux variations were found from star to star at $\lambda \gtrsim 8000$ \AA. In the blue, the usable wavelength range is $3400 - 5320$ \AA, and in the red the usable range is $5500 - 8000$ \AA. \subsubsection{Calculation of flare-only flux} We use the spectrum scaling algorithm of K13 and K16 to correct for variable slit loss due to seeing changes through the night, and we subtract a pre-flare spectrum to obtain the flare-only flux. The pre-flare spectrum was calculated as an average of 11 blue spectra before HST-1, and the scaling factor was determined using increments of 0.005 relative to this pre-flare spectrum (see K16). Because the exposure times for the blue and red spectra differed significantly, we averaged (four) red spectra to the times of each blue exposure; the scaling obtained from the averaged red spectra was applied to each blue spectrum. We verified that a continuum flux ratio ($F_{6010}/F_{6800}$) change occurs in the spectra between 00:46:11 and 00:46:41 (corresponding to the peak of HST-1) before applying the scale factor and subtracting the pre-flare. We also find that the equivalent width of H$\alpha$ closely follows its line-integrated flare-only flux. These checks imply that the scaling algorithm is robust.
We also checked that the red continuum flare-only flux was greater than zero during the flare and negligible before the flare. In Section \ref{sec:bj}, we show that the broadband increase from the red spectra during the peak of HST-1 is consistent with the data in the $\sim V+R$ photometric band from the Helmos Observatory (Section \ref{sec:aristarchos}). \clearpage \subsection{Low-Resolution Spectroscopy from Keck 10m / LRIS} \label{sec:keckdata} We obtained low-resolution spectroscopy from 3120 \AA\ to 5390 \AA\ with the 10m Keck/LRIS for one half night. Inspired by the \citet{Herczeg2008} LRIS spectra of T Tauri stars, we aimed to measure the Balmer continuum shape down to the atmospheric cutoff during a flare. Combined with the HST/COS data, the slope from 3100-3600 \AA\ would facilitate constraining the peak of the continuum (we do this in Section \ref{sec:combined}). Three 30 second flats were obtained with a deuterium lamp, which is a bright ultraviolet continuum source, for the bluemost 300 pixels ($\lambda < 3645$ \AA). Halogen flats were used for the redder pixels. A master flat was made by combining these two flats. We employed the atmospheric dispersion corrector (ADC) optimized for $3200-7000$ \AA, and we oriented the 1.5\arcsec\ slit at the parallactic angle (perpendicular to the horizon). We binned 2x2 with low gain and fast readout, resulting in 37~s between consecutive exposures, with 45~s integration times. The seeing was 0.73 \arcsec, and the conditions were clear. The wavelength calibration was performed using a HgNeArZnCd lamp set, with Hg, Zn, and Cd providing the most lines in the NUV. We achieved 0.3 \AA\ residuals with a cubic spline (order$=2$) fit to 15 arc features. With the aid of the Keck/LRIS tool ARCPLOTS, the bluest feature that could be reliably identified is Cd 3261 \AA. At bluer wavelengths between 3120 \AA\ and 3260 \AA, we relied on an extrapolation of the (low order, cubic spline) wavelength solution. We obtained blue spectra with the 400/3400 grism, the 1.5\arcsec slit, the dichroic 560, and a dispersion of 2.1 \AA\ pixel$^{-1}$. The spectral resolution, measured from the quiescent H$\delta$ emission line of GJ 1243, is $R\sim440$. We obtained five exposures of the spectrophotometric standard star BD$+$28 4211 at three values of airmass from 1.02 to 1.54. The seeing was good enough to place the slit on the standard sdO star without contamination from the nearby fainter, redder star. Using the calibrations from \citet{Oke1990} and the standard airmass extinction curve for Mauna Kea, we applied a second-order extinction correction and flux-calibrated the spectra. At 5:22:40 UT (before all standard star and most GJ 1243 observations), the SP580 short-pass filter was inserted to block contamination in the blue from the red light of the star. An observation of one of the BD$+$28 4211 exposures through the SP580 filter is shown in Figure \ref{fig:calib}(a). The observation shows flux variations where there should be a continuum from the hot sdO star. We found that the variations are due to (undocumented) wiggle-like features in the transmission curve of the SP580 filter, which was not used for the flat fields taken on this night and thus could not be removed at an early stage of the data reduction. We scale the median of the CALSPEC observation of BD$+$28 4211 to the median of the Keck/LRIS observation and calculate a filter correction curve. We apply this correction to all Keck/LRIS observations of GJ 1243.
Example flare-only spectra before and after this correction are shown in Figure \ref{fig:calib}(b). The Keck Observatory obtained deuterium flats with the SP580 on 2015-02-13 for us, but with a different CCD binning than our observations. Additionally, the spectra became double-peaked due to focus problems during the main flare event. Due to the problems with the observations on 2014-09-01, we caution that the Keck calibration potentially exhibits some intractable, but relatively minor, inaccuracies. However, in Section \ref{sec:keck_lris}, we show the absolute flux calibration agrees well with the APO 3.5m spectra for overlapping wavelength coverage. As these Keck spectra are among the very few \citep{Fuhrmeister2008, Fuhrmeister2011} flare observations down to the atmospheric limit, they are used for our analysis (Section \ref{sec:combined}) to demonstrate the value of future spectral observations at wavelengths at the atmospheric cutoff near 3200 \AA. \begin{figure} \begin{center} \includegraphics[scale=0.35]{keckcorrected2018.eps} \includegraphics[scale=0.35]{keckcorrected2018_b.eps} \caption{\textbf{(a)} Keck/LRIS observation of the spectrophotometric standard star BD$+$28 4211 taken at similar airmass and time as the HST-2 flare during the GJ 1243 observations. The SP580 correction curve shows the values that are multiplied by the spectra of GJ 1243 to correct for the transmission of the SP580 filter. \textbf{(b)} The correction curve from panel (a) is multiplied by the HST-2 flare spectrum of GJ 1243 to obtain the corrected spectrum (\emph{red}). The correction eliminates the fluctuations but does not affect the broad-wavelength shape of the continuum at $\lambda<3600$ \AA. Vertical dashed lines indicate the bluemost wavelength (3120 \AA) that is calibrated using the spectrophotometric standard star flux.} \label{fig:calib} \end{center} \end{figure} The three spectra before the flare were averaged and scaled by the quiescent $B$-band magnitude of GJ 1243. At each time during the flare, the flare-only spectra were calculated using the algorithm in \citet{Kowalski2013} for spectra with only blue-wavelength coverage. We analyze the Keck spectra S\#116 and S\#117, which coincidentally cover the same times as S\#152-153 from APO. The value of C3615$^{\prime}$\ from this Keck flare spectrum is in remarkable agreement with the C3615$^{\prime}$\ over S\#152 and S\#153 from APO (Section \ref{sec:keck_lris}), suggesting that the absolute flare-only flux levels are robust. \subsection{$U$-band Photometry from the McDonald 2.1m} \label{sec:uband} $U$-band data were obtained with the Otto Struve 2.1-m Telescope and the 2-channel P45J photometer at the McDonald Observatory. Observations were obtained using a Johnson $U$ filter combined with a copper sulphate filter to eliminate the red leak otherwise present. This instrument provided a continuous sequence of photometric measurements without the time gaps necessary for CCD detector readouts. Integration times of 5 seconds were used. One channel was used to observe GJ 1243 through a 14.5\arcsec\ aperture, while a brighter, nearby comparison star was observed with the second channel through a smaller ($\sim$10\arcsec) aperture. This instrumental setup provided reliable differential photometry, even through light cloud. The main observational difficulty resulted from inaccurate telescope tracking at large hour angles, but the resulting effects were mitigated by monitoring the comparison star signal.
As the stars drifted, the comparison star reached the edge of its aperture first, and GJ 1243 was then recentered in its aperture. Recentering was performed just before the start of each HST orbit as a matter of routine. The sky signal was monitored frequently, typically at the start and end of each HST Earth occultation. On 2014 Sep 01, data were collected between 02:22 and 08:24 UT under a cloud-free sky, and several flares were recorded in the $U$-band light curve. The largest of these flares corresponded to the HST-2 flare observed by COS. \subsection{$U$-band Photometry from the Isaac Newton Telescope} \label{sec:intdata} Str\"omgren u-band (hereafter, denoted as $U$) data were obtained at La Palma with the 2.54-m Isaac Newton Telescope and the Wide Field Camera. Exposure times were 20 seconds, with 5.5 seconds of readout between exposures. A binning of 2x2 was used. The flat fields were obtained with a different windowing than the observations. Without flat-fielding, we found variations no larger than 5\% on short (10-minute) timescales and no larger than 10\% on long (hour) timescales in the relative photometry of two comparison stars. The relative photometry on shorter timescales during quiescent times of GJ 1243 (just before the HST-1 flare) varies by only $\sim2.5$\%. Thus we are confident that flares are robustly detected and characterized in these data. The weather was mostly clear with high and decreasing humidity throughout the night. We used standard IRAF procedures to bias-subtract and perform aperture photometry with an aperture radius of 5 pixels, which is a factor of 1.3 larger than the FWHM of the target PSF at the time of the flare. \subsection{Photometry from the Helmos Observatory/Aristarchos 2.3-m} \label{sec:aristarchos} GJ 1243 was observed on the night of 2014 Aug 31 with the RISE2 instrument \citep{rise2} on the 2.3-m Aristarchos telescope at Helmos Observatory in Greece. The 1024x1024 E2V CCD47-20 back-illuminated CCD has pixels that are 13 microns in size, providing a pixel scale of 0.51\arcsec\ and a field-of-view of 8.7\arcmin\ x 8.7\arcmin\ \citep{rise2}. The exposure time was set to 5 s while the $V+R$ filter was used. Aperture photometry was extracted from GJ 1243 and the 4 brightest comparison stars available in the field, using a 7 pixel or 3.6\arcsec\ radius with the IRAF apphot package and a 5 pixel wide sky annulus 10 pixels from the source. \subsection{Additional Photometry and Spectroscopy} \label{sec:others} We obtained three additional datasets that are not discussed in the analysis in this paper but are available upon request to the first author: $BVRI$ photometry from 2014-Sep-01 5:24 - 7:59 UT with the Prime Focus Corrector on the 0.8-m at the McDonald Observatory (exposure times were 30~s, 6~s, 2~s, 1~s for $BVRI$, respectively); low-resolution optical spectroscopy with the VIRUS-P spectrograph on the Harlan J. Smith 2.7-m telescope from 2014-Sep-01 2:11 - 9:35 UT at the McDonald Observatory (exposure times were 100~s); SDSS $gri$ photometry from 2014-Sep-01 2:21 - 10:30 UT with Flarecam on the ARCSAT 0.5-m telescope at the Apache Point Observatory (exposure times were 30~s, 15~s, and 15~s for $gri$, respectively). \section{Flare Light Curve Analysis} \label{sec:photanalysis} A large fraction of the durations of the HST-1 and HST-2 flares was simultaneously observed from the ground-based telescopes and the HST.
We use the high-cadence $U$-band light curves of the flares to characterize them and place them in context of previously observed dMe flares. To calculate flare energies in the $U$-band, we estimate the quiescent $U$-band flux ($2.9\times10^{-15}$ \cgs), using the $B$-band mag (14.47) and the $V$-band mag (12.83) of GJ 1243 from \citet{Reid2004} and the zeropoint flux density values from \citet{Bessel2013}. We follow \citet{Hawley2014} and assume the $U-B=0.93$ color of YZ CMi, since there is no published value of the $U$-band magnitude for GJ 1243. From our quiescent spectra (Section \ref{sec:keck_lris}), we estimate a $U$-magnitude of $\sim15.5$. However, a precise conversion from spectrophotometry to $U$-band photometry is generally not possible due to uncertainties in the blue response of the telescope, instrument, and atmosphere that factor into any system's total $U$-band response function. The spectrophotometric $U$-band magnitude of GJ 1243 results in $\sim10$\% lower inferred flare energies, which is not critical for our analysis. Using the $U-B$ color of YZ CMi facilitates a direct comparison to the flare energies in \citet{Hawley2014}, and we note that the NUV is also rather variable in quiescence (see Section \ref{sec:combined}). We performed our analysis using a distance of 12.05 pc \citep{Harrington1980}. Subsequently, Gaia DR2 \citep{Gaia, GaiaDR2} published a parallax distance of 11.9787 $\pm$ 0.0052 pc. This 0.5\% difference in distance (1\% difference in flare energy) is negligible for our analysis. From the broadband $U$-band and NUV light curves, we calculate flare energies, flare peak flux enhancements ($I_f+1$), the FWHM of the light curves ($t_{1/2}$), and the impulsiveness indices ($I_f/t_{1/2}$) from K13. Some properties of the NUV light curves are given in Table \ref{table:hstdata}. \clearpage \begin{itemize} \item HST-1: Figure \ref{fig:lcfigs}(a) shows the NUV light curve for the HST-1 flare at 5~s intervals. This flare exhibits two to three fast-rise events in the impulsive phase, which produce a maximum NUV flux enhancement of $I_f+1 = 9.5$. The 60~s integration times of the WHT spectra and the NUV light curves extracted from these times are shown on this figure. We focus our WHT spectral analysis on S\#163, which corresponds to the times covering the peak, initial fast decay, and initial gradual decay phases. The $U$-band energy of this event is $4.4\times10^{31}$ erg, with 35\% of the energy radiated in the impulsive phase. The total flare duration (including several smaller $U$-band events in the gradual decay) is $\sim1800$~s. The peak $U$-band flux enhancement is $I_f+1\sim 4.3$ ($\Delta U \sim -1.6$ mag, at $t_{\rm{exp}}=20$~s) and $t_{1/2}=135$~s, giving a $U$-band impulsiveness index of $\sim1.5$. \begin{figure} \includegraphics[scale=0.45]{hst1_ref.eps} \includegraphics[scale=0.45]{hst2_ref_nuv.eps} \caption{\textbf{(a)} Wavelength-integrated (2444-2841 \AA) NUV light curve of the HST-1 flare event over 16 minutes of HST observations binned by 5~s. At 00:56 UT, GJ 1243 was occulted by the Earth. The spectral integration times of the ISIS spectra from the WHT 4.2-m are indicated as vertical lines (exposure start by vertical dashed lines, exposure end by vertical dotted lines). The sequential numbering of the spectra along the bottom (S\#) is used throughout the text. \textbf{(b)} Wavelength-integrated NUV light curve of the HST-2 flare event at 5~s time-binning over 16 minutes of COS observations.
The spectral integration times of the DIS spectra from the ARC 3.5-m are indicated as vertical lines (exposure start by vertical dashed lines, exposure end by vertical dotted lines). The sequential numbering of the spectra along the top (S\#) is used throughout the text. Arrows indicate the times over which Keck/LRIS spectra S\#116 and S\#117 are averaged and analyzed in Section \ref{sec:combined}. Note that the pre-flare flux is 70\% of the pre-flare flux in panel (a), but both flares are normalized to the same quiescent wavelength-integrated NUV flux value ($3.54\times10^{-13}$ erg cm$^{-2}$ s$^{-1}$) obtained just before HST-1 (the varying level of quiescent flux can be seen in Figure \ref{fig:hstlc}). Since the two light curves are normalized to the same quantity, the specific luminosities of the flares can be directly compared. Both panels show the wavelength-integrated ($\lambda=2510-2841$ \AA) NUV light curves as \emph{cyan asterisks} extracted at the exposure times of the respective ground-based spectra. } \label{fig:lcfigs} \end{figure} \item HST-2: Figure \ref{fig:lcfigs}(b) shows the NUV light curve for the HST-2 flare at 5~s intervals, to be compared directly to HST-1 in panel (a). This flare exhibits a peak NUV flux enhancement of $I_f+1 = 8$. This flare also has several fast events in the impulsive phase, though these fast events have different relative amplitudes compared to those in the impulsive phase of HST-1. The 60~s integration times of the APO spectra and the NUV light curves extracted from these times are shown on this figure. We focus our APO spectral analysis on the average of S\#152 and S\#153, which correspond to the fast rise, peak, fast decay, and initial gradual decay phases. The 5~s cadence $U$-band light curve covering HST-2 is shown in Figure \ref{fig:uband}. Note that eleven other flares with amplitudes $I_f+1<2$ occur over the 5.73 hours of these $U$-band observations. As seen in the NUV light curve (Figure \ref{fig:lcfigs}(b)), the HST-2 event exhibits two $\sim20$~s episodes of fast rise over a total impulsive phase duration of 80~s, with a peak flux enhancement of $I_f+1=3.7$ ($\Delta U_{\rm{peak}} = -1.4$ mag) that is followed by a short fast decay phase and a much longer gradual decay phase that begins at 80\% of the flare maximum. The total flare duration is 600~s, but HST went behind the Earth before the flare was completely over, as for HST-1. The $U$-band equivalent duration \citep{Gershberg1972} of 450~s gives a $U$-band energy of 1.6$\times10^{31}$ erg, making it very similar in energy to the $U$-band flare on GJ 1243 reported in \citet{Hawley2014} and placing it in the middle of the energy distribution of flares on GJ 1243 \citep{Hawley2014}. The peak $U$-band luminosity is nearly $10^{29}$ erg s$^{-1}$. In contrast to HST-1, about 1/5 of the total energy is radiated in the impulsive phase. At 5~s cadence, the $t_{1/2}$ value is 120~s, which gives a $U$-band impulsiveness index of 1.4 (or 0.9 when the 5~s $U$-band light curve is binned to 20~s), similar to the small impulsiveness index of HST-1.
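For reference, these values follow from the equivalent-duration formalism: the flare energy is the equivalent duration multiplied by the quiescent $U$-band luminosity. With the quiescent flux $f_{\lambda,\rm{q}} = 2.9\times10^{-15}$ \cgs\ (Section \ref{sec:photanalysis}), $d = 12.05$ pc, and assuming an illustrative effective $U$ bandwidth of $\Delta\lambda_U \sim 700$ \AA\ (a precise synthesis would integrate over the \citet{Bessel2013} transmission curve),
\begin{displaymath}
L_{U,\rm{q}} \approx 4 \pi d^{2} \, f_{\lambda,\rm{q}} \, \Delta\lambda_{U} \approx 3.5\times10^{28}~\mbox{erg s}^{-1},
\end{displaymath}
so that $E_U \approx 450~\mbox{s} \times L_{U,\rm{q}} \approx 1.6\times10^{31}$ erg, and the peak $U$-band luminosity is $I_f \, L_{U,\rm{q}} \approx 2.7 \times 3.5\times10^{28}$ erg s$^{-1}$ $\approx 10^{29}$ erg s$^{-1}$, consistent with the values quoted above. The quoted impulsiveness indices are likewise recovered if $t_{1/2}$ is expressed in minutes in $I_f/t_{1/2}$.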
\end{itemize} \begin{deluxetable}{lcccccccccc} \rotate \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{NUV Flare Peak Properties} \tablehead{ \colhead{Flare} & \colhead{S\#} & \colhead{$I_f+1$ at peak*} & \colhead{$t_{1/2}$ (s)*} & \colhead{$t_{\rm{fast}}$ (s)*} & \colhead{$t_{\rm{imp}}$ (s)*} & \colhead{Fraction energy in continuum} & \colhead{Fe II / C2650$^{\prime}$} & \colhead{Mg II / C2650$^{\prime}$} & \colhead{Fe II / Mg II} & \colhead{C2650$^{\prime}$\ / C2820$^{\prime}$\ (err)} } \startdata HST-1 & 163 (WHT) & 9.5 & 65 & $<5-15$ & 95 & 0.62 & 20 & 46 & 0.43 & 0.94 (0.11) \\ HST-2 & 152-153 (APO) & 8 & $60-75$ & 15 & 60 & 0.65 & 18 & 39 & 0.46 & 0.88 (0.08) \\ \enddata \tablecomments{*From high-cadence ($t_{\rm{exp}}=5$~s) light curves (Figures \ref{fig:lcfigs}(a)-(b)). $t_{\rm{fast}}$ is the duration range for the times of fast rise in the light curves. $t_{\rm{imp}}$ is the duration of the impulsive phase including all bursts in the light curves. Fe II refers to the line-integrated flux of the four emission lines Fe II $\lambda2599.15$, $\lambda2600.17$, $\lambda2631.83$, and $\lambda2632.11$. Mg II refers to the line-integrated flux of Mg II $\lambda2791.6$, $\lambda2796.35$ ($k$), $\lambda2798.8$, and $\lambda2803.5$ ($h$). Prime symbols indicate flare-only specific flux values in continuum regions. } \end{deluxetable}\label{table:hstdata} According to the high-cadence $U$-band ($t_{\rm{exp}}=5-20$~s) impulsiveness index categorization from K13, the HST-1 and HST-2 flares are hybrid flare (HF) events; they fall between HF1 and HF2 in the sample of K13. HST-1 and HST-2 have similar peak flux enhancements and two to three episodes of fast rise in their respective impulsive phases. However, HST-1 is several times more energetic, exhibits a slower rate of gradual decay, and has a total duration three times longer. At a few $\times10^{31}$ erg in the $U$-band, the HST-1 and HST-2 events likely have bolometric white-light flare energies that are comparable to (or greater than) the total white-light energy produced in the largest X-class flares observed on the Sun \citep{Neidig1994, Woods2004, Kretzschmar2011, Osten2015}. \citet{Kosovichev2001} has shown that the famous 2000 July 14 Bastille Day solar flare has a double-peaked optical light curve, and each peak is attributed to a spatially distinct, but neighboring and magnetically connected, set of two ribbons. \citet{Qiu2010} has shown that this flare consists of two phases of magnetic field unshearing, rapid spreading of ribbons, and enhanced reconnection rate. Perhaps the two flares on GJ 1243 each consisted of two spatially adjacent, two-ribbon events that produced the multiple episodes of fast rise in the impulsive phases, as in the Bastille Day solar flare event. A more detailed comparison to solar flares will be presented in a future work. \begin{figure} \includegraphics[scale=0.3]{hst2_uband.eps} \includegraphics[scale=0.3]{hst2_ref_norm_5s.eps} \includegraphics[scale=0.3]{hst2_uband_vs_spec.eps} \caption{ \textbf{(a)} The $U$-band light curve of HST-2 at its original 5~s cadence, at a 60~s cadence, and as a 30-minute average. \textbf{(b)} Comparison of HST-2 in the $U$-band and NUV at the same high-time ($t_{\rm{exp}}=5$~s) cadence; the light curve of NUV from Figure \ref{fig:lcfigs}(b) is reproduced in \emph{blue circles}. The peak-normalized flux values have the pre-flare levels subtracted before dividing by peak values.
The NUV is slightly faster to reach peak and to reach the gradual decay phase than the $U$-band; thus the NUV has shorter values of $t_{1/2}$ (see Table \ref{table:hstdata}). \textbf{(c)} The peak-normalized $U$-band photometry and the $U$-band spectrophotometry (from the 3.5-m/DIS spectra) are in satisfactory agreement when binned to 60~s exposure times. The value of C3615$^{\prime}$\ from the 3.5-m/DIS spectra also follows the $U$-band.} \label{fig:uband} \end{figure} \subsection{The $U$-band \emph{vs.} NUV: similarities and differences} \label{sec:nuvdiff} The light curves of HST-1 and HST-2 in Figure \ref{fig:lcfigs} demonstrate the importance of high-time resolution with $t_{\rm{exp}}=5$~s or better during flares. Figure \ref{fig:uband} shows several more light curves of HST-2. In Figure \ref{fig:uband}(a), the $U$-band photometry from the Otto Struve Telescope (Section \ref{sec:uband}) is binned to the ground-based spectral integration times of 60~s (\emph{teal/cyan asterisks}), which follows the 60s-binned light curve of the wavelength-integrated flux in Figure \ref{fig:lcfigs}(b). Clearly, a 1-minute average obscures light curve detail and confuses the identification of the distinct phases of the flare: at the lower cadence, two points appear to constitute the impulsive phase, while what appears to be the fast decay phase actually corresponds to the gradual decay phase in the higher cadence data. In \emph{Kepler} 30-minute cadence data \citep[e.g.][]{Walkowicz2011, Shibayama2013, Davenport2016, Yang2017, VanD}, the flare would be a single point at the level indicated in the plot. From the lower-cadence ($t_{\rm{exp}}=60$~s) light curves, the HST-1 and HST-2 events are classified as hybrid flare (HF) events with impulsiveness indices of $0.6-0.8$ (K13). When we compare the high-cadence $U$-band data to the high-cadence NUV light curve of HST-2 (Figure \ref{fig:uband}(b)), we find that the NUV peaks slightly earlier and decays faster in the fast decay phase. Although the $U$-band data of HST-1 are lower cadence ($t_{\rm{exp}} = 20$~s) and were not reduced with a flat field (Section \ref{sec:intdata}), the peak-normalized NUV light curve decays slightly faster than the $U$-band light curve, as in the HST-2 event. The difference in timescales between the $U$-band and NUV regimes is surprising and should be investigated further with high-cadence observations and pursued with radiative-hydrodynamic flare modeling. We speculate that possible sources of the differences could be 1) relatively more Balmer continuum radiation in the $U$-band, which also has a contribution from high order Balmer lines, and/or 2) a moderately high and variable optical depth as a function of wavelength in the Balmer continuum from $\lambda=2600$ \AA\ to the Balmer series limit at 3646 \AA. In Figure \ref{fig:uband}(c) we compare the 60s-binned $U$-band observations to two quantities obtained from the ARC 3.5-m/DIS spectra (Section \ref{sec:apodata}), which are discussed in the next section (Section \ref{sec:bj}). \section{Balmer Jump Spectral Analysis} \label{sec:bj} The interpretation of the continuum radiation in the NUV range critically depends on the properties of the Balmer jump and optical continuum radiation constrained by the ground-based telescopes. In this section, we analyze the Balmer jump spectra covering the peaks and initial decay phases of HST-1 (S\#163 from the WHT in Figure \ref{fig:lcfigs}(a)) and of HST-2 (an average of S\#152-153 from the ARC 3.5-m in Figure \ref{fig:lcfigs}(b)).
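The flare colors used here and in the following sections, such as the Balmer jump ratio $\chi_{\rm{flare}}=$ C3615$^{\prime}$/C4170$^{\prime}$, are ratios of flare-only fluxes averaged over line-free continuum windows, with uncertainties propagated from the scatter within each window (K13). A minimal sketch of the calculation, where the window limits and array names are illustrative and the actual window definitions follow K13:
\begin{verbatim}
import numpy as np

def window_flux(wl, flux, lo, hi):
    """Mean flare-only flux and its standard deviation in a
    line-free continuum window [lo, hi] (Angstroms)."""
    m = (wl >= lo) & (wl <= hi)
    return flux[m].mean(), flux[m].std()

# Balmer jump ratio chi_flare = C3615'/C4170' with propagated error
c3615, s3615 = window_flux(wl, flare_only, 3600.0, 3630.0)
c4170, s4170 = window_flux(wl, flare_only, 4155.0, 4185.0)
chi = c3615 / c4170
chi_err = chi * np.hypot(s3615 / c3615, s4170 / c4170)
\end{verbatim}
Where noted (e.g., Section \ref{sec:combined}), the error of the weighted mean is used in place of the standard deviation.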
Spectral quantities described in K13 and K16 are calculated from these spectra and are given in Table \ref{table:data}. The peak flare-only spectrum of HST-1 (S\#163; Figure \ref{fig:lcfigs}(a)) is shown in Figure \ref{fig:fullsed} in the $U$-band and blue-optical ($\lambda=3400-5300$ \AA) and in the red optical ($\lambda=5500-8000$ \AA). To our knowledge, Figure \ref{fig:fullsed} is the first robust characterization through the full optical wavelength regime near the peak of a moderate-amplitude, HF event on a dMe star. Near the peak (S\#163), HST-1 exhibits a moderate Balmer jump ratio of $\chi_{\rm{flare}}=$ C3615$^{\prime}$/C4170$^{\prime}$ $= 3.6\pm0.3$. No significant spectral variation (except in total flux) occurs from S\#162 to S\#163. On the right axis of Figure \ref{fig:fullsed} we show the contrast ($\Delta$mag) of the flare-only continuum windows. From the flare spectrum, we estimate the magnitude change over the wavelengths corresponding to the bandpass of the Aristarchos photometry ($\sim V+R$ band), which is shown in Figure \ref{fig:aristarchos}. The agreement of $-0.045$ mag is remarkable\footnote{We note that the H$\alpha$ line flux accounts for only $\sim13$\% of the $\lambda=5600-7000$ \AA\ flare-only flux, which is an estimated $\Delta\rm{mag} \sim -0.005$ in the Aristarchos bandpass.}. With this verification of the flux calibration, we measure the blue-to-red flux ratio, C4170$^{\prime}$/C6010$^{\prime}$, to be 1.15$\pm0.24$ in the flare spectrum in Figure \ref{fig:fullsed}. The optical continuum is nearly flat, and it is fit by an (unweighted) line in continuum-only wavelength windows at $\lambda=4000-8000$ \AA\ in Figure \ref{fig:fullsed}. The value of C4170$^{\prime}$/C6010$^{\prime}$\ calculated from the best-fit line is 1.08 and is consistent with the continuum flux ratio calculated from the data. We also calculate the percentage of $4000-8000$ \AA\ energy under the line fit (\emph{dashed cyan line}) to be 80\% of the flare-only energy, and the $\lambda=3420-5200$ \AA\ HB percentage\footnote{The HB quantity is defined in K13. If we do not subtract a linear fit from the Balmer continuum wavelengths, the HB percentage is 60\%.} to be 48\%, which is similar to the impulsive/peak phase properties of other GF-type events in K13 (cf. their Figure 21). \begin{figure} \includegraphics[scale=0.50]{HST1_fullSED.eps} \caption{HST-1 flare spectrum over the peak and initial decay phases at $\lambda=3400-8000$ \AA. Four 10-second red spectra from the WHT are coadded over the time corresponding to the 60~s blue exposure S\#163. The \emph{red circles} indicate the magnitude changes (flare contrast; right axis) within continuum-only wavelength regions: the dot-dashed red line shows the average magnitude change in the spectrum over the filter of the Aristarchos photometry (Figure \ref{fig:aristarchos}). The flare-only continuum from $\lambda=4000 - 8000$ \AA\ is nearly flat and is fitted with a line (\emph{dashed cyan}). The non-flaring spectrum scaled by 0.25 is shown as a \emph{dotted line}. } \label{fig:fullsed} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.55]{aristarchos.eps} \caption{ Light curve of GJ 1243 from Aristarchos in $\sim V+R$ band. The \emph{cyan asterisk} shows the average photometry over the exposure time of S\#163 from the WHT. Note, the photometry does not return to the pre-flare level for the time-range shown.
} \end{center} \label{fig:aristarchos} \end{figure} In Figure \ref{fig:balmerjump} we show the APO/DIS spectra S\#152, S\#153, and the average of S\#152-153 at $\lambda > 3430$ \AA\ for HST-2; the figure showcases the Balmer jump properties of this flare and a hot ($T_{\rm{BB}}\sim9000$ K) blackbody continuum that is fitted to the blue-optical ($4000-4800$ \AA) wavelength regime of the average of S\#152 and S\#153. The observed spectra S\#152 and S\#153 include the rise, peak, and post-peak phases of the flare (Figure \ref{fig:lcfigs}(b)). Though high-time resolution is important, we coadd S\#152 and S\#153 in order to attain a robust estimate of a color temperature at blue-optical wavelengths, where the flare contrast against the molecular band pseudo-continuum in the pre-flare spectrum is low ($\le 20$\%). The Balmer jump ratio, $\chi_{\rm{flare}}$, is $3.8 \pm0.4$ in the coadded spectrum and does not vary significantly between the S\#152 (3.5$\pm0.4$) and S\#153 (4.2$\pm0.9$) spectra\footnote{Despite the issues with the calibration of the Keck/LRIS spectra, we calculate a similarly moderate Balmer jump ratio of $\sim3$ at the peak of HST-2.}; the uncertainties follow the calculation of flare color errors in K13.\footnote{If we use the error on the weighted mean (instead of the standard deviation of the flux within each continuum window as in K13) we obtain an uncertainty of $0.17$ and 0.24 on the Balmer jump ratios for S\#152 and S\#153, respectively, which suggests that there is significant intraflare variation with a larger Balmer jump ratio in the gradual decay phase. This is qualitatively consistent with the evolution of the Balmer jump ratios for other dMe events in K13. In Section \ref{sec:combined}, we calculate the flare color errors from these spectra as the error of the weighted means with a systematic flux calibration uncertainty of 5\% added in quadrature. } The Balmer lines are broad in this flare. Moreover, they are bright relative to the continuum, exhibiting a value of H$\gamma$/C4170$^{\prime}$\ of 160. The last identifiable Balmer line is H14 $\lambda3722$, while He I $\lambda3705$ and He I $\lambda4026$ are in emission, and Ca II K is characteristically fainter during the flare than the nearby Balmer lines (e.g., H8). In this event, the values of $\chi_{\rm{flare}}$, H$\gamma$/C4170$^{\prime}$, the HB percentage, and BaC3615/C3615$^{\prime}$\ are all very similar to the respective quantities in HST-1 in Figure \ref{fig:fullsed}. The F11 RHD continuum model spectrum from \citet{Kowalski2015} is shown and scaled to match the observed flare-only flux at $\lambda=3500-3640$ \AA. The overall shape at $\lambda < 3646$ \AA\ is well-represented by this RHD continuum spectrum, which results from hydrogen recombination radiation formed over low continuum optical depth. However, the observed Balmer jump ratio is not reproduced in the RHD model, which has a value of $\chi_{\rm{flare}}=11$. The RHD model is scaled to the lowest possible values in the observed spectra \citep[qualitatively like the fitting method of][]{Milligan2014} and is also shown (\emph{dotted red line}). The blending of the higher order Balmer lines and the dissolved level pseudo-continuum at $\lambda=3646-4000$ \AA\ will be addressed in Paper II using the modeling techniques in \citet{Kowalski2017B}. The evolution of C3615$^{\prime}$\ and of the $U$-band flux synthesized from the spectra using the transmission curve in \citet{Bessel2013} is shown in Figure \ref{fig:uband}(c), compared to the 60s-binned $U$-band light curve.
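A minimal sketch of this spectrophotometric synthesis, assuming the \citet{Bessel2013} $U$ transmission curve is available as wavelength and throughput arrays, and adopting the photon-weighted convention of \citet{Sirianni2005} that we also use in Section \ref{sec:combined} (function and variable names are illustrative):
\begin{verbatim}
import numpy as np

def synth_band_flux(wl, flux, wl_T, T):
    """Filter-weighted mean specific flux density,
    <f> = int(f_lam T lam dlam) / int(T lam dlam),
    i.e., photon-weighted as in Sirianni et al. (2005).
    wl, flux : spectrum wavelengths (A) and F_lambda
    wl_T, T  : filter wavelengths and transmission"""
    Ti = np.interp(wl, wl_T, T, left=0.0, right=0.0)
    return np.trapz(flux * Ti * wl, wl) / np.trapz(Ti * wl, wl)

# one synthesized U-band point per flare spectrum, e.g.:
# u_point = synth_band_flux(wl, flux, wl_U, T_U)
\end{verbatim}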
The synthesized $U$ band and the 60s-binned $U$-band photometry follow each other remarkably well, which demonstrates the robustness of our calculation of the flare-only flux for moderate amplitude flares (Section \ref{sec:apodata}). In this figure, the light curves are normalized to their peak values, and the spectrophotometric $U$-band peak value of $I_f+1$ is within 10\% of the 2.1-m/$U$-band light curve peak flux enhancement ($I_f+1 \sim 3$; Figure \ref{fig:uband}(a)) binned to 60~s. HST-2 exhibits a flux enhancement of $\sim$3 in the NUV at C3615$^{\prime}$, but there is only a flux enhancement of 1.2 in the blue continuum at C4170$^{\prime}$. This low flare contrast in the blue makes this flare a good case for combining with the constraints from shorter wavelengths to test RHD model predictions and the extrapolation of a $T=9000$ K blackbody (Section \ref{sec:combined}). \begin{figure} \begin{center} \includegraphics[scale=0.70]{APO_HST2_balmerjump_K15model.eps} \caption{Impulsive phase Balmer jump flare spectra of the HST-2 event, shown from $\lambda=3430-4200$ \AA. A blackbody with $T_{\rm{BB}}=9000$ K is shown as the \emph{purple} curve, which is fit to the \emph{black} flare spectrum; the F11 RHD model ($t=2.2$~s) is shown in \emph{red} scaled to the observation; the \emph{dotted red} line shows the RHD F11 model that is scaled by eye to the lowest troughs in the observed flux at $\lambda < 3600$ \AA. The \emph{dotted black} line is the pre-flare (background) spectrum of GJ 1243. The calculated quantities from the average of S\#152 and S\#153 (\emph{black} with \emph{grey} error bars) spectrum are given in Table \ref{table:data}. } \end{center} \label{fig:balmerjump} \end{figure} \section{Discussion I: The HST-1 and HST-2 flare events in the context of other \lowercase{d}M\lowercase{e} flares with Balmer jump spectra} \label{sec:discussion1} The spectral properties of the HST-1 and HST-2 events are similar to the characteristics of the GF-type events in K13, while the impulsiveness indices from the broadband photometry are similar to the HF-type events in K13. Hereafter, we refer to HST-1 and HST-2 as HF/GF-type events. Over the flare peak times of HST-1 and HST-2, the moderate Balmer jump ratios ($\gtrsim 3.5-4$), large H$\gamma$/C4170$^{\prime}$ ratios ($140-200$), relatively large fraction of total flare flux due to Balmer radiation (0.3), large values of BaC3615/C3615$^{\prime}$ (0.7), and small impulsiveness indices (1.5) from the $U$-band light curves (Section \ref{sec:photanalysis}) are consistent with properties of HF and GF-type events in K13. These values are given in Table \ref{table:data} along with the Balmer line decrements. Notably, the values of the H$\gamma$/C4170$^{\prime}$\ ratio are among the largest observed in GF-type dMe flares in K13; as noted for another HF/GF-type event on GJ 1243 in \citet{Silverberg2016}, this quantity is a useful proxy for the Balmer jump ratio. We extend the linear relationship from K13 (cf. their Figure 11 and Equation 5) for the Balmer jump and line-to-continuum ratios in Figure \ref{fig:hgamma}; the HST-1 and HST-2 flares establish a new regime among these quantities. \begin{figure} \begin{center} \includegraphics[scale=0.70]{Hg_c4170_data.eps} \caption{ Large Balmer jumps ($\chi_{\rm{flare, peak}}$) occur in flares with more prominent Balmer line radiation (relative to the blue continuum flux, C4170$^{\prime}$); the fit from K13 is shown as the \emph{dashed line}.
The values for HST-1, HST-2, IF4 (from K16), and IF11 (from K16) are consistent with this general trend, but HST-1 and HST-2 exhibit some of the largest impulsive phase values of these quantities among the flares in K13. The error bars are calculated as for the APO/DIS spectra of K13.} \end{center} \label{fig:hgamma} \end{figure} The HST-1 and HST-2 flare-only spectra exhibit moderate Balmer jumps similar to other HF and GF events in K13, placing them at one end of the empirical color-color sequence from K16. The Balmer jump ratios of HST-1 and HST-2 indicate that significant Balmer continuum radiation is present, although there is suggestive evidence for $T\sim 8500-9500$ K blackbody radiation in the blue-optical (Table \ref{table:data}, Figure \ref{fig:balmerjump}) at a low flux level in HST-2. However, the blue-to-red optical continuum distribution (C4170$^{\prime}$/C6010$^{\prime}$) is best fit by a temperature ($T_{\rm{FcolorR}}$, the blackbody color temperature that fits the ratio C4170$^{\prime}$/C6010$^{\prime}$; K16) that is lower by several thousand degrees. The lower color temperature over a wider spectral range may be due to a dominant, redder continuum component that has been referred to as the ``conundruum'' radiation in K13. In Figure \ref{fig:flarecolors}, we show the Balmer jump ratio vs. blue-to-red optical flux ratio covering the peak times of HST-1 (S\#163) and HST-2 (S\#152-153). The flare colors for HST-1 and HST-2 are statistically similar; averaging several spectra in the impulsive phase of HST-1 does not significantly change the location of this event in this color-color space. These events fall in the upper left region of the distribution for dMe flare peaks from K16, and their robust values (obtained from spectra) establish an unexpected new regime of continuum flux ratios that also includes two flares on YZ CMi (IF12 and GF1) observed with narrowband photometry from ULTRACAM (K16). The colors of blackbodies are also shown. The divergence of the actual flare emission from blackbody colors reflects the increasing prominence of the Balmer jump and Balmer lines; a similar argument has recently been applied to coarse measurements of the Balmer jump in solar flare data in \citet{Hao2017}. Many IF events and other HF events more clearly exhibit 10,000 K blackbody-like radiation at blue-optical wavelengths (cf. Figure 8 of K13). The spectra of the HST-1 and HST-2 events are significantly different from the two moderate-amplitude events IF4 and IF11 on YZ CMi from K16. The flare colors from the 60~s integration time ARC 3.5-m/DIS spectra of IF4 and IF11 from K16 are also indicated in Figure \ref{fig:flarecolors}. These events lie much closer to the blackbody line at the other end of the color-color distribution, with smaller Balmer jump ratios ($1.8-2.3$) and hotter blue-optical continua with optical color temperatures of $\sim9000-12,000$ K robustly constrained from spectra and from ULTRACAM photometry (see K16). The more impulsive events have strikingly smaller line-to-continuum ratios as well (Figure \ref{fig:hgamma}). Because IF4 and IF11 exhibit similar peak flux enhancements compared to HST-1 and HST-2, the amplitude of a flare in the $U$-band alone does not determine where a flare is located along the color-color sequence and thus whether a flare produces energetically dominant $T\sim10,000$ K blackbody-like radiation. \begin{figure} \begin{center} \includegraphics[scale=0.70]{Figure12_HST.eps} \caption{ Color-color diagram of flares integrated over the peak times.
\emph{Squares} show values obtained from low spectral resolution spectra. HST-1 and HST-2 establish a new regime that is not consistent with the F11 RHD models (Figure \ref{fig:balmerjump}), hot blackbody curves, the F13 RHD models from K16 (\emph{light gray stars}), or other dMe flares that are more impulsive in their broadband time evolution. Two similar-amplitude but more impulsive flares (with ARC 3.5-m/DIS spectra) on YZ CMi from K16 are shown here as \emph{purple squares}: IF4 at (2.01$\pm0.23$, 1.79$\pm0.12$) and IF11 at (2.11$\pm0.27$, 2.19$\pm$0.24). The values for HST-1 and HST-2 are given in Table \ref{table:data}. The error bars on the flare colors from APO/DIS spectra are calculated as in K13. For the flare-peak ULTRACAM colors, the \emph{black} error bars show only the statistical uncertainties (for comparisons of flares on the same star and same observing night; see K16) while the \emph{cyan} error bars include a 5\% systematic uncertainty in the flare color calibration (see K13, K16). The \emph{dark gray star} is the ``DG CVn superflare multithread (F13) model'' from \citet{Osten2016}; see text. } \end{center} \label{fig:flarecolors} \end{figure} \floattable \begin{deluxetable}{lcccccccccccccc} \rotate \tabletypesize{\tiny} \tablewidth{0pt} \tablecaption{The $U$-band and Optical Peak Flare Spectral Properties} \tablehead{ \colhead{Flare} & \colhead{S\#} & \colhead{UT} & \colhead{HST orbit time [s]} & \colhead{C3615$^{\prime}$/C4170$^{\prime}$\ (err)} & \colhead{C4170$^{\prime}$/C6010$^{\prime}$\ (err)} & \colhead{$T_{\rm{BB}}$ [K] (err)} & \colhead{$T_{\rm{FcolorR}}$} & \colhead{H$\gamma$/C4170$^{\prime}$} & \colhead{H$\gamma$/H$\delta$} & \colhead{H$\gamma$/H$\beta$} & \colhead{H$\gamma$/H$\alpha$} & \colhead{H11/H$\gamma$} & \colhead{BaC3615/C3615$^{\prime}$} & \colhead{HB frac} } \startdata HST-1 & 163 & 00:46:16 - 00:47:16 & $2219 - 2279$ & 3.6 (0.3, 0.2) & 1.15 (0.24, 0.07) & 7300 (300) & 6500 (700) & 191 (12, 5) & 1.1 & 0.8 & 1.0 & 0.15 & 0.72 & 0.48 \\ HST-2 & 152-153 & 07:06:38 - 07:08:50 & $2113 - 2246$ & 3.81 (0.42, 0.24) & 1.24 (0.22, 0.10) & 9000 (500) & 5800-7200 & 159 (20, 5) & 1 & 0.9 & 1.23 & 0.1-0.14 & 0.71 & 0.46 \\ \enddata \tablecomments{The UT times occur on 2014-09-01. See Figure \ref{fig:lcfigs}(a)-(b) for S\#\ numbering. When two errors are given in parentheses, the first error corresponds to the result of error propagation when $\sigma$ is the standard deviation of flux values; the second error in parentheses corresponds to the result of error propagation when $\sigma$ is the error of the weighted mean. The line-integrated flux of H11 was obtained by subtracting a constant determined as the flux between the H11 and H10 lines. Prime symbols indicate flare-only specific flux values in continuum regions. } \end{deluxetable}\label{table:data} \clearpage \section{HST/COS Spectral Analysis} \label{sec:cosanalysis} \subsection{Continuum and Line Identification} We extract an HST/COS spectrum averaged over each flare's duration to identify continuum wavelength regions and emission lines. Figure \ref{fig:reference}(a) shows master pre-flare and flare-only spectra for HST-1 over the NUV wavelength range $\lambda=2444-2841$ \AA\ (including the short-wavelength region affected by vignetting). The pre-flare spectrum exhibits emission lines because the strong NUV lines form in the stellar chromosphere, where temperature generally increases outwards.
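As described in the next paragraph, the flare-only flux in each continuum window is characterized by its average, standard deviation, weighted mean, and error of the weighted mean. A minimal sketch of the inverse-variance statistics, assuming per-pixel flux uncertainties are available from the COS pipeline (array and window names are illustrative):
\begin{verbatim}
import numpy as np

def window_stats(wl, flux, err, lo, hi):
    """Weighted mean and error of the weighted mean of the
    flare-only flux in a continuum window [lo, hi] (Angstroms)."""
    m = (wl >= lo) & (wl <= hi)
    w = 1.0 / err[m]**2                  # inverse-variance weights
    wmean = np.sum(w * flux[m]) / np.sum(w)
    wmean_err = np.sqrt(1.0 / np.sum(w))
    return wmean, wmean_err

# e.g., one of the two line-free windows entering C2650':
# f1, e1 = window_stats(wl, flare_only, err, 2635.0, 2665.0)
\end{verbatim}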
During the flare, the wavelength windows $\lambda=2635-2665$ \AA\ and $2668-2692$ \AA\ remain line-free and are indicated by grey rectangles; the average flare-only flux in these two windows is hereafter C2650$^{\prime}$. Three additional continuum windows that we use in the analysis are the following: $2553.19 - 2559.84$ \& $2569.59 - 2574.7$ \AA\ (C2555$^{\prime}$), $2772.5 - 2788.5$ \AA\ (C2780$^{\prime}$), and $2810.35 - 2834.92$ \AA\ (C2820$^{\prime}$). We calculate the average, standard deviation, weighted mean, and error in the weighted mean over each of these continuum windows. The flare produces bona-fide enhanced continuum flux in these windows, which are line-free above the noise (Figure \ref{fig:reference}(b)). Most of the wavelength range for the flare and quiescent spectra consists of Fe II emission lines, where vacuum rest wavelengths from \citet{Nave} are retrieved from NIST and are indicated by vertical \emph{grey dashed lines}; many of these identifications have upper energy levels of $E_{\rm{upper}}/hc \sim40,000$ cm$^{-1}$, where $E_{\rm{upper}}$ is the upper level energy above the ground state of Fe II. Models of Fe II will be presented in a future work to constrain flaring temperatures and densities implied by this value of $E_{\rm{upper}}$. We note that Fe II lines with $E_{\rm{upper}}/hc \sim 60,000$ cm$^{-1}$ are prominent in IRIS spectra \citep{DePontieu2014} of solar flares in the NUV and these lines (in LTE) are sensitive to temperatures around $T\sim 8,000 - 18,000$ K for a range of densities \citep[][Kowalski et al. 2018, in prep]{Kowalski2017A}. The brightest flare Fe II lines in the HST spectra are the $\lambda2599.15+\lambda2600.17$ blend (hereafter, Fe II $\lambda2600$) and the $\lambda2631.83+\lambda2632.11$ blend (hereafter, Fe II $\lambda2632$). We calculate the line-integrated, continuum-subtracted flux in these four Fe II lines and add them for a line-to-continuum ratio of Fe II / C2650$^{\prime}$, similar to H$\gamma$/C4170$^{\prime}$\ in the optical. The Mg II $h$ and $k$ lines are indicated by vertical \emph{dot-dashed lines} in panel (c). The Mg II triplet lines at 2791.6 \AA\ and 2798.8 \AA\ are also detected during and outside the flares and are indicated by \emph{dot-dashed lines} in panel (c). We sum the Mg II line emission for the ratio of Mg II / C2650$^{\prime}$. \begin{figure} \begin{center} \includegraphics[scale=0.70]{hst1_ref_spec.eps} \caption{ \textbf{(a)} HST-1 flare spectrum averaged over its duration, and a master pre-flare spectrum, are shown. Filled grey shaded areas indicate continuum regions, black arrows indicate the 100 \AA\ expanded view in panel (b), and gray arrows indicate the 100 \AA\ expanded view in panel (c). \textbf{(b)} Representative error bars are shown, and a continuum enhancement is evident just redward of the Fe II lines. Rest wavelengths of Fe II lines from NIST are indicated. \textbf{(c)} Dot-dashed lines indicate rest wavelengths of Mg II from NIST; the $k$ and $h$ lines are at 2796.35 \AA\ and 2803.53 \AA, respectively. } \end{center} \label{fig:reference} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{hst1_peakcos163.eps} \includegraphics[scale=0.5]{hst2_peakcos152_153.eps} \caption{ \textbf{(a)} HST/COS flare spectrum extracted over the time of S\#163 from the WHT ground-based spectrum. \textbf{(b)} HST/COS flare spectrum extracted over the time of S\#152-153 from the APO ground-based spectra. The best-fit line ($m\lambda+b$) is shown in \emph{cyan} and is fit to the continuum windows indicated in Figure \ref{fig:reference}(a).
} \end{center} \label{fig:hst_peak_spec} \end{figure} \subsection{Peak COS Spectral Analysis} \label{sec:cos_peak_analy} In Figure \ref{fig:hst_peak_spec}, we show the TIME-TAG spectra from HST/COS extracted over the times of S\#163 of HST-1 and S\#152-153 of HST-2, for which the optical and Balmer jump spectra were presented in Section \ref{sec:bj}. The signal-to-noise is clearly low: C2650$^{\prime}$/$\sigma_{\rm{C}2650} = 2.2$ in panel (a) if we use the standard deviation of the flux over this wavelength interval as $\sigma$. However, the error in the weighted mean over the C2650$^{\prime}$\ window gives a signal-to-noise of 16. The signal-to-noise is 24 using the weighted mean of the four continuum windows, C2555$^{\prime}$, C2650$^{\prime}$, C2780$^{\prime}$, and C2820$^{\prime}$. The slope of the continuum, obtained from a linear fit of the form $m\lambda+b$ to these four continuum windows, helps constrain the value of $\lambda_{\rm{peak}}$ and the relative line and continuum energy in the flare. These linear fits are nearly flat and are shown (\emph{cyan}) in Figure \ref{fig:hst_peak_spec} for HST-1 and HST-2. For HST-1, we integrate under this best-fit line to find that $\sim60$\% of the flare energy from $2510-2841$ \AA\ at this time is due to the fitted continuum level. The C2650$^{\prime}$/C2820$^{\prime}$\ NUV flare color is 0.94$\pm0.11$, indicating a rather flat continuum, albeit with nearly 10\% uncertainty. For HST-2, we integrate under the best-fit line to the continuum to find that 65\% of the flare energy from 2510-2841 \AA\ at this time is due to the continuum radiation. The C2650$^{\prime}$/C2820$^{\prime}$\ flare color value is 0.88$\pm0.08$ for HST-2, which is also statistically consistent with an approximately flat continuum. These HST/COS spectra include the peak times as well as significant decay phase radiation (Figure \ref{fig:lcfigs}). Therefore, we extract the TIME-TAG data of HST-1 over the rise phase, approximately corresponding to the times of S\#162 in Figure \ref{fig:lcfigs}. We confirm that there is no significant difference in the linear fit to the continuum windows in this spectrum, and the flare color is C2650$^{\prime}$/C2820$^{\prime}$$=0.79\pm0.08$. In Figures \ref{fig:hst_peak_spec}(a)-(b), the flare spectra qualitatively look like scaled quiescent spectra. However, scaling the quiescent spectrum to the value of C2650$^{\prime}$\ reveals that the ratio of line-to-continuum flux in the quiescent spectrum is clearly much larger than in the flare. This property is similar to optical flare spectra, in which many of the same very prominent lines are in emission as in quiescence, but the relative energy in the continuum greatly increases. Detailed modeling of HST flare continuum formation will be explored in a future work; in these models, the explosive continuum enhancement is a result of electron beam heating that increases the optical depths, ionization fractions, and Balmer continuum emissivity at large chromospheric column mass. Two more differences are notable in the flare spectra compared to quiescence. In the peak HST-1 and HST-2 spectra, the Fe II line complex at 2510-2565 \AA\ becomes stronger than in quiescence relative to the 2590-2635 \AA\ complex. This may be expected since the 2590-2635 \AA\ complex is more optically thick and thus may have a smaller relative increase in flux as the optical depths increase during the flare.
While this effect occurs in both HST-1 and HST-2, the ratio of the flux in Fe II $\lambda2632$ $+$ Fe II $\lambda2600$ to that in the Fe II $\lambda2626.5$ line (which shares the same upper state as the Fe II $\lambda2600$ line) differs: this ratio appears larger in HST-1 than in HST-2 and in quiescence. These effects should be investigated with higher signal-to-noise data. \clearpage \subsection{Time Evolution} In the optical, the emission lines decay more slowly than the continuum flux \citep{HP91}. We speculate that this well-established phenomenon may be due to a decreasing energy deposition rate into the lower atmosphere throughout the flare, thereby causing the optically thinnest transitions (e.g., continuum, high-$n$ hydrogen lines) to fade by a larger percentage (relative to peak) as the heating at high column mass becomes weaker in the decay phase. The HST-1 and HST-2 flare data allow us to characterize the Balmer continuum evolution at lower optical depth than at the Balmer series limit while also constraining the NUV Mg II and Fe II lines, which may evolve gradually like the Ca II K line, since calcium, iron, and magnesium have similarly low first ionization potentials of $6-8$ eV. In the gradual phase of the Great Flare of AD Leo, the Mg II lines evolved similarly to the Ca II K line \citep{HP91}. Differences in the time evolution would thus reveal properties of the optical depths in the same flaring plasma. In Section \ref{sec:nuvdiff}, we compared the broadband NUV to the $U$-band photometry evolution at high cadence. Here, we follow K13 and calculate the $t_{1/2}$ values and the peak-normalized light curves to compare the time-evolution of each emission line and continuum measure at lower time resolution, $t_{\rm{exp}} \sim60$~s. We extract the COS/G230L TIME-TAG data over the duration of each ground-based spectral exposure for HST-1 and HST-2 in Figure \ref{fig:lcfigs} and subtract the master pre-flare spectrum for each flare. In Figure \ref{fig:panels}, we show the time evolution of C2650$^{\prime}$\ compared to the ground-based spectral continuum evolution of C3615$^{\prime}$\ and C4170$^{\prime}$\ for HST-1 (panel a) and HST-2 (panel b). The weighted mean of the four HST continuum windows (``NUV ave cont'') is also shown; it closely follows C2650$^{\prime}$, as expected. The evolution of the continuum windows over the NUV is similar\footnote{The continuum light curves in the NUV in Figure \ref{fig:lcfigs} seem to show a faster decay than the C3615$^{\prime}$\ and C4170$^{\prime}$, but this difference is not significant when taking the unweighted mean of the NUV continuum windows.} to the evolution of C3615$^{\prime}$\ and C4170$^{\prime}$\ at low time resolution. Compared to the emission lines (panels (c)-(d)), it is clear that the NUV continuum windows exhibit a time-evolution closer to the Balmer continuum (C3615$^{\prime}$) and the blue-optical continuum (C4170$^{\prime}$). Figure \ref{fig:panels}(c) shows the peak-normalized, line-integrated flux evolution for HST-1 for several emission lines. The Mg II flux is similar to the H$\gamma$ flux in the decay, but it responds only weakly to the first impulsive heating event, like Ca II K. HST-1 exhibits the typical pattern in the decay among spectral quantities: from most elevated relative to peak to least elevated, the ordering is the following: Ca II K, H$\alpha$, and higher order Balmer lines (e.g., H$\gamma$), which are all slow compared to the relative decline of the continuum measures, similar to other flares in K13.
The Fe II lines in the HST spectra are slower to decline than the HST NUV continuum in HST-1. The time-evolution differences between the NUV continuum measures and Fe II in the HST spectra, and the similarity among the panchromatic continuum measures, suggest that the NUV continuum measures are not a misidentified blend of faint Fe II lines that are unresolved at the relatively low spectral resolution of these observations. In HST-2 (Figure \ref{fig:panels}(d)), Mg II has a slower rise and decay than all lines except for Ca II K. Here, the peak-normalized time-evolution of each Balmer line is similar, which has been noted for other HF events in K13; in IF events, the higher order hydrogen lines are faster to fade than the lower order hydrogen lines, such as H$\alpha$. We also plot the $t_{1/2}$ value of each spectral component against the wavelength of the transition (following K13), which is known as a time-decrement, for HST-1 and HST-2 in Figure \ref{fig:thalf}. HST-1 exhibits a time-decrement like some other IF-type events from K13, but the H$\alpha$ value is larger than a linear extrapolation from the higher order lines, as occurs in the GF-type time-decrement presented in K13 (the classification as ``hybrid'' is thus appropriate). HST-2 has a flat time-decrement like the HF-type event discussed in K13. The $t_{1/2}$ values of the C4170$^{\prime}$\ quantity are larger than many values calculated from similar temporal resolution data in K13 (\emph{grey diamonds}). From these data of HST-1 and HST-2, we conclude that Mg II does not rise as fast as the Balmer lines, and is more like Ca II K, while it decays like one of the lower-order Balmer lines, such as H$\alpha$ or H$\gamma$. Among all the emission lines in these events, the Ca II K line has the most gradual time-evolution. The Fe II lines have a rise like Mg II but a decay like that of H$\gamma$. The Fe II lines are not good proxies of the NUV continuum measures in these two flares; perhaps these relatively strong Fe II lines are too optically thick to originate from the same deep layers where continuum radiation can escape. The Fe II / C2650$^{\prime}$\ values covering the peak times of HST-1 and HST-2 are given in Table \ref{table:hstdata}. The time-evolution of the emission lines will be compared in detail to RHD models to test the hypothesis that the different decay rates are due to variation in the optical depths of the lines. Emission line profile changes and line shifts would also provide strong constraints on the radiative-hydrodynamic modeling \citep{Kuridze2015, Brown2018}. The HST spectra exhibit anomalous mid-orbit dispersion shifts by one pixel (Section \ref{sec:cosdata}), which prevent us from readily constraining the time-evolution of the line shifts using the techniques for EUV and NUV solar flare spectra \citep{Graham2015, Brown2016}. Recent observations of a long-duration flare on the dM3.5e star EV Lac showed striking blue wing enhancements in H$\alpha$ over several hours \citep{Honda2018}. The spectral resolution in the optical data of GJ 1243 is too low to robustly characterize line shifts and asymmetries, but we compare the H$\alpha$ equivalent width evolution of HST-1 (from the WHT/ISIS red spectra with 10~s exposure times) to this EV Lac flare. The HST-1 event achieves a peak equivalent width of 14 \AA\ in H$\alpha$ with a rise time of only two minutes, while the EV Lac flare has a much longer rise time ($\sim10-30$ minutes) to a peak equivalent width of $\le$11 \AA.
We speculate that these two flares may correspond to different regimes of explosive and gradual chromospheric heating rates \citep[e.g.,][]{Fisher1989}. \begin{figure} \begin{center} \includegraphics[scale=0.35]{hst1_Clc.eps} \includegraphics[scale=0.35]{hst1_lines.eps} \includegraphics[scale=0.35]{hst2_Clc.eps} \includegraphics[scale=0.35]{hst2_lines.eps} \caption{ Peak-normalized time-evolution of continuum windows (a)-(b) and emission lines (c)-(d) for HST-1 (shown over 35 minutes) and HST-2 (shown over 16 minutes). For both HST-1 and HST-2, the NUV continuum evolution is faster than the emission lines, most notably the Fe II lines. All continuum measures evolve similarly. } \end{center} \label{fig:panels} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.5]{hst1_thalf.eps} \includegraphics[scale=0.5]{hst2_thalf.eps} \caption{ The time-decrements for HST-1 and HST-2 calculated from the peak-normalized light curves in Figure \ref{fig:lcfigs}; the FWHM of the light curve is the $t_{1/2}$ value. Emission lines are indicated by \emph{triangles}, continuum measures C2650$^{\prime}$, C3615$^{\prime}$, and C4170$^{\prime}$\ as \emph{filled black circles}, and the broadband (NUV and $U$-band) measures at high-time ($t_{\rm{exp}}=5$~s) resolution by \emph{blue filled circles}. The continuum and broadband measures are the fastest quantities. The \emph{grey diamonds} are values of C4170$^{\prime}$\ from other flares (at similar temporal $\sim30-60$~s resolution) from K13. The apparent impulsive phases (Figure \ref{fig:lcfigs}, \emph{teal asterisks}) of HST-1 and HST-2 are relatively long-duration at this time-resolution due to several shorter periods of fast and gradually rising emission that are evident in the high-time ($t_{\rm{exp}}=5$~s) resolution light curves (Figures \ref{fig:lcfigs} - \ref{fig:uband}). } \end{center} \label{fig:thalf} \end{figure} \subsection{Flare Data at the $U$-band Atmospheric Cutoff (3120 - 3500 \AA)} \label{sec:keck_lris} C3200$^{\prime}$/C3615$^{\prime}$\ is a new flare color spectral index obtained from the Keck/LRIS spectra, which have wavelength coverage at $\lambda>3120$ \AA. We average over S\#116 (07:06:48 - 07:07:33) and S\#117 (07:08:10 - 07:08:55) from the Keck/LRIS flare-only spectra of HST-2 and show this spectrum\footnote{For HST-1, we do not calculate this quantity because it is not possible to continuously rotate the slit at the parallactic angle at the WHT.} in Figure \ref{fig:lris}. We fit a line to the wavelengths from 3120 \AA\ to 3700 \AA. The value of C3200$^{\prime}$/C3615$^{\prime}$\ is 0.93$\pm0.05$, which is consistent with the ratio obtained from the linear fit. We also fit a line to $\lambda=3430-3700$ \AA\ in the APO spectrum of HST-2 from Figure \ref{fig:balmerjump} and extrapolate to compare to the slope obtained from Keck. The linear fit is slightly flatter in the APO flare spectrum, which could be due to flux calibration uncertainties in the Keck/LRIS data (see Section \ref{sec:keckdata}). Generally, the absolute flux values and slopes are in good agreement (Figure \ref{fig:lris}). \begin{figure} \begin{center} \includegraphics[scale=0.5]{keckflare.eps} \caption{ Keck/LRIS spectra from 07:06:48 UT to 07:08:55 UT, covering the HST-2 peak and corresponding to similar times of the spectrum in Figure \ref{fig:balmerjump}. A linear fit to 3120 - 3700 \AA\ is shown as the cyan line, and the pre-flare from Keck is the dotted line.
The red line is a fit to 3430 - 3700 \AA\ from Figure \ref{fig:balmerjump}, extrapolated to shorter wavelengths. The slopes and flux agree remarkably well. Arrows in Figure \ref{fig:lcfigs}(b) indicate the times over which two Keck/LRIS spectra are averaged. } \end{center} \label{fig:lris} \end{figure} \section{New Constraints for RHD Models} \label{sec:combined} How do the short-wavelength HST/COS and Keck/LRIS spectra supplement the longer-wavelength NUV and optical spectral constraints on RHD models? Since we do not have spectra of GJ 1243 with overlapping wavelength coverage, we evaluate the fidelity of the absolute flux calibration of the HST/COS data in order to compare to the optical flux. The spectra from the ground-based telescopes are calibrated to the $V-$ and/or $B-$band magnitudes of GJ 1243 (Section \ref{sec:apodata}). The spectrophotometric evolution is consistent with the $U$-band photometry (Figure \ref{fig:uband}) for HST-2, while the broadband enhancement in HST-1 in the red is consistent with the Aristarchos $V+R$ photometry (Figure \ref{fig:aristarchos}). We follow \citet{Sirianni2005} and calculate the filter-weighted specific flux density of the pre-flare NUV spectrum of HST-2 using the effective area response matrix of Swift/UVOT/UVW1, which has a central wavelength near $\sim2600$ \AA. We use a serendipitous Swift/UVW1 observation of GJ 1243 retrieved from MAST (Obs ID 00040116002; $t_{\rm{exp}}=232.6$~s) and an aperture radius of 1.2$\times$ the PSF FWHM to obtain a count rate. This count rate is converted to specific flux density using the conversion provided by the Swift team. Compared to the Swift/UVOT photometry, the filter-weighted specific flux density of the HST-2 pre-flare spectrum is 7\% lower. We confirmed that the pre-flare continuum level for HST-1 is consistent with the pre-flare continuum of HST-2. By inspection of the spectra, the emission lines in HST-1 are elevated by nearly a factor of two over the pre-flare fluxes of HST-2; therefore, the synthetic UVW1 flux is correspondingly $\sim30$\% higher, since a larger fraction of the quiescent NUV energy is in the emission lines \citep[e.g.,][]{Hawley2007}. At the peaks of HST-1 (S\#163) and HST-2 (S\#152-153), we extract HST/COS spectra using an aperture of $\pm57$ pixels and divide by the spectra extracted with the smaller, higher signal-to-noise aperture of $\pm15$ pixels (see Section \ref{sec:cosdata}), thus obtaining an aperture correction of 14$\pm8$\%, which is consistent in both flares. In this section, we apply this aperture correction as a scale factor (1.14) to the HST/COS spectra to compare to the flux-calibrated spectra from the ground-based telescopes. The quoted absolute calibration accuracy of G230L is somewhat larger than 2\% (COS ISR 2010-01), and thus we adopt a conservative uncertainty of 10\%\ for the absolute flux calibration of the HST/COS spectra. Figure \ref{fig:ultimate} shows the composite NUV spectra for S\#152-153 during HST-2 (panel a) and S\#163 during HST-1 (panel b). The HST/COS data are shown at approximately the same spectral resolution as the ground-based spectra. These pan-chromatic flare spectra demonstrate that a $T=9000$ K blackbody that is extrapolated from the optical does not satisfactorily explain the NUV flare continuum over the impulsive phases of these HF/GF-type events. In Figure \ref{fig:ultimate}, we show the average $\lambda=2510-2841$ \AA\ specific flux density (\emph{blue points}), including line and continuum radiation.
This broadband flare-only specific flux value is approximately equal to the Balmer continuum flare-only specific flux in the middle of the $U$-band (e.g., C3615$^{\prime}$). The 9000 K blackbody underestimates the continuum flare flux by a factor of two at the NUV wavelengths; the broadband flux including emission lines (which result in pseudo-continua in this range) is underestimated by a factor of three compared to the blackbody extrapolation. Figure \ref{fig:ultimate} also shows how two RHD models from the literature compare to the NUV data. The F11 RHD model \citep[from][]{Kowalski2015} clearly exhibits too large a Balmer jump. If scaled to the C3615$^{\prime}$\ value, the flux decrease towards short wavelengths in the Balmer continuum is consistent with the lower error bars of the underlying continuum in the NUV range. However, the C4170$^{\prime}$\ is vastly underestimated, as noted in Section \ref{sec:bj}. Another problem with the F11 RHD model is the very high H$\gamma$/C4170$^{\prime}$\ value. Although H$\gamma$/C4170$^{\prime}$\ was reported to be 150 in \citet{Kowalski2015} (and thus consistent with these observations), the line calculations in that work employed a Voigt profile with the damping parameters from \citet{Sutton1978} for the electric pressure broadening\footnote{Collisional broadening from ambient protons and electrons, which is also sometimes referred to as the linear Stark effect.}. However, \citet{Kowalski2017B} implemented an accurate hydrogen broadening prescription \citep[as first pointed out for solar flare spectra in][]{JohnsKrull1997} using the RH code \citep{Uitenbroek2001} to model flare atmospheres. We use the RH code to recalculate the F11 RHD flare spectrum \citep[at $t=2.2$~s; see][for details]{Kowalski2015} using the new hydrogen line profiles (``TB09HM88'') from \citet{Tremblay2009}. The new H$\gamma$/C4170$^{\prime}$\ value is much larger, near $\sim330$, which is far from the observed range of $150-200$ in HST-1 and HST-2. As expected, the new electric pressure broadening profiles make the hydrogen Balmer lines broader and brighter \citep[see also][for details]{Kowalski2017B}. These profiles are much more accurate when compared to the absorption lines in Vega's spectrum or in white dwarf spectra and are thus preferred for modeling the flare broadening, which results from large electron densities and optical depths. The F13 multithread continuum model (``DG CVn superflare multithread (F13) model'') from \citet{Osten2016} is shown in Figure \ref{fig:ultimate}. The multithread model is an average burst spectrum of impulsive heating ($\Delta t=2.3$~s) and impulsive cooling ($\Delta t = 2.7$~s) from the F13 model of \citet{Kowalski2015}, where impulsive cooling refers to the phase of relaxation after the electron beam is turned off at 2.3~s. The multithread model also includes an additional snapshot at $t=4$~s to represent the gradual cooling of previously heated loops over a 25$\times$ larger area. The multithread approach originates from modeling spatially unresolved soft X-ray light curves of solar flares \citep{Warren2006}. The DG CVn superflare multithread (F13) model is most consistent with the continuum flux ratios, the line-to-continuum ratios \citep[line radiation not shown here; see][]{Kowalski2017B}, and the NUV, $U$-band, and blue-optical continuum flux distribution.
This model was recalculated with the improved hydrogen broadening prescription in \citet{Kowalski2017B}; the value of H$\gamma$/C4170$^{\prime}$\ is 120 \citep[Table 2 of][]{Kowalski2017B}, making it deficient in line radiation compared to the observations. \begin{figure} \begin{center} \includegraphics[scale=0.70]{allNUVspec_HST2.eps} \includegraphics[scale=0.70]{allNUVspec_HST1.eps} \caption{ \textbf{(a)} Data of HST-2 for S\#152-153 in Figure \ref{fig:balmerjump} and HST/COS spectra binned to the same time interval and similar spectral resolution. The wavelength ranges for C4170$^{\prime}$\ and for the H$\gamma$ line-integrated flux are indicated by vertical \emph{dashed lines} and \emph{dotted lines}, respectively. Three continuum models with a range of Balmer jump ratios are shown, normalized to the specific flux density of the APO spectra at $\lambda=4170$ \AA\ (the blackbody is taken from Figure \ref{fig:balmerjump} and is fit to the blue-optical continuum windows). The \emph{pink line} is the continuum fit from Figure \ref{fig:hst_peak_spec}(b), and the \emph{dark blue point} shows the flare excess over the NUV wavelength range with an error bar indicating a (conservative) 10\% systematic uncertainty in the absolute flux calibration of HST; see text. \textbf{(b)} Same as in panel (a) but showing HST-1 corresponding to the times of S\#163 from the WHT. Note that the optical continuum in this flare is rather flat (Figure \ref{fig:fullsed}); here we scale a $T=9000$ K blackbody to the C4170$^{\prime}$\ of this spectrum, which satisfactorily matches the excess continuum shape between $4000-4400$ \AA. \label{fig:ultimate} } \end{center} \end{figure} Each continuum (or line-to-continuum) flux ratio is compared to the models and given a score of $\frac{F_{\lambda,\rm{obs}}-F_{\lambda,\rm{model}}}{\sigma}$. In the NUV range, we use C2650$^{\prime}$/C2820$^{\prime}$, where the values of C2650$^{\prime}$\ and C2820$^{\prime}$\ are calculated using the weighted means and the errors of these weighted means. We use consistently defined values of $\sigma$ for the colors from the ground-based data to compare to the ratios from HST. Thus, we use the weighted mean and the error in the weighted mean in each continuum window for the score calculation, whereas the error bars on the flare colors of HST-1, HST-2, IF4, and IF11 in Figure \ref{fig:flarecolors} use the standard deviation of the flux in each continuum window (see K13 for justification) in the propagation of the uncertainty of the flare color. Here, we also add in quadrature a systematic uncertainty in the color calibration for the ground-based spectra ($\sim5$\% for each flare color index; see Appendices of K13 and also K16). This error propagation\footnote{Calculated in this manner, the flare color uncertainties for IF4/F2 and IF11/F1 in Figure \ref{fig:flarecolors} are $\sim0.1$. Compared to K13, the error bars from APO/DIS spectra in Figure \ref{fig:flarecolors} are conservative estimates for these two events as well.} gives $\chi_{\rm{flare}} = 3.81 \pm0.24$ and C4170$^{\prime}$/C6010$^{\prime}$$=1.24\pm0.1$ for HST-2 (S\#152-153). These uncertainties are the second values in the parentheses in Table \ref{table:data} and are more consistent with the flare color error propagation for the peak-flare ULTRACAM photometry (\emph{cyan} error bars) in Figure \ref{fig:flarecolors}.
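A minimal sketch of this score calculation and of the final grade used below (the mean of the absolute per-ratio scores) follows; the model ratios are illustrative values chosen to be consistent with the scores quoted for the DG CVn superflare multithread (F13) model in Table \ref{table:modchk}:
\begin{verbatim}
import numpy as np

# Observed flare colors of HST-2 (S#152-153) as (value, sigma).
obs = {
    "C2650'/C2820'": (0.88, 0.08),
    "C3200'/C3615'": (0.93, 0.05),
    "C3615'/C4170'": (3.81, 0.24),
    "C4170'/C6010'": (1.24, 0.10),
    "Hgamma/C4170'": (159., 5.0),
    "C2650'/C3615'": (0.59, 0.07),
    "C2650'/C4170'": (2.24, 0.26),
}
# Illustrative model ratios (F13 multithread; see text).
model = {
    "C2650'/C2820'": 0.89, "C3200'/C3615'": 0.88,
    "C3615'/C4170'": 2.9,  "C4170'/C6010'": 1.00,
    "Hgamma/C4170'": 120., "C2650'/C3615'": 0.65,
    "C2650'/C4170'": 1.85,
}
scores = {k: (v - model[k]) / s for k, (v, s) in obs.items()}
final = np.mean([abs(x) for x in scores.values()])
print(f"final score: {final:.1f}")   # ~2.5 sigma per measure
\end{verbatim}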
Table \ref{table:modchk} shows the scores for the following flare colors of HST-2 (S\#152-153): C2650$^{\prime}$/C2820$^{\prime}$$=0.88 \pm0.08$, C3200$^{\prime}$/C3615$^{\prime}$$=0.93\pm0.05$, and H$\gamma$/C4170$^{\prime}$$=159 \pm 5$. The ratios of C2650$^{\prime}$/C3615$^{\prime}$\ and C2650$^{\prime}$/C4170$^{\prime}$\ include a 10\% systematic uncertainty in the flux calibration (from the aperture correction above) of the HST/COS spectra: C2650$^{\prime}$/C3615$^{\prime}$$=0.59\pm0.07$ and C2650$^{\prime}$/C4170$^{\prime}$$=2.24\pm0.26$. For HST-1, these two ratios are similarly 0.6 and 2.2, respectively. Thus, the observed Balmer continuum decreases to $\sim60$\%\ of its value from the Balmer series limit to $\lambda \sim 2650$ \AA, constraining $\lambda_{\rm{peak}}$ to the $U$-band, likely near the Balmer series limit. Coincidentally, some RHD model snapshots that reproduce a dominant hot blackbody-like spectrum (e.g., the F13 model at $t=2.2$~s) produce a value of $\lambda_{\rm{peak}}$ that is near $2650$ \AA. We quantitatively evaluate the models in Figure \ref{fig:ultimate} using a new RHD grading scheme, which is the sum of the absolute values of the scores for each spectral flux ratio, divided by the number of flux ratios. The final score represents the average $\sigma$-difference per model comparison measure and is given for each model in the last column of Table \ref{table:modchk}. The F11 RHD model and the 9000 K blackbody are $\gtrsim 10 \sigma$ different per measure, making them poor representations of the data of HST-2. Excluding C3200$^{\prime}$/C3615$^{\prime}$, the final scores are similar for HST-1 (S\#163): 3.5, 11.3, and 14.6 for the three respective models in Table \ref{table:modchk}. The DG CVn superflare multithread (F13) model is a significantly better representation of the observed flare spectra in these HF/GF events. \clearpage \begin{deluxetable}{lccccccccc} \rotate \tabletypesize{\tiny} \tablecaption{Models vs. HST-2 (S\#152-153 times)} \tablewidth{0pt} \tablehead{ \colhead{Model} & \colhead{Source} & \colhead{$\frac{\rm{C}2650\rm{'}}{\rm{C}2820\rm{'}}$} & \colhead{$\frac{\rm{C}3200\rm{'}}{\rm{C}3615\rm{'}}$} & \colhead{$\frac{\rm{C}3615\rm{'}}{\rm{C}4170\rm{'}}$} & \colhead{$\frac{\rm{C}4170\rm{'}}{\rm{C}6010\rm{'}}$} & \colhead{$\frac{H\gamma}{\rm{C}4170\rm{'}}$} & \colhead{$\frac{\rm{C}2650\rm{'}}{\rm{C}3615\rm{'}}$} & \colhead{$\frac{\rm{C}2650\rm{'}}{\rm{C}4170\rm{'}}$} & \colhead{Final score}} \startdata DG CVn Superflare Multithread (F13) Model & O16, K17, K15 & $-0.1$ & $+1.0$ & $+3.9$ & $+2.4$ & $+7.8$ & $-0.8$ & $+1.5$ & 2.5 \\ RHD F11 $t=2.2$~s & K15 & $+0.3$ & $+1.8$ & $-23.2$ & $+3.7$ & $-34.1$ & $-0.6$ & $-11.1$ & 10.7 \\ 9000 K blackbody & Planck & $-0.8$ & $-1.7$ & $+11.1$ & $-3.7$ & $-65.9$ & $-3.2$ & $+4.1$ & 12.9 \\ \enddata \tablecomments{The values in this table give the quantity $\frac{F_{\lambda,\rm{obs}}-F_{\lambda,\rm{model}}}{\sigma}$ for each model comparison. Positive values indicate that the slopes of the models are not blue enough or that there is not enough line radiation compared to continuum radiation for the H$\gamma$/C4170$^{\prime}$. All models were calculated with the RH code from snapshots from radiative-hydrodynamic simulations using the RADYN code \citep{Carlsson1997, Allred2015, Kowalski2015}, except the blackbody. The error bars for the 9000 K blackbody include an uncertainty of 500 K on the blackbody temperature. Note, the DG CVn Superflare Multithread (F13) Model has a Balmer jump of 2.9 with RH and 3.2 with RADYN. 
The 9000 K blackbody score would be much worse than the RHD F11 model if the constraints from other Balmer emission lines are included. O16 refers to \citet{Osten2016}, K17 refers to \citet{Kowalski2017B}, and K15 refers to \citet{Kowalski2015}.} \end{deluxetable}\label{table:modchk} \section{Discussion II: Implications for Flare Heating in HF/GF-type events} \label{sec:discussion2} The DG CVn superflare multithread model has the lowest (best) grade among the three models tested here. Future verification of this model for other HF/GF-type events would have several interesting implications. For example, bursts of F13 electron beam heating adequately reproduce the line and continuum radiation in HST-1 and HST-2, as in the decay phase of superflares and megaflares \citep{Osten2016, Kowalski2017B}. The high-time cadence light curves of HST-1 and HST-2 show that the impulsive phase spectra include peak times as well as fast and gradual decay radiation when the blackbody temperature often decreases rapidly (K13). Thus, the average of many flare bursts, or many ``threads'' (individual flare loops), at various times in their evolution is a sensible picture based on solar flare observations and modeling \citep{Warren2006}. The hot blackbody-like radiation in the F13 model only appears for a small fraction of the time that the electron beam impulsively heats the atmosphere. In the multithread (burst average) model, the hot blackbody-like radiation gets diluted, while the rising and decaying radiation in each flare thread dominates the spatially-integrated spectra. This multithread model includes a rather ad-hoc continuum and emission line component (the $t=4$~s snapshot in the F13 model) that represents the gradual decay of loops heated long before the burst\footnote{Excluding this ad-hoc spectrum from the multithread model causes the line-to-continuum ratio to be 74 and the Balmer jump ratio to be smaller as well \citep{Kowalski2017B}.}. Better gradual cooling phase RHD models are needed. The interpretation of HST-1 and HST-2 as spatially and temporally averaged F13 beam-heated loops suggests that hot blackbody radiation may exist in HF/GF-type events but is produced for only a very short time and over a very small area in the flare. A hot blackbody function was fitted to the blue-optical in the YZ CMi Megaflare decay phase spectrum \citep{Kowalski2010}, but the fit to the data was recently improved with the DG CVn superflare multithread (F13) model in \citet{Kowalski2017B}. However, the YZ CMi Megaflare decay phase spectrum exhibits\footnote{These spectral quantities become smaller in the Megaflare's secondary events, which are explained as Vega-like-emitting sources \citep[K13 and][]{Kowalski2017B}.} a smaller Balmer jump ratio (2.7) and line-to-continuum ratio (90) compared to HST-1 and HST-2. Although the main times of the flares studied here (S\#163 times for HST-1 and S\#152-153 times for HST-2) include significant amounts of time when the broadband NUV radiation is decaying (Figure \ref{fig:lcfigs}), the Balmer jumps are moderately high ($3-4$) in every other ground-based spectrum over the impulsive phases of these flares. Also, the HST spectrum extracted over the rise phase times of HST-1 (Section \ref{sec:cos_peak_analy}) does not show anything significantly different to suggest that hot blackbody radiation is produced. 
Compared to HST-1 and HST-2, the HF events in K13 have notably smaller values of H$\gamma$/C4170$^{\prime}$\ ($<75$), while they also exhibit stronger evidence of hot blackbody, $T=10,000-12,000$ K continuum radiation from $\lambda=4000-4800$ \AA. An observed (qualitative) relationship between the Balmer line radiation and the red continuum radiation has been established in K13; it appears that HST-1 and HST-2 are consistent with this relationship by having a prominent, redder/flatter continuum, which may provide insight into the so-called ``conundruum'' radiation when modeled in detail with RHD simulations. We note that flat red-optical continua and large line-to-continuum ratios are also observed in the decay phase of giant flares (Figure 30 of K13) and megaflares (Figure 31 of K13). We speculate that the hot blackbody-like radiation may be present in the HST-1 and HST-2 events, but a flatter and redder continuum and the Balmer continuum clearly dominate the spectra. \subsection{Future Modeling Directions \& Connection to dG Flares} In Paper II, we will combine constraints from the continuum ratios, line-to-continuum ratios, and the blending of the high-$n$ Balmer series of hydrogen in the HST-1 and HST-2 flares to present new RHD flare models, while also incorporating LTE modeling of the Fe II lines and analysis of the Fe II/C2650$^{\prime}$\ ratios. The Balmer jump ratios must be lower than predicted by models with $T\sim10,000$ K plasma at low continuum optical depth (in the RHD F11 model). Model atmospheres that produce a dense chromospheric compression with a temperature increase above 10,000 K at moderately high column mass of log $m \sim -2.35$ are expected to produce enough hydrogen Balmer bound-free (photoionization) opacity to lower the Balmer jump ratio to a value of $3-4$ \citep{K18}. The electron beam flux regime of $\sim$F12 may help explain the amount of heating at high column mass in these flares, while also favorably resulting in a factor of ten lower return current electric field strength than the multithread F13 model. In particular, using a high low-energy cutoff causes the beam flux to heat high column mass (log $m \sim -2.35$) to temperatures near $T\sim10^4$ K, producing wavelength-dependent optical depths (Paper II) that lower the Balmer jump ratio from the optically thin, $T=10,000$ K value. Recent high spatial resolution data of solar flares sometimes imply F12 electron flux densities \citep[L. Fletcher, priv. communication,][]{Fletcher2007, Krucker2011, Sharykin2017}. The timescales and energies of the two GJ 1243 flare events are strikingly similar to the largest solar flares \citep{Woods2004}; thus, HST-1 and HST-2 may be suitable for establishing connections and differences (e.g., in the slow rise and decay of Ca II K) between dMe and dG flares. Recently, \citet{Namekata2017} compared the durations and energies of flares on GJ 1243 (from \emph{Kepler}) to white-light flares on the Sun (from SDO/HMI). They found that both GJ 1243 and the Sun exhibited similar power-law scalings between flare energy and duration as found in \citet{Maehara2015}, but that for a given duration, the flares on GJ 1243 had 10x larger energy (all bolometric flare energies were calculated assuming a blackbody; see Section \ref{sec:discussion3}). The superflares on rapidly rotating G dwarfs had 1000x larger energy for durations similar to those of solar flares. 
We speculate that the \citet{Namekata2017} measure of impulsiveness (location on the duration-energy diagram) may be related to the values of the Balmer jump ratios at peaks of flares and the measures of impulsiveness (from K13) calculated in our work. However, more measurements of the Balmer jump ratio are needed for the flares of GJ 1243 and for solar flares. The recent detection and interpretation of NUV continuum radiation in solar flares using IRIS spectra (near $\lambda=2826$ \AA, roughly similar to our measure of C2820) suggests that the flare intensity can be explained by electron beam models that produce optically thin Balmer continuum radiation \citep{Heinzel2014, Kleint2016}. Even larger Balmer jump ratios are thus expected in solar flares than achieved in the HST-1 and HST-2 events \citep{Kowalski2017A}, but intensity-calibrated spectra with broad wavelength coverage in the $U$-band during solar flares are very rare \citep{KCF15, Ondrej1}. Instrumentation in the near future will likely achieve the precision to measure the spectral shape in the $U$-band and NUV during superflares on rapidly rotating dG stars. \section{Discussion III: Broader Implications of NUV Flare Observations} \label{sec:discussion3} NUV flare spectra are important for input to photochemical modeling of biosignatures and ozone chemistry in planetary atmospheres in the habitable zone of M dwarfs at $\sim0.03-0.1$ a.u. The nearest Earth-mass planet in the habitable zone was recently discovered around the dM5.5e flare star Proxima Centauri \citep{Anglada2016}, but atmospheric evolution studies are not able to accurately account for the NUV flare irradiation due to a lack of data in this range \citep{Ribas2016, Ribas2017}. Instead, photochemical modeling studies usually assume that the gradual phase NUV spectra from the Great Flare (a giant IF-type event) from \citet{HP91} represent all flare phases and all flares \citep{Segura2010, Ranjan2017}. From spectral observations at $\lambda>3500$ \AA, we know that dMe flares exhibit significant inter- and intra-flare variation of the Balmer jump ratio and the blue-to-red optical color temperature (K13, K16). If exoplanet chemistry models need only very coarse spectral resolution, and measurements of the Balmer jump and impulsiveness index are available, the $\lambda<3000$ \AA\ flux can be estimated on one-minute timescales using the relationship that the average flare-only specific flux from $\lambda=2510-2841$ \AA\ is approximately equal to C3615$^{\prime}$\ for these types of flares (Figure \ref{fig:ultimate}). Several smaller dMe flares have been observed with spectra in the NUV. The NUV was observed with the HST/STIS during several flares (without ground based spectra) on YZ CMi \citep{Hawley2007}; the strength of the continuum flux relative to the emission lines varied from flare to flare: at $\lambda=2300-3050$ \AA, the percentage of energy in the continuum was typically 70-90\%, and $\sim$50\% for some smaller events. The fraction of energy in the NUV continuum in our flare events ($2510-2841$ \AA) is also relatively small, $\sim60$\%. Thus, careful consideration is required before using any flare event as a ``template'' for all flares, whether it be for the purposes of photochemistry modeling or comparison to the physical conditions in solar flares. The flare and quiescent spectra in Figure \ref{fig:ultimate} are presented as FITS tables on Zenodo and VizieR, linked to this article. 
We hope they will be of use to the exoplanet community as an alternative to the Great Flare data of AD Leo, when HF/GF type events are observed, though the rates of such events are not well-quantified. Many studies also use a $T \sim 9000$ K blackbody as a fiducial flare spectrum, either to calculate bolometric stellar flare energies from Kepler \citep[e.g.,][]{Maehara2012}, to calculate bolometric solar flare energies \citep{Kretzschmar2011, Namekata2017}, or to calculate an estimate of the ultraviolet flux for exoplanetary atmosphere modeling \citep[e.g.,][]{Howard2018, Loyd2018}. Our data show that scaling Kepler observations of flares with a $T=9000$ K blackbody to represent the bolometric energy underestimates the NUV continuum specific flare-only flux at $\lambda < 2840$ \AA\ by a factor of two and underestimates the average $2510-2841$ \AA\ specific flare-only flux by a factor of three. A $\sim$9000 K blackbody has been found to represent the broadband distribution of several large and moderate energy dMe flares \citep{HF92, Hawley2003}, and this has become a widely assumed property of dM flares. We convolve the HST-1 impulsive phase spectrum from Figure \ref{fig:fullsed} and Figure \ref{fig:ultimate} with the broadband filters $UBVR$ from \citet{Bessel2013}, and we fit a blackbody to the filter-weighted specific flux densities and a UV continuum point (C2650$^{\prime}$) following \citet{Hawley2003}. The best-fit blackbody temperature is 9200 K (Figure \ref{fig:convolve}), which demonstrates that spectra are necessary before concluding that hot blackbody radiation dominates the continuum distribution of dM flares \citep{Allred2006}. Accurate absolute $U$-band spectrophotometry is subject to several vagaries associated with incomplete filter coverage near the atmospheric limit and differences in total system (telescope$+$atmosphere) response compared to photometry \citep[e.g.,][]{Hawley2003}. We note that adjusting the $U$-band spectrophotometry in Figure \ref{fig:convolve} by 15\% does not change the inferred blackbody temperature by more than 400 K. Figure \ref{fig:convolve} also shows the convolved fluxes through the Large Synoptic Survey Telescope (LSST) $ugri$ filters\footnote{\url{https://github.com/lsst/throughputs/tree/master/baseline}, using the ``total'' filter profiles that are the predicted total system (including atmospheric) response.}; a blackbody with a temperature of $\sim15,000 - 22,000$ K is (poorly) fit without the continuum point in the UV at $\lambda < 3000$ \AA. This also calls into question the very high blackbody temperatures of $17,000-22,000$ K reported by \citet{Zhilyaev2007} during moderate-amplitude flares on EV Lac using broadband $UBVRI$ photometry. \begin{figure} \begin{center} \includegraphics[scale=0.70]{s163_filters.eps} \caption{HST-1 impulsive phase spectrum (S\#163) from the WHT shown with the C2650$^{\prime}$\ value from HST/COS (\emph{gray}). The filter-weighted specific flux densities are shown through the broadband $UBVR$ filters (\emph{asterisks}) and the LSST $ugri$ filters (\emph{open circles}). The best-fit blackbody function to the C2650$^{\prime}$$+UBVR$ filters is a dashed black line, and the best-fit blackbody function to the LSST filters (with very large uncertainties, since these filters lie far into the Rayleigh-Jeans tail) is a \emph{solid blue line}. 
The broadband flux distribution of this HF/GF-type event is fit by a blackbody with $T=9200$ K, but the spectrum (\emph{gray}) is dominated by a flat continuum in the red optical and Balmer continuum radiation in the $U$ band and NUV. } \end{center} \end{figure}\label{fig:convolve} \clearpage \section{Summary \& Conclusions} \label{sec:conclusions} We present data from a multi-wavelength flare campaign to characterize the peak of the white-light flare continuum radiation with simultaneous NUV spectra from HST and optical spectra from ground-based telescopes. We monitored GJ 1243 and observed two events with moderate $U$-band amplitudes ($\Delta U = -1.5$ mags) at peak. This is the second study to present $\lambda < 3000$ \AA\ NUV flare spectra with simultaneous, flux-calibrated Balmer jump spectra. Compared to the spectra of a flare event on AU Mic presented in \citet{Robinson1993, Robinson1995}, our data have much higher spectral resolution at $\lambda<3000$ \AA, higher time-resolution by a factor of twenty, and broader spectral continuum characterization to the $U$-band atmospheric limit and into the red and infrared. In the future, we intend to compare our data to the AU Mic flare, for which there has been little quantitative analysis presented. According to the classification scheme in K13, the photometric and spectral properties of the two events on GJ 1243 observed by HST/COS are most similar to HF/GF-type events, even though the light curves exhibit a fast, impulsive evolution in a by-eye assessment (e.g., Figure \ref{fig:hstlc}). The HF/GF characterization means that their spectra exhibit moderately large Balmer jumps and prominent Balmer line radiation; the impulsive phases are actually relatively long and gradual relative to the peak amplitudes. The goal of our study was to confirm and characterize the extension of the hot $T\sim10^4$ K blackbody-like continuum into the NUV and constrain the white-light peak, but the blue-optical radiation in these two events was rather faint compared to many of the events in K13. Our conclusions are the following: \begin{itemize} \item We detect a significant (signal-to-noise $\ge 20$) broadband NUV continuum increase in the HST spectra. At relatively low ($\sim60$~s) temporal resolution and cadence, the NUV continuum flux in the HST data follows the continuum time-evolution in the blue optical and the NUV corresponding to the $U$-band. All continuum fluxes decay faster than the emission lines relative to the respective peak flux of each measure. Further investigation of this time-decrement with modeling is warranted; however, current RHD simulations exhibit heating for only several seconds whereas these events produce continuum radiation throughout the impulsive phase lasting a few minutes. \item At high-time resolution ($\sim5$~s cadence), the peak-normalized light curves of the $U$-band and NUV are not identical. Further understanding of this effect requires more high-time resolution NUV data of all flare types (IF, HF, and GF) for robust confirmation that is independent of uncertainties in peak light curve normalization. From GALEX photometry, the FUV exhibits faster timescales than the $U$-band \citep{Hawley2003} and than the GALEX NUV \citep{Robinson2005, Welsh2006}. We speculate that there may be a common physical origin for the shorter peak-normalized decay rates at shorter ultraviolet continuum wavelengths in dMe flares. 
\item High-time resolution at 5~s cadence is critical for identifying the fast and slow decay phases of flares (for some IF-type events, even higher time resolution is necessary; K16). The impulsive phases of the flares of GJ 1243 in the short cadence Kepler data may be unresolved by a factor of ten or more. Higher time resolution than a 5~s cadence is preferable, since some impulsive phase spikes in the HST flares (e.g., the spike at the beginning of S\#162 in Figure \ref{fig:lcfigs}(a)) appear unresolved. \item In the NUV, the flare-only spectrum is not a ``scaled-up'' version of the pre-flare spectrum. In other words, there really is significant continuum radiation that appears only during the flare, which is evident from the large variation of the line-to-continuum ratios from quiescence to flaring times. \item From spectra, we have determined that continuum flare-only specific flux near $\lambda \sim 2650$ \AA\ is approximately 60\% of the continuum flare-only specific flux in the $U$-band for the two flares studied here. The value of the continuum peak ($\lambda_{\rm{peak}}$) for HF/GF-type events is thus in the $U$-band near the Balmer series limit in these events. If the pseudo-continuum of merged hydrogen lines just redward of the Balmer limit is due to the dissolved level continuum opacity (to be modeled in detail in Paper II), the value of $\lambda_{\rm{peak}} \sim 3700$ \AA. \item A $T\sim9000$ K blackbody is a poor approximation to the NUV spectra of HF/GF-type events, even though this blackbody temperature fits the general broadband color distribution and narrowband blue-optical continuum windows. There is a significant Balmer continuum contribution in the NUV and $U$-band. A hot blackbody extrapolation from blue-optical wavelengths (longer than the Balmer jump) under-estimates the NUV, $\lambda < 2840$ \AA\ continuum flare-only specific flux by a factor of two. Higher spectral resolution at $\lambda=4000-4800$ \AA\ is needed to characterize the blending of many minor emission lines in HF/GF-type events for more robust comparisons to a $T=9000$ K hot blackbody shape in the blue-optical (since the red-optical is rather flat). The moderate Balmer jump ratios and very large H$\gamma$/C4170$^{\prime}$\ values from the low-resolution spectra of HST-1 and HST-2 rule out the F11 RHD electron beam heating simulations, but higher spectral resolution data would help confirm that the blue continuum in C4170$^{\prime}$\ does not also have a significant pseudo-continuum of blended emission lines in these (HF/GF) types of events. \item K13 showed that the slow rise of Ca II K occurs in IF, HF, and GF-type events. It is not known why Ca II responds weakly in the dMe flare impulsive phase, but these new spectral constraints of HF events will help provide new insight: the delayed peak of Ca II K is not related to the presence of spectrally confirmed, energetically dominant hot, blackbody-like radiation. \item With only broadband photometry, such as in future flare detections with the LSST, HF/GF-type events with moderate Balmer jumps may erroneously suggest very hot blackbody temperatures of $T\sim15,000-20,000$ K. \item The HF/GF-type events on GJ 1243 should be modeled as inhomogeneously emitting flare sources. A multithread model approach with F13 beams, previously used to successfully model the decay-phase white light of superflares \citep{Osten2016, Kowalski2017B}, can adequately explain the continuum flux distribution from $2500-2840$ \AA\ and from $\sim3200-4200$ \AA. 
\end{itemize} The HST-1 and HST-2 events establish a new regime of flare-only continuum flux colors over the impulsive phase of dMe flares with moderate Balmer jump ratios ($3-4$) and relatively flat blue-to-red optical continuum shapes (blue-to-red continuum flux ratios of $1-1.4$). Other moderate-amplitude events, such as the IF4 and IF11 events on YZ CMi from K16 (also discussed in Section \ref{sec:discussion1} here) exhibit significantly different continuum and line-to-continuum ratios calculated from spectra at a similar temporal resolution ($t_{\rm{exp}}\sim60$~s). The value of $\lambda_{\rm{peak}}$ for IF-type events with smaller Balmer jump ratios is a critical parameter for a comprehensive understanding of the heating at high column mass achieved in dMe flares. IF-type events have shorter impulsive phase timescales (and sometimes much more luminous peaks). Even with low-to-moderate signal-to-noise, a strategic use of COS/G230L based on our results could provide representative constraints on $\lambda_{\rm{peak}}$ in the impulsive phase of giant impulsive-type events, such as the Great Flare of AD Leo. \acknowledgments We thank an anonymous referee for a critical reading of the paper and helpful comments and suggestions. AFK acknowledges support from University of Maryland GPHI Task-132, HST GO 13323, NASA Exoplanet Science Institute (NASA/Keck time), an appointment to the NASA Postdoctoral Program at the NASA's Goddard Space Flight Center, administered by Universities Space Research Association (previously by the Oak Ridge Associated Universities) under contract with NASA. AFK thanks Joel Allred and Mats Carlsson for many helpful discussions about RADYN, Han Uitenbroek for helpful discussions about the RH code, and Pier-Emmanuel Tremblay for the hydrogen broadening profiles. AFK thanks James Davenport for observations from the ARCSAT 0.5-m and for providing a flare rate for GJ 1243, Nicola Gentile Fusillo for observations from the Isaac Newton Telescope, Mihalis Mathioudakis for helpful discussions about the ULTRACAM data, Lyndsay Fletcher for helpful discussions about the interpretation of the HST data, Lucianne Walkowicz for helpful feedback on the initial HST proposal, and Lucia Kleint for helpful discussions on comparing to blackbodies. We thank R. O. Parke Loyd and the STScI help desk for helpful feedback on the jump in the dispersion solution in the COS spectra. AFK also thanks the Keck Observatory support astronomer Luca Rizzi for assistance with the observations. IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for this work was provided by NASA through grant number Guest Observer 13323 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. Based on observations obtained with the Apache Point Observatory 3.5-m telescope, which is owned and operated by the Astrophysical Research Consortium. The 2.3-m Aristarchos telescope is operated on Helmos Observatory by the Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens. 
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
\section{Introduction} \input{tex/intro} \section{Approach} \input{tex/approach} \vspace{-0.2cm} \section{Experiment} \input{tex/experiment} \vspace{-0.2cm} \vspace{-0.1cm} \section{Conclusion} \input{tex/conclusion} \bibliographystyle{IEEEbib} \subsection{Theoretical Analysis} \vspace{-0.2cm} Given a well-trained black-box model $g(\cdot)$, the goal is to craft an adversarial example $x_{adv}$ for $x$ (with ground-truth label $y$) such that $x_{adv}$ remains as close to $x$ as possible. Due to the limited knowledge about the black-box model $g(\cdot)$, a proxy model $f(\cdot)$ is trained to simulate the black-box model. The attacker can then craft $x_{adv}$ on the proxy model with white-box attacks as follows: \begin{equation} \begin{split} x_{adv} = {Proj}_{x,\epsilon}\{x + \nabla_{x} L(f(x), y)\}, \end{split} \end{equation} where ${Proj}_{x,\epsilon}$ is the projection function that projects its argument onto the $\epsilon$-ball around $x$, $L(\cdot,\cdot)$ is a common loss function, and $\epsilon$ is specified by the attacker to control the similarity between $x_{adv}$ and $x$. Note that the above process is generally repeated multiple times; to simplify the derivation and notation, we analyze the single-step process, and the conclusions can easily be generalized to the iterative form. \par Ideally, the attacker would like to make $f(x)=g(x)$, or at least $\nabla_{x} L(f(x), y) = \nabla_{x} L(g(x), y)$, thereby achieving the optimal (white-box) attack performance. Unfortunately, there is a significant divergence between the proxy model and the black-box model due to their distinct architectures, resulting in $f(x) \neq g(x)$. Next, we explore the source of this divergence and use it to guide the design of a model architecture with minimal cross-model discrepancy. Without loss of generality, we suppose that $f$ and $g$ have the same number of layers to simplify the derivation. Let $f=f_l \circ f_{l-1} \cdots \circ f_1$ and $g=g_l \circ g_{l-1} \cdots \circ g_1$, where $f_{i}$ and $g_{i}$ denote the $i$-th layers of $f$ and $g$ and $\circ$ denotes the composition operation; then: \begin{small} \begin{equation} \begin{split} \nabla_{x}L(f(x),y) = \frac{\partial L}{\partial O^{proxy}_l} \cdot \frac{\partial O^{proxy}_l}{\partial O^{proxy}_{l-1}} \cdots \frac{\partial O^{proxy}_1}{\partial x}, \end{split} \label{proxy_equ} \end{equation} \end{small} \vspace{-0.2cm} \begin{small} \begin{equation} \begin{split} \nabla_{x}L(g(x),y) = \frac{\partial L}{\partial O^{black}_l} \cdot \frac{\partial O^{black}_l}{\partial O^{black}_{l-1}} \cdots \frac{\partial O^{black}_1}{\partial x}, \end{split} \label{black_equ} \end{equation} \end{small} where $O_i^{proxy}$ and $O_i^{black}$ are the outputs of the $i$-th layers of $f$ and $g$ for the input $x$. Note that $\nabla_{x}L(f(x),y)=\nabla_{x}L(g(x),y)$ holds if each term in Equation \ref{proxy_equ} is exactly equal to the corresponding term in Equation \ref{black_equ}. Therefore, the divergence can be regarded as the accumulation of the discrepancies between corresponding terms in Equation \ref{proxy_equ} and Equation \ref{black_equ}. Besides, it is well known that DNNs learn identical or similar low-level feature extractors, which suggests $f_i \approx g_i$ when $i$ is smaller than a certain depth. In contrast, the high-level feature extractors that different models learn are always model-specific. 
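To make the crafting procedure concrete, a minimal PyTorch-style sketch of its iterative form is given below. The proxy model \texttt{f}, the step size \texttt{alpha}, and the valid input range $[0,1]$ are illustrative assumptions; the common sign-of-gradient step of FGSM/BIM is used here, whereas the single-step notation above absorbs the step size into the gradient term:

\begin{verbatim}
import torch
import torch.nn.functional as F

def craft_ae(f, x, y, eps=0.1, alpha=0.01, steps=10):
    # Iterative gradient attack on the proxy model f, with a
    # projection onto the eps-ball around the clean input x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(f(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # Proj_{x,eps}
        x_adv = torch.clamp(x_adv, 0.0, 1.0)           # keep a valid image
    return x_adv.detach()
\end{verbatim}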
Hence, poor transferability is largely induced by the divergence between the late layers of the proxy model and the black-box model. Next, we elaborate on how to alleviate this problem. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{images/design.png} \caption{Sketch of the MMA construction.} \label{model_process} \end{figure} \vspace{-0.2cm} \subsection{Multi-Track model architecture} \vspace{-0.2cm} For the convenience of discussion, let \begin{equation} \begin{split} \nabla_{x}L(f(x),y) = f_{high} \cdot f_{low}, \\ \nabla_{x}L(g(x),y) = g_{high} \cdot g_{low}, \end{split} \end{equation} where $f_{high},f_{low}$ and $g_{high},g_{low}$ denote the high-level and low-level factors of the derivative chains of $f$ and $g$. The desired goal is to make $f_{high}=g_{high}$, but this is infeasible without any information about the black-box model. Therefore, the only remaining solution is to increase the dominance of $f_{low}$ in $\nabla_{x}L(f(x),y)$ so that the negative impact of $f_{high}$ is reduced. However, before presenting our solution, we must highlight that excessively increasing the dominance of $f_{low}$ does not yield ever-growing benefits to the transferability of AEs. Simply put, the model relies almost exclusively on low-level features when $f_{low}$ dominates $\nabla_{x}L(f(x),y)$. This implies poor model performance\footnote{The overwhelming success of deep learning rests on its outstanding capacity for learning high-level semantic features.}, and thus the model fails to learn the same low-level feature extractors as the black-box model, breaking the supposed precondition $f_{low}=g_{low}$. In short, the optimal solution is to maintain the performance of the model while removing as many high-level feature extractors as possible. \par With the objective of shrinking the negative impact of the late layers, the most intuitive solution is to evenly split the original network into a handful of small networks (as shown in Figure \ref{model_process}). These small networks have fewer high-level layers than the original network, so low-level feature extractors account for a greater proportion of the entire network. Unfortunately, this solution suffers from two serious drawbacks. On the one hand, owing to their small capacity, the small networks perform poorly, which implies that the crafted AEs transfer poorly according to the conclusion above. On the other hand, without any prior knowledge, it is difficult to directly determine the exact size of the small model with the best attack performance. The remaining option is manual tuning (enumeration, a brute-force method), which is cumbersome and potentially prohibitive in cost. These drawbacks produce the quandary of how to achieve the optimal balance between learning low-level feature extractors well and decreasing the disturbance of high-level feature extractors. \par To solve the dilemma, we integrate these small models into a multi-track model architecture (MMA), which can adaptively adjust the model size with overhead comparable to the original network. In MMA, the small models are no longer separate networks; on the contrary, as shown in Figure \ref{model_process}, networks placed later are allowed to reuse the feature maps of the networks placed earlier. 
In this way, each small model (in MMA) essentially represents a trade-off point between learning low-level shared features and introducing the disturbance of high-level exclusive features. Then, the small model with the best attack performance, or the entire model, can be adopted as the proxy model, rather than manually and carefully adjusting the size of the proxy network and retraining it, which is fairly cumbersome and inelegant. Finally, we highlight the following merits of MMA compared to the original network: (1) MMA is a simple yet effective model architecture design that yields more transferable AEs by adaptively adjusting the size of the model, with overhead comparable to the original network; (2) MMA is a generic technique that can be integrated with any existing technique to further enhance the transferability of AEs. \subsection{Experiment Setup} We examine the performance of MMA with different sizes on CIFAR-10, compared with three widely-used baseline model architectures, namely MobileNet, GoogleNet, and ResNet18. All proxy models are trained for 30 epochs on the training set of CIFAR-10 using the momentum optimizer with a 0.01 learning rate, a 0.9 momentum factor, and $1 \times 10^{-4}~L_2$ weight decay. For the attack setting, we set $\epsilon$ to 0.1 and employ FGSM (a one-step attack) with step size 0.1 and BIM (the iterative version of FGSM) with 10 iterations and 0.01 step size ($= \frac{\epsilon}{10}$). Moreover, MobileNet, GoogleNet, and ResNet18 are also adopted as the black-box target models. To quantify the transferability of the resulting AEs, we adopt the attack success rate (ASR), the misclassification rate of AEs on the target model, as the metric. Besides, as shown in Figure \ref{model_process}, head $i$ denotes the $i$-th output of the MMA, and ``All'' denotes the ensemble output of all outputs of the MMA. 
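For clarity, the following is a minimal PyTorch-style sketch of one possible reading of the MMA construction in Figure \ref{model_process}. The block wiring, channel width, and the exact way later tracks consume earlier feature maps are illustrative assumptions, not the precise implementation:

\begin{verbatim}
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # Standard residual block, as adopted in our experiments.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class MMA(nn.Module):
    # rows x cols grid of blocks; track i produces head i, and later
    # tracks reuse the feature maps of the tracks placed before them.
    def __init__(self, rows=3, cols=4, ch=64, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1),
                                  nn.BatchNorm2d(ch), nn.ReLU())
        self.grid = nn.ModuleList(
            [nn.ModuleList([ResBlock(ch) for _ in range(cols)])
             for _ in range(rows)])
        self.heads = nn.ModuleList(
            [nn.Linear(ch, num_classes) for _ in range(rows)])

    def forward(self, x):
        carry = self.stem(x)
        outs = []
        for row, head in zip(self.grid, self.heads):
            h = carry
            for block in row:
                h = block(h)
            outs.append(head(h.mean(dim=(2, 3))))  # head i
            carry = carry + h  # pass features to the next track
        return outs  # ``All'' ensembles these outputs
\end{verbatim}

Any single head, or the ensemble of all heads, can then serve as the proxy model for the attacks above.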
\begin{table}[!h] \centering \caption{Transfer results of single-step attacks.} \resizebox{0.5\textwidth}{!}{% \begin{tabular}{@{}c|ccc|ccc@{}} \toprule \multirow{2}{*}{Proxy Model} & \multicolumn{3}{c|}{Fixed hyperparameters} & \multicolumn{3}{c}{Tuned (Best) results} \\ \cmidrule(l){2-7} & GoogleNet & MobileNet & ResNet18 & GoogleNet & MobileNet & ResNet18 \\ \midrule MMA (ours) & \textbf{58.07} & \textbf{47.36} & \textbf{50.80} & \textbf{59.33} & \textbf{54.22} & \textbf{53.10} \\ GoogleNet & - & 37.11 & 45.81 & - & 47.06 & 50.78 \\ MobileNet & 40.11 & - & 40.46 & 42.21 & - & 42.77 \\ ResNet18 & 47.60 & 40.49 & - & 47.60 & 42.00 & - \\ \bottomrule \end{tabular} } \label{singlestep_comp} \end{table} \begin{table}[!h] \centering \caption{Transfer results of multi-step attacks.} \resizebox{0.5\textwidth}{!}{% \begin{tabular}{@{}c|ccc|ccc@{}} \toprule \multirow{2}{*}{Proxy Model} & \multicolumn{3}{c|}{Fixed hyperparameters} & \multicolumn{3}{c}{Tuned (Best) results} \\ \cmidrule(l){2-7} & GoogleNet & MobileNet & ResNet18 & GoogleNet & MobileNet & ResNet18 \\ \midrule MMA (ours) & \textbf{78.50} & \textbf{56.12} & \textbf{65.72} & \textbf{79.79} & \textbf{69.64} & \textbf{70.49} \\ GoogleNet & - & 39.16 & 58.38 & - & 54.87 & 65.10 \\ MobileNet & 41.71 & - & 41.70 & 45.76 & - & 46.50 \\ ResNet18 & 70.58 & 52.80 & - & 70.58 & 55.25 & - \\ \bottomrule \end{tabular} } \label{multistep_comp} \end{table} \vspace{-0.2cm} \subsection{Performance Comparison} \vspace{-0.2cm} \begin{figure}[!h] \centering \subfigure[GoogleNet]{\includegraphics[width=0.32\linewidth]{images/onestep_googlenet.png}} \subfigure[MobileNet]{\includegraphics[width=0.32\linewidth]{images/onestep_mobilenet.png}} \subfigure[ResNet]{\includegraphics[width=0.32\linewidth]{images/onestep_resnet.png}} \caption{Single-step attack performances over training epochs. The size of MMA is fixed to $3 \times 4$.} \label{epoch_for_single} \end{figure} \begin{figure}[!h] \centering \subfigure[GoogleNet]{\includegraphics[width=0.32\linewidth]{images/multistep_googlenet.png}} \subfigure[MobileNet]{\includegraphics[width=0.32\linewidth]{images/multistep_mobilenet.png}} \subfigure[ResNet]{\includegraphics[width=0.32\linewidth]{images/multistep_resnet.png}} \caption{Multi-step attack performances over training epochs. The size of MMA is fixed to $3 \times 4$.} \label{epoch_for_multi} \end{figure} \textbf{One-step and multi-step transferability comparison.} To validate the effectiveness of MMA, Table \ref{singlestep_comp} and Table \ref{multistep_comp} report the performance of different proxy model architectures against three black-box models using FGSM and BIM, respectively. For fixed hyperparameters, the epoch and the size\footnote{The $x \times y$ model size denotes a model architecture with $x$ rows and $y$ columns, where each element is a block. In this paper, the standard residual block of ResNet18 is adopted for fair comparisons.} of MMA are fixed to 30 and $3 \times 4$, while the tuned results denote the best performance of MMA over $\{2,3,4,5\}\times\{2,3,4,5\}$ sizes and $\{5,10,\cdots,50\}$ epochs (and the best performance of the other proxy models over $\{5,10,\cdots,50\}$ epochs). Overall, the MMA model architecture consistently yields more transferable AEs. Specifically, for the single-step attack, MMA can notably improve the transferability of AEs by 2.32\% to 17.96\%. 
Furthermore, such improvement is more striking in the multi-step attack setting, where AEs based on MMA achieve a maximum of 79.79\% ASR (from at least 5\% improvement up to 40\% improvement). \par \textbf{Epoch impact.} We are also interested in the impact of the training epoch on the transferability of AEs, and Figures \ref{epoch_for_single}, \ref{epoch_for_multi}, and \ref{acc} show the related results. Firstly, it is observed that the ASR steadily rises in the earlier epochs until it peaks at around the 10th epoch, after which the ASR starts to fluctuate considerably. Meanwhile, the trend of accuracy in Figure \ref{acc} is fairly similar to that of the ASR. In fact, the rapidly rising accuracy indicates that the learned low-level feature extractors of the proxy model increasingly align with those of the black-box models, which benefits the transferability of crafted AEs. Moreover, the weak fluctuation of accuracy in the later epochs ($\geq 10$) suggests that the model starts to learn exclusive high-level features to enhance its performance, which weakens the transferability of AEs. \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{images/accuracy.png} \caption{Accuracy of proxy models with different epochs. The size of MMA is fixed to $3 \times 4$.} \label{acc} \end{figure} \par \textbf{Model size impact.} As shown in the earlier section, the size of MMA is a critical factor in the transferability of the generated AEs, and Figure \ref{height_impact} and Table \ref{width_impact} illustrate the attack performance over varying MMA depths and widths, respectively. As demonstrated in Figure \ref{height_impact}, the transferability of the resulting AEs reaches its crest when the depth increases to 3; subsequently, further increases in depth damage the transferability. This phenomenon is consistent with the earlier deduction that increasing the depth only within a proper range can enhance transferability. In other words, a depth of 3 is the best trade-off point, where the model can learn the low-level cross-model shared features well while keeping the disturbance of high-level exclusive features minimal, in our setting. Likewise, Table \ref{width_impact} suggests the same conclusion. \begin{figure} \centering \subfigure[Single-Step]{\includegraphics[width=0.45\linewidth]{images/single_parallel.png}} \subfigure[Multi-Step]{\includegraphics[width=0.45\linewidth]{images/multi_parallel.png}} \caption{Attack performance of MMA over different depths and heads. The training epoch is fixed to 30.} \label{height_impact} \end{figure} \begin{table}[] \centering \caption{Attack performance of MMA over different widths and heads. 
The training epoch is fixed to 30.} \resizebox{0.40\textwidth}{!}{% \begin{tabular}{@{}c|c|cccccc@{}} \toprule Attack Method & Width & Head 1 & Head 2 & Head 3 & Head 4 & Head 5 & All \\ \midrule \multirow{4}{*}{Single-Step} & 2 & 41.56 & 46.17 & - & - & - & 44.92 \\ & 3 & 42.78 & 47.72 & 48.18 & - & - & 47.26 \\ & 4 & 41.97 & 47.30 & 48.18 & 49.15 & - & 47.58 \\ & 5 & 38.86 & 46.67 & 47.56 & 48.18 & 48.13 & 46.27 \\ \midrule \multirow{4}{*}{Multi-Step} & 2 & 48.74 & 55.77 & - & - & - & 57.55 \\ & 3 & 51.09 & 58.56 & 59.50 & - & - & 61.69 \\ & 4 & 49.73 & 57.08 & 58.77 & 60.61 & - & 62.64 \\ & 5 & 45.08 & 56.17 & 57.96 & 59.59 & 59.19 & 61.87 \\ \bottomrule \end{tabular}} \label{width_impact} \end{table} \begin{figure} \centering \includegraphics[width=0.38\textwidth]{images/time_comparsion.png} \caption{Comparison of costs for launching black-box attacks over different proxy models.} \label{overhead} \end{figure} \par \textbf{Attack cost comparison.} For practical attacks, the overhead of launching attacks is also crucial, and Figure \ref{overhead} shows the cost of launching attacks with different proxy model architectures. Note that the total time of forward and backward propagation measures both the training overhead and the overhead of crafting AEs. In Figure \ref{overhead}, the $3\times 4$ size is the critical point at which the overhead of MMA begins to surpass that of the other baseline models. We note that in this case, the performance of MMA still significantly surpasses the other baseline models (shown in Table \ref{singlestep_comp} and Table \ref{multistep_comp}). Therefore, MMA is comprehensively better than the other model architectures, i.e., it offers higher attack performance at equal or lower cost. \subsection{Adversarial Attack} Adversarial attacks aim to produce adversarial examples that fool the target model by slightly perturbing images, and existing methods can generally be divided into two categories based on the attacker's knowledge: white-box attacks and black-box attacks. \par \textbf{White-box attack.} In the white-box setting, the attacker commonly adopts an optimization method in input space that increases the loss to produce adversarial examples. Such methods are also the foundation of transfer-based attacks, and we briefly review classic white-box attack methods as follows. As the seminal work in this area, FGSM \cite{} perturbs the clean seed images for one step along the sign of the gradient of the loss. The basic iterative method (BIM) \cite{} extends FGSM to multiple steps for crafting adversarial examples. Recognizing that adversarial examples easily become trapped in local minima, projected gradient descent (PGD) \cite{} adds random noise to the seed images to escape local minima. The Carlini and Wagner attack (C\&W) \cite{} designed a loss function that successfully broke several defense methods, e.g., defensive knowledge distillation. \par \textbf{Black-box attack.} We mainly review techniques for increasing the transferability of adversarial examples. In a transfer-based adversarial attack, the attacker launches attacks on a plug-and-play proxy model using white-box attack methods such as FGSM or PGD. Unfortunately, this vanilla method commonly shows low success rates because it overfits the specific features of the proxy model, as revealed by \cite{}. To generate adversarial examples with high transferability, the low-level features shared across diverse models are the critical factor. 
\par From the model perspective, the ensemble attack (EA), proposed by \cite{}, is a straightforward method in which an adversarial instance is generated by ensembling multiple models. Although EA captures more low-level features, it suffers from a huge computational cost. Another model-centric approach is the skip gradient method (SGM) \cite{}, in which a decay factor is applied to the skip connections to force the model to pay more attention to low-level features. \par From the optimizer perspective, \cite{} proposed momentum iterative boosting (MI), showing that incorporating a momentum term into the attack method (i.e., a momentum optimizer) can boost transferability. Further, \cite{} extended MI by replacing the momentum optimizer with the Nesterov optimizer. \par From the data perspective, \cite{} proposed the diverse input (DI) strategy to mitigate the overfitting problem, in which random transformations are applied to the input images at each iteration. Similar to DI, the translation-invariant (TI) method \cite{} suggested that crafted adversarial examples should be insensitive to the discriminative regions of the proxy model. \subsection{Adversarial Defense} To mitigate the threat of adversarial attacks, various defense methods have been proposed. However, the effectiveness of most such methods stems from obfuscated gradients (e.g., defensive knowledge distillation), which are easily broken as shown in \cite{}, with the exception of adversarial training. To date, adversarial training (AT) is believed to be the most robust and reliable method; its basic idea is to feed crafted adversarial examples together with normal examples into the model for training. The study of adversarial training mainly lies in two aspects: increasing the efficiency of AT \cite{} and enhancing the robustness of the model with AT \cite{}.
\section{Introduction} The Beryllium Electron capture in Superconducting Tunnel junctions (BeEST) experiment uses momentum reconstruction of nuclear Electron Capture (EC) decay in $^7$Be to perform a model-independent search for the existence of heavy neutrino mass states \cite{StephanPRL}. For EC decay, the neutrino mass can be accessed by measuring the kinetic energy of the daughter nucleus, \begin{equation} \label{eqn:recoilDaughter} T_D = \frac{Q_{EC}^2-m_\nu^2c^4}{2(Q_{EC}+m_Dc^2)}. \end{equation} Here $Q_{EC}$ is the energy released in the EC decay, $m_\nu$ is the neutrino mass, and $m_D$ is the mass of the daughter atom. The experimental signal shown in Figure \ref{fig:experimentalSpectrum} from the $^7$Be sample has four peaks predicted by the Standard Model (SM), which correspond to the combinations of a K-shell or L-shell electron being captured and the decay proceeding into the ground state or an excited state of $^7$Li. A heavy neutrino signal would appear as a spectrum offset from the active neutrino background at some lower energy and intensity, depending on the mass and mixing angle of the sterile neutrino, respectively. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{Fig1.eps} \hspace{0.02\textwidth} \includegraphics[width=0.45\linewidth]{Fig2.eps} \caption{The measured $^7$Li recoil spectrum (black) shows four peaks for the different $^7$Be decay channels. Fits to the four decay channels and to the observed electron escape tails from the K-Capture peaks are shown (color online). Other fit features not shown. Figure adapted from \cite{StephanPRL} \label{fig:experimentalSpectrum}\vspace*{-6px}} \caption{SRIM simulated ion depth distribution from a 25 keV ion implantation\cite{ZIEGLER20101818} at TRIUMF-ISAC \label{fig:SRIM}} \end{figure} To capture the entire recoil energy, the radioactive $^7$Be is implanted directly into an STJ sensor at 25 keV by the TRIUMF Isotope Separator and Accelerator (ISAC) rare-isotope beam facility in Vancouver, Canada \cite{Dilling2014}. The simulated distribution of ion implantation depths generated by a Stopping and Range of Ions in Matter (SRIM) simulation\cite{ZIEGLER20101818} (Figure \ref{fig:SRIM}) shows that a significant number of the $10^8$ implanted ions reside within only a few nm of the surface of the detector. For those ions, it is expected that the 56 eV Auger electron generated after a K-capture decay could have sufficient energy to escape through the surface of the detector and cause the low-energy tails that we observe in the experimental spectrum in Figure \ref{fig:experimentalSpectrum}. The tails currently limit the sensitivity in the BeEST experiment for low neutrino mass. We are developing a spatially-resolved Monte-Carlo code to characterize these tails and understand the underlying effects. \section{Spatially-resolved Monte-Carlo Simulations of the Energy Relaxation} Our Monte-Carlo simulation of the energy relaxation cascade is based on the Drude model and tracks individual quasiparticles and phonons through the entire process. Each particle travels along a straight line until it interacts, and the two reaction products are emitted in random but opposite directions while conserving energy. Throughout the simulation, the interaction length is determined by an exponential distribution with a mean set by the particle's mean free path at its current energy. The simulation starts with a single electron with an energy $E = 56$ eV corresponding to the Li KLL Auger electron energy. 
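As a minimal illustration of this propagation step (a sketch, not the production code; the energy-dependent mean-free-path function is a placeholder), the free path and the back-to-back emission directions can be sampled as follows:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def propagation_step(pos, direction, energy, mean_free_path):
    # Travel in a straight line to the next interaction: the step is
    # exponentially distributed with the mean free path as its mean.
    new_pos = pos + rng.exponential(mean_free_path(energy)) * direction
    # The two reaction products leave in random but opposite directions;
    # isotropic emission is uniform in cos(theta) and phi.
    cos_t = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t**2)
    d = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return new_pos, d, -d
\end{verbatim}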
The electron initially loses energy by exciting other electrons as it travels through the material. We assume a mean free path taken from reference \cite{ziaja2006} that has been determined by a fit to measurements of low-energy electron ionization ranges. For materials without published low-energy measurements, the mean free path is scaled to account for the different electron density. Electrons continue to lose energy by interactions with other electrons until the mean electron-electron scattering rate falls below the phonon emission rate and energy relaxation by phonon emission starts to dominate. The rate at which an electron with energy $E$ emits a phonon with energy $\Omega$ is determined by the material-dependent electron-phonon coupling strength $\alpha^2$ and the available density of states for phonons $F(\Omega)$ and for electrons $\textnormal{Re}\left[\frac{E}{(E^2-\Delta^2)^{1/2}}\right]$. The total phonon emission rate is calculated by integration over all phonon energies according to \cite{kaplanPhysRevB.14.4854} \begin{equation} \label{eqn:tauP} \tau_P^{-1}(E) = \frac{2\pi}{\hbar Z_1(0)} \int_0^{E-\Delta} d\Omega \alpha^2F(\Omega) \textnormal{Re}\left[ \frac{E-\Omega}{\left( (E-\Omega)^2-\Delta^2 \right)^{1/2}} \right] \left( 1- \frac{\Delta^2}{E(E-\Omega)} \right). \end{equation} Values of $\alpha^2F(\Omega)$ have been measured for many materials \cite{ATLASkhotkevich_yanson_1995}, and values for the renormalization factor $Z_1(0)$ are taken from \cite{kaplanPhysRevB.14.4854}. The phonon emission rate is converted to a mean free path by assuming all electrons move at the Fermi velocity, and the actual interaction length during the simulations is again determined by an exponential distribution with that mean. The phonon energy $\Omega$ is determined by sampling a distribution given by the integrand of Equation \ref{eqn:tauP}, which ensures the original quasiparticle remains above-gap. To save computation time, the electron-energy-dependent phonon energy distribution is tabulated for each $\alpha^2F(\Omega)$ before the main simulation. This allows the simulation to generate a single random integer and look up the associated phonon energy, linearly interpolating between the table steps of $\Delta/100$. Once the phonon energy $\Omega$ is determined, the electron energy $E$ is reduced by the same amount and the propagation directions for both particles are once again randomized. If the emitted phonons have energies $\Omega > 2\Delta$, they break Cooper pairs according to the electron-phonon coupling strength $\alpha^2$ and the density of states available for the two quasiparticles. Integration over all final electron energies gives a pair-breaking rate of \begin{equation} \label{eqn:tauB} \tau_B^{-1}(\Omega) = \frac{4\pi N_F \alpha^2(\Omega)}{\hbar I} \int_\Delta^{\Omega-\Delta} \frac{dE}{(E^2-\Delta^2)^{1/2}} \frac{E(\Omega-E)+\Delta^2}{\left( (\Omega-E)^2-\Delta^2 \right)^{1/2}}, \end{equation} \noindent where $N_F$ is the electron density of states at the Fermi energy in the normal state and $I$ is the ion density of the material \cite{kaplanPhysRevB.14.4854}. We convert this rate to a mean free path assuming phonon propagation at the speed of sound \cite{steinberg1996equation}. The distribution of quasiparticle energies is determined by sampling the integrand of Equation \ref{eqn:tauB}, and the directions of the resulting quasiparticles are again randomized. 
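The tabulated lookup described above amounts to inverse-CDF sampling. The following is a minimal sketch, with a placeholder \texttt{integrand} standing in for the bracketed integrand of Equation \ref{eqn:tauP} (or Equation \ref{eqn:tauB}) at a given particle energy:

\begin{verbatim}
import numpy as np

def build_sampler(integrand, x_min, x_max, n=1000):
    # Tabulate the cumulative distribution of the energy distribution
    # once, before the main simulation, so that the main loop only
    # draws one random number and interpolates between table steps.
    x = np.linspace(x_min, x_max, n)
    pdf = np.maximum(integrand(x), 0.0)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    def sample(rng):
        return np.interp(rng.random(), cdf, x)  # inverse-CDF lookup
    return sample
\end{verbatim}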
Both phonon emission and pair breaking continue until all electrons have relaxed to energies $E < 3\Delta$ and can no longer produce an above-gap phonon, and all phonons have relaxed to energies $\Omega < 2\Delta$, so that the total number of quasiparticles can no longer change. Our simulations go beyond earlier Monte-Carlo simulations \cite{KURAKADO1982275,RANDO1992173,hiller_2001}, in that they include the initial phase of the energy relaxation that is dominated by electron-electron interactions and that they track each individual particle and its position and energy. They also differ from earlier spatial simulations \cite{ZehnderPhysRevB.52.12858}, in that they do not assume a local thermal equilibrium and diffusive energy propagation but follow each particle individually. This is made possible by significant advances in computational power since those publications. The simulations allow us to implement a simple model to describe a suspected signal loss mechanism for the BeEST experiment. In this model, any electron that reaches the STJ surface with an energy above the work function leaves the detector and does not contribute to the recorded energy. This is, of course, a simplification of any real surface, where oxides, adsorbates, and imperfections may modify the work function and introduce other loss mechanisms. Nonetheless, it is a starting point to understand the microscopic origin of details in the response function and can be refined as better experimental data become available. \section{Simulation Results} We first test our code by reproducing the results of earlier Monte-Carlo simulations for different superconductors. Surface effects are excluded by making the detector infinitely large. For each material, $10^5$ events were run, each starting with a single electron with an energy between 1 and 56 eV. As expected \cite{KURAKADO1982275}, the cascade statistics show no dependence on the initial electron energy in this range. We then calculate the average energy $\epsilon \equiv Q/\langle N \rangle$ that is required to produce a single excess quasiparticle and the Fano factor $F \equiv \langle (N-\langle N \rangle)^2 \rangle / \langle N \rangle$ that quantifies the fluctuations in the number of quasiparticles. The results agree well with the published values, supporting the simplification of earlier simulations to neglect electron-electron scattering. The electron-phonon coupling function is known experimentally for Ta, Al, and Nb \cite{ATLASkhotkevich_yanson_1995}, and we assume the form of the $\alpha^2F(\Omega)$ function to be quadratic for Hf to extend the simulations to that material, which has potential for future STJ detectors due to its small energy gap of 0.021 meV. In all cases, the simulations confirm the earlier results of $\epsilon \approx 1.7\Delta$ and $F \approx 0.2$ (Table \ref{tab:comp}). 
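These statistics follow directly from the simulated quasiparticle counts; for a deposited energy $Q$ and one count $N$ per event, a minimal computation is:

\begin{verbatim}
import numpy as np

def cascade_statistics(Q, counts):
    # counts: number of quasiparticles created in each simulated event
    counts = np.asarray(counts, dtype=float)
    mean_N = counts.mean()
    epsilon = Q / mean_N             # epsilon = Q / <N>
    fano = counts.var() / mean_N     # F = <(N - <N>)^2> / <N>
    return epsilon, fano
\end{verbatim}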
\begin{table}[h] \centering \caption{Comparison of $\epsilon$ and Fano factor calculations for Sn, Nb, Ta, Al, and Hf \cite{KURAKADO1982275,RANDO1992173,hiller_2001} \label{tab:comp}} \begin{tabular}{r|cll} Reference & Material & $\epsilon/\Delta$ & F \\ \hline Kurakado\cite{KURAKADO1982275} & Sn & 1.68 & 0.195(1) \\ Rando\cite{RANDO1992173} & Nb & 1.747 & 0.22(1) \\ Hiller\cite{hiller_2001} & Ta & 1.76 & 0.230(5) \\ Hiller\cite{hiller_2001} & Al & 1.71 & 0.216(4) \\ Hiller\cite{hiller_2001} & Nb & 1.71 & 0.214(2) \\ This Work & Ta & 1.78 & 0.233(2) \\ This Work & Al & 1.71 & 0.210(5) \\ This Work & Nb & 1.72 & 0.214(2)\\ This Work & Hf & 1.72 & 0.206(3) \end{tabular} \end{table} To demonstrate the advantages of particle tracking for the BeEST experiment, we simulate the impulse response to electrons from a constant implantation depth in a Ta-based STJ. The electrons have an initial energy of 56 eV, corresponding to the Li KLL Auger electron energy produced after the K-capture decay of $^7$Be, and they are emitted isotropically. Figure \ref{fig:impulse} (left) shows the response of electrons from an implantation depth of 10 nm, broadened by the STJ detector resolution of 2 eV. The escape tail encompasses 35\% of the total events, and its shape roughly follows an Exponentially-Modified Gaussian (EMG) with a decay scale of $6.33 \pm 0.03$ (stat.) eV. For an implantation depth of 20 nm, the tail contains only 9.0\% of the events and decays with a characteristic energy scale of $2.50 \pm 0.01$ (stat.) eV. In both cases, the tail is offset from the primary peak by the 4.5 eV work function of Ta. Interestingly, the tail shows some fine structure roughly 10 eV below the primary peak. This fine structure is due to the small integer number of electrons that escape from the STJ, all of which require at least an energy of 4.5 eV and therefore have an energy distribution with a sharp onset. It remains to be seen if this effect is present in actual detectors where different loss mechanisms with different loss and onset energies are likely to be present. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{Fig3.eps} \caption{Impulse response to a 56 eV Auger electron from a fixed 10 nm implantation depth (left) and from an initial depth distribution derived from the SRIM simulation in Figure \ref{fig:SRIM} (right), broadened to match the detector resolution (color online). Note the deviation from an EMG tail shape caused by 2+ electron escape, particularly at $\sim$10 eV below the primary peak. \label{fig:impulse}} \end{figure} The simulation is repeated for an initial electron depth distribution taken from the $^7$Be implantation into Ta at an energy of 25 keV (Figure \ref{fig:SRIM}). The escape tail is then a convolution of the implantation depth distribution with the depth-varying tail lengths and fractions (Figure \ref{fig:impulse}, right). As a result, it can no longer be fit to a single EMG function. Notably, the fraction of events that escape with a large percentage of the initial electron energy is greater than an EMG tail would predict, due to the increased fraction of events $\lesssim10$ nm from the surface of the STJ. Above 20 eV, the tail can be fit with the sum of two EMG functions, with characteristic energy scales of $1.62 \pm 0.01$ (stat.) eV and $8.7 \pm 0.1$ (stat.) eV, centered at 51.4 eV and 46.9 eV, respectively. The fraction of events in the tail is found to be 4.9\% of the total, reflecting the importance of events close to the surface.
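A hedged sketch of such a two-component EMG fit using standard SciPy routines is shown below; the histogram arrays \texttt{energies} and \texttt{counts}, the mirroring convention for the low-energy tail, and the starting values (with $\sigma$ from the 2 eV FWHM resolution) are our assumptions, not the analysis code used for the figures:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

def emg_low(E, A, mu, sigma, tau_E):
    # EMG mirrored about mu so the exponential tail (decay scale tau_E,
    # in eV) extends toward lower energies.
    return A * exponnorm.pdf(2.0 * mu - E, tau_E / sigma, loc=mu, scale=sigma)

def two_emg(E, A1, mu1, tau1, A2, mu2, tau2, sigma):
    return emg_low(E, A1, mu1, sigma, tau1) + emg_low(E, A2, mu2, sigma, tau2)

# energies, counts: histogram of the simulated tail above 20 eV;
# starting values follow the numbers quoted in the text.
p0 = [1.0, 51.4, 1.6, 0.3, 46.9, 8.7, 2.0 / 2.355]
popt, pcov = curve_fit(two_emg, energies, counts, p0=p0)
\end{verbatim}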
\section{Discussion} The simulations suggest that energy deposition close to the detector surface can generate low-energy tails in the response function due to electron escape during the initial relaxation cascade. In some cases, this tail has a roughly exponential shape and can be approximated by an EMG function. This supports earlier analyses where such a shape has been observed \cite{ONeil2020,meganLineShape}. In other cases, details of the escape tail deviate from a simple EMG, especially when the source is distributed over varying depths below the surface. An interesting result is the observation of fine structure in the simulated tails. It is due to the small integer number of electrons that escape from the surface and depends on the assumption that there is a single signal loss mechanism with a sharp low-energy cutoff. A single, well-defined work function is unlikely in actual devices due to surface imperfections and oxides, and it remains to be seen if the resulting fine structure will be observed experimentally. In the BeEST sterile neutrino experiment, the source distribution is given by the implantation profile of $^7$Be in Ta-based STJs at an energy of 25 keV (Figure \ref{fig:SRIM}), and the Li KLL Auger electrons produced after a K-capture decay of $^7$Be are emitted isotropically. Under these conditions, our simulations predict that the escape tail contains 4.9\% of the total number of events. Given the uncertainties in the mean free path and in the applicability of the Drude model, we consider this fraction of 4.9\% to be consistent with the observed 6.7\% low-energy tail from the first phase of the BeEST experiment (Figure \ref{fig:experimentalSpectrum}) \cite{SpencerPRL}. In our simulations, the characteristic energy scale of the more intense single-electron escape tail is $1.62 \pm 0.01$ (stat.) eV. The longer tail with an energy scale of $8.7 \pm 0.1$ (stat.) eV will only be visible in high-statistics low-background spectra. These scales are lower than those in earlier experiments with AuBi-TES and HgTe-Si microcalorimeters, which range from 10 to 25 eV \cite{ONeil2020,meganLineShape}, but in the same range as the 4.3 eV scale observed in the BeEST experiment \cite{SpencerPRL}. This difference likely reflects the lower electron energies involved in the BeEST experiment, which have shorter mean free paths and are therefore more easily absorbed in the detector \cite{ziaja2006}. Future extensions of our simulations will investigate which parameters need to be adjusted to match the experimental data more closely. \section{Conclusions} We are developing spatially-resolved Monte-Carlo simulations of the energy relaxation cascade in superconducting tunnel junction quantum sensors. They include the first stage of the cascade that is dominated by electron-electron scattering. The simulations reproduce the established values of $\epsilon \approx 1.7\Delta$ for the average energy to generate a quasiparticle and the Fano factor $F \approx 0.2$ for different materials. Our initial simulations assume electron escape at the sensor surface for energies above the work function as the only signal loss mechanism. In some cases, they generate an exponentially decaying tail below the primary peak whose shape and magnitude are consistent with earlier experiments.
In others, they predict fine structure in the tail due to the small discrete number of emitted electrons. The simulations will be refined as better experimental data become available to assess whether the fine structure can mimic a signal in the BeEST sterile neutrino search. \begin{acknowledgements} This work was supported by the DOE-SC, Office of Nuclear Physics under grant DE-SC0021245 and by the LLNL Laboratory Directed Research and Development program through Grants 19-FS-027 and 20-LW-006. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. Data will be made available upon reasonable request. \end{acknowledgements}
\section{Introduction} Physical event detection, such as the detection of extreme weather events or traffic accidents, has long been the domain of static event processors operating on numeric sensor data, or of human actors manually identifying event types. The emergence of big data and the associated data processing and analytics tools and systems has led to several applications in large-scale event and trend detection in the streaming domain \cite{dis_mgmt_hagen,dis_mgmt_imran,dis_mgmt_nagy,dis_mgmt_sakaki,disease_mgmt_hirose,bursty_dis_mgmt,flood_detection}. However, it is important to note that many of these works are a form of retrospective analysis, as opposed to true \textit{real-time} event detection, since they perform analyses on cleaned and processed data within a short time frame in the past, with the assumption that their approaches are sustainable and will continue to function over time. This is an unrealistic assumption due to the concept drift phenomenon, where real-world data exhibits continuous changes in its distribution. The concept drift phenomenon has been well documented \cite{conc_drift_active_shan,conc_drift_almeida,conc_drift_costa,conc_drift_demello,conc_drift_windows,gama_drift_a,gama_drift_c,gft_fail_b,gft_fail_a,gft_fail_c}. In effect, changes in data distribution render machine learning algorithms obsolete over time, and classification models require constant fine-tuning for effective performance. As such, existing big data analytics have focused on larger-scale events or trend analysis where learning models can be updated with human feedback. Consequently, most applications for streaming data rely on non-adversarial assumptions about their data content: \begin{itemize} \item the streaming data is of high quality, with little to no noise; in this case, human labeling is easier and weak supervision~\cite{weak_supervision} using trend analysis or statistical distributions can be exploited to create new labeled data \item the concept drift direction, type, and scale are known; in effect, some approaches presuppose knowledge of dataset shift, which is not a realistic real-world assumption \item there is immediate and proportional feedback available to perform model correction; the streaming domain's data volume is too large to allow proportional feedback \item the streaming data exhibits strong-signal characteristics, where the desired event's signals (or features) are well separated from irrelevant signals; in our case, we focus on the weak-signal case where the relevant data is dwarfed by irrelevant data and noise. \end{itemize} We present a system for adapting to real-world evolving data that uses a combination of \textit{corroborative sources} and \textit{probabilistic supporting sources} to perform real-time event detection and avoids deterioration under noisy, drifting conditions. We demonstrate our system in a case study with disaster detection as the physical event of choice; our system is able to detect events under a variety of categories, such as landslides, flooding, wildfires, and earthquakes, in real-time. Additionally, our system LITMUS\footnote{A demo is available at: https://grait-dm.gatech.edu/demo-multi-source-integration/} is drift-adaptive and continuously updates itself against adversarial drift without human intervention.
Specifically, we address the closed-dataset assumptions described above: \begin{itemize} \item we rely on low-quality streaming data from social networks such as Twitter and Facebook, which consist primarily of noisy short-text streams \cite{short_text}\cite{short_text_sriram} with large amounts of misinformation and disinformation \item we do not pre-suppose drift; instead we assume unknown and unbounded concept drift due in part to lexical diffusion \cite{lex_diff} and random shifts in user behavior \item we do not rely on human feedback for our system's learning model updates due to its infeasibility in the streaming domain; manual labeling of even 0.01\% of streaming web data from Twitter ($>$500M samples per day) would require more than 20 workers to work continuously for 8 hours each day \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{figures/tweethisto} \caption{Most events have only one post associated with them. More than 95\% of events are detected with fewer than 10 posts per event. Event detection confidence is built over time as more posts are discovered.} \label{fig:tweethisto} \end{figure*} \item we demonstrate efficacy with weak-signal events with an abundance of irrelevant data and noise: our disaster dataset is an ongoing collection of live social and news feeds, and even with keyword search and filtering on disaster type, almost 94\% of data are drifting noise with time-varying characteristics that must be eliminated with fast-updating learning models \end{itemize} We make the distinction between \textit{weak-signal} and \textit{strong-signal} events as follows: strong-signal events have signals (or features) that are easily separable; for example, the earthquake detection approach in \cite{dis_mgmt_sakaki} relies on the fact that each earthquake is followed by several hundreds or thousands of tweets. Similarly, Google Flu Trends, another example of event detection that deteriorated due to concept drift~\cite{assed,gft_fail_a,gft_fail_b,gft_fail_c}, focused on flu detection by matching search terms across the entire United States. We focus on weak-signal events such as landslides, flooding, and wildfires: these events have the same real-world impact in terms of damages and costs; however, they are numerous, and each instance of an event is lost in streaming noise. We show in Figure~\ref{fig:tweethisto} the relation between our event detection and the number of tweets per event: most events are associated with a single tweet or two tweets, a far cry from the hundreds or millions of social sensors used by \cite{dis_mgmt_sakaki} and \cite{gft_fail_c}. We perform detection in these noisy and drifting conditions, where our approach outperforms static models by over 350\% in event detection under drifting conditions. We present the following contributions: \begin{itemize} \item We propose a system for end-to-end event detection using a combination of \textit{corroborative sources} and \textit{probabilistic supporting sources} \item We implement a collaborative teamed-classifier approach for physical event detection that performs continuous learning and adaptation without human intervention. Our approach is able to detect concept drift and perform the appropriate training data generation, labeling, and model fine-tuning to prevent classifier deterioration without any bottleneck from human labelers or fine-tuners \item We demonstrate the efficacy of our system on weak-signal events with significant amounts of noise and concept drift.
A demo of LITMUS is available at \textit{https://grait-dm.gatech.edu/demo-multi-source-integration/} \end{itemize} \section{Related Work} \subsection{Concept Drift} Recent approaches for drift adaptation usually use synthetic data to validate procedures \cite{conc_drift_active_shan,conc_drift_almeida,conc_drift_costa,conc_drift_demello}. Synthetically generated data is perturbed to include specific, known forms of drift, such as gradual, cyclic, or sudden drift. Under these constraints, there exist several mechanisms for concept drift adaptation with physical sensors containing numeric data. \textbf{Windowing} is a common technique for adaptation that uses multiple sliding windows over time. This approach uses several data memories, or windows of different lengths over an incoming data stream; each window has its own classifier. The \textbf{SAM-KNN} algorithm uses a nearest-neighbor approach to select the window closest to a new data sample for classification \cite{samknn}. Nested windows are used in \cite{conc_drift_windows} to obtain multiple training sets over the same data that each exclude a region of the data space. \textbf{Adaptive Random Forests} augment the traditional random forest classifier with a built-in explicit drift detector (requiring labels). Drift detection leads to forest pruning in the ensemble to remove trees that have poor performance on the drifted data. The pruned forest is subsequently updated with new weak classifiers to complete the ensemble \cite{arf}. The \textbf{Knowledge Maximized Ensemble} (KME) uses both off-the-shelf and its own drift detectors to recognize multiple forms of drift simultaneously. Models are updated when enough training data is collected and removed if they perform poorly on subsequent drifted data \cite{kme}. Most methods approach concept drift with an eye towards detection and subsequent normalization. Updating or rebuilding a machine learning model facing drift involves two bottlenecks in the classification pipeline: data labeling and model training; data labeling is the greater challenge due to its oracle requirements. Such wait-and-see models, which perform corrections once errors have been detected, entail periodic performance degradation before they are corrected with model updates; this may be infeasible in mission-critical applications. Active learning strategies counteract this bottleneck in part \cite{conc_drift_active_shan}; the trade-off is between highly accurate models and clustered, knowledge-agnostic representations that consider data purely by distance, without subject-matter expertise. \subsection{Physical Event Detection} Earthquake detection using social sensors was initially proposed in \cite{dis_mgmt_sakaki}. There have also been attempts to develop physical event detectors for other types of disasters, including flooding \cite{flood_detection}, flu \cite{gft_fail_a, gft_fail_b, gft_fail_c}, infectious diseases \cite{dis_mgmt_hagen}, and landslides \cite{litmus_a}. In most cases, these works focus on large-scale disasters or health crises, such as earthquakes, hurricanes \cite{dis_mgmt_thom}, and influenza, which can be easily verified and have abundant reputable data. Our application is general purpose, as it can handle small-scale disasters such as landslides as well as large-scale disasters. The existing approaches also assume data without concept drift. However, systems built on such assumptions, such as Google Flu Trends (GFT) \cite{gft_fail_a,gft_fail_b,gft_fail_c}, degrade in the long term.
GFT was originally created to complement the CDC's flu tracking efforts by identifying trends in the flu season~\cite{gft_fail_c}. Failure to account for seasonal changes in event characteristics led to increasing errors over the years, and by 2013, GFT missed the trends by 140\%. This error has been attributed to the exclusion of new data from the CDC, changes in the underlying search data distribution itself, and cyclical data artifacts~\cite{gft_fail_c,gft_fail_a,gft_fail_b}. \section{Data} Our system uses a combination of \textit{corroborative sources} and \textit{probabilistic supporting sources}. We first make the distinction between the two before describing our data collection process. \paragraph{Corroborative source} We define a corroborative source as a dedicated physical or web sensor that provides annotated physical event information that can be crawled or scraped. Such physical sensor data is often structured, e.g. government agency reports about disasters. Web-based corroborative sources include news articles, which are often tagged with keywords and, due to their fact-based nature, inherently include misinformation checking. However, corroborative source latency in information availability makes them unsuited for real-time physical event detection; since corroborative sources provide event confirmation only after their own corroboration, there are delays in information dissemination. Such sources also do not have global or dense coverage due to funding limits. \paragraph{Probabilistic supporting source} We consider any source without corroboration a probabilistic supporting source due to the inherent uncertainty. These correspond to classifier predictions in a traditional ML environment. In our approach, we use an array of probabilistic supporting sources to more confidently predict events in the absence of corroborative sources. Additionally, our system monitors these probabilistic supporting sources continuously for performance deterioration due to drift, and performs classifier updates and fine-tuning using data from corroborative sources. As such, we call this combination of two types of sources the \textbf{teamed-classifier} approach. \subsection{Corroborative Sources} In our case study of disaster detection, the set of corroborative sources is extensible based on the domain. We use a combination of physical and web sensors as our corroborative sources. As an example of corroborative source latency and limited coverage, the LITMUS system previously relied primarily on USGS landslide reports~\cite{litmus_a}. Since USGS no longer provides any landslide reports for such disasters, the LITMUS system must compensate with other corroborative sources such as rainfall and earthquake data from USGS. We also use NOAA landslide predictions that are provided in high-rainfall regions. Since these physical sensors do not have dense, global coverage, we also use web-based corroborative sources such as news articles crawled from aggregators (Google News and Bing News APIs). We adapt the news streaming and processing approach from \cite{assed} for data collection. \subsection{Probabilistic Supporting Sources} Our probabilistic supporting sources are a group of temporally evolving machine learning classifiers trained to classify events from short-text streams from Twitter, Facebook, and other social networks.
In contrast to an ensemble approach, we only use the classifiers that are most effective on a given data item; due to concept drift, we keep a history of classifiers at different points of training over time, as well as each classifier's performance, to better identify high-quality classifiers at any time. The raw social sensor data is streamed from Twitter and Facebook, with web crawlers leveraged for the latter to improve retrieval efficiency. LITMUS performs metadata processing using the streaming and extraction approach in \cite{assed}. We note that even with keyword filtering (e.g. \textit{landslide}, \textit{mudslide}, \textit{rockslide} for landslides, and \textit{flood} and \textit{rain} for flooding), over 90\% of the streamed data are not relevant to our desired disaster events. In each case, there is linguistic noise that hides the true events: \begin{itemize} \item \textit{Landslide} refers to both the disaster event and election events. Additionally, there is a song with the same name by the band \textit{Fleetwood Mac}. \item \textit{Mudslide} refers to both the disaster event and an alcoholic cream drink \item \textit{Flood} is used to describe flooding events alongside more idiomatic usage \item \textit{Rain} is similarly used for heavy rain events, light rain events, and idiomatic usage, the latter of which is more common (e.g. \textit{raining on their parade}) \end{itemize} We show some examples of correctly detected events from streaming web data in Figures~\ref{fig:floodingtweet} and \ref{fig:landslidetweet}. We also show examples of false positives due to linguistic noise in Figure~\ref{fig:badtweet}. Additionally, as we showed in Figure~\ref{fig:tweethisto}, most events have only one or two tweets or Facebook posts associated with them, requiring stronger detection capabilities for real-time event detection than the retrospective trend-detection approaches in \cite{dis_mgmt_sakaki}. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/floodingtweet} \caption{Flooding events detected by LITMUS} \label{fig:floodingtweet} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/landslidetweet} \caption{Landslide events detected by LITMUS} \label{fig:landslidetweet} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/badtweet} \caption{False positive event detection due to linguistic noise. Over time, LITMUS learns to adapt to such noise.} \label{fig:badtweet} \end{figure} \section{System Architecture} We first describe our general system overview. We will then cover technical details about the integration of corroborative and probabilistic sources, as well as the unsupervised drift detection and adaptation algorithms we use. Finally, we will cover our system implementation. \subsection{System Overview} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/systemoverview} \caption{General system overview for corroborative and probabilistic source integration. We perform classification on corroborative sources using their own annotations and store them in a Corroborative Events database. Streaming data is classified into relevant and irrelevant classes using teamed classifiers that combine corroborative events and probabilistic supporting sources operating on the streaming data. The detected events are stored in the Integrated Knowledgebase.} \label{fig:systemoverview} \end{figure} We show a general system overview in Figure~\ref{fig:systemoverview}.
The \textbf{teamed classifier} is a dynamically constructed, weighted ensemble of the classifiers that are most relevant for a given data point. Classifiers consist of machine learning models trained on subsets of streaming data collected since system inception, as well as spatio-temporal filters based on corroborative events. The latter uses the following intuition: if an event is detected from corroborative sources, then any streaming data point that exists in the same spatio-temporal coordinates as the corroborative event can be automatically labeled as a relevant or, conversely, an irrelevant data point (see Figure~\ref{fig:spatiotemporal}). \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/spatiotemporal} \caption{We can automatically label some of the streaming data points using corroborative events: streaming data points (small circles) in the same spatio-temporal coordinates as a corroborative event (green zones) get the same label as the corroborative event. For example, if we get corroboration of a landslide event in Austin, Texas, then any tweet or Facebook post during the same time and location mentioning landslides is more likely about the disaster event as opposed to election landslides or the song \textit{Landslide}.} \label{fig:spatiotemporal} \end{figure} \subsubsection{Corroborative Classification} We perform corroborative classification using annotations provided by the corroborative sources themselves. As an example, NOAA provides landslide predictions in high-rainfall regions. We take high-probability predictions as ground truth for future landslide events, since NOAA predictions are high-confidence sources (as defined in \cite{assed}) and can corroborate events detected in streaming data. Similarly, NOAA rainfall data covers flooding during extreme climate events. We use these annotations as ground-truth corroborative events. \subsubsection{Streaming Data Classification} We use an array of machine learning models for streaming data classification. Given a short-text data point from the raw stream, we select the top $k$ most relevant classifiers from the set of all classifiers stored in LITMUS; we empirically set $k=5$. Each classifier is weighted based on its relevancy to the data point (we cover this weighting scheme in the next section), and the dynamically created ensemble is used to classify the streaming data point. \subsection{Teamed Classifier Selection} We determine the relevancy of a classifier to a data point using its performance on similar data points. This requires two steps: (i) drift detection to identify changes in the data distribution, and therefore the distance between data points, and (ii) classifier generation and selection in case of drift detection. In the second step, we perform classifier generation if the drift has not been seen before, e.g. gradual or flash drift as described in \cite{gama_drift_a}. We perform classifier selection if the drift has been seen before, as in the case of cyclic or periodic drift \cite{gama_drift_a}. \subsubsection{Drift detection} We considered recent works on novelty detection or out-of-distribution detection \cite{oodd1,oodd2,oodd3,oodd4}; in our weak-signal setting, where most of the samples are noise, such approaches are not suitable. We also need to address virtual concept drift, where the distribution of both relevant and irrelevant points changes without changing the decision boundary itself.
Under virtual drift, it is sufficient to fine-tune a classifier instead of rebuilding it, which is a more expensive step. Since concept drift affects the underlying data distribution, our drift detection approach uses the Kullback-Leibler divergence test on two distribution windows: the set of data points a classifier is trained on and the current streaming window of incoming data points. Comparison of the two distributions yields the distribution divergence metric, which we use as a measure of distance between the incoming data and a classifier. We perform the comparison on a high-density band of points for each window, with the band defined as follows: let $D'$ be the set of points $\{x_1,x_2,\ldots,x_N\}$ in window $w'$, with mean (or centroid) $\bar{D'}=N^{-1}\sum_{i=1}^{N} x_i$. Then, let $f_D(x)$ be the continuous density function of any $D$, where we estimate it on the distribution of distances of any point in $D$ from its centroid $\bar{D}$, normalized to $[0,1]$. Then, the $\Delta$-density band of $D$, with $\Delta\in[0,1]$, is a band around the centroid that contains $\Delta$ probability mass of the data window; e.g. if $\Delta=0.6$, then the $\Delta$-band contains 60\% of the points in $D$. We consider this as a banded region $[\delta_l, \delta_h]$, where $0\leq\delta_l < \delta_h\leq 1$, and calculate the region bounds as: \begin{equation} \int_{\delta_l}^{\delta_h}f_D(x)dx=\Delta \end{equation} The intuition for using bands, as opposed to a spherical region, is related to the curse of dimensionality in high-dimensional data. Note that for a set of points in high-dimensional space, the volume of the inscribed hypersphere tends to zero\footnote{$V(d)=0.5^d\pi^{0.5d}/\Gamma(0.5d + 1)$}, so the majority of these points occur near the corners of the enclosing hypercube. The $\Delta$-band then becomes a region around the centroid, where the hyperspherical region of radius $\delta_l$ (the lower bound of the band) around the centroid is mostly empty. We approximate the $\Delta$-band of our data using $\mathcal{N}(\mu,\sigma^2)$, where $\mu, \sigma$ can be estimated based on the empirical observations in Figure~\ref{fig:rhobands}. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/rhobands} \caption{We show the difference in distance distributions across multiple windows. Each window is a set of 3000 data points that are relevant to disasters. The dark blue bars represent the window under consideration, while the pink bars are the prior window. In each case, there is enough divergence between points to constitute concept drift. Additionally, the distribution itself can be estimated using $\mathcal{N}(\mu,\sigma^2)$} \label{fig:rhobands} \end{figure} Then, with the $\Delta$-bands of two windows (current streaming window and classifier window), we can measure the relevancy of a classifier with the Kullback-Leibler (KL) divergence as follows. The standard KL metric is shown in Eq~\ref{eq:klmetric}, where the prior $P_A$ and posterior $P_B$ each model the data point $x_i$. The classifier window $w_C$ is the prior, and the current streaming window $w_S$ is the posterior. \begin{equation} D_{KL}(P_A||P_B)=-\sum_{x_i\in X}P_A(x_i)\log(P_B(x_i)/P_A(x_i)) \label{eq:klmetric} \end{equation} Then, let $x'_A=d(x_i,C_A)$, where $d$ is a distance metric and $C_A$ is the centroid of $A$. We obtain $x'_A$ and $x'_B$ from the prior and live distributions of $w_C$ and $w_S$, respectively, where each $x'$ is the distance of the data point from the respective centroid. We make the approximation $P_A(x')=\min(P(x'))$ if $P_A(x')=0$ to avoid the KL discontinuity at $P_A(x')=0$. Each data point is a short-text string; we use word2vec to encode the string into $\mathbb{R}^{300}$, and use cosine similarity as our distance metric since it is a more effective metric for word2vec embeddings.
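As an illustrative sketch of this window comparison (the names, the quantile-based band estimate, and the minimum-probability smoothing below are our simplifications for exposition, not the exact production implementation), the divergence between a classifier window and the current streaming window can be computed as:
\begin{verbatim}
import numpy as np

def cosine_distance(X, c):
    # distance of each row of X from centroid c, normalized to [0, 1]
    sims = X @ c / (np.linalg.norm(X, axis=1) * np.linalg.norm(c))
    return (1.0 - sims) / 2.0

def band_histogram(X, delta=0.6, bins=50):
    # Discretized density of centroid distances inside the Delta-band;
    # the central delta-quantile region stands in for [delta_l, delta_h].
    dist = cosine_distance(X, X.mean(axis=0))
    lo, hi = np.quantile(dist, [(1 - delta) / 2, (1 + delta) / 2])
    p, _ = np.histogram(dist[(dist >= lo) & (dist <= hi)],
                        bins=bins, range=(0.0, 1.0))
    p = p.astype(float)
    p[p == 0] = p[p > 0].min()   # stand-in for P_A(x') = min(P(x'))
    return p / p.sum()

def kl_divergence(p_classifier, p_stream):
    # classifier window as prior P_A, streaming window as posterior P_B
    return float(np.sum(p_classifier * np.log(p_classifier / p_stream)))

# drift if kl_divergence(band_histogram(W_C), band_histogram(W_S)) > threshold
\end{verbatim}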
We can then use the divergence as a drift detector and classifier evaluator. We allow a smoothing period between windows to incorporate the new stream after drift detection. For each data point, we add it to the current window and update the window's $\Delta$-band. We then compare it to the window during the smoothing period to measure the divergence, where significant divergence indicates drift is occurring. In the case of drift, we create a new window and generate and update classifiers. In the absence of drift, we measure the distance between the data point and the centroids of all classifiers to obtain the top $k$ classifiers for dynamic ensemble creation. \subsubsection{Classifier Generation and Update} If drift is detected, we require changes to the classifiers that model the drifting data. We also generate new classifiers for the drifted data to avoid relying only on old classifiers that carry outdated knowledge in their parameters. We can model any learning model or classifier $M$ as a mapping $f_M:\mathcal{M}\rightarrow\mathcal{Y}$ from the training and testing data ($\mathcal{M}$) to their respective class labels $\mathcal{Y}$. Here each $\mathcal{M}$ specifies a region in the data space. Traditional learning methods have characterized $\mathcal{M}$ as representative of the universe of data points (see the \textbf{Related Work} section). This assumption is not suitable for streaming data with noise and drift, where the training data distribution in one window may be different from the distribution in another window (see Figure~\ref{fig:rhobands}), since any window contains only a subset of the data. We address this assumption with our detection and generation approach, where we build a continuously evolving set of mappings, or classifiers, from the data space to labels. With drift detection, whenever the distribution of a region in the data space changes, we change the classifier associated with it. If a new distribution is discovered (i.e. new points do not belong in any $\Delta$-band in the mappings database), then we generate a new mapping for that region. We use the following algorithm for classifier generation and update.
\begin{algorithm}[h] \caption{Updating existing classifiers and generating new classifiers} \label{alg:virtualdrift} \begin{algorithmic}[1] \STATE $\mathtt{Parameters}$: $d$ (the distance metric, e.g. $\mathtt{CosineSimilarity}$); $k$-model selection policy $S_k$; $\lambda$ \STATE $\mathtt{Inputs}$: $N$ Current models $\{M\}^N$, new data point $x_i$ \STATE $\{M\}^k = S_k(x_i)$ \STATE $mem\_xi = \mathtt{False}$ \FOR{$M_j \in\{M\}^k$} \STATE \COMMENT{$D_{M_j}$ is the training data of model $M_j$, with $\Delta$-band $[\delta_l^j, \delta_h^j]$} \STATE $d'_{x_i} = d(x_i, D_C^j)$ \COMMENT{Distance to centroid} \STATE \COMMENT{Check if inside $\Delta$-band} \IF {$\delta_l^j < d'_{x_i} < \delta_h^j$} \STATE $\mathcal{D}_{M_j} = \mathcal{D}_{M_j} \cup x_i$ \COMMENT{Add point to model's data} \STATE $\mathtt{Update(}M_j\mathtt{)}$ \COMMENT{Update classifier if $x_i$ is labeled by corroborative sources} \STATE $mem\_xi = \mathtt{True}$ \COMMENT{Flag to indicate data point has an associated region} \ENDIF \IF {$\delta_h^j \leq d'_{x_i} < \lambda$} \STATE $\mathcal{D}_{M_j} = \mathcal{D}_{M_j} \cup x_i$ \ENDIF \ENDFOR \IF {\NOT $mem\_xi$} \STATE $D_G = D_G\cup x_i$ \ENDIF \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:virtualdrift} covers model updates. The parameters are the distance metric $d$, a model selection policy $S_k$, and a generalization parameter $\lambda$. The model selection policy $S_k:x_i\rightarrow \{M\}^k$ selects the $k$ best models to classify $x_i$. Some examples of an ensemble selection policy include: the set of all \textit{recent} models (where \textit{recent} indicates models created in the prior drift detection update step); high-performing models over the entire set of models; high-performing \textit{recent} models; $k$-nearest models based on the distance between the data point and the centroids of the models' data windows; or nearest $\Delta$-band models, where only models whose $\Delta$-band contains $x_i$ are considered. For each point, we identify the $\Delta$-band the point belongs to (Lines 7-9). If an $x_i$ does not belong to a $\Delta$-band, we check if it belongs in a generalization band around the $\Delta$-band in Line 14, where we consider the region $[\delta_h^j,\lambda]$ just outside the $\Delta$-band. If an $x_i$ does not belong in any $\Delta$-band, we add it to the general memory $\mathcal{D}_G$ in Line 19, which we use to train new classifiers. The general memory consists of regions of the data space not yet seen in any existing model's data; it is used to create new classifiers when drift is detected using Eq~\ref{eq:klmetric}. When drift is detected, we address it for each model by using the data in its respective data memory (updated in Lines 10 and 15).
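The following sketch mirrors Algorithm~\ref{alg:virtualdrift} in Python; the model attributes and the labeling hook are hypothetical names used for exposition, not the deployed code:
\begin{verbatim}
def route_data_point(x, models, select_k, d, lam, general_memory):
    # Sketch of Algorithm 1: attach x to every selected model whose
    # Delta-band (or generalization band [delta_h, lam)) contains it;
    # otherwise x goes to the general memory that seeds new classifiers.
    has_region = False
    for M in select_k(x, models):          # k-model selection policy S_k
        dist = d(x, M.centroid)
        if M.delta_lo < dist < M.delta_hi:  # inside the Delta-band
            M.data.append(x)
            if x.label is not None:         # corroboratively labeled points
                M.update(x)                 # trigger a fine-tuning step
            has_region = True
        elif M.delta_hi <= dist < lam:      # just outside: remember, no update
            M.data.append(x)
    if not has_region:
        general_memory.append(x)            # D_G: unseen region of data space
    return has_region
\end{verbatim}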
\subsection{Implementation} We now describe the implementation of the teamed-classifier drift adaptive system, as shown in Figure~\ref{fig:localsystem}. The drift adaptive system accepts two input streams: the real-time data stream (streaming data in Figure~\ref{fig:systemoverview}) and the delayed feedback labeled stream (corroborative events in Figure~\ref{fig:systemoverview}). \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/localsystem} \caption{The drift adaptive system takes streaming data and labeled data, the latter being corroborative events obtained from corroborative sources. We use the corroborative events to evaluate existing classifiers and fine-tune them continuously. The streaming data is used for real-time dense global prediction.} \label{fig:localsystem} \end{figure} \paragraph{Real-time stream} We use the classifier selection and drift detection approach described in the previous section for the real-time stream to deliver predictions: \begin{itemize} \item \textbf{Model selection}: We examine several model selection policies and use the $k$-nearest approach to select the models whose centroids are closest to a given data point. \item \textbf{Ensemble creation}: The $k$ models selected in the prior step are weighted on their performance $\omega$ on their datasets, multiplied by the distance: $w_k=\omega\cdot d(x_i,D_{M_k})$. The weights are normalized using the softmax function. \item \textbf{Prediction}: The dynamically generated ensemble's predictions are sent to the \textbf{Integrated Knowledgebase} in Figure~\ref{fig:systemoverview}. \end{itemize} Simultaneously, we continuously perform classifier maintenance using delayed feedback from corroborative events (we call this delayed feedback since corroborative events arrive far more slowly than the real-time stream). \begin{itemize} \item \textbf{Evaluate}: We can retroactively assign labels to real-time data points using corroborative events with the spatio-temporal assignment approach from Figure~\ref{fig:spatiotemporal} (see the sketch after this list). We then use this label assignment to evaluate classifiers on performance. \item \textbf{Drift Detection}: Performance degradation entails explicit concept drift detection \cite{gama_drift_a}. In conjunction with the unsupervised drift detection with the KL metric in Eq~\ref{eq:klmetric}, we identify drift windows to generate new data memories (see Algorithm~\ref{alg:virtualdrift}). \item \textbf{Learning/Updates}: Drift in existing data memories entails model updates and new model generation on the corroboratively labeled data. New data regions (the general memory $\mathcal{D}_G$) discovered in the previous window are used exclusively for new model creation, since they have no existing models to update. \end{itemize}
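To make the \textbf{Evaluate} step concrete, a minimal sketch of the spatio-temporal label assignment of Figure~\ref{fig:spatiotemporal} follows; the 50 km radius and 48 hour window are illustrative placeholders, not the deployed thresholds:
\begin{verbatim}
import math

def haversine_km(a, b):
    la1, lo1 = map(math.radians, a)
    la2, lo2 = map(math.radians, b)
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2.0 * 6371.0 * math.asin(math.sqrt(h))

def corroborative_label(post, events, radius_km=50.0, window_h=48.0):
    # Assign a streaming post the label of any corroborated event whose
    # spatio-temporal zone contains the post; None means "still unlabeled".
    for ev in events:
        close = haversine_km(post.latlon, ev.latlon) <= radius_km
        recent = abs((post.time - ev.time).total_seconds()) <= window_h * 3600
        if close and recent:
            return ev.label
    return None
\end{verbatim}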
\section{Evaluation} We will first describe some drift characteristics of our data. Then we will cover further system implementation and neural network classifier details. We will briefly cover accuracy results on individual windows. Finally, we will describe our end-to-end system, with a demo available at \textit{https://grait-dm.gatech.edu/demo-multi-source-integration/}. \subsection{Drift Characteristics} Since our physical event detection data is a raw real-time stream from social sensor sites, we face significant noise and drift in our data. If we rely on authoritative corroboration for the streaming data, we lose valuable time in event detection; as such, our system must be capable of adapting to such noise and drift continuously without human intervention. We have covered the approach in the prior section. We show the almost continuous drift for our data in Figure~\ref{fig:acrosswindows}. Each window is 3000 data points, and we show only a subset of windows. For each window, we use t-SNE to embed both the previous window and the current window into 2D, and display them on the same plot. Some windows (Windows 8 and 9) do not show significant drift, with the current window's data occupying mostly the same space as the previous window's. However, other windows (Windows 1, 4, 5, 6, and 7) show more significant drift. Each point is a word2vec embedding of the associated string from a social sensor. In each window image, blue points are positive samples in the current window, red points are negative samples in the current window, green points are positive samples in the previous window, and yellow points are negative samples in the previous window. We have noted the difficulty of automatically labeling negative samples due to coverage considerations. As such, there is a lower density of negative samples throughout compared to positive samples, creating a class-imbalance problem that adds to the existing drift and noise challenges. \begin{figure*} \begin{center} \includegraphics[width=.75\textwidth]{figures/driftacrosswindows} \end{center} \caption{Drift across multiple windows shown with t-SNE embedding. The axes represent raw component t-SNE scores and do not have semantic meaning other than distance between features.} \label{fig:acrosswindows} \end{figure*} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/badperf} \caption{We test several classifiers outside our drift adaptive system. In each case, performance drops across a few months compared to the baseline accuracy. The Decision Tree, which relies on boundary conditions around features, suffers the most due to changes in feature distribution.} \label{fig:badperf} \end{figure} We also evaluate the performance of a non-drift-adaptive system on the drifting data, shown in Figure~\ref{fig:badperf}. We use a variety of standard classifiers such as SGD, Random Forest, Logistic Regression, Naive Bayes, and Decision Trees, along with neural networks. For the non-neural classifiers, we perform a grid search to obtain the best hyperparameters. We use an ensemble of text-classification networks \cite{fasttext,convclass,convclass2} for the neural network (using TensorFlow). As we note, each classifier suffers significant performance drops. We find this performance drop is due to a few factors: \begin{itemize} \item The social sensor data is noisy and has low context, yielding poor initial performance. As such, classifiers fail to generalize due to the variability in text streams from different regions or demographic groups (this diffusion is covered in part in \cite{lex_diff}). \item Heuristic or simple filtering rules are lacking; it is difficult to adapt heuristics to memetic changes (e.g. the words \textit{flood} and \textit{death} can be a good heuristic for the disaster event; however, more recently, they have been used in conjunction with controversial political language, skewing the social sensor data) \begin{table*}[] \centering \label{tab:perf} \caption{We show performance across multiple windows, and compare the baseline performance against our adaptive system. We find that in each window, our adaptive system excels against the baseline. We use the corroborative events to retroactively label points when available, and use these to perform classifier fine-tuning and updates as described in the prior section.
We show that even with $<$5\% of labeled points, we are able to continuously improve performance from the static to the adaptive method.} \begin{tabular}{|l|rr|rr|rr|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Window}}} & \multicolumn{2}{c|}{\textbf{Performance}} & \multicolumn{2}{c|}{\textbf{Statistics}} & \multicolumn{2}{c|}{\textbf{Improvement}} \\ \multicolumn{1}{|c|}{} & \multicolumn{1}{c}{Static} & \multicolumn{1}{c|}{Adaptive} & \multicolumn{1}{c}{Unlabeled} & \multicolumn{1}{c|}{Corroborative} & \multicolumn{1}{c}{\% Labeled} & \multicolumn{1}{c|}{Improvement} \\ \hline Baseline & 0.91 & 0.97 & NA & NA & NA & NA \\ 1 Mo & 0.70 & 0.88 & 7205 & 189 & 2.62\% & 125.5\% \\ 2 Mo & 0.57 & 0.90 & 14245 & 106 & 0.74\% & 159.2\% \\ 3 Mo & 0.58 & 0.90 & 4867 & 193 & 3.97\% & 156.7\% \\ 4 Mo & 0.70 & 0.88 & 15847 & 249 & 1.57\% & 126.1\% \\ 5 Mo & 0.38 & 0.86 & 7084 & 885 & 12.49\% & 225.7\% \\ 6 Mo & 0.75 & 0.99 & 4873 & 223 & 4.58\% & 132.0\% \\ \hline \end{tabular} \end{table*} \item The raw stream data covers millions of true physical events, where our desired class (disasters, specifically individual disaster types such as flooding, landslides, and wildfires) consists of a small fraction of samples. Further, it is difficult to use trend analysis tools to perform detection since each instance of an event is a weak-signal event, with only 1-2 posts associated with it. \end{itemize} \subsection{Performance} We implement the end-to-end drift adaptive system described in Figure~\ref{fig:localsystem} and evaluate its performance across windows. The system as described integrates two streams: corroborative sources (i.e. news articles) and probabilistic supporting sources (i.e. predictions from ML classifiers) to deliver real-time predictions. Our system is not a retrospective trend analysis system such as the earthquake detector in \cite{dis_mgmt_sakaki}; rather, it is a continuously evolving, real-time system. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/corroborative} \caption{Only a fraction of points can be labeled with corroborative events (log-scaled y-axis).} \label{fig:corroborative} \end{figure} We have noted that corroborative events can be used to automatically label social sensor posts. We find that while this is true, they account for only a fraction of all social posts; their delay and lack of dense, global coverage prevent their use as a reliable oracle for labels. We show the difference between the raw stream and oracle-labeled points in Figure~\ref{fig:corroborative}, where often less than 1\% of points could be so labeled. The remainder needs to be processed with the drift adaptive system. We show the performance evaluation in Table~\ref{tab:perf}, where it is clear our drift adaptive system exceeds the static performance. \subsection{End-to-end system} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/demo} \caption{A screenshot of the LITMUS landslide detection system demo incorporating our drift adaptive system.} \label{fig:demo} \end{figure} We show in Figure~\ref{fig:demo} a screenshot of our end-to-end system available at \textit{https://grait-dm.gatech.edu/demo-multi-source-integration/}. Our system is resilient to drift, as we showed in Table~\ref{tab:perf}, and continues to function at high accuracy over six years after inception without any human intervention. We also show an example of a detected flooding event in Figure~\ref{fig:event}, where the drift adaptive system has identified several flooding events in the UK.
We also show below the map the collection of social posts that contributed to the event detection. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/event} \caption{Flood event detection in the LITMUS platform. The highlighted event is detected only with social posts.} \label{fig:event} \end{figure} \section{Conclusions} We have described an end-to-end drift-adaptive system for true physical event detection. Our approach integrates corroborative sources and probabilistic supporting sources to perform real-time physical event detection. Furthermore, our approach is able to adapt to the concept drift phenomenon without sacrificing performance and without the human labeling bottleneck required in traditional drift adaptation techniques. We have implemented our system as a disaster detection application, with an online demo. Our approach does not make any limiting assumptions about its data, and performs detection in adversarial conditions where: (i) the data is noisy and drifting, (ii) the drift type is unknown and unbounded, (iii) feedback is limited and may not be available in most cases, and (iv) the events exhibit weak-signal characteristics. Our system is able to maintain high accuracy ($\sim$90\% F-score) across multiple time windows without human intervention to perform fine-tuning or updates. Our next steps include developing a management interface for our drift adaptive system to better examine system components and perform log analytics to improve real-time performance and scalable prediction delivery. \bibliographystyle{IEEEtran}
\section{Introduction} QED in two dimensions is a useful toy model to gain an understanding of the theory at finite temperature and chemical potential~\cite{Sachs:1993zx,Sachs:1995dm,Sachs:1991en}. In particular, the physics at zero temperature is interesting since one can study a system that can exist in several phases. The theory at zero temperature is governed by two degrees of freedom, often referred to as the toron variables in a Hodge decomposition of the U(1) gauge field on an $l\times \beta$ torus, where $l$ is the circumference of the spatial circle and $\beta$ is the inverse temperature. Integrating over the toron fields projects onto a state with net zero charge~\cite{Gross:1980br} and therefore there is no dependence on a flavor-independent chemical potential~\cite{Narayanan:2012du}. The dependence on the isospin chemical potential for the two-flavor case was studied in~\cite{Narayanan:2012qf}, and we extend this result to the case of $f$ flavors in this paper. After integrating out the toron variables, the dependence on the $(f-1)$ traceless\footnote{linear combinations that are invariant under uniform (flavor-independent) shifts} chemical potential variables can be written in the form of a $(2f-2)$-dimensional theta function. As the same gauge field (the toron variables, in particular) couples to all flavors, this theta function has a non-trivial Riemann matrix. The resulting phase structure at zero temperature is quite intricate since it involves minimization of a quasi-periodic function over a set of integers. Here, we summarize the results of \cite{Lohmayer2013}, where we derive the theta-function representation of the partition function and work out in great detail the two-dimensional phase structure for the three-flavor case and the three-dimensional phase structure for the four-flavor case. \section{Partition function} Consider $f$-flavored massless QED on a finite torus with spatial length $l$ and dimensionless temperature $\tau=\frac l\beta$. All flavors have the same gauge coupling $\frac{e}{l}$, where $e$ is dimensionless. Let \begin{equation} \mbox{$\bm{\mu}$}^t = \begin{pmatrix} \mu_1 & \mu_2 & \cdots & \mu_f\cr \end{pmatrix} \end{equation} be the flavor-dependent chemical potential vector. The partition function factorizes into bosonic and toronic parts~\cite{Sachs:1991en,Narayanan:2012qf}, $Z(\mbox{$\bm{\mu}$},\tau,e) = Z_b(\tau,e) Z_t(\mbox{$\bm{\mu}$},\tau)$. As we only concern ourselves with the physics at non-zero chemical potential, we focus on the toronic part \begin{align} Z_t(\mbox{$\bm{\mu}$},\tau) &= \int_{-\frac{1}{2}}^{\frac{1}{2}} dh_2 \int_{-\frac{1}{2}}^{\frac{1}{2}} dh_1\, \prod_{i=1}^f g(h_1,h_2,\tau,\mu_i)\,,\\ g(h_1,h_2,\tau,\mu) &= \sum_{n,m=-\infty}^\infty \exp \left[ -\pi\tau \left[\left(n+ h_2 -i \frac{\mu}{\tau}\right)^2 +\left(m + h_2 -i \frac{\mu}{\tau}\right)^2\right] +2\pi i h_1 \left(n -m\right)\right]\nonumber \end{align} and perform the integration over the toronic variables, $h_1$ and $h_2$.
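A brute-force numerical evaluation of these toron integrals can serve as a cross-check of the closed form derived below; the following sketch truncates both sums at $|n|,|m|\le N$ (the double sum factorizes into independent sums over $n$ and $m$) and uses a simple midpoint rule for the $h_{1,2}$ integrals, both illustrative choices of ours rather than part of \cite{Lohmayer2013}:
\begin{verbatim}
import numpy as np

def g(h1, h2, tau, mu, N=30):
    # Truncated double sum; it factorizes into single sums over n and m.
    n = np.arange(-N, N + 1)
    w = np.exp(-np.pi * tau * (n + h2 - 1j * mu / tau) ** 2)
    return np.sum(w * np.exp(2j * np.pi * h1 * n)) * \
           np.sum(w * np.exp(-2j * np.pi * h1 * n))

def Z_t_numeric(mus, tau, grid=64, N=30):
    # Midpoint rule for the h1 and h2 integrals over [-1/2, 1/2].
    h = (np.arange(grid) + 0.5) / grid - 0.5
    Z = sum(np.prod([g(h1, h2, tau, mu, N) for mu in mus])
            for h1 in h for h2 in h)
    return Z.real / grid ** 2

# e.g. Z_t_numeric([0.3, -0.3], tau=1.0) for the two-flavor case
\end{verbatim}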
\subsection{Multidimensional theta function} As derived in~\cite{Lohmayer2013}, the toronic part of the partition function has a representation in the form of a $(2f-2)$-dimensional theta function: \begin{equation} Z_t(\mbox{$\bm{\mu}$},\tau)=\frac{1}{\sqrt{2\tau f}} \sum_{{\bm n}=-\infty}^\infty \exp \left[ -\pi\tau \left({\bm n}^t T^t +\frac{i}{\tau} {\bm s}^t\right) \begin{pmatrix} \bar\Omega & {\bm 0} \cr {\bm 0} & \bar\Omega\cr \end{pmatrix} \left( T {\bm n} + \frac{i}{\tau}{\bm s}\right) \right]\label{maineqn} \end{equation} where ${\bm n}$ is a $(2f-2)$-dimensional vector of integers. The $(2f-2)\times (2f-2)$ transformation matrix $T$ and the $(f-1)\times (f-1)$ matrix $\bar\Omega$ are given by \begin{align} T &= \begin{pmatrix} 1 & 0 & \cdots & 0 & 0\cr 0 & 1 & \cdots & 0 & 0\cr 0 & 0 & \ddots & 0 & 0\cr 0 & 0 & \cdots & 1 & 0\cr -1 & -1 & \cdots & -1 & f\cr \end{pmatrix}\,,\qquad\qquad \bar\Omega = \begin{pmatrix} 1 - \frac{1}{f} & -\frac{1}{f} & \cdots & -\frac{1}{f} \cr - \frac{1}{f} & 1-\frac{1}{f} & \cdots & -\frac{1}{f}\cr \vdots & \vdots & \ddots & \vdots \cr - \frac{1}{f} & -\frac{1}{f} & \cdots & 1-\frac{1}{f}\cr \end{pmatrix}\,. \end{align} The dependence on the chemical potentials comes from \begin{equation}\label{eq:s} {\bm s}^t = \begin{pmatrix} \bar \mu_2 & \bar \mu_3 & \cdots & \bar \mu_f & -\bar \mu_2 & -\bar \mu_3 & \cdots & -\bar \mu_f \cr \end{pmatrix} \end{equation} where we have separated the chemical potentials into a flavor-independent component $\bar \mu_1 = \sum_{i=1}^f \mu_i$ and $(f-1)$ traceless components $\bar \mu_k = \mu_1-\mu_k$ for $2\leq k \leq f$. \subsection{Particle number} We define particle numbers $N_i$, $\bar N_k$ corresponding to the chemical potentials $\mu_i$, $\bar \mu_k$, respectively, as \begin{align} N_i(\mbox{$\bm{\mu}$},\tau) = \frac{\tau}{4\pi}\frac{\partial}{\partial \mu_i} \ln Z_t(\mbox{$\bm{\mu}$},\tau)\,,\qquad \bar N_k(\mbox{$\bm{\mu}$},\tau) = N_1(\mbox{$\bm{\mu}$},\tau)-N_k(\mbox{$\bm{\mu}$},\tau) \quad\text{for}\ 2\leq k \leq f\,. \end{align} In the infinite-$\tau$ limit, the infinite sums in Eq.~\eqref{maineqn} are dominated by ${\bm n}={\bm 0}$, which results in \begin{align} \bar N_k(\mbox{$\bm{\mu}$},\infty) = \bar \mu_k \qquad\text{for}\ 2\leq k \leq f\,.\label{numdeninf} \end{align} Since the partition function is independent of $\bar \mu_1$, $\bar N_1(\mbox{$\bm{\mu}$},\tau)=\sum_{i=1}^f N_i(\mbox{$\bm{\mu}$},\tau)=0$ for all $\tau$. \subsection{Zero-temperature limit}\label{sec:lowT} In order to study the physics at zero temperature ($\tau\to 0$), we set \begin{equation} \Omega = T^t \begin{pmatrix} \bar\Omega & {\bm 0} \cr {\bm 0} & \bar\Omega\cr \end{pmatrix} T.
\end{equation} Then we can rewrite (\ref{maineqn}) using the Poisson summation formula as \begin{equation} Z_t(\mbox{$\bm{\mu}$},\tau) = \frac{1}{\sqrt{2\tau f}\tau^{f-1}} \sum_{{\bm k}=-\infty}^\infty \exp \left[ -\frac{\pi}{\tau}\left( {\bm k}^t \Omega^{-1} {\bm k} -2 {\bm k}^t T^{-1} {\bm s}\right) \right] \label{zffinal} \end{equation} with \begin{equation} \frac{1}{\Omega} = \begin{pmatrix} 2 & 1 & \cdots & 1 & 1 & 0 & 0 & \cdots & 0 & 1\cr 1 & 2 & \cdots & 1 & 1 & 0 & 0 & \cdots & 0 & 1\cr \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\cr 1 & 1 & \cdots & 2 & 1 & 0 & 0 & \cdots & 0 & 1\cr 1 & 1 & \cdots & 1 & 2 & 0 & 0 & \cdots & 0 & 1\cr 0 & 0 & \cdots & 0 & 0 & 2 & 1 & \cdots & 1 & 1 \cr 0 & 0 & \cdots & 0 & 0 & 1 & 2 & \cdots & 1 & 1 \cr \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \cr 0 & 0 & \cdots & 0 & 0 & 1 & 1 & \cdots & 2 & 1 \cr 1 & 1 & \cdots & 1 & 1 & 1 & 1 & \cdots & 1 & 2-\frac{2}{f} \cr \end{pmatrix}\label{oinverse}\,, \end{equation} where the block in the upper left corner has dimensions $(f-1)\times(f-1)$ and the second block on the diagonal has dimensions $(f-2)\times(f-2)$\,. For fixed $\bar \mu_k$, the partition function in the zero-temperature limit is determined by minimizing the term $ {\bm k}^t \Omega^{-1} {\bm k} -2 {\bm k}^t T^{-1} {\bm s}$ in the exponent in Eq.~\eqref{zffinal} over the set of integers ${\bm k} \in \mathbb{Z}^{2f-2}$. Assuming in general that the minimum is $M$-fold degenerate, let $S=\{{\bm k}^{(i)}\}_{i=1,\ldots,M}$, ${\bm k}^{(i)}\in \mathbb{Z}^{2f-2}$, label these $M$ minima. Then \begin{align} \bar N_j(\mbox{$\bm{\mu}$},0) &=\frac1{2M} \sum_{i=1}^M \left(\sum_{l=1}^{f-1} k^{(i)}_l - \sum_{l=f}^{2f-3} k^{(i)}_l+k^{(i)}_{j-1} -k^{(i)}_{f+j-2}\right)\,,\qquad 2\leq j\leq f-1\,,\\ \bar N_f(\mbox{$\bm{\mu}$},0) &=\frac1{2M} \sum_{i=1}^M \left(\sum_{l=1}^{f-1} k^{(i)}_l - \sum_{l=f}^{2f-3} k^{(i)}_l+k^{(i)}_{f-1}\right)\,. \end{align} If the minimum is non-degenerate (or if all ${\bm k}^{(i)}$ individually result in the same $\bar N_j(\mbox{$\bm{\mu}$},0)$'s), the particle numbers $\bar N_j(\mbox{$\bm{\mu}$},0)$ assume integer or half-integer values at zero temperature. Since ${\bm k}\in \mathbb{Z}^{2f-2}$ and we only have $(f-1)$ independent $\bar N_j(\mbox{$\bm{\mu}$},0)$ (with $\bar N_1(\mbox{$\bm{\mu}$},\tau)=0$ for all $\tau$), there are in general many possibilities to obtain identical particle numbers from different $\bm k$'s. The zero-temperature phase boundaries in the $(f-1)$-dimensional space of traceless chemical potentials $\bar \mu_{2,\ldots,f}$ are determined by those $\bm{\bar \mu}$'s leading to degenerate minima with different $\bm{\bar N}$'s (for individual ${\bm k}^{(i)}$'s). Phases with different particle numbers will be separated by first-order phase transitions. While it is straightforward\footnote{ It is possible to perform certain orthogonal changes of variables in the space of traceless chemical potentials and obtain expressions equivalent to (\ref{zffinal}) that are more convenient to deal with numerically when tracing the phase boundaries. Such equivalent expressions for the case of $f=3$ and $f=4$ are provided in \cite{Lohmayer2013}. } to numerically determine the phase boundaries at zero temperature (by numerically searching for the minimum), the resulting phase structure turns out to be quite intricate (see details below).
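For illustration, this minimization can be carried out by brute force over a finite search window; in the sketch below, the window size $k_{\max}$ and the degeneracy tolerance are our assumptions (the window must be taken large enough for the given $\bm{\bar\mu}$), with $\Omega^{-1}$, $T$, and ${\bm s}$ as defined above:
\begin{verbatim}
import numpy as np
from itertools import product

def zero_temperature_minima(Omega_inv, T, s, f, kmax=3, tol=1e-9):
    # Brute-force minimization of k^t Omega^{-1} k - 2 k^t T^{-1} s
    # over the window k in {-kmax, ..., kmax}^(2f-2).
    Tinv_s = np.linalg.solve(T, s)
    best, minima = np.inf, []
    for k in product(range(-kmax, kmax + 1), repeat=2 * f - 2):
        k = np.asarray(k)
        val = k @ Omega_inv @ k - 2.0 * k @ Tinv_s
        if val < best - tol:
            best, minima = val, [k]
        elif abs(val - best) <= tol:
            minima.append(k)
    return best, minima

def particle_numbers(k, f):
    # N_bar_{2..f} for a single minimum k (0-based version of the formulas)
    base = k[:f - 1].sum() - k[f - 1:2 * f - 3].sum()
    N = [0.5 * (base + k[j - 2] - k[f + j - 3]) for j in range(2, f)]
    N.append(0.5 * (base + k[f - 2]))
    return N  # degenerate minima with different values => phase boundary
\end{verbatim}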
Consider the system at high temperature with some choice of traceless chemical potentials; by (\ref{numdeninf}), the average traceless particle numbers are then equal to these chemical potentials. As the system is cooled it shows typical thermal fluctuations, but these fluctuations only die down and produce a uniform distribution of traceless particle numbers if the initial choice of traceless chemical potentials does not lie on a phase boundary. Tuning the traceless chemical potentials to a point on the phase boundary results in a system at zero temperature with several coexisting phases. In other words, the system will exhibit spatial inhomogeneities. \subsection{Quasi-periodicity} Changing variables ${\bar \mu_{k+1}}'= \bar \mu_{k+1} + m_{k}-\frac f2 m_{f-1}+\sum_{i=1}^{f-1} m_i$ for $1 \leq k \leq f-1$ with $m_i\in \mathbb{Z}$ for all $1\leq i \leq f-1$ and $m_{f-1}f/2 \in \mathbb{Z}$, one can show that the partition function is quasi-periodic, resulting in \cite{Lohmayer2013} \begin{align}\label{eq:shift-N} \bar N_{k+1}(\mbox{$\bm{\mu}$}',\tau) = \bar N_{k+1}(\mbox{$\bm{\mu}$},\tau) + m_k -\frac{f}{2} m_{f-1} +\sum_{i=1}^{f-1} m_i, \end{align} which is the same as the shift in $\bar\mu$. \section{Results}\label{results} \subsection{Phase structure for \bm{$f=2$}} We partially reproduce the results of \cite{Narayanan:2012qf} in this subsection. From Eq.~\eqref{zffinal} for $f=2$, we obtain \begin{align} \bar N_2 = \frac{\sum_{k=-\infty}^\infty k e^{-\frac{\pi}{\tau}\left(k- \bar\mu_2\right)^2}}{\sum_{k=-\infty}^\infty e^{-\frac{\pi}{\tau}\left(k-\bar\mu_2\right)^2}}\,. \end{align} The quasi-periodicity under $\bar\mu_2' = \bar\mu_2 + m_1$ ($m_1\in\mathbb{Z}$) is evident. For small $\tau$, the dominant term in the infinite sum is obtained when $k$ assumes the integer value closest to $\bar\mu_2$. Therefore, $\bar N_2(\bar \mu_2)$ approaches a step function in the zero-temperature limit (for plots, see \cite{Narayanan:2012qf} and \cite{Lohmayer2013}). At zero temperature, first-order phase transitions occur at all half-integer values of $\bar \mu_2$, separating phases which are characterized by different (integer) values of $\bar N_2$. If a system at high temperature is described in the path-integral formalism by fluctuations (as a function of the two Euclidean spacetime coordinates) of $\bar N_2$ around a half-integer value, the corresponding system at zero temperature will have two coexisting phases (fluctuations are amplified when $\tau$ is decreased). On the other hand, away from the phase boundaries, the system will become uniform at $\tau=0$ (fluctuations are damped when $\tau$ is decreased). For visualizations, see \cite{Lohmayer2013}. \subsection{Phase structure for \bm{$f=3$}} We determine the phase boundaries separating cells with different $(\bar N_2,\bar N_3)$ as described in Sec.~\ref{sec:lowT}. As mentioned in Sec.~\ref{sec:lowT}, it is also instructive to use a different coordinate system for the chemical potentials, obtained from $(\mu_1,\mu_2,\mu_3)$ by an orthonormal transformation: \begin{align} \begin{pmatrix} \tilde \mu_1 \\ \tilde \mu_2 \\ \tilde \mu_3 \end{pmatrix} = \begin{pmatrix} \frac 1{\sqrt{3}} & \frac 1{\sqrt{3}} & \frac 1{\sqrt{3}} \\ \frac 1{\sqrt{2}} & -\frac {1}{\sqrt{2}} & 0 \\ \frac 1{\sqrt{6}} & \frac 1{\sqrt{6}} & -\frac 2{\sqrt{6}} \\ \end{pmatrix} \begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix} \,.
\end{align} We denote the corresponding particle numbers by $\tilde N_2$ and $\tilde N_3$. An alternative representation of the partition function, which simplifies the determination of vertices in terms of the coordinates $\tilde \mu_i$, is given in \cite{Lohmayer2013}. In these coordinates, the phase structure is symmetric under rotations by $\pi/3$ and composed of two types of hexagonal cells: a central regular hexagon surrounded by six smaller non-regular hexagons, which are identical up to rotations. Figure~\ref{fig:f3-phases} shows the phase boundaries at zero temperature in both coordinate systems. From Eq.~\eqref{eq:shift-N} we see that the boundaries in the $(\bar \mu_2,\bar \mu_3)$ plane are periodic under shifts by integer multiples of $(2,1)$ and $(1,-1)$. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{f3-phasestructure-mubar}\qquad\qquad \includegraphics[width=0.4\textwidth]{f3-phasestructure-mutilde} \caption{Phase boundaries at zero temperature for $f=3$ in the $\bar \mu$ plane (left) and the $\tilde \mu$ plane (right).} \label{fig:f3-phases} \end{figure} All $\bar \mu$'s inside a given hexagonal cell result in identical $\bar N$ as $\tau\to 0$, given by the coordinates of the center of the cell. For example, $\bar \mu$'s in the central hexagonal cell lead to $\bar N_{2,3}=(0,0)$ at $\tau=0$; the six surrounding cells are characterized by $\bar N_{2,3}=\pm(1,\frac 12)$, $\bar N_{2,3}=\pm(\frac 12,1)$, and $\bar N_{2,3}=\pm(-\frac 12,\frac 12)$. Every vertex is common to three cells. The coordinates of the vertices between the central cell and the six surrounding cells are $\pm(\frac 23, \frac 23)$, $\pm(0,\frac 23)$, $\pm(\frac 23,0)$, $\pm(1,1)$, $\pm(0,1)$, $\pm(1,0)$. First-order phase transitions occur between neighboring cells with different particle numbers $\bar N_{2,3}$ at $\tau=0$. At the edges of the hexagonal cells, two phases can coexist, and at the vertices, three phases can coexist at zero temperature. In analogy to the two-flavor case, a high-temperature system with small fluctuations (as a function of Euclidean spacetime) of $\bar \mu_{2,3}$ can result in two or three phases coexisting or in a pure state as $\tau\to 0$, depending on the choice of $\bar\mu_{2,3}$. \subsection{Phase structure for \bm{$f=4$}} We use Eq.~\eqref{zffinal} to identify the phase structure in the $(\bar \mu_2,\bar \mu_3, \bar \mu_4)$ space, which is divided into three-dimensional cells characterized by identical particle numbers $\bar N_{2,3,4}$ at zero temperature. At the boundaries of these cells, multiple phases can coexist at zero temperature. We find different types of vertices (corners of the cells), where four and six phases can coexist. At all edges, three phases can coexist. As in the three-flavor case, we observe that the phase structure exhibits higher symmetry in coordinates $\tilde \mu$ which are related to $\mu$ through an orthonormal transformation. A particularly convenient choice for $f=4$ turns out to be given by \begin{align}\label{eq:trafo-f4} \begin{pmatrix} \tilde \mu_1 \\ \tilde \mu_2 \\ \tilde \mu_3 \\ \tilde \mu_4 \end{pmatrix} = \frac 12 \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \otimes \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \\ \mu_4 \end{pmatrix} \end{align} since the phase structure then becomes periodic under shifts parallel to the coordinate axes. The explicit form of an alternative representation of the partition function in these coordinates is given in \cite{Lohmayer2013}.
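The quoted cell and vertex data can be checked directly with the two helper functions sketched in Sec.~\ref{sec:lowT} (again our own illustration; the test points below and the expected outputs follow from the cell centers and vertex coordinates given above):
\begin{verbatim}
# The origin lies inside the central cell, so the particle numbers
# should come out as (0, 0); the vertex (2/3, 2/3) is shared by three
# cells, so three distinct phases should coexist there.
_, minima = zero_temperature_minima(3, [0.0, 0.0])
print(particle_numbers(3, minima))                             # [0. 0.]
_, minima = zero_temperature_minima(3, [2.0 / 3.0, 2.0 / 3.0])
print(len({tuple(particle_numbers(3, [k])) for k in minima}))  # 3
\end{verbatim}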
At zero temperature the $\tilde \mu_{2,3,4}$ space is divided into two types of cells (see Fig.~\ref{fig:f4-cells} for visualizations). We can think of the first type as a cube (centered at the origin, with side length 1 and edges parallel to the coordinate axes) whose edges have all been cut off symmetrically. The original faces are reduced to smaller squares (perpendicular to the coordinate axes) with corners at $\tilde \mu_{2,3,4}=(\pm \frac 12, \pm \frac 14, \pm \frac 14)$ (permutations and sign choices generate the six faces). This determines the coordinates of the remaining 8 corners to be located at $(\pm \frac 38, \pm \frac 38, \pm \frac 38)$. The shift symmetry tells us that these ``cubic'' cells are stacked together face to face. The remaining space (around the edges of the original cube) is filled by cells of the second type (in the following referred to as ``edge'' cells), which are identical in shape and are oriented parallel to the three coordinate axes. \begin{figure}[htb] \centering \includegraphics[width=0.3\textwidth]{f4-cell-1.png}\hfill \includegraphics[width=0.3\textwidth]{f4-cell-2.png}\hfill \includegraphics[width=0.3\textwidth]{f4-cell-4.png} \caption{Cells defining the zero-temperature phase structure for $f=4$ in the $\tilde \mu$ coordinates as described in the text. The left figure shows the central ``cubic'' cell, the figure in the center a single ``edge'' cell. The right figure shows the cubic cell together with all 12 attaching edge cells.} \label{fig:f4-cells} \end{figure} This leads to different kinds of vertices (at the corners of the cells described above) where multiple phases can coexist at zero temperature. There are corners which are common points of two cubic and two edge cells (coexistence of 4 phases, for example at $(\pm \frac 12, \pm \frac 14, \pm \frac 14)$), there are corners which are common points of one cubic and three edge cells (coexistence of 4 phases, for example at $(\pm \frac 38, \pm \frac 38, \pm \frac 38)$), and there are corners which are common points of six edge cells (coexistence of six phases, for example at $\tilde \mu_{2,3,4}=(\pm \frac 12,\pm \frac 12,\pm \frac 12)$). Any edge between two of these vertices is common to three cells. \subsection{Phase structure for \bm{$f>4$}} One can use the multidimensional theta function to study the phase structure for $f>4$, but visualization of the cell structure becomes difficult. Nevertheless, it is possible to provide examples of the coexistence of many phases. While for $f=5$ we find only up to $5 \choose 2$ coexisting phases, we find up to $6 \choose 3$ coexisting phases for $f=6$ (for example at $\bar \mu_{2,\ldots, 6}=(1,\frac{1}{2},0,0,0)$). We also find up to $8\choose 4$ coexisting phases for $f=8$ (for example at $\bar \mu_{2,\ldots, 8}=(1,1,1,1,1,1,0)$). This leads us to conjecture that the maximal number of coexisting phases is given by $f \choose{ \lfloor f/2 \rfloor} $, increasing exponentially for large $f$.
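Such counts can be reproduced numerically with the sketches from Sec.~\ref{sec:lowT} (ours, with the caveat that the adequacy of \texttt{kmax} must be checked and that the naive enumeration over a $(2\,\mathtt{kmax}+1)^{2f-2}$-point box is slow for large $f$):
\begin{verbatim}
# Coexisting phases at the quoted f = 6 example point: the number of
# distinct particle-number vectors over the individual degenerate
# minima should reproduce binom(6, 3) = 20.  (The 5^10-point box makes
# this a minutes-long pure-Python run.)
_, minima = zero_temperature_minima(6, [1.0, 0.5, 0.0, 0.0, 0.0], kmax=2)
phases = {tuple(particle_numbers(6, [k])) for k in minima}
print(len(phases))
\end{verbatim}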
\section{Introduction} \label{intro} It is widely known that giant molecular clouds (GMCs) have complex and hierarchical structures that can be divided into substructures of clouds, clumps, and cores \citep{1999ASIC..540....3B}. The clumps and cores can be gravitationally unstable \citep{1987ApJ...319..730S, 2001ApJ...551..852H} and evolve into protostars. A particular issue in sub-millimeter astronomy is the identification of clumps and cores. The traditional clump (core) identification method is to find compact, bright sources in observational datasets by eye. This procedure suffers from subjective biases, as each person may perceive the data differently and thus identify different clumps and extract different parameters. As the datasets become larger or the clumps more crowded, the traditional method becomes increasingly inefficient or even incapable of detecting clumps (cores). Several common algorithms have been used to identify clumpy structures in molecular clouds, such as GaussClumps, ClumpFind, FellWalker, Reinhold, and Dendrograms. Except for Dendrograms, these algorithms are included within CUPID \citep{2007ASPC..376..425B} \footnote{\url{http://starlink.eao.hawaii.edu/starlink/CUPID}}. GaussClumps is the oldest algorithm for automated clump identification \citep{1990ApJ...356..513S}. It was first applied to the M17 molecular cloud and has since frequently been applied to other molecular clouds \citep{1998A&A...338..262S,2009MNRAS.395.1805D,2009MNRAS.395.1021L}. The GaussClumps algorithm fits the 3D molecular line data with Gaussian ellipsoids (or ellipses for 2D column density maps) around the local maxima. The resulting ellipsoids (or ellipses) are recognized as clumps in the observational data. This process is repeated until the termination criteria are met. The output clumps may overlap in the GaussClumps algorithm. For this reason, each input pixel is not simply assigned to a single clump (as is done in algorithms such as FellWalker or ClumpFind), and the total flux in the fitted Gaussians may exceed the real flux in the input data. The GaussClumps algorithm can only fit strictly elliptical shapes, and it does not allocate flux to a clump at large distances from the peak. ClumpFind is the most widely used algorithm for molecular gas clump identification. \citet{1994ApJ...428..693W} developed this algorithm and applied it to detect the compact structures in the Rosette molecular cloud. In brief, this algorithm locates the peak position by determining the highest value in the array. Then the process descends from the peak pixel with a certain interval (ClumpFind.DeltaT). If no new independent maximum is found within an intensity interval, the process continues into lower intensity intervals until a new local maximum is found or it reaches the specified minimum contour level (ClumpFind.TLow). Clumps with brightness below this level are ignored as they are assumed to be noise. When an area contains multiple clumps, the pixels in that area are divided between the clumps, each pixel being assigned to the closest clump. The association is determined by the distance of the pixel to the boundary of an assigned clump according to a friends-of-friends algorithm. The ClumpFind algorithm was rewritten with some minor adjustments in 2006, so two versions exist, ClumpFind1994 and ClumpFind2006 (selected by ClumpFind.IDLAlg).
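To make the contouring idea concrete, here is a much-reduced sketch of a ClumpFind-style loop (our illustration only, not the CUPID implementation; in particular, it replaces the friends-of-friends pixel assignment with a simple nearest-peak rule, and all names are our own):
\begin{verbatim}
import numpy as np
from scipy import ndimage

def clumpfind_like(data, delta_t, t_low):
    """Much-reduced ClumpFind-style contouring of a 2D or 3D array."""
    assign = np.zeros(data.shape, dtype=int)  # 0 = not yet assigned
    peaks = []                                # clump peak coordinates
    for lev in np.arange(data.max(), t_low, -delta_t):
        labels, n = ndimage.label(data >= lev)
        for i in range(1, n + 1):
            region = labels == i
            if not assign[region].any():      # a new independent maximum
                flat = np.argmax(np.where(region, data, -np.inf))
                peaks.append(np.unravel_index(flat, data.shape))
        free = (data >= lev) & (assign == 0)  # pixels above this contour
        idx = np.argwhere(free)
        if len(idx) and peaks:                # nearest-peak assignment
            p = np.asarray(peaks)
            d2 = ((idx[:, None, :] - p[None, :, :]) ** 2).sum(-1)
            assign[tuple(idx.T)] = d2.argmin(axis=1) + 1
    return assign, peaks
\end{verbatim}
Here \texttt{delta\_t} and \texttt{t\_low} play the roles of ClumpFind.DeltaT and ClumpFind.TLow.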
The ClumpFind algorithm is found to be sensitive to the input parameters ClumpFind.DeltaT and ClumpFind.TLow \citep{2009A&A...497..399K,2009ApJS..182..131R,2009ApJ...699L.134P}. A large ClumpFind.DeltaT parameter tends to find the large, bright structures but misses the clumps with low brightness. If a small ClumpFind.DeltaT value is provided, an increased number of false clumps is identified due to noise spikes. \citet{2006ApJ...638..293E} investigated the detection completeness of the ClumpFind algorithm. They found that ClumpFind tends to interpolate clumps and break bright sources into multiple clumps. The Fellwalker algorithm was developed specifically for CUPID to address some of the problems associated with ClumpFind. It is fully described in \citet{2015A&C....10...22B}. Unlike other algorithms, this algorithm first defines a minimum level (FellWalker.Noise) to ignore the influence of noise spikes. Then the process ascends along the steepest route, i.e., the line of greatest ascending gradient, until reaching a peak. Sometimes this process may be affected by noise spikes, and thus FellWalker checks an extended neighborhood (FellWalker.MaxJump) of pixels to see whether a pixel with a greater value exists in the surroundings. For any pixel above the minimum level, a path from this pixel to a nearby maximum can be found with this ascending method. All routes that meet at the same maximum are classified as one clump. This process is analogous to a fell-walker ascending a hill by following the steepest ascent line, as its name suggests. The Reinhold algorithm was developed by Kim Reinhold and is included within CUPID. This algorithm converts the original two- or three-dimensional data arrays into one-dimensional arrays. Then it identifies the highest value in all one-dimensional arrays. If the peak value is below the defined minimum, the algorithm decides that there is no real peak in that array. If the peak value is above the minimum, the program goes from this peak in both directions along the array until it reaches a pixel that fulfills the criteria for being an edge pixel. The data arrays are then re-combined into the original two- or three-dimensional arrays with the clump edges now determined, which produces a number of ring- or shell-like structures outlining the clumps. Basically, the algorithm looks for the edges of clumps; the clumps determined in this way are therefore more susceptible to noise and need to be cleaned up. The Dendrograms algorithm was first demonstrated in the structural analysis of molecular clouds by \citet{2008ApJ...679.1338R}. This algorithm presents an analytic technique aimed at characterizing the hierarchical structure in molecular gas and relating it to the star formation process. Its principal advantage is the use of standard molecular line analysis techniques to characterize the branches in a dendrogram while simultaneously providing measurements of various properties for structures over a large range of physical scales. The smallest structures, described as leaves in Dendrograms, can be recognized as clumps. In addition, a dendrogram is a reduction of the structure in a data set to its essential features. Three parameters (min\_value, min\_delta, and min\_npix) limit the output of the clump identification.
It is the newest of the above algorithms for structure identification but has already been used more than one hundred times \citep{2009Natur.457...63G,2019MNRAS.483.5135W}. So far, many automated algorithms have been widely used for clump identification, and their principles differ. Different algorithms can produce biased results in both clump identification and the extraction of parameters such as size, line width, temperature, and mass \citep{2004PASA...21..290S,2010MNRAS.402..603C,2010MsT..........1W}. However, it is still not clear which algorithm has the best performance in terms of completeness, false detection probability, and accuracy of the physical parameters of the identified clumps. Tests on simulated data are needed before applying these algorithms to observational data. In this work, we mainly focus on testing the completeness and the accuracy of the physical parameters of the clumps identified using the above six algorithms (including two versions of ClumpFind) and present comparisons between them. The method is described in Section \ref{method} and the results are presented in Section \ref{result}. We discuss the bias of the algorithms in estimating the virial parameter and the performances of the algorithms in identifying clumps in the Rosette molecular cloud in Section \ref{discussion} and give a summary in Section \ref{summary}. \section{Method} \label{method} Mass-size relations describe the relationship between the mass and spatial scale of clumps. The mass contained within radius $r$ is usually described with a power law $m(r) \sim r^{-k}$ \citep{1981MNRAS.194..809L,2007ARA&A..45..565M,2010ApJ...712.1137K,2010ApJ...716..433K}. This relation can be explained by power-law density profiles of the molecular clumps: $\rho(r) \sim r^{-p}$ \citep{2011MNRAS.416..783P}. Previous studies found that $1.5 \leq p \leq 2$ \citep{1993A&A...278..238H,2000A&A...357..637H,2002ApJ...566..945B,2002A&A...389..603F,2002ApJS..143..469M}. However, it has been found that a single power-law density profile cannot fit the emission from starless cores and that an inner flattening portion is always needed to reproduce the observational data \citep{1994MNRAS.268..276W,1996A&A...314..625A,2000A&A...361..555B,1999ApJ...515..265A}. Considering the power-law behavior for large $r$ and the central flattening at small $r$, \citet{2002ApJ...569..815T} adopted the following analytic density profile for molecular clumps, \begin{equation} \label{eqa1} \rho(r) \sim \frac{1}{1+(r/r_0)^p} \end{equation} Here, $r_0$ is the radius of the flat region ($2r_0$ is the FWHM), $r =(s^2 + z^2)^{1/2}$, $s$ is the projected distance from the clump center, and $z$ is the length along the line of sight. In this case, the column density distribution of the clump obeys \begin{equation} \label{eqa3} N(s) \sim \int \frac{1}{1+\left(\sqrt{s^2+z^2}/r_0\right)^p}\, dz \end{equation} \citet{2010ApJ...716..433K} found that if the index of the density profile is $p$, the index of the column density profile can be approximated as $p-1$ when $s \gg r_0$. Then the radial column density profile follows \begin{equation} \label{eqa4} N(s) \sim \frac{1}{1+(s/r_0)^{p-1}} \end{equation} \citet{2018A&A...614A..83J} have studied the column density structures of the Galactic Cold Cores. They found that the radial column density profiles of these cores follow power-law distributions with indices of about 1. Therefore, we adopt a column density profile of Equation \ref{eqa4} with $p-1=1$ for the clumps in this study.
The third dimension of the observational data represents the velocity. An optically thin spectrum of a clump probes the velocity distribution of the molecules. Indeed, the velocity profile of optically thin molecular line emission in observations generally follows a Gaussian distribution. Therefore, the brightness distribution over the voxels of our simulated clumps is chosen to obey the form \begin{equation} \label{eqa2} T(s,v) = N(s) \times \frac{1}{\sqrt{2\pi}\sigma}\exp\left(\frac{-(v-v_0)^2}{2\sigma^2}\right)= \frac{N_0}{\sqrt{2\pi}\sigma} \times \frac{1}{1+(s/r_0)^{1.0}} \times \exp\left(\frac{-(v-v_0)^2}{2\sigma^2}\right) \end{equation} where $\frac{N_0}{\sqrt{2\pi}\sigma}$ represents the peak brightness at the center of the clump. We created three-dimensional arrays that contain clumps and background noise. The positions of the clumps are randomly distributed. To avoid clumps lying at the edges of the arrays, the centers of the simulated clumps are placed at least 15 pixels from the edges. The brightness profiles of the clumps are of the form of Equation \ref{eqa2} and we assume that $\sigma=r_0$ (in units of voxels). Thus, the FWHM(v) in the velocity dimension is equal to $\sqrt{8\ln 2}\,r_0$, which is similar to the radial size of the clump (FWHM(s) = $2r_0$). In the following analysis, the input size of a simulated clump is represented by its spatial FWHM size (FWHM(s) = $2r_0$). When we perform the identification of compact sources in observational data, the FWHM, $\Delta$V, and the peak value of the clumps are naturally limited by the instrumental resolution and the sensitivity of the observation. In our tests, we set the minimum FWHM and $\Delta$V to 2 pixels and 2 channels, respectively, for all six algorithms. The minimum peak value parameter is set to 5 times the one-sigma noise level and the number of voxels of an output clump is required to be above 16. We use the default values for the other input parameters in CUPID for all six algorithms, so that we can assess the advantages and disadvantages of the different algorithms. If the clumps identified by an algorithm with these settings deviate strongly from the real situation, then even if the results could be improved by later parameter tuning, the algorithm is clearly too sensitive to its parameters. Considering that it is a hard challenge for automated algorithms to identify clumps that are small, weak, or crowded, we created clumps with these characteristics in the test. Then the above six algorithms (GaussClumps, ClumpFind1994, ClumpFind2006, Fellwalker, Reinhold, and Dendrograms) are applied to identify the clumps. The configuration parameters of each algorithm are displayed in Appendix \ref{appendix} and their performances are presented in Sections \ref{test1}-\ref{test3}. \section{Results} \label{result} \subsection{Test 1: Performance of the Algorithms in Identifying Clumps of Different Sizes} \label{test1} \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{fig1_1000clumps_size.eps} \caption{Simulated clumps with clump sizes (FWHM) distributed randomly from 1 to 11 pixels.} \label{fig:size} \end{figure} In order to investigate the performance of different algorithms in identifying clumps of different sizes, we generated 1000 clumps with clump sizes (FWHM = $2r_0$) distributed randomly from 1 to 11 pixels. The data are designed as a 1000$\times$1000$\times$1000 array (Figure \ref{fig:size}). In this case, very few clumps overlap.
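As a concrete illustration of this construction, the following sketch generates such a cube (our own minimal version, not the exact code used for the tests; the function name and its defaults, chosen here to match the smaller Test 3 geometry of Section \ref{test3} rather than the 1000$^3$ array above, are our choices):
\begin{verbatim}
import numpy as np

def make_cube(shape=(200, 200, 200), n_clumps=100, r0=2.5, snr=10.0,
              rms=1.0, margin=15, seed=0):
    """Cube of clumps with the T(s,v) profile above plus Gaussian noise.

    With sigma = r0, FWHM(s) = 2*r0 pixels, FWHM(v) = sqrt(8 ln 2)*r0
    channels, and peak brightness snr*rms at each clump centre.
    """
    rng = np.random.default_rng(seed)
    cube = rng.normal(0.0, rms, shape)         # background noise
    vv, yy, xx = np.indices(shape)             # (v, y, x) voxel grids
    for _ in range(n_clumps):
        v0, y0, x0 = (rng.uniform(margin, s - margin) for s in shape)
        s_proj = np.hypot(yy - y0, xx - x0)    # projected radius s
        cube += (snr * rms) / (1.0 + s_proj / r0) \
                * np.exp(-(vv - v0) ** 2 / (2.0 * r0 ** 2))
    return cube
\end{verbatim}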
The above six algorithms are applied to identify the clumps in the simulated data so that the performance of each algorithm can be estimated in an unbiased way. In Test 1 we focus on the performance of the algorithms in detecting clumps of different sizes. To reduce the influence of the peak brightness, the peak brightness of the clumps is fixed to 10 times the one-sigma noise value. We present the completeness and the accuracy of the parameters of the six algorithms in Sections \ref{completeness1} and \ref{accuracy1}. \subsubsection{Completeness of the Algorithms} \label{completeness1} \begin{figure} [!htb] \centering \subfloat[]{\label{fig:right_size} \includegraphics[width=0.4\textwidth]{fig2_right_size.eps}} \subfloat[]{\label{fig:repeat_size} \includegraphics[width=0.4\textwidth]{fig2_repeat_size.eps}}\\ \subfloat[]{\label{fig:false_size} \includegraphics[width=0.4\textwidth]{fig2_false_size.eps}} \subfloat[]{\label{fig:mark_size} \includegraphics[width=0.4\textwidth]{fig2_mark_size.eps}} \caption{Frequency distributions of the correct identifications (a), repeated identifications (b), erroneous identifications (c), and score points (d) of the six algorithms in identifying clumps with different FWHM sizes. ClumpFind1994, ClumpFind2006, and Dendrograms score below the displayed range in panel (d).} \label{fig:completeness_size} \end{figure} When estimating the advantages and disadvantages of an algorithm, the primary criterion is the completeness of the output clumps. Here the term ``completeness'' means a high percentage of correctly identified clumps and, at the same time, a low percentage of false or repeated identifications. When the spatial scale of an output clump is smaller than that of the simulated clump along all three axes, we mark this clump as a correctly identified clump. Figure \ref{fig:right_size} shows the numbers of correct identifications for the different algorithms when identifying clumps of different sizes. It can be seen that when the sizes of the clumps are smaller than 2 pixels, most of the algorithms perform poorly. As the clumps become larger, the numbers of correct identifications of the algorithms gradually increase. ClumpFind1994, ClumpFind2006, Fellwalker, and Dendrograms perform well when the sizes of the clumps are larger than 3 pixels, where the correct rate can reach more than 90$\%$. Gaussclumps exhibits a lower correct rate compared to its performance presented in Section \ref{test2}. One possible reason is that the brightness profile of the simulated clumps obeys a power law in the spatial dimensions. When the number of consecutive fitting failures exceeds the designated value, which is specified by the parameter GaussClumps.MaxSkip, the iterative fitting process of Gaussclumps is terminated. When a simulated clump is identified more than once, the clump closest to the simulated position is marked as a matching output and the other clumps are recorded as repeated results. Figure \ref{fig:repeat_size} shows the number of repeated identifications for the different algorithms. The percentages of repeated results for all algorithms are lower than 10$\%$ when the sizes of the input clumps are smaller than 2 pixels. Figure \ref{fig:false_size} shows the number of erroneous identifications for the six algorithms. Fellwalker and Gaussclumps almost never erroneously identify noise fluctuations as clumps. As the clumps become more extended, the numbers of repeated and erroneous identifications of Dendrograms, ClumpFind1994, and ClumpFind2006 gradually increase.
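A sketch of the bookkeeping used in this comparison (ours and simplified: it matches on centre positions only within a chosen radius, whereas the criterion above also compares the fitted extent along each axis):
\begin{verbatim}
import numpy as np

def classify_outputs(out_pos, true_pos, match_radius=5.0):
    """Label output clumps as 'correct', 'repeated', or 'false'.

    out_pos, true_pos: (N, 3) arrays of clump centre positions.  For
    each simulated clump, the closest output within match_radius is
    'correct'; further matches of the same clump are 'repeated';
    outputs matching no simulated clump stay 'false'.
    """
    labels = np.array(['false'] * len(out_pos), dtype=object)
    for t in true_pos:
        d = np.linalg.norm(out_pos - t, axis=1)
        hits = np.argsort(d)
        hits = hits[d[hits] < match_radius]
        if len(hits):
            labels[hits[0]] = 'correct'
            labels[hits[1:]] = 'repeated'
    return labels
\end{verbatim}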
In order to estimate the comprehensive performance of each algorithm, we establish a simple scoring mechanism. An algorithm scores 1 point when it correctly identifies a simulated clump, scores $-1$ when it outputs a false result, and scores $-0.5$ when it finds duplicate clumps. We show the score for each algorithm in Figure \ref{fig:mark_size}. As shown by the scoring results, Fellwalker exhibits the best performance among the algorithms. ClumpFind1994, ClumpFind2006, and Dendrograms receive low marks because they produce many repeated and erroneous identifications. \subsubsection{Accuracy of Retrieved Parameters} \label{accuracy1} When an algorithm is used to automatically search for clumps, the output parameters of the clumps will be used to calculate the physical parameters, so accurately reproducing the clump parameters is an important aspect of an algorithm. In order to compare the comprehensive performance of each algorithm in terms of the accuracy of the retrieved parameters, we calculated the average deviations of the position (E($|\Delta$X$|$)), size (E($\Delta$S)), velocity dispersion (E($\Delta$V)), peak brightness (E($\Delta$I)), and total flux (E($\Delta$flux)) of the output results of each algorithm. The E($|\Delta$X$|$), E($\Delta$S), E($\Delta$V), and E($\Delta$I) are obtained from the differences between the output and input parameters, averaged over the correctly identified clumps. The E($\Delta$flux) is the ratio between the deviation of the total flux and the input flux, averaged over the correctly identified clumps. \begin{figure} [!htb] \centering \subfloat[]{\label{fig:} \includegraphics[width=0.4\textwidth]{fig3_E_X_size.eps}} \subfloat[]{\label{fig:E_s_size} \includegraphics[width=0.4\textwidth]{fig3_E_S_size.eps}}\\ \subfloat[]{\label{fig:} \includegraphics[width=0.4\textwidth]{fig3_E_I_size.eps}} \subfloat[]{\label{fig:E_sum_size} \includegraphics[width=0.4\textwidth]{fig3_E_Sum_size.eps}} \caption{The mean error of position (a), size (solid line) and velocity dispersion (dashed line) (b), peak brightness (c), and total flux (d) of the algorithms in identifying clumps of different FWHM sizes.} \label{fig:E_size} \end{figure} The average deviations in clump position, size, peak brightness, and total flux as a function of the input clump size are shown in Figure \ref{fig:E_size}. It can be seen that when the sizes of the clumps are smaller than 5 pixels, the average error in clump position for Dendrograms is about 0.5 pixel, while the deviations for the other algorithms are more than 0.5 pixels. The average deviation in clump position shows no trend with the size of the clumps for any algorithm. The average errors in clump size and velocity dispersion for each algorithm gradually increase as the size of the clumps increases. As shown in Figure \ref{fig:E_s_size}, the average error in clump velocity dispersion is generally less than the error in size for most of the algorithms. However, the relative errors of the size are similar to those of the velocity dispersion (see Table \ref{tab3}). The largest errors in clump size, velocity dispersion, and peak brightness among all algorithms are about $-9$ pixels (ClumpFind2006), $-3$ pixels (Gaussclumps), and 2.5 times the noise (Fellwalker), respectively. As the clumps become larger, the error in peak brightness decreases to about 1.7 times the noise for all algorithms. Except for Fellwalker, all algorithms return clump sizes that are smaller than the input clump sizes.
The clump peak brightness retrieved by all algorithms is higher than the input value. The total flux of a clump is the sum of the brightness over all pixels within the boundary of the clump. For an optically thin clump, this parameter is proportional to the clump mass. As shown in Figure \ref{fig:E_sum_size}, Fellwalker exhibits the best performance among the algorithms in terms of total flux, with about 60$\%$ of the total flux of the simulated clump being retrieved. For the other algorithms, the output total fluxes are lower than 40$\%$ of those of the simulated clumps. A reasonable explanation for this large error in clump total flux is the large error in the clump size, i.e., only a small fraction of the clump total flux is counted. Another possible reason is the omission of part of the flux by the algorithms: when the algorithms perform clump identification, the voxels with brightness below a designated value, which is adopted to be 3 times the noise level in our tests, are considered to be noise and are ignored. \subsection{Test 2: Performance of the Algorithms with Data of Different Signal-to-Noise Ratios} \label{test2} \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{fig4_1000clumps_snr.eps} \caption{Simulated clumps with signal-to-noise ratios distributed randomly from 1 to 21.} \label{fig:100cores_snr} \end{figure} In addition to the fact that the size of the input clumps has a significant impact on the clump identification results, the performance of an algorithm in identifying clumps of different peak brightness is also an important issue. In the 1000$\times$1000$\times$1000 array, we generated 1000 clumps with signal-to-noise ratios (SNR) distributed randomly from 1 to 21 (Figure \ref{fig:100cores_snr}). As in Section \ref{test1}, the six algorithms are used to search for the simulated clumps in order to estimate the performance of the different algorithms. We set the size of the clumps (FWHM) to 5 pixels. In Section \ref{completeness1} it can be seen that when the size of the clumps is larger than 5 pixels, most of the algorithms perform well in terms of completeness. In the current test, the influences of size and crowdedness are thus reduced as much as possible, and we focus on the performances of the algorithms as the SNR of the data is changed. \subsubsection{Completeness of the Algorithms} \label{completeness2} \begin{figure} [!htb] \centering \subfloat[]{\label{fig:right_snr} \includegraphics[width=0.4\textwidth]{fig5_right_snr.eps}} \subfloat[]{\label{fig:repeat_snr} \includegraphics[width=0.4\textwidth]{fig5_repeat_snr.eps}}\\ \subfloat[]{\label{fig:false_snr} \includegraphics[width=0.4\textwidth]{fig5_false_snr.eps}} \subfloat[]{\label{fig:mark_snr} \includegraphics[width=0.4\textwidth]{fig5_mark_snr.eps}} \caption{Frequency distributions of the correct identifications (a), repeated identifications (b), erroneous identifications (c), and the score points (d) of the six algorithms in identifying clumps with data of different signal-to-noise ratios.} \label{fig:true_repeat_snr} \end{figure} Figure \ref{fig:right_snr} shows the number of correct identifications for all algorithms. It can be seen that none of the algorithms can detect clumps with peak brightness around the noise level. When the SNR of the clumps is less than 3, all algorithms exhibit poor performance. When the SNR reaches 5, the correct rates for all algorithms except Gaussclumps and Reinhold are higher than 75$\%$.
Reinhold and Gaussclumps exhibit lower completeness than the other algorithms when the SNR is lower than 5. However, as the SNR increases, the accuracy of Reinhold and Gaussclumps increases. The accuracy of Gaussclumps reaches more than 90$\%$ when the SNR reaches 11. The accuracy of Reinhold reaches 20$\%$ only when the SNR is as large as 17. Therefore, Reinhold and Gaussclumps are only suitable for searching for clumps with high brightness. Figure \ref{fig:repeat_snr} shows the number of repeated identifications for the different algorithms. The ClumpFind2006 and ClumpFind1994 algorithms exhibit the highest repetition rates. The duplicate identifications of Fellwalker, Reinhold, and Gaussclumps are fewer than those of the other algorithms. The numbers of erroneous identifications of the different algorithms are presented in Figure \ref{fig:false_snr}. Surprisingly, as the SNR of the simulated clumps increases, the numbers of false clumps returned by ClumpFind1994 and ClumpFind2006 increase. The other algorithms almost never erroneously count noise fluctuations as clumps. As in Section \ref{completeness1}, we use the simple scoring mechanism to evaluate the overall performance of each algorithm. The scores of the algorithms are shown in Figure \ref{fig:mark_snr}. It can be seen that Fellwalker, Dendrograms, and Gaussclumps are the best algorithms. ClumpFind1994 and ClumpFind2006 score low due to their many false identifications. \subsubsection{Accuracy of Retrieved Parameters} \label{accuracy2} \begin{figure} [!htb] \centering \subfloat[]{\label{fig:} \includegraphics[width=0.4\textwidth]{fig6_E_X_snr.eps}} \subfloat[]{\label{fig:} \includegraphics[width=0.4\textwidth]{fig6_E_S_snr.eps}}\\ \subfloat[]{\label{fig:} \includegraphics[width=0.4\textwidth]{fig6_E_I_snr.eps}} \subfloat[]{\label{fig:E_sum_snr} \includegraphics[width=0.4\textwidth]{fig6_E_Sum_snr.eps}} \caption{The average error of position (a), size (solid line) and velocity dispersion (dashed line) (b), peak brightness (c), and total flux (d) of the six algorithms in identifying clumps with data of different signal-to-noise ratios.} \label{fig:E_snr} \end{figure} We calculated the average errors of the position (E($|\Delta$X$|$)), size (E($\Delta$S)), velocity dispersion (E($\Delta$V)), peak brightness (E($\Delta$I)), and total flux (E($\Delta$flux)) for each algorithm. These average errors as a function of SNR are shown in Figure \ref{fig:E_snr}. It can be seen that as the SNR of the clumps increases, the errors in the clump position and peak brightness gradually decrease. The average errors in clump size for each algorithm gradually increase as the SNR of the clumps increases, while the average errors in clump velocity dispersion remain nearly constant. The overall performance of Fellwalker in reproducing the clump parameters is still good. As shown in Figure \ref{fig:E_sum_snr}, Fellwalker exhibits the best performance among the six algorithms in terms of total flux, with more than 40$\%$ of the total flux being extracted when the SNR reaches 20. Reinhold exhibits the largest total-flux deviation among the algorithms. Fellwalker and Dendrograms perform better than the other algorithms in retrieving the parameters. Most of the algorithms return a smaller size and velocity dispersion and a higher peak brightness than the simulated data, and the total fluxes of the output are always lower than those of the simulated clumps.
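The score and the average deviations used throughout this section can be tallied as follows (a sketch under our own data layout: \texttt{out} and \texttt{inp} are hypothetical dictionaries of per-clump parameter arrays for the correctly identified pairs):
\begin{verbatim}
import numpy as np

def score(labels):
    """+1 per correct, -0.5 per repeated, -1 per false identification."""
    labels = list(labels)
    return (labels.count('correct') - 0.5 * labels.count('repeated')
            - labels.count('false'))

def mean_deviations(out, inp):
    """E(|dX|), E(dS), E(dV), E(dI), and relative flux error E(dflux)."""
    return (np.mean(np.abs(out['pos'] - inp['pos'])),
            np.mean(out['fwhm'] - inp['fwhm']),
            np.mean(out['sigma_v'] - inp['sigma_v']),
            np.mean(out['peak'] - inp['peak']),
            np.mean((out['flux'] - inp['flux']) / inp['flux']))
\end{verbatim}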
\subsection{Test 3: Performance of the Algorithms in Identifying Clumps with Different Crowdedness} \label{test3} It has long been realized that automated algorithms tend to interpolate clumps at various peak values and break bright sources into multiple clumps \citep{2006ApJ...638..293E}. Therefore, automated algorithms are susceptible to the crowdedness of the clumps. In order to investigate the performance of different algorithms in the identification of clumps with different crowdedness, we create 100 clumps in 200$\times$200$\times$200, 150$\times$150$\times$150, and 100$\times$100$\times$100 arrays, respectively (Figure \ref{fig:100cores}). In the 200$\times$200$\times$200 array, few clumps overlap. In the 150$\times$150$\times$150 array, some clumps overlap at their edges. In the most crowded case (100$\times$100$\times$100), many clumps overlap. We identify the clumps with the six algorithms so that their performance can be estimated in an unbiased way. We set the peak brightness of the clumps to 10 times the noise value and the size to 5 pixels. In this case, the influences of brightness and size are reduced as much as possible, and we focus on the performance of the algorithms for different crowdedness. The completeness and the accuracy of the retrieved parameters for all six algorithms are presented in Sections \ref{completeness3} and \ref{accuracy3}. \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{fig7_core_200.eps} \includegraphics[width=0.3\textwidth]{fig7_core_150.eps} \includegraphics[width=0.3\textwidth]{fig7_core_100.eps} \caption{Distributions of 100 simulated clumps in 200$\times$200$\times$200 (left), 150$\times$150$\times$150 (middle), and 100$\times$100$\times$100 arrays (right), respectively.} \label{fig:100cores} \end{figure} \subsubsection{Completeness of the Algorithms} \label{completeness3} \begin{figure} [!htb] \centering \subfloat[]{\label{fig:right} \includegraphics[width=0.4\textwidth]{fig8_right.eps}} \subfloat[]{\label{fig:repeat} \includegraphics[width=0.4\textwidth]{fig8_repeat.eps}}\\ \subfloat[]{\label{fig:false} \includegraphics[width=0.4\textwidth]{fig8_false.eps}} \subfloat[]{\label{fig:mark} \includegraphics[width=0.4\textwidth]{fig8_mark.eps}} \caption{Frequency distributions of the correct (a), repeated (b), erroneous identifications (c), and the score points (d) of the six algorithms in identifying clumps with different crowdedness.} \label{fig:true_repeat} \end{figure} Figure \ref{fig:right} shows the numbers of correct identifications of the different algorithms. It can be seen that as the clumps become more crowded, the completeness of Fellwalker gradually decreases. ClumpFind1994, ClumpFind2006, Dendrograms, and GaussClumps perform better than Reinhold and Fellwalker in terms of completeness. In the [150, 150, 150] case, some clumps overlap at their edges and the brightness and boundaries of the clumps are influenced by neighboring clumps. Accordingly, ClumpFind1994, ClumpFind2006, Dendrograms, and GaussClumps produce more outputs, repeated identifications, and erroneous identifications than in the [200, 200, 200] array. Because many clumps overlap or merge into new clumps in the most crowded ([100, 100, 100]) case, the correct rates of all the algorithms fall to $20\%-100\%$. Figure \ref{fig:repeat} shows the numbers of repeated identifications of the different algorithms in the detection of clumps.
Although ClumpFind1994 and ClumpFind2006 present good performance in accuracy, at the same time they produce repeated identifications amounting to $600\%-1200\%$ of the input clump number. This may be due to the fact that the ClumpFind algorithm tends to break bright sources into multiple clumps \citep{2006ApJ...638..293E}. Fellwalker does not produce duplicate clumps, which is an important advantage compared to the other algorithms. Figure \ref{fig:false} shows the numbers of erroneous identifications of the different algorithms. It can be seen that false clumps amounting to $2000\%-9000\%$ of the input number are output by ClumpFind1994 and ClumpFind2006. Fellwalker almost never identifies noise fluctuations as clumps. \subsubsection{Accuracy of Retrieved Parameters} \label{accuracy3} The average deviations of the corresponding retrieved parameters from the six algorithms for different crowdedness are shown in Figure \ref{fig:E}. It can be seen that in the sparsest case ([200, 200, 200]), the best algorithm for extracting the position parameter is Gaussclumps, followed by Dendrograms and ClumpFind2006. As the clumps get more crowded, the average errors in position and peak brightness gradually increase. Gaussclumps performs well in size and intensity extraction. For the peak brightness parameter, all the algorithms return values higher than the simulated data. The largest deviation in peak brightness among the six algorithms is about 8 times the one-sigma noise, which occurs in the most crowded case. In the most crowded case ([100, 100, 100]), the output total fluxes of Fellwalker, Gaussclumps, and ClumpFind1994 exceed those of the input data, which may be caused by the merging of multiple clumps into a single new clump. \begin{figure} [!htb] \centering \subfloat[]{\label{fig:right_threshold} \includegraphics[width=0.4\textwidth]{fig9_E_X.eps}} \subfloat[]{\label{fig:repeat_threshold} \includegraphics[width=0.4\textwidth]{fig9_E_S.eps}}\\ \subfloat[]{\label{fig:false_threshold} \includegraphics[width=0.4\textwidth]{fig9_E_I.eps}} \subfloat[]{\label{fig:mark_threshold} \includegraphics[width=0.4\textwidth]{fig9_E_Sum.eps}} \caption{The average error of position (a), size (solid line) and velocity dispersion (dashed line) (b), peak brightness (c), and total flux (d) of the six algorithms in identifying clumps with different crowdedness.} \label{fig:E} \end{figure} \section{Discussion} \label{discussion} \subsection{The Bias of the Algorithms in Estimation of the Virial Parameter} \label{discussion1} Clumps which are gravitationally unstable will collapse and are expected to evolve into protostars. The virial parameter $\alpha_{vir}$, which is defined as $\alpha_{vir} = 5\sigma^2 R_c/(GM_c) \sim E_{kin}/E_{g}$ \citep{1992ApJ...395..140B}, is a crucial parameter for understanding the dynamics of a clump. Here, $E_{kin}$ and $E_{g}$ are the kinetic and gravitational energies of the clump, respectively, $M_c$ and $R_c$ denote the mass and radius of the clump, $G$ is the gravitational constant, and $\sigma$ is the velocity dispersion. In the absence of external pressure or magnetic fields, an isothermal spherical clump will collapse if $\alpha_{vir} < 1$ \citep{2014prpl.conf..149T}. When the region is under external pressure, this pressure will work towards compressing the clump, and it could collapse even if $\alpha_{vir} > 1$.
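For reference, converting retrieved clump parameters into $\alpha_{vir}$ is a one-liner (a sketch; the unit choice, with $R_c$ in pc, $\sigma$ in km s$^{-1}$, and $M_c$ in $M_\odot$, is ours):
\begin{verbatim}
G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def virial_parameter(sigma, r_c, m_c):
    """alpha_vir = 5 sigma^2 R_c / (G M_c)."""
    return 5.0 * sigma ** 2 * r_c / (G * m_c)

# With M_c proportional to the total flux (the assumption used below),
# the bias ratio follows directly from the retrieved parameters:
# alpha_out / alpha_in
#   = (sigma_out / sigma_in)**2 * (r_out / r_in) * (flux_in / flux_out)
\end{verbatim}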
Several surveys have revealed a relationship between $\alpha_{vir}$ and the mass of the clumps, indicating that massive clumps tend to be more gravitationally unstable \citep{2014MNRAS.443.1555U,2018MNRAS.473.1059U,2018MNRAS.477.2220T}. However, the $\alpha_{vir}$ derived from the six algorithms can be influenced by the biases of the returned parameters. Here, we assume that the mass of a clump is proportional to its total flux ($M_c \sim flux$). The ratio between the output virial parameter ($\alpha_{out}$) derived from the six algorithms and the input virial parameter ($\alpha_{in}$) is displayed in Figure \ref{fig:vir_para}. \begin{figure} [!htb] \centering \subfloat[]{\label{fig:vir_size} \includegraphics[width=0.4\textwidth]{fig10_vir_size.eps}} \subfloat[]{\label{fig:vir_snr} \includegraphics[width=0.4\textwidth]{fig10_vir_snr.eps}}\\ \subfloat[]{\label{fig:vir} \includegraphics[width=0.4\textwidth]{fig10_vir.eps}} \caption{The ratio between the output virial parameter ($\alpha_{out}$) derived from the six algorithms and the input virial parameter ($\alpha_{in}$) for clumps with different size (a), SNR (b), and crowdedness (c).} \label{fig:vir_para} \end{figure} As shown in Figure \ref{fig:vir_para}, the six algorithms generally return a virial parameter close to the input value. The output virial parameter shows no trend with the size, SNR, or crowdedness of the clumps. ClumpFind1994 and Dendrograms return more accurate virial parameters than the other algorithms. \subsection{The Performance of the Algorithms in Identifying Clumps in the Rosette Molecular Cloud} \label{discussion2} Using the PMO-13.7m millimeter telescope at Delingha in China, \citet{2018ApJS..238...10L} conducted a large-scale simultaneous survey of $^{12}$CO, $^{13}$CO, and C$^{18}$O J=1-0 emission toward the Rosette molecular cloud (RMC) region with a sky coverage of $3.5 \times 2.5$ square degrees (Figure \ref{fig:3rosette}). The spatial pixels of the FITS data cube have a size of 30$\arcsec$ $\times$ 30$\arcsec$ and the effective spectral resolution is 61.0 kHz, corresponding to a velocity resolution of 0.16 km s$^{-1}$ at the 115 GHz frequency of the $^{12}$CO $J=1-0$ line. The sensitivity of the observation is estimated to be around 0.5 K for the $^{12}$CO $J=1-0$ emission and around 0.3 K for the $^{13}$CO and C$^{18}$O $J=1-0$ emission. We apply the six algorithms to identify clumps in the RMC, and the results are presented in Figures \ref{fig:rosette}-\ref{fig:hist_rosette}. \begin{figure} [!htb] \centering \subfloat[]{\label{fig:12rosette} \includegraphics[width=0.3\textwidth]{fig11_Rosette_U_m0_-2_30.eps}} \subfloat[]{\label{fig:13rosette} \includegraphics[width=0.3\textwidth]{fig11_Rosette_L_m0_3_26.eps}} \subfloat[]{\label{fig:18rosette} \includegraphics[width=0.3\textwidth]{fig11_Rosette_L2_m0_3_19.eps}} \caption{Maps of the $^{12}$CO emission intensity integrated from -2 km s$^{-1}$ to 30 km s$^{-1}$ (a), the $^{13}$CO emission intensity integrated from 3 km s$^{-1}$ to 26 km s$^{-1}$ (b), and the C$^{18}$O emission intensity integrated from 3 km s$^{-1}$ to 19 km s$^{-1}$ (c).
For details, see \citet{2018ApJS..238...10L}.} \label{fig:3rosette} \end{figure} \begin{figure} [!htb] \centering \subfloat[]{\label{fig:total_rosette} \includegraphics[width=0.4\textwidth]{fig12_total.eps}} \subfloat[]{\label{fig:time_rosette} \includegraphics[width=0.4\textwidth]{fig12_time_cost.eps}}\\ \caption{Left: the number of clumps identified by each algorithm in the $^{12}$CO, $^{13}$CO, and C$^{18}$O emission. Right: the CPU time consumed by the six algorithms.} \label{fig:rosette} \end{figure} Figure \ref{fig:total_rosette} presents the number of clumps identified by each algorithm in the $^{12}$CO, $^{13}$CO, and C$^{18}$O emission. It shows that ClumpFind2006 and ClumpFind1994 identify the most clumps in the RMC. As shown in the simulated tests above (Section \ref{result}), ClumpFind2006 and ClumpFind1994 always produce many more repeated and erroneous identifications than the number of input simulated clumps, and thus output more clumps than the other algorithms. Reinhold identifies the fewest clumps in the RMC because it can only find clumps whose peak brightness is higher than 12 times the noise level (see Section \ref{result} and Figure \ref{fig:peak_rosette}). Gaussclumps identifies relatively few $^{12}$CO clumps; one possible reason is that the $^{12}$CO emission is usually optically thick, so the velocity profiles of $^{12}$CO clumps do not follow Gaussian profiles well. The iterative fitting process of Gaussclumps is terminated when more than a designated number of consecutive clumps cannot be successfully fitted with a Gaussian profile. Dendrograms rarely finds false clumps in the simulated data (see Section \ref{result}). However, we find that more than 100 false clumps are identified by Dendrograms in the RMC, all distributed at the edges of the observational $^{12}$CO, $^{13}$CO, and C$^{18}$O data arrays. The numbers shown in Figure \ref{fig:total_rosette} and Table \ref{tab1} do not include the false clumps located at the edges of the data arrays. Figure \ref{fig:time_rosette} presents the CPU time consumed by the six algorithms in identifying $^{12}$CO, $^{13}$CO, and C$^{18}$O clumps. Reinhold takes the least time but also identifies the fewest clumps. Among the six algorithms, Fellwalker is the most efficient in terms of the number of identified clumps and the CPU time. Due to the iterative Gaussian fitting process, the time consumed by Gaussclumps is much greater than that of the other algorithms. The clock frequency of the CPU used in our testing is 2.2 GHz, the memory size of the computer is 16 GB, and the memory speed is 1600 MHz. \begin{table}[!htb] \bc \caption[]{Cross-matching of $^{12}$CO, $^{13}$CO, and C$^{18}$O clumps}\label{tab1} \setlength{\tabcolsep}{1pt} \scriptsize \begin{tabular}{lcccccccccccc} \hline\noalign{\smallskip} \hline\noalign{\smallskip} Algorithm && $^{12}$CO && $^{13}$CO && C$^{18}$O &&& $^{13}$CO/$^{12}$CO &&& C$^{18}$O/$^{13}$CO \\ \hline\noalign{\smallskip} ClumpFind1994 && 5653 && 969 && 44 &&& 76$\%$ &&& 55$\%$ \\ ClumpFind2006 && 8080 && 1480 && 75 &&& 74$\%$ &&& 37$\%$ \\ Gaussclumps && 646 && 611 && 4 &&& 67$\%$ &&& 100$\%$ \\ Fellwalker && 1243 && 370 && 22 &&& 81$\%$ &&& 91$\%$ \\ Reinhold && 247 && 68 && 0 &&& 15$\%$ &&& - \\ Dendrograms && 2726 && 727 && 36 &&& 92$\%$ &&& 0$\%$ \\ \noalign{\smallskip}\hline \end{tabular} \ec \tablecomments{0.6\textwidth}{Columns 2-4 give the output numbers of clumps.
Columns 5-6 give the percentage of the $^{13}$CO clumps that coincide with $^{12}$CO clumps and the percentage of the C$^{18}$O clumps that coincide with $^{13}$CO clumps, respectively.} \end{table} Table \ref{tab1} presents the cross-matching results between the $^{12}$CO, $^{13}$CO, and C$^{18}$O clumps. Due to the differences in optical depth between the $^{12}$CO, $^{13}$CO, and C$^{18}$O emission, it is usually expected that $^{13}$CO clumps are well associated with $^{12}$CO clumps and, likewise, that C$^{18}$O clumps are well associated with $^{13}$CO clumps. From Table \ref{tab1} it can be seen that the $^{13}$CO and C$^{18}$O clumps from Gaussclumps and Fellwalker exhibit the best association. However, Gaussclumps identifies only four C$^{18}$O clumps, which is much fewer than the number identified by eye. For ClumpFind1994, ClumpFind2006, Reinhold, and Dendrograms, the association between the $^{13}$CO and C$^{18}$O clumps is relatively low, which implies that their identifications deviate somewhat from the actual situation. \begin{figure} [!htb] \centering \subfloat[]{\label{fig:size_rosette} \includegraphics[width=0.4\textwidth]{fig13_size_hist.eps}} \subfloat[]{\label{fig:deltv_rosette} \includegraphics[width=0.4\textwidth]{fig13_sv_hist.eps}}\\ \subfloat[]{\label{fig:peak_rosette} \includegraphics[width=0.4\textwidth]{fig13_peak_hist.eps}} \subfloat[]{\label{fig:sum_rosette} \includegraphics[width=0.4\textwidth]{fig13_sum_hist.eps}} \caption{Distributions of the size (a), velocity dispersion (b), peak brightness (c), and total flux (d) of the $^{13}$CO clumps in the RMC derived from the six algorithms. The dashed line indicates the probability distribution of the pixel peak brightness of the RMC.} \label{fig:hist_rosette} \end{figure} Figure \ref{fig:hist_rosette} shows the distributions of the size, velocity dispersion, peak brightness, and total flux of the $^{13}$CO clumps, respectively. It can be seen that the clump sizes identified by Fellwalker are larger than those from the other algorithms. The most likely reason is that Fellwalker returns larger and more accurate clump sizes than the other algorithms (see Section \ref{result}). The clump velocity dispersions identified by Dendrograms are significantly larger than those from the other algorithms. As shown in Figure \ref{fig:peak_rosette}, most of the clumps identified by Gaussclumps and Reinhold have higher peak brightness than the clumps from the other algorithms, which is consistent with the test results in Section \ref{test2} that only clumps with high brightness can be identified by Gaussclumps and Reinhold. The distribution of the total flux of the $^{13}$CO clumps is presented in Figure \ref{fig:sum_rosette}. Because Fellwalker returns a larger and more accurate clump total flux than the other algorithms, while Gaussclumps and Reinhold tend to miss the clumps with low brightness, the total fluxes extracted by Reinhold, Gaussclumps, and Fellwalker are higher than those of the other algorithms. \section{Summary} \label{summary} Using simulated clumps, we have tested the performance of the GaussClumps, ClumpFind1994, ClumpFind2006, Fellwalker, Reinhold, and Dendrograms algorithms in identifying clumps. We focus on the performance of each algorithm in terms of completeness and parameter extraction. We generated the simulated clumps in three-dimensional arrays with background noise.
The brightness profiles of the clumps are of the form $T(s,v) = \frac{N_0}{\sqrt{2\pi}\sigma} \times \frac{1}{1+(s/r_0)^{1.0}} \times \exp\left(\frac{-(v-v_0)^2}{2\sigma^2}\right)$. The simulated clumps are designed to vary in size, brightness, and crowdedness in order to investigate the performance of the six algorithms in these aspects. For the six algorithms, the minimum FWHM and $\Delta$V of the identified clumps are set to 2 pixels and 2 channels, respectively. The minimum peak value parameter is set to 5 times the one-sigma noise level and the number of voxels of an output clump is required to be above 16. We summarize our results as follows. 1. In terms of detection completeness, Fellwalker, Dendrograms, and Gaussclumps are the first, second, and third best algorithms, respectively. The numbers of correct identifications of the six algorithms gradually increase as the size and SNR of the simulated clumps increase, and they decrease as the crowdedness increases. The repetition and error rates of ClumpFind increase as the clump size and SNR increase. Reinhold is only suitable for searching for clumps with peak brightness (SNR) higher than 10. The general performances of the six algorithms are summarized in Table \ref{tab2}. \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{table}[!htb] \bc \caption[]{General Performance of the Algorithms in Completeness}\label{tab2} \setlength{\tabcolsep}{1pt} \scriptsize \begin{tabular}{lcccccccc} \hline\noalign{\smallskip} \hline\noalign{\smallskip} Algorithm && Correct Rate && Repetitive Rate && Erroneous Rate \\ \hline\noalign{\smallskip} ClumpFind1994 && high (100$\%$) && very high ($<10000\%$) && very high ($<15000\%$)\\ ClumpFind2006 && high (100$\%$) && very high ($<12000\%$) && very high ($<20000\%$)\\ Gaussclumps && intermediate ($10\%-100\%$) && low ($<10\%$) && intermediate ($<70\%$) \\ Fellwalker && high ($90\%-100\%$) && very low (0$\%$) && very low (0$\%$)\\ Reinhold && low ($<10\%$ when clump peak brightness (SNR) lower than 15) && very low (0$\%$) && very low (0$\%$)\\ Dendrograms && high ($100\%$) && high ($<1000\%$) && high ($<2000\%$)\\ \noalign{\smallskip}\hline \end{tabular} \ec \end{table} 2. In terms of the accuracy of the retrieved parameters, the average errors in the clump parameters gradually increase as the clump size, SNR, and crowdedness increase. The average errors of the algorithms in extracting the parameters of the clumps in Sections \ref{test1}-\ref{test3} are presented in Table \ref{tab3}. As Table \ref{tab3} shows, the algorithms performing best in extracting parameters are Dendrograms in retrieving the clump position (E($|\Delta$X$|$)=$0.4$ pixels) and peak brightness (E($\Delta$I)=$1.3$ RMS), Fellwalker in size (E($\Delta$S)=$-7\%$) and total flux (E($\Delta$flux)=$-19\%$), and ClumpFind1994 in velocity dispersion (E($\Delta$V)=$-38\%$). All in all, Fellwalker, Dendrograms, and Gaussclumps exhibit better performance in extracting clump parameters than the other algorithms. Except for Fellwalker, the other algorithms exhibit significant deviations in extracting the total flux of clumps.
\begin{table}[!htb] \bc \caption[]{Average Errors of the Algorithms in Extracting Parameters}\label{tab3} \setlength{\tabcolsep}{1pt} \scriptsize \begin{tabular}{lcccccccc} \hline\noalign{\smallskip} \hline\noalign{\smallskip} Algorithm & ~~~E($|\Delta$X$|$) && ~~~E($\Delta$S) && ~~~E($\Delta$V) && ~~~E($\Delta$I) &~~~E($\Delta$flux) \\ & ~~~(pixels) && && && ~~~(rms) & \\ \hline\noalign{\smallskip} ClumpFind1994 & $1.1$ && $-38\%$ &&$-38\%$ && $1.3$ & $-70\%$\\ ClumpFind2006 & $1.1$ && $-57\%$ &&$-40\%$ && $1.3$ & $-83\%$\\ Gaussclumps & $1.1$ && $-65\%$ &&$-58\%$ && $1.5$ & $-78\%$\\ Fellwalker & $0.9$ && $-7\%$ &&$-43\%$ && $1.8$ & $-19\%$\\ Reinhold & $1.8$ && $-88\%$ &&$-53\%$ && $2.9$ & $-93\%$\\ Dendrograms & $0.4$ && $-25\%$ &&$-42\%$ && $1.3$ & $-65\%$\\ \noalign{\smallskip}\hline \end{tabular} \ec \tablecomments{0.6\textwidth}{The average errors of the algorithms in extracting parameters are derived from the (1000+1000+300) clumps in Sections \ref{test1}-\ref{test3}. Columns 2 and 5 give the average deviations of the position (E($|\Delta$X$|$)) and peak brightness (E($\Delta$I)). Columns 3, 4, and 6 give the relative errors of the size (E($\Delta$S)), velocity dispersion (E($\Delta$V)), and total flux (E($\Delta$flux)).} \end{table} 3. The ratios between the output virial parameter ($\alpha_{out}$) derived from the six algorithms and the input virial parameter ($\alpha_{in}$) show no trend with the size, SNR, or crowdedness of the clumps. For the simulated clumps, the six algorithms return virial parameters close to the input ones ($\alpha_{out}/\alpha_{in} = 0.5 - 1.5$). 4. When applying the six algorithms to clump identification in the RMC, Gaussclumps, ClumpFind1994, ClumpFind2006, Fellwalker, and Reinhold exhibit performance that is consistent with the results from the simulated tests. Dendrograms finds more than 100 false clumps at the edges of the observational data arrays. \normalem \begin{acknowledgements} We would like to thank the MWISP members for helpful discussions. We thank the anonymous referee for valuable comments and suggestions that helped to improve this paper. This work is supported by the National Key R\&D Program of China (NO. 2017YFA0402701). C. Li acknowledges support from NSFC grants 11503086 and 11503087. \end{acknowledgements} \clearpage
\section{Introduction} In 2006 Mats Boij and Jonas S\"oderberg conjectured a beautiful structure theorem on the cone of Betti tables of graded modules over the polynomial ring $R=k[x_1,\dots,x_n]$ with standard grading $\deg(x_i)=1$. They described the cone in terms of its extremal rays and gave an algorithm to decompose the Betti table of any finitely generated Cohen-Macaulay module into a positive rational combination of certain tables generating the extremal rays; they called these tables \emph{pure}. The existence of pure Betti tables was proved for char$(k)=0$ by Eisenbud, Fl{\o}ystad and Schreyer in 2007. The full conjecture was subsequently proven in arbitrary characteristic by Eisenbud and Schreyer, who provided a connection between Betti tables of modules over $R$ and cohomology tables of special vector bundles over $\mathbb{P}^{n-1}$, which they called \emph{supernatural}. There is a recent comprehensive survey by Gunnar Fl{\o}ystad \cite{Fl}.\\ A natural question is what happens in the non-standard graded case. A research group at a summer school in Snowbird, UT in 2010 \cite{Ba} investigated the case $R=k[x,y]$ with $\deg (x)=1$ and $\deg (y)=2$, and they realized that even in this very special case of non-standard grading it was not possible to describe the extremal rays by similarly easy-to-define ``\emph{pure}'' tables.\\ Thus, the next step is to look at a coarser invariant of graded modules: the Hilbert function. In the standard graded case, Mats Boij and Greg Smith described the cone of Hilbert functions of modules of dimension $d$, finitely generated in degree 0, whose Hilbert function coincides with the Hilbert polynomial in all degrees larger than a fixed $a$; they described this cone both by its extremal rays and by its supporting hyperplanes, see \cite{Bo} and \cite{BS}.\\ In the present article, we look at a similar cone for a simple non-standard graded case. Indeed, we consider artinian graded modules generated in degree 0 over $R=k[x,y]$, where $\deg(x)=1$ and $\deg(y)=n$ for some $n \in \mathbb{N}$. We can specify the extremal rays by a recursive structure (see theorem (\ref{extr})), which provides us with an algorithm to decompose any h-vector of such a module into a positive rational combination of generators of the extremal rays. \section{Describing the Cone} Let $R=k[x,y]$ be the graded ring with $\deg(x)=1$ and $\deg(y)=n$, $n \geq 1$, where $k$ is a field of characteristic zero. For an $\mathbb{N}$-graded $R$-module $M=\bigoplus_{i\geq 0} M_i$ the Hilbert function $h_M:\mathbb{N} \rightarrow \mathbb{N}$ is defined by $h_M(i):= \dim _kM_i.$ As we are only looking at artinian modules, the Hilbert function has only finitely many nontrivial values and therefore coincides with the h-vector of $M$. For more details about Hilbert functions and h-vectors see \cite{Br}. If we allowed generators of our module in arbitrary degrees, there would be no restriction on the possible h-vectors; in fact, they would cover the whole positive orthant. We therefore concentrate on modules generated in degree 0. The set of h-vectors, or more generally of Hilbert functions, naturally forms a semigroup: $h_{M\oplus N}(i)=h_M(i)+h_N(i)$. Therefore it makes sense to look at the cone of h-vectors, denoted by $$\mathbb{H} := \mathrm{cone}\{\text{h-vectors of artinian } R\text{-modules generated in degree } 0\}\subseteq \bigoplus_{j \in \mathbb{N}} \mathbb{Q}.$$ We want our h-vectors to live in a finite-dimensional vector space, therefore we usually work with bounded degrees: $\mathbb{H}(d):= \mathbb{H}\cap \mathbb{Q}^{d+1}$.
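For instance, for $n=2$ we have $\dim_k R_i=\#\{(a,b)\in\mathbb{N}^2 : a+2b=i\}=\lfloor \tfrac{i}{2}\rfloor +1$, so the h-vector of any cyclic module $R/I$ is bounded entrywise by $(1,1,2,2,3,3,\ldots)$; these maximal h-vectors will reappear below as the vectors $s^d$.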
We freely identify $(h_0,\ldots,h_e,0,\ldots)$ with $(h_0,\ldots,h_e)$. Because of the additivity of the Hilbert function, we can write any h-vector $h \in \mathbb{H}$ of a module as a sum of h-vectors of $R$-algebras, which can be identified with quotients $R/I$ with $I$ a homogeneous ideal. In the sequel we need the notion of lex-segment ideals, which relies on the lexicographic order. \begin{Def} The \emph{lexicographic} order of monomials $x^{a_1}y^{b_1}$, $x^{a_2}y^{b_2}\in k[x,y]$ (with arbitrary $\mathbb{N}$-grading) is defined as $$x^{a_1}y^{b_1}<_{lex}x^{a_2}y^{b_2}\;:\Leftrightarrow \;a_1>a_2\;\text{ or }\;a_1=a_2\text{ and }b_1>b_2.$$ \end{Def} \begin{Def} Let $I$ be a monomial ideal in $R=k[x,y]$ with $\deg(x)=1$ and $\deg(y)=n$, and let $I_d$ denote the group of homogeneous elements of degree $d$ in $I$. \\ We call $I$ a \emph{lex-segment ideal} iff for every monomial $x^ay^b\in I_d$ the monomials $x^{a+n}y^{b-1},\ldots,x^{d-n}y,x^d$ belong to $I_d$. \end{Def} \pagebreak The following theorem, due to G. Dalzotto and E. Sbarra, states that $\mathbb{H}$ is generated by the h-vectors of the $R$-algebras $R/I$, where $I$ is a lex-segment ideal. \begin{thm}[\cite{DS}, Theorem 4.16]\label{Mac} Let $R=k[x,y]$ with $\deg(x)=1$ and $\deg(y)=n$, $n\in\mathbb{N}$, and let $I$ be a homogeneous ideal in $R$. There exists a unique lex-segment ideal $L$ such that $h_{R/I}(t) = h_{R/L}(t)$ for any $t\in\mathbb{N}.$ \end{thm} Lex-segment ideals are monomial ideals, and in the bivariate case there is a very nice and convenient way of illustrating them as staircases; a more general description can be found in \cite{St}. \begin{exa}\label{bsp1} Let $\deg(y)=2$ and $I=\langle x^6,x^2y,xy^2,y^3 \rangle$. Then the lattice points in the non-shaded area form a $k$-basis of $R/I$.
\unitlength1cm \begin{picture}(5,3.2) \put(1.5,0){\begin{tikzpicture} \fill [gray!20] (4.6,0)--(3,0)--(3,.5)--(1,.5)--(1,1)--(.5,1)--(.5,1.5)--(0,1.5)--(0,2)--(4.6,2)--(4.6,0); \draw [-latex] (0,0)--(5,0); \draw [-latex] (0,0)--(0,2.5); \draw (4.8,-.2) node{x} (-.2,2.2) node{y}; \draw (3,0)--(3,.5)--(1,.5)--(1,1)--(.5,1)--(.5,1.5)--(0,1.5); \filldraw[black](0,0) circle (1.5pt) (0,.5) circle (1.5pt) (0,1) circle (1.5pt) (.5,0) circle (1.5pt) (.5,.5) circle (1.5pt)(1,0) circle (1.5pt)(1.5,0) circle (1.5pt)(2,0) circle (1.5pt)(2.5,0) circle (1.5pt); \end{tikzpicture} } \end{picture} Drawing for every generator a box marked with the corresponding degree, we get a simpler diagram: \begin{picture}(5,2.3) \put(2,0.3){\begin{tikzpicture} \draw (0,0)rectangle(.5,.5) (.5,0)rectangle(1,.5) (1,0)rectangle(1.5,.5) (1.5,0)rectangle(2,.5) (2,0)rectangle(2.5,.5) (2.5,0)rectangle(3,.5) (0,.5)rectangle(.5,1) (.5,.5)rectangle(1,1) (0,1)rectangle(.5,1.5); \draw (.25,.25) node{0} (.75,.25) node{1} (1.25,.25) node{2} (1.75,.25) node{3} (2.25,.25) node{4} (2.75,.25) node{5} (.25,.75) node{2} (.75,.75) node{3} (.25,1.25) node{4}; \end{tikzpicture} } \end{picture} The h-vector of this module is $h=(1,1,2,2,2,1)$, but there exist other monomial ideals with the same h-vector for the quotient: \begin{picture}(5,3.7) \put(2,2.75){$\langle x^4,xy^2,y^3 \rangle$} \put(2,.5){ \begin{tikzpicture} \draw (0,0)rectangle(.5,.5) (.5,0)rectangle(1,.5) (1,0)rectangle(1.5,.5) (1.5,0)rectangle(2,.5) (0,.5)rectangle(.5,1) (.5,.5)rectangle(1,1) (1,.5)rectangle(1.5,1) (1.5,.5)rectangle(2,1) (0,1)rectangle(.5,1.5); \draw (.25,.25) node{0} (.75,.25) node{1} (1.25,.25) node{2} (1.75,.25) node{3} (.25,.75) node{2} (.75,.75) node{3} (1.25,.75) node{4} (1.75,.75) node{5} (.25,1.25) node{4}; \end{tikzpicture}} \put(6.2,2.7){$\langle x^4,x^3y,x^2y^2,y^3 \rangle$} \put(6,0){ \begin{tikzpicture} \draw (0,0)rectangle(.5,.5) (.5,0)rectangle(1,.5) (1,0)rectangle(1.5,.5) (1.5,0)rectangle(2,.5) (0,.5)rectangle(.5,1) (.5,.5)rectangle(1,1) (1,.5)rectangle(1.5,1) (0,1)rectangle(.5,1.5) (.5,1)rectangle(1,1.5); \draw (.25,.25) node{0} (.75,.25) node{1} (1.25,.25) node{2} (1.75,.25) node{3} (.25,.75) node{2} (.75,.75) node{3} (1.25,.75) node{4} (.25,1.25) node{4} (.75,1.25) node{5}; \draw (-.1,-.1) .. controls (.5,-.5) and (1.5,-.5)..(2.1,-.1) ..controls (2.7,.3) and (2.7,1.2)..(2.1,1.6); \draw (-.1,-.1) .. controls (-.7,.3) and (-.7,1.2)..(-.1,1.6) ..controls (.5,2) and (1.5,2)..(2.1,1.6); \end{tikzpicture}} \end{picture} The encircled staircase corresponds to the lex-segment ideal with respect to $h$; just stack the boxes as far left as possible. In the sequel we will frequently identify an h-vector with the corresponding staircase diagram of the lex-segment ideal. \end{exa} \pagebreak Given the h-vector $h$ as a sum of h-vectors of $R$-algebras, we attach to this decomposition a three-dimensional diagram with boxes nested in a corner as follows: \begin{itemize}[leftmargin=2em] \item[(1)] Tilt the corresponding staircases forward so they are lying flat on the floor. \item[(2)] Blow them up to cubes of height one. \item[(3)] Stack the resulting box diagrams with respect to their degrees. \item[(4)] If some of these boxes overlap, drop them down to the next lower box of the same degree. \end{itemize} We call this stack an \emph{h-diagram} corresponding to the h-vector $h$. Note that these diagrams depend on the decomposition of $h$; only the total number of boxes in each degree is fixed by $h$.
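As a small computational aside before we demonstrate the h-diagram construction: the following brute-force Python sketch (with an ad-hoc encoding of a monomial $x^ay^b$ as the pair $(a,b)$; it is only meant for experimenting with examples) computes the h-vector of $R/I$ up to a fixed degree and reproduces example (\ref{bsp1}).

\begin{verbatim}
def h_vector(gens, n, d):
    """h-vector of R/I up to degree d, where R = k[x,y] with
    deg x = 1, deg y = n, and I is the monomial ideal generated
    by gens = [(a, b), ...], each pair encoding x^a y^b."""
    h = [0] * (d + 1)
    for b in range(d // n + 1):          # monomials x^a y^b with
        for a in range(d - n * b + 1):   # degree a + n*b <= d
            if not any(a >= ga and b >= gb for ga, gb in gens):
                h[a + n * b] += 1        # monomial survives in R/I
    return h

# Example: n = 2, I = <x^6, x^2*y, x*y^2, y^3>
assert h_vector([(6, 0), (2, 1), (1, 2), (0, 3)], 2, 5) \
    == [1, 1, 2, 2, 2, 1]
\end{verbatim}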
We demonstrate the construction in an example: \begin{exa} Let $n=3$ and $h=(3,2,1,4,1,0,2,1)=$\\ $=\;(1,1,1,2,0,0,0,0)\;+\;(1,1,0,1,1,0,1,1)\;+\;(1,0,0,1,0,0,1,0)$\\ The corresponding staircases look like this: \unitlength1cm \begin{picture}(11,2) \put(1,.2){ \begin{tikzpicture} \draw (0,0)rectangle(.5,.5); \draw (.5,0)rectangle(1,.5); \draw (1,0)rectangle(1.5,.5); \draw (1.5,0)rectangle(2,.5); \draw (0,.5)rectangle(.5,1); \draw (.25,.25) node{0} (.75,.25) node{1} (1.25,.25) node{2} (1.75,.25) node{3} (.25,.75) node{3}; \end{tikzpicture} } \put(5.6,.2){ \begin{tikzpicture} \draw (0,0)rectangle(.5,.5); \draw (.5,0)rectangle(1,.5); \draw (0,.5)rectangle(.5,1); \draw (.5,.5)rectangle(1,1); \draw (0,1)rectangle(.5,1.5); \draw (.5,1)rectangle(1,1.5); \draw (.25,.25) node{0} (.75,.25) node{1} (.25,.75) node{3} (.75,.75) node{4} (.25,1.25) node{6} (.75,1.25) node{7}; \end{tikzpicture} } \put(9.8,.2){ \begin{tikzpicture} \draw (0,0)rectangle(.5,.5); \draw (0,.5)rectangle(.5,1); \draw (0,1)rectangle(.5,1.5); \draw (.25,.25) node{0} (.25,.75) node{3} (.25,1.25) node{6}; \end{tikzpicture} } \end{picture} Turning the staircases down and blowing them up gives: \begin{picture}(11,2.3) \put(.8,0){ \begin{tikzpicture} \draw (0,0)--(2.5,0); \draw (0,0)--(0,1); \draw (0,0)--(-1,-1); \filldraw [fill=lgr!70,draw=black] (0,.5)--(-.5*.7,.5-.5*.7)--(.5-.5*.7,.5-.5*.7)--(.5,.5)--(0,.5); \filldraw [fill=lgr!70,draw=black] (.5,.5)--(.5-.5*.7,.5-.5*.7)--(1-.5*.7,.5-.5*.7)--(1,.5)--(.5,.5); \filldraw [fill=lgr!70,draw=black] (1,.5)--(1-.5*.7,.5-.5*.7)--(1.5-.5*.7,.5-.5*.7)--(1.5,.5)--(1,.5); \filldraw [fill=lgr!70,draw=black] (1.5,.5)--(1.5-.5*.7,.5-.5*.7)--(2-.5*.7,.5-.5*.7)--(2,.5)--(1.5,.5); \filldraw [fill=lgr!70,draw=black] (-.5*.7,.5-.5*.7)--(.5-.5*.7,.5-.5*.7)--(.5-.7,.5-.7)--(-.7,.5-.7)--(-.5*.7,.5-.5*.7); \filldraw [fill=dgr, draw=black] (.5-.5*.7,.5-.5*.7)--(.5-.7,.5-.7)--(.5-.7,-.7)--(.5-.5*.7,-.5*.7)--(.5-.5*.7,.5-.5*.7); \filldraw [fill=dgr, draw=black] (2,.5)--(2-.5*.7,.5-.5*.7)--(2-.5*.7,-.5*.7)--(2,0)--(2,.5); \filldraw [fill=gr, draw=black] (-.7,.5-.7) rectangle (.5-.7,-.7); \filldraw [fill=gr, draw=black] (.5-.5*.7,.5-.5*.7) rectangle (1-.5*.7,-.5*.7); \filldraw [fill=gr, draw=black] (1-.5*.7,.5-.5*.7) rectangle (1.5-.5*.7,-.5*.7); \filldraw [fill=gr, draw=black] (1.5-.5*.7,.5-.5*.7) rectangle (2-.5*.7,-.5*.7); \draw (.45-.5*.7,.68-.5*.7) node{\emph{0}}; \draw (.95-.5*.7,.68-.5*.7) node{\emph{1}}; \draw (1.45-.5*.7,.68-.5*.7) node{\emph{2}}; \draw (1.95-.5*.7,.68-.5*.7) node{\emph{3}}; \draw (.45-1*.7,.68-1*.7) node{\emph{3}}; \end{tikzpicture} } \put(5.4,0){ \begin{tikzpicture} \draw (0,0)--(2,0); \draw (0,0)--(0,1); \draw (0,0)--(-1.2,-1.2); \filldraw [fill=lgr!70,draw=black] (0,.5)--(-.5*.7,.5-.5*.7)--(.5-.5*.7,.5-.5*.7)--(.5,.5)--(0,.5); \filldraw [fill=lgr!70,draw=black] (.5,.5)--(.5-.5*.7,.5-.5*.7)--(1-.5*.7,.5-.5*.7)--(1,.5)--(.5,.5); \filldraw [fill=lgr!70,draw=black] (-.5*.7,.5-.5*.7)--(-1*.7,.5-1*.7)--(.5-1*.7,.5-1*.7)--(.5-.5*.7,.5-.5*.7)--(-.5*.7,.5-.5*.7); \filldraw [fill=lgr!70,draw=black] (.5-.5*.7,.5-.5*.7)--(.5-1*.7,.5-1*.7)--(1-1*.7,.5-1*.7)--(1-.5*.7,.5-.5*.7)--(.5-.5*.7,.5-.5*.7); \filldraw [fill=lgr!70,draw=black] (-1*.7,.5-1*.7)--(-1.5*.7,.5-1.5*.7)--(.5-1.5*.7,.5-1.5*.7)--(.5-1*.7,.5-1*.7)--(-1*.7,.5-1*.7); \filldraw [fill=lgr!70,draw=black] (.5-1*.7,.5-1*.7)--(.5-1.5*.7,.5-1.5*.7)--(1-1.5*.7,.5-1.5*.7)--(1-1*.7,.5-1*.7)--(.5-1*.7,.5-1*.7); \filldraw [fill=dgr, draw=black] (1,.5)--(1-.5*.7,.5-.5*.7)--(1-.5*.7,-.5*.7)--(1,0)--(1,.5); \filldraw [fill=dgr, draw=black] 
(1-.5*.7,.5-.5*.7)--(1-1*.7,.5-1*.7)--(1-1*.7,-1*.7)--(1-.5*.7,-.5*.7)--(1-.5*.7,.5-.5*.7); \filldraw [fill=dgr, draw=black] (1-1*.7,.5-1*.7)--(1-1.5*.7,.5-1.5*.7)--(1-1.5*.7,-1.5*.7)--(1-1*.7,-1*.7)--(1-1*.7,.5-1*.7); \filldraw [fill=gr, draw=black] (-1.5*.7,.5-1.5*.7)rectangle (.5-1.5*.7,-1.5*.7); \filldraw [fill=gr, draw=black] (.5-1.5*.7,.5-1.5*.7)rectangle (1-1.5*.7,-1.5*.7); \draw (.45-.5*.7,.68-.5*.7) node{\emph{0}}; \draw (.95-.5*.7,.68-.5*.7) node{\emph{1}}; \draw (.45-1*.7,.68-1*.7) node{\emph{3}}; \draw (.95-1*.7,.68-1*.7) node{\emph{4}}; \draw (.45-1.5*.7,.68-1.5*.7) node{\emph{6}}; \draw (.95-1.5*.7,.68-1.5*.7) node{\emph{7}}; \end{tikzpicture} } \put(9.6,0){ \begin{tikzpicture} \draw (0,0)--(2,0); \draw (0,0)--(0,1); \draw (0,0)--(-1.2,-1.2); \filldraw [fill=lgr!70,draw=black] (0,.5)--(-.5*.7,.5-.5*.7)--(.5-.5*.7,.5-.5*.7)--(.5,.5)--(0,.5); \filldraw [fill=lgr!70,draw=black] (-.5*.7,.5-.5*.7)--(-1*.7,.5-1*.7)--(.5-1*.7,.5-1*.7)--(.5-.5*.7,.5-.5*.7)--(-.5*.7,.5-.5*.7); \filldraw [fill=lgr!70,draw=black] (-1*.7,.5-1*.7)--(-1.5*.7,.5-1.5*.7)--(.5-1.5*.7,.5-1.5*.7)--(.5-1*.7,.5-1*.7)--(-1*.7,.5-1*.7); \filldraw [fill=dgr, draw=black] (.5,.5)--(.5-.5*.7,.5-.5*.7)--(.5-.5*.7,-.5*.7)--(.5,0)--(.5,.5); \filldraw [fill=dgr, draw=black] (.5-.5*.7,.5-.5*.7)--(.5-1*.7,.5-1*.7)--(.5-1*.7,-1*.7)--(.5-.5*.7,-.5*.7)--(.5-.5*.7,.5-.5*.7); \filldraw [fill=dgr, draw=black] (.5-1*.7,.5-1*.7)--(.5-1.5*.7,.5-1.5*.7)--(.5-1.5*.7,-1.5*.7)--(.5-1*.7,-1*.7)--(.5-1*.7,.5-1*.7); \filldraw [fill=gr, draw=black] (-1.5*.7,.5-1.5*.7)rectangle (.5-1.5*.7,-1.5*.7); \draw (.45-.5*.7,.68-.5*.7) node{\emph{0}}; \draw (.45-1*.7,.68-1*.7) node{\emph{3}}; \draw (.45-1.5*.7,.68-1.5*.7) node{\emph{6}}; \end{tikzpicture} } \end{picture} Stacking these box diagrams with respect to their degrees we get one big stack: \begin{picture}(8,3.3) \put(2.6,0){ \begin{tikzpicture} \draw (0,0)--(4,0); \draw (0,0)--(0,2); \draw (0,0)--(-1.2,-1.2); \filldraw [fill=gr,draw=black] (.5-.5*.7,.5-.5*.7)rectangle ( 1-.5*.7,-.5*.7); \filldraw [fill=lgr!70, draw=black] (1,.5)--(1.5,.5)--(1.5-.5*.7,.5-.5*.7)--(1-.5*.7,.5-.5*.7)--(1,.5); \filldraw [fill=dgr, draw=black] (1.5,.5)--(1.5,0)--(1.5-.5*.7,-.5*.7)--(1.5-.5*.7,.5-.5*.7)--(1.5,.5); \filldraw [fill=gr, draw=black] (1-.5*.7,-.5*.7)rectangle(1.5-.5*.7,.5-.5*.7); \draw (1.45-.5*.7,.68-.5*.7) node{\emph{2}}; \filldraw [fill=lgr!70, draw=black] (1.5,.5)--(2,.5)--(2-.5*.7,.5-.5*.7)--(1.5-.5*.7,.5-.5*.7)--(1.5,.5); \filldraw [fill=dgr, draw=black] (2,.5)--(2,0)--(2-.5*.7,-.5*.7)--(2-.5*.7,.5-.5*.7)--(2,.5); \filldraw [fill=dgr, draw=black] (.5-1*.7,-1*.7+.5)--(.5-.5*.7,-.5*.7+.5)--(.5-.5*.7,-.5*.7)--(.5-1*.7,-1*.7)--(.5-1*.7,-1*.7+.5); \filldraw [fill=gr, draw=black] (1.5-.5*.7,.5-.5*.7)rectangle(2-.5*.7,-.5*.7); \filldraw [fill=gr, draw=black] (-1*.7,.5-1*.7)rectangle(.5-1*.7,-1*.7); \draw (1.95-.5*.7,.68-.5*.7) node{\emph{3}}; \filldraw [fill=lgr!70, draw=black] (.5,1)--(.5-.5*.7,1-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1)--(.5,1); \filldraw [fill=dgr, draw=black] (1,1)--(1-.5*.7,1-.5*.7)--(1-.5*.7,.5-.5*.7)--(1,.5)--(1,1); \filldraw [fill=gr, draw=black] (1-.5*.7,1-.5*.7)rectangle(.5-.5*.7,-.5*.7+.5); \draw (.95-.5*.7,1.18-.5*.7) node{\emph{1}}; \filldraw [fill=lgr!70, draw=black] (.5-1*.7,1-1*.7)-- (.5-.5*.7,1-.5*.7)-- (1-.5*.7,1-.5*.7)-- (1-1*.7,1-1*.7)-- (.5-1*.7,1-1*.7); \filldraw [fill=dgr, draw=black] (1-.5*.7,1-.5*.7)--(1-1*.7,1-1*.7)--(1-1*.7,.5-1*.7)--(1-.5*.7,.5-.5*.7)--(1-.5*.7,1-.5*.7); \draw (.95-1*.7,1.18-1*.7) node{\emph{4}}; \filldraw [fill=gr, draw=black] 
(-1.5*.7,1-1.5*.7)rectangle (.5-1.5*.7,.5-1.5*.7) ; \filldraw [fill=lgr!70, draw=black](.5-1.5*.7,1-1.5*.7)-- (.5-1*.7,1-1*.7)-- (1-1*.7,1-1*.7)-- (1-1.5*.7,1-1.5*.7)-- (.5-1.5*.7,1-1.5*.7); \filldraw [fill=dgr, draw=black] (1-1*.7,1-1*.7)--(1-1.5*.7,1-1.5*.7)--(1-1.5*.7,.5-1.5*.7)--(1-1*.7,.5-1*.7)--(1-1*.7,1-1*.7); \filldraw [fill=gr, draw=black] (.5-1.5*.7,.5-1.5*.7)rectangle(1-1.5*.7,1-1.5*.7); \draw (.95-1.5*.7,1.18-1.5*.7) node{\emph{7}}; \filldraw [fill=lgr!70,draw=black] (0,1.5)--(-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5,1.5)--(0,1.5); \filldraw [fill=dgr, draw=black] (.5-.5*.7,-.5*.7+1.5)--(.5,1.5)--(.5,1)--(.5-.5*.7,-.5*.7+1)--(.5-.5*.7,-.5*.7+1.5); \draw (.45-.5*.7,1.68-.5*.7) node{\emph{0}}; \filldraw [fill=lgr!70, draw=black] (-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5-1*.7,-1*.7+1.5)--(-1*.7,-1*.7+1.5)--(-.5*.7,1.5-.5*.7); \filldraw [fill=dgr, draw=black] (.5-.5*.7,-.5*.7+1.5)--(.5-1*.7,-1*.7+1.5)--(.5-1*.7,-1*.7+1)--(.5-.5*.7,-.5*.7+1)--(.5-.5*.7,-.5*.7+1.5); \filldraw [fill=gr, draw=black] (-1*.7,-1*.7+1.5)rectangle(.5-1*.7,-1*.7+1); \draw (.45-1*.7,1.68-1*.7) node{\emph{3}}; \filldraw [fill=lgr!70, draw=black] (-1*.7,1.5-1*.7)--(.5-1*.7,-1*.7+1.5)--(.5-1.5*.7,-1.5*.7+1.5)--(-1.5*.7,-1.5*.7+1.5)--(-1*.7,-1*.7+1.5); \filldraw [fill=dgr, draw=black] (.5-1*.7,-1*.7+1.5)--(.5-1.5*.7,-1.5*.7+1.5)--(.5-1.5*.7,-1.5*.7+1)--(.5-1*.7,-1*.7+1)-- (.5-1*.7,-1*.7+1.5); \filldraw [fill=gr, draw=black] (-1.5*.7,1.5-1.5*.7)rectangle (.5-1.5*.7,1-1.5*.7) ; \draw (.45-1.5*.7,1.68-1.5*.7) node{\emph{6}}; \end{tikzpicture} } \end{picture} Now some of these boxes are not grounded so we let them drop: \begin{figure}[h] \begin{picture}(12,3.1) \put(0.4,-1.2){ \begin{tikzpicture} \draw (0,0)--(4,0); \draw (0,0)--(0,2); \draw (0,0)--(-2.3,-2.3); \filldraw [fill=gr,draw=black] (.5-1.5*.7,.5-1.5*.7)rectangle ( -1.5*.7,-1.5*.7); \filldraw [fill=lgr!70, draw=black](.5-1.5*.7,.5-1.5*.7)-- (.5-1*.7,.5-1*.7)-- (1-1*.7,.5-1*.7)-- (1-1.5*.7,.5-1.5*.7)-- (.5-1.5*.7,.5-1.5*.7); \filldraw [fill=dgr, draw=black] (1-1*.7,.5-1*.7)-- (1-1.5*.7,.5-1.5*.7)--(1-1.5*.7,-1.5*.7)--(1-1*.7,-1*.7)--(1-1*.7,.5-1*.7); \filldraw [fill=gr, draw=black] (.5-1.5*.7,-1.5*.7)rectangle(1-1.5*.7,.5-1.5*.7); \draw (.95-1.5*.7,.68-1.5*.7) node{\emph{7}}; \filldraw [fill=lgr!70, draw=black] (.5-1*.7,.5-1*.7)-- (.5-.5*.7,.5-.5*.7)-- (1-.5*.7,.5-.5*.7)-- (1-1*.7,.5-1*.7)-- (.5-1*.7,.5-1*.7); \filldraw [fill=dgr, draw=black] (1-.5*.7,.5-.5*.7)--(1-1*.7,.5-1*.7)--(1-1*.7,-1*.7)--(1-.5*.7,-.5*.7)--(1-.5*.7,.5-.5*.7); \draw (.95-1*.7,.68-1*.7) node{\emph{4}}; \filldraw [fill=lgr!70, draw=black] (1,.5)--(1.5,.5)--(1.5-.5*.7,.5-.5*.7)--(1-.5*.7,.5-.5*.7)--(1,.5); \filldraw [fill=dgr, draw=black] (1.5,.5)--(1.5,0)--(1.5-.5*.7,-.5*.7)--(1.5-.5*.7,.5-.5*.7)--(1.5,.5); \filldraw [fill=gr, draw=black] (1-.5*.7,-.5*.7)rectangle(1.5-.5*.7,.5-.5*.7); \draw (1.45-.5*.7,.68-.5*.7) node{\emph{2}}; \filldraw [fill=lgr!70, draw=black] (1.5,.5)--(2,.5)--(2-.5*.7,.5-.5*.7)--(1.5-.5*.7,.5-.5*.7)--(1.5,.5); \filldraw [fill=dgr, draw=black] (2,.5)--(2,0)--(2-.5*.7,-.5*.7)--(2-.5*.7,.5-.5*.7)--(2,.5); \filldraw [fill=gr, draw=black] (1.5-.5*.7,.5-.5*.7)rectangle(2-.5*.7,-.5*.7); \draw (1.95-.5*.7,.68-.5*.7) node{\emph{3}}; \fill[opacity=0.05](0,.5)--(-1.75*.7,.5-1.75*.7)--(3.3-1.75*.7,.5-1.75*.7)--(3.3,.5); \draw [dotted] (2,.5)--(4,.5); \draw [dotted] (-1.5*.7,.5-1.5*.7)-- (-2.5*.7,.5-2.5*.7); \draw (3.55,.25)node{\scriptsize{level 1}}; \filldraw [fill=lgr!70, draw=black] 
(-1*.7,-1*.7+1)--(.5-1*.7,-1*.7+1)--(.5-1.5*.7,-1.5*.7+1)--(-1.5*.7,-1.5*.7+1)--(-1*.7,-1*.7+1); \filldraw [fill=dgr, draw=black] (.5-1*.7,-1*.7+1)--(.5-1.5*.7,-1.5*.7+1)--(.5-1.5*.7,-1.5*.7+.5)--(.5-1*.7,-1*.7+.5)-- (.5-1*.7,-1*.7+1); \filldraw [fill=gr, draw=black] (-1.5*.7,1-1.5*.7)rectangle (.5-1.5*.7,.5-1.5*.7) ; \draw (.45-1.5*.7,1.18-1.5*.7) node{\emph{6}}; \filldraw [fill=dgr, draw=black] (.5-1*.7,-1*.7+1)--(.5-.5*.7,-.5*.7+1)--(.5-.5*.7,-.5*.7+.5)--(.5-1*.7,-1*.7+.5)--(.5-1*.7,-1*.7+1); \filldraw [fill=lgr!70, draw=black] (.5,1)--(.5-.5*.7,1-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1)--(.5,1); \filldraw [fill=dgr, draw=black] (1,1)--(1-.5*.7,1-.5*.7)--(1-.5*.7,.5-.5*.7)--(1,.5)--(1,1); \filldraw [fill=gr, draw=black] (1-.5*.7,1-.5*.7)rectangle(.5-.5*.7,-.5*.7+.5); \draw (.95-.5*.7,1.18-.5*.7) node{\emph{1}}; \fill[opacity=0.05](0,1)--(-1.75*.7,1-1.75*.7)--(3.3-1.75*.7,1-1.75*.7)--(3.3,1); \draw [dotted] (1,1)--(4,1); \draw [dotted] (-1.5*.7,1-1.5*.7)-- (-2.5*.7,1-2.5*.7); \draw (3.55,.75)node{\scriptsize{level 2}}; \filldraw [fill=lgr!70,draw=black] (0,1.5)--(-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5,1.5)--(0,1.5); \filldraw [fill=dgr, draw=black] (.5-.5*.7,-.5*.7+1.5)--(.5,1.5)--(.5,1)--(.5-.5*.7,-.5*.7+1)--(.5-.5*.7,-.5*.7+1.5); \draw (.45-.5*.7,1.68-.5*.7) node{\emph{0}}; \filldraw [fill=lgr!70, draw=black] (-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5-1*.7,-1*.7+1.5)--(-1*.7,-1*.7+1.5)--(-.5*.7,1.5-.5*.7); \filldraw [fill=dgr, draw=black] (.5-.5*.7,-.5*.7+1.5)--(.5-1*.7,-1*.7+1.5)--(.5-1*.7,-1*.7+1)--(.5-.5*.7,-.5*.7+1)--(.5-.5*.7,-.5*.7+1.5); \filldraw [fill=gr, draw=black] (-1*.7,-1*.7+1.5)rectangle(.5-1*.7,-1*.7+1); \draw (.45-1*.7,1.68-1*.7) node{\emph{3}}; \fill[opacity=0.05](0,1.5)--(-1.75*.7,1.5-1.75*.7)--(3.3-1.75*.7,1.5-1.75*.7)--(3.3,1.5); \draw [dotted] (.5,1.5)--(4,1.5); \draw [dotted] (-1*.7,1.5-1*.7)-- (-2.5*.7,1.5-2.5*.7); \draw (3.55,1.25)node{\scriptsize{level 3}}; \draw [densely dotted] (2,0)--(4,0) (2-.5*.7,-.5*.7)--(4,-.5*.7); \draw (3.3,-.17) node {\scriptsize{first row}}; \draw [densely dotted] (.5-1.5*.7,-1.5*.7)-- (.5-3.3*.7,-3.3*.7); \draw (.2-2.4*.7,-2.4*.7) node{\rotatebox{45}{\scriptsize{first column}}}; \end{tikzpicture} } \end{picture} \caption{An example of an h-diagram} \end{figure} \end{exa} We call the maximal layers of the h-diagram in which no internal stairs occur \emph{levels} and count them from the bottom; later on these levels need not have integer height. We also speak of \emph{rows} in the h-diagram, meaning the stacks of boxes in the rows parallel to the rear wall, and analogously of \emph{columns} with respect to the side wall, as indicated in the picture. The staircases are decreasing by construction, hence the boxes in the h-diagram are also decreasing in every row and in every column. Of course, for any such decreasing box diagram we can find a module with the corresponding h-vector by identifying every level with the staircase of an ideal times its height and summing up. Considering the cone of h-vectors is equivalent to allowing levels at every rational height. Therefore we get the following proposition: \begin{prop}\label{hcone} Let\, $\deg(y)=n,\; n \in \mathbb{N}$. An element $h=(h_0,\ldots,h_d) \in \mathbb{Q}_{\geq 0}^{d+1}$ \linebreak belongs to the cone $\mathbb{H}(d)$ if and only if there exists a decomposition of the components $h_i=\sum_{j=1}^{s_i}h_i^j$ (where $s_i=\lfloor \frac{i}{n}\rfloor+1$ denotes the $i$-th coefficient of the maximal h-vector $s^d$ introduced below) with $h_i^j \geq 0$ for all $i=0,\ldots,d$ and $h_i^j=0$ for $j>s_i$,
and \begin{itemize} \item[(1)] $h_i^j \geq h_{i+1}^j$ \hspace{13mm} for all\; $j=1,\ldots,s_d$ and $i=n(j-1),\ldots,d$\; and \\ \vspace{-3mm} \item[(2)] $h_{ni+r}^j \geq h_{n(i+1)+r}^{j+1}$\;\; for all \;$i=0,...,\lfloor \frac{d}{n}\rfloor,\; j=1,...,s_{ni}$ \;and\, $r=0,...,n-1$. \end{itemize} We will call any such decomposition an \emph{h-diagram}. \begin{proof} To show that every element of the cone admits such a decomposition it is enough to show it for the generators, since both conditions are additive. Let $h=(1,h_1,\ldots,h_d)$ be a generator of the cone, i.e.\ the h-vector of some $R/I$. By theorem (\ref{Mac}) we may assume that $I$ is monomial, hence we can look at the corresponding staircase. Every box in this staircase marked with $i$ stands for a generator of $R/I$ of degree $i$. Setting $h_i^j=1$ if there is a box marked with $i$ in the $j$-th row and $h_i^j=0$ otherwise, we get the desired decomposition. As the staircases visualizing ideals are always nested in the corner, meaning that there are no holes and the boxes are decreasing from left to right, the conditions (1) and (2) are fulfilled. For the other direction we build an h-diagram out of the decomposition of a vector $h \in \mathbb{Q}_{\geq0}^{d+1}$, placing $h_i^j$ boxes in the $j$-th row and the $(i-(j-1)n+1)$-th column of a three-dimensional diagram. Condition (1) ensures that the rows in this diagram are decreasing, and condition (2) that the columns are decreasing as well. Cutting this diagram into levels as described before, we get in every level $\ell$ a staircase corresponding to an ideal $I_\ell$, blown up to the level height $q_\ell$.\\ Let $q$ be a common multiple of the denominators of the $q_\ell$; then $$M= \bigoplus_{\text{levels}}qq_\ell\, R/I_\ell,$$ where $qq_\ell\,R/I_\ell$ stands for $qq_\ell$ copies of $R/I_\ell$, is an $R$-module with h-vector $q \cdot h$ of degree $d$ and therefore $h \in \mathbb{H}(d)$.\qedhere \end{proof} \pagebreak Next we want to list the extremal points, i.e.\ the first integer points on the extremal rays. We denote by $\text{Ex}(d)$ the extremal points of the cone $\mathbb{H}(d)$ for a fixed integer $d$.\\ There are some distinguished h-vectors we want to give a special notation. We denote by $s^d=(\underbrace{1,...,1}_{n},2,\ldots,2,3,\ldots,\lfloor\frac{d}{n}\rfloor+1)$ the h-vector of length $d+1$ of an $R$-algebra generated in degree 0 with maximal entries, and we write $s_m$ for the $m$-th coefficient of $s^d$; since the nontrivial coefficients do not depend on the upper index $d$, we suppress it. For $d \in \mathbb{N}$ write $d=n\cdot m +r$ where $r \in \{0,\dots,n-1\}$. We denote by $t^d$ the h-vector of the shape $$(\underbrace{1,\ldots,1}_{r+1},\underbrace{0,\ldots,0}_{n-r-1},1,\ldots,1,0,\ldots,0,\ldots,\underbrace{1,\ldots,1}_{r+1}).$$ The parts $1,\ldots,1,0,\ldots,0$ occur $m$ times, therefore $t^d$ has length $d+1$.\\ Note that this is the h-vector of the staircase given by a rectangle of size \linebreak $(r+1)\times (m+1)$; we call $t^d$ the \emph{tower} of degree $d$. In example (\ref{notextr}) we will see that the towers for $d\equiv n-1$ modulo $n$ are decomposable.\\ To write down the extremal points we need a kind of glueing operation: \begin{Def} Let $d=n\cdot m+r$ with $r\in\{0,\ldots,n-1\}$ \;and $h \in \mathbb{H}(n\cdot m-r-3)$. Then we define $$t^d*h:=t^d+(\underbrace{0,\ldots,0}_{r+1},h_0,\ldots,h_{n\cdot m-r-3},\underbrace{0,\ldots,0}_{r+2}).$$ \end{Def} \begin{rem} The definition of the $*$-operation is not as arbitrary as it may look. Stated in terms of staircases, this is just the procedure of taking $t^d$ and glueing $h$ onto the right hand side.
This is obviously still an h-vector of length $d+1$. \end{rem} \begin{exa} Let $n=3, d=7=3\cdot 2+1$ and $h=(1,1)$.\\We get \;$t^d= (1,1,0,1,1,0,1,1) $\, and \;$t^d*h= (1,1,1,2,1,0,1,1).$ In the language of staircases: \unitlength1cm \begin{picture}(10,1.3) \put(2,0){ \begin{tikzpicture} \draw (0,0)rectangle (.3,.3) (.3,0)rectangle (.6,.3) (0,.3)rectangle (.3,.6) (.3,.3)rectangle (.6,.6) (0,.6)rectangle (.3,.9) (.3,.6)rectangle (.6,.9); \draw (1,.5) node {$*$}; \end{tikzpicture} } \put(3.5,.3){ \begin{tikzpicture} \draw (0,0)rectangle (.3,.3) (.3,0)rectangle (.6,.3); \draw (1.2,.2) node {$=$}; \end{tikzpicture} } \put(5.5,0){ \begin{tikzpicture} \draw (0,0)rectangle (.3,.3) (.3,0)rectangle (.6,.3) (0,.3)rectangle (.3,.6) (.3,.3)rectangle (.6,.6) (0,.6)rectangle (.3,.9) (.3,.6)rectangle (.6,.9) (.6,0)rectangle (.9,.3) (.9,0)rectangle (1.2,.3); \end{tikzpicture} } \end{picture} \end{exa} Now we can state our main result which will be proven in section 3: \begin{thm}\label{extr} Let $\deg(y)=n \in \mathbb{N}$ \, and\, $d=n\cdot m+r$ with $m \geq 0$ and $ r\in \{0,\dots,n-1\}$. The extremal points of \;$\mathbb{H}$(d)\, are given by: \begin{itemize} \item[(0)] For $d \leq n-1: \;\;\text{Ex}(d)=\{h=(1,\dots,1)$ of length $ \leq d+1 \}$. \item[(1)] For $r\in\{0,\ldots,n-2\}: \;\; \text{Ex}(d)=\text{Ex}(d-1)\cup s^{d}\cup t^{d}\cup t^{d}*\text{Ex}(d-2r-3)$. \item[(2)] For $r=n-1: \;\;\text{Ex}(d)=\text{Ex}(d-1)\cup s^{d}$. \end{itemize} \end{thm} \begin{exa} Let $\deg(y)=2$. We look for the extremal points up to degree 6: \unitlength1cm \begin{picture}(11,9) \put(2,0){ \begin{tikzpicture} \draw (-.1,.4)rectangle(.1,.6) (.5,1.2)rectangle(.7,1.4) (.7,1.2)rectangle(.9,1.4) (1.2,2)rectangle(1.4,2.2) (1.4,2)rectangle(1.6,2.2) (1.6,2)rectangle(1.8,2.2) (1.2,2.2)rectangle(1.4,2.4) (1.4,.3)rectangle(1.6,.5) (1.4,.5)rectangle(1.6,.7) (2.1,3)rectangle(2.3,3.2) (2.3,3)rectangle(2.5,3.2) (2.5,3)rectangle(2.7,3.2) (2.7,3)rectangle(2.9,3.2) (2.1,3.2)rectangle(2.3,3.4) (2.3,3.2)rectangle(2.5,3.4) (3.2,4)rectangle(3.4,4.2)(3.4,4)rectangle(3.6,4.2) (3.6,4)rectangle(3.8,4.2)(3.8,4)rectangle(4,4.2)(4,4)rectangle(4.2,4.2) (3.2,4.2)rectangle(3.4,4.4)(3.4,4.2)rectangle(3.6,4.4) (3.6,4.2)rectangle(3.8,4.4) (3.2,4.4)rectangle(3.4,4.6) (3.4,2.7)rectangle(3.6,2.9) (3.6,2.7)rectangle(3.8,2.9) (3.8,2.7)rectangle(4,2.9) (3.4,2.9)rectangle(3.6,3.1) (3.4,3.1)rectangle(3.6,3.3) (3.5,1.5)rectangle(3.7,1.7) (3.7,1.5)rectangle(3.9,1.7) (3.5,1.7)rectangle(3.7,1.9) (3.5,1.9)rectangle(3.7,2.1) (3.6,.2)rectangle(3.8,.4) (3.6,.4)rectangle(3.8,.6) (3.6,.6)rectangle(3.8,.8) (4.5,5.2)rectangle(4.7,5.4) (4.7,5.2)rectangle(4.9,5.4) (4.9,5.2)rectangle(5.1,5.4) (5.1,5.2)rectangle(5.3,5.4) (5.3,5.2)rectangle(5.5,5.4) (5.5,5.2)rectangle(5.7,5.4) (4.5,5.4)rectangle(4.7,5.6) (4.7,5.4)rectangle(4.9,5.6) (4.9,5.4)rectangle(5.1,5.6) (5.1,5.4)rectangle(5.3,5.6) (4.5,5.6)rectangle(4.7,5.8) (4.7,5.6)rectangle(4.9,5.8) (6,6.4)rectangle(6.2,6.6) (6.2,6.4)rectangle(6.4,6.6) (6.4,6.4)rectangle(6.6,6.6) (6.6,6.4)rectangle(6.8,6.6) (6.8,6.4)rectangle(7,6.6) (7,6.4)rectangle(7.2,6.6) (7.2,6.4)rectangle(7.4,6.6) (6,6.6)rectangle(6.2,6.8) (6.2,6.6)rectangle(6.4,6.8) (6.4,6.6)rectangle(6.6,6.8) (6.6,6.6)rectangle(6.8,6.8) (6.8,6.6)rectangle(7,6.8) (6,6.8)rectangle(6.2,7) (6.2,6.8)rectangle(6.4,7) (6.4,6.8)rectangle(6.6,7) (6,7)rectangle(6.2,7.2) (6.2,5.2)rectangle(6.4,5.4) (6.4,5.2)rectangle(6.6,5.4) (6.6,5.2)rectangle(6.8,5.4) (6.8,5.2)rectangle(7,5.4) (7,5.2)rectangle(7.2,5.4) (6.2,5.4)rectangle(6.4,5.6) (6.4,5.4)rectangle(6.6,5.6) (6.6,5.4)rectangle(6.8,5.6) 
(6.2,5.6)rectangle(6.4,5.8) (6.2,5.8)rectangle(6.4,6) (6.3,4)rectangle(6.5,4.2) (6.5,4)rectangle(6.7,4.2) (6.7,4)rectangle(6.9,4.2) (6.9,4)rectangle(7.1,4.2) (6.3,4.2)rectangle(6.5,4.4) (6.5,4.2)rectangle(6.7,4.4) (6.3,4.4)rectangle(6.5,4.6) (6.3,4.6)rectangle(6.5,4.8) (6.4,2.8)rectangle(6.6,3) (6.6,2.8)rectangle(6.8,3) (6.8,2.8)rectangle(7,3) (6.4,3)rectangle(6.6,3.2) (6.4,3.2)rectangle(6.6,3.4) (6.4,3.4)rectangle(6.6,3.6) (6.5,1.6)rectangle(6.7,1.8) (6.7,1.6)rectangle(6.9,1.8) (6.5,1.8)rectangle(6.7,2) (6.5,2)rectangle(6.7,2.2) (6.5,2.2)rectangle(6.7,2.4) (6.6,.1)rectangle(6.8,.3) (6.6,.3)rectangle(6.8,.5) (6.6,.5)rectangle(6.8,.7) (6.6,.7)rectangle(6.8,.9) (7.7,1.6)rectangle(7.9,1.8) (7.9,1.6)rectangle(8.1,1.8) (7.7,1.8)rectangle(7.9,2) (7.9,1.8)rectangle(8.1,2) (7.7,2)rectangle(7.9,2.2) (7.7,2.2)rectangle(7.9,2.4); \draw [dotted] (.35,0)--(.35,1.5) (1.05,-.3)--(1.05,2.5) (1.95,-.6)--(1.95,3.5) (3.05,-.8)--(3.05,4.8) (4.35,0)--(4.35,5.9) (5.85,0)--(5.85,7.3); \draw (-.2,-.1)node{\tiny{degree 0}} (.2,-.4)node{\tiny{degree 1}} (1,-.6)node{\tiny{degree 2}} (1.8,-.8)node{\tiny{degree 3}}; \draw (3.5,-.8)node{...}; \draw (5,-1) node{degree 6}; \draw [->] (.75,-.38)--(1.05,-.38); \draw [->] (1.65,-.58)--(1.95,-.58); \draw [->] (2.45,-.78)--(3.05,-.78); \draw [->] (6,-1)--(8,-1); \draw [<-] (-.3,-1)--(4,-1); \draw [->] (.25,.5)--(1.2,.5); \draw [->] (1.8,.5)--(3.4,.5); \draw [->] (4.1,.5)--(6.4,.5); \draw [->] (1.1,1.3)--(3.25,1.8); \draw [white, line width=3pt] (1.5,.9)--(1.5,1.9); \draw [->] (1.5,.9)--(1.5,1.9); \draw [->] (2,2.2)--(3.1,2.7); \draw [->] (4.1,1.9)--(6.1,1.9); \draw [->] (4.15,3.05)--(5.9,3.05); \draw [->] (4.3,4.4)--(5.9,5); \draw [->] (3.7,.9)--(3.7,1.4); \draw [->] (3.7,2.2)--(3.7,2.6); \draw [->] (3.7,3.5)--(3.7,3.9); \draw [->] (6.7,1)--(6.7,1.5); \draw [->] (6.7,2.5)--(6.7,2.7); \draw [->] (6.7,3.3)--(6.7,3.9); \draw [->] (6.7,4.5)--(6.7,5.1); \draw [->] (6.7,5.7)--(6.7,6.35); \draw [->] (7,2)--(7.5,2); \draw [->] (7.75,2.5)--(7,3.9); \draw [->] (.1,.7)--(.5,1.1); \draw [->] (.9,1.5)--(1.3,1.9); \draw [->] (1.8,2.4)--(2.2,2.8); \draw [->] (2.8,3.4)--(3.2,3.8); \draw [->] (3.9,4.5)--(4.4,5); \draw [->] (5.1,5.7)--(5.7,6.3); \draw [white, line width=3pt] (.6,.1) ..controls (3,.1) and (3.2,2)..(3.2,3); \draw [white, line width=3pt] (3.2,3) ..controls (3.2,3.9) and (2.2,3.9)..(2,3.9); \draw [white, line width=3pt] (2,3.9) ..controls (.8,3.9) and (-.6,2)..(-.6,.7); \draw [white, line width=3pt] (-.6,.7) ..controls (-.6,.2) and (-.2,.1)..(.6,.1); \draw [white, line width=3pt] (7,1.3) ..controls (8.5,1.3) and (8.5,1.9)..(8.5,2.3); \draw [white, line width=3pt] (8.5,2.3) ..controls (8.5,4) and (7.8,6.15)..(6.7,6.15); \draw [white, line width=3pt] (6.7,6.15) ..controls (5.8,6.15) and (6,5.9)..(6,3.5); \draw [white, line width=3pt] (6,3.5) ..controls (6,1.8) and (6.2,1.3)..(7,1.3); \draw [gr, very thin] (.6,.1) ..controls (3,.1) and (3.2,2)..(3.2,3); \draw [gr, very thin] (3.2,3) ..controls (3.2,3.9) and (2.2,3.9)..(2,3.9); \draw [gr, very thin] (2,3.9) ..controls (.8,3.9) and (-.6,2)..(-.6,.7); \draw [gr, very thin] (-.6,.7) ..controls (-.6,.2) and (-.2,.1)..(.6,.1); \draw [gr, very thin] (7,1.3) ..controls (8.5,1.3) and (8.5,1.9)..(8.5,2.3); \draw [gr, very thin] (8.5,2.3) ..controls (8.5,4) and (7.8,6.15)..(6.7,6.15); \draw [gr, very thin] (6.7,6.15) ..controls (5.8,6.15) and (6,5.9)..(6,3.5); \draw [gr, very thin] (6,3.5) ..controls (6,1.8) and (6.2,1.3)..(7,1.3); \end{tikzpicture} } \end{picture} The encircled parts show the recursive structure of the extremal points by using 
the $*$-operator. \end{exa} There is a natural partial ordering in $\mathbb{Q}^{d+1}$: $$h \leq g \quad :\Leftrightarrow \quad h_i \leq g_i \;\text{for all}\; i=0,\ldots,d.$$ A \emph{chain} in this partial ordering is a totally ordered subset of $\mathbb{Q}^{d+1}$.\\ In fact, this partial ordering harmonizes perfectly with the visualization of the h-vectors of $R$-algebras by staircases of lex-segment ideals: $h \leq g$ corresponds to the box-diagrams being embedded in each other, as illustrated by the arrows in the previous example. Moreover, from any h-diagram we get a stack of consecutively embedded staircases, which yields a chain of h-vectors of $R$-algebras. The decomposition algorithm that we use to prove theorem (\ref{extr}) leads to an h-diagram with staircases of extremal points in each level, and as already mentioned this implies a totally ordered chain in the usual partial ordering in $\mathbb{H}(d)$. \begin{cor}\label{cor} Every element $h \in \mathbb{H}(d)$ can be written as $h =\sum_{i \in I} q_i\cdot v^i$ with \linebreak$ v^i \in \text{Ex}(d)$, where the $(v^i)_{i\in I}$ form a totally ordered chain. \end{cor} This decomposition does not have to be unique even with a total order: \begin{exa} Let $n=2, \\ h=(2,1,2,0,1)=(1,1,1,0,1)+(1,0,1,0,0)=t^4*s^0 \, +\,t^2 = $\\ \hspace*{28.7mm} $=(1,1,2,0,1)+(1,0,0,0,0)=t^4*s^1\!+\!s^0.$\\ Both decompositions are totally ordered. \end{exa} \section{Proof of Theorem 2.3.} To prove the theorem we use an algorithm that decomposes any h-vector of an artinian graded module over $R$ finitely generated in degree 0. To convey the right intuition we first illustrate the procedure with an example. \begin{exa} Let $\deg(y)=3$ \,and\, $h=(3,3,2,4,2,1,2,1)=\\ =(1,1,1,2,1,1,1,0)\;+\;\;(1,1,0,1,1,0,1,1)\;\;+\;\;(1,1,1,1,0,0,0,0)$\\ \col{Giving every degree its own color, the corresponding staircases look like this:} \bw{The corresponding staircases look like this:} \col{ \unitlength1cm \begin{picture}(11,2.25) \put(1.2,.4){ \begin{tikzpicture} \filldraw [fill=plum,draw=black](0,0)rectangle(.5,.5); \filldraw [fill=blueberry,draw=black](.5,0)rectangle(1,.5); \filldraw [fill=eggplant,draw=black](1,0)rectangle(1.5,.5); \filldraw [fill=cranberry,draw=black](1.5,0)rectangle(2,.5); \filldraw [fill=cranberry,draw=black](0,.5)rectangle(.5,1); \filldraw [fill=beans,draw=black] (.5,.5)rectangle(1,1); \filldraw [fill=apricot,draw=black](1,.5)rectangle(1.5,1); \filldraw [fill=corn,draw=black] (0,1)rectangle(.5,1.5); \end{tikzpicture} } \put(5.5,.4){ \begin{tikzpicture} \filldraw [fill=plum,draw=black](0,0)rectangle(.5,.5); \filldraw [fill=blueberry,draw=black](.5,0)rectangle(1,.5); \filldraw [fill=cranberry,draw=black](0,.5)rectangle(.5,1); \filldraw [fill=beans,draw=black] (.5,.5)rectangle(1,1); \filldraw [fill=corn,draw=black](0,1)rectangle(.5,1.5); \filldraw [fill=llemon,draw=black](.5,1)rectangle(1,1.5); \end{tikzpicture} } \put(9.5,.4){ \begin{tikzpicture} \filldraw [fill=plum,draw=black](0,0)rectangle(.5,.5); \filldraw [fill=blueberry,draw=black](.5,0)rectangle(1,.5); \filldraw [fill=eggplant,draw=black] (1,0)rectangle(1.5,.5); \filldraw [fill=cranberry,draw=black] (0,.5)rectangle(.5,1); \end{tikzpicture} } \end{picture} } \bw{ \unitlength1cm \begin{picture}(11,2.25) \put(1.2,.4){ \begin{tikzpicture} \draw (0,0)rectangle(.5,.5) (.5,0)rectangle(1,.5) (1,0)rectangle(1.5,.5) (1.5,0)rectangle(2,.5); \draw (0,.5)rectangle(.5,1) (.5,.5)rectangle(1,1) (1,.5)rectangle(1.5,1); \draw (0,1)rectangle(.5,1.5); \draw (.25,.25) node{0} (.75,.25)node{1} (1.25,.25)node{2} (1.75,.25)node{3}; \draw (.25,.75) node{3}
(.75,.75)node{4} (1.25,.75)node{5}; \draw (.25,1.25) node{6}; \end{tikzpicture} } \put(5.5,.4){ \begin{tikzpicture} \draw (0,0)rectangle(.5,.5) (.5,0)rectangle(1,.5); \draw (0,.5)rectangle(.5,1) (.5,.5)rectangle(1,1); \draw (0,1)rectangle(.5,1.5) (.5,1)rectangle(1,1.5); \draw (.25,.25) node{0} (.75,.25)node{1} ; \draw (.25,.75) node{3} (.75,.75)node{4} ; \draw (.25,1.25) node{6} (.75,1.25)node{7}; \end{tikzpicture} } \put(9.5,.4){ \begin{tikzpicture} \draw (0,0)rectangle(.5,.5) (.5,0)rectangle(1,.5) (1,0)rectangle(1.5,.5); \draw (0,.5)rectangle(.5,1); \draw (.25,.25) node{0} (.75,.25)node{1} (1.25,.25)node{2}; \draw (.25,.75) node{3}; \end{tikzpicture} } \end{picture} } In this case we get as an $h$-diagram: \unitlength1cm \col{ \begin{picture}(8,4) \put(2,0){ \begin{tikzpicture} \draw(0,0)--(3,0); \draw(0,0)--(0,2); \draw(0,0)--(-1.5,-1.5); \filldraw [fill=plum,draw=black](0,1.5)--(-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5,1.5)--(0,1.5); \filldraw [fill=cranberry,draw=black] (-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5-1*.7,-1*.7+1.5)--(-1*.7,-1*.7+1.5)--(-.5*.7,1.5-.5*.7); \filldraw [fill=cranberry,draw=black] (.5-.5*.7,-.5*.7+1.5)--(.5-1*.7,-1*.7+1.5)--(.5-1*.7,-1*.7+1)--(.5-.5*.7,-.5*.7+1)--(.5-.5*.7,-.5*.7+1.5); \filldraw [fill=cranberry,draw=black] (-1*.7,-1*.7+1.5)rectangle(.5-1*.7,-1*.7+1); ; \filldraw [fill=corn,draw=black] (-1*.7,-1*.7+1)--(.5-1*.7,-1*.7+1)--(.5-1.5*.7,-1.5*.7+1)--(-1.5*.7,-1.5*.7+1)--(-1*.7,-1*.7+1); \filldraw [fill=corn,draw=black] (.5-1*.7,-1*.7+1)--(.5-1.5*.7,-1.5*.7+1)--(.5-1.5*.7,-1.5*.7+.5)--(.5-1*.7,-1*.7+.5)-- (.5-1*.7,-1*.7+1); \filldraw [fill=corn,draw=black] (-1.5*.7,1-1.5*.7)rectangle (.5-1.5*.7,.5-1.5*.7) (.5-1.5*.7,.5-1.5*.7)rectangle ( -1.5*.7,-1.5*.7) ; \filldraw [fill=llemon,draw=black] (.5-1.5*.7,.5-1.5*.7)-- (.5-1*.7,.5-1*.7)-- (1-1*.7,.5-1*.7)-- (1-1.5*.7,.5-1.5*.7)-- (.5-1.5*.7,.5-1.5*.7); \filldraw [fill=llemon,draw=black] (1-1*.7,.5-1*.7)-- (1-1.5*.7,.5-1.5*.7)--(1-1.5*.7,-1.5*.7)--(1-1*.7,-1*.7)--(1-1*.7,.5-1*.7); \filldraw [fill=llemon,draw=black] (.5-1.5*.7,-1.5*.7)rectangle(1-1.5*.7,.5-1.5*.7); \filldraw [fill=beans,draw=black](.5-1*.7,1-1*.7)-- (.5-.5*.7,1-.5*.7)-- (1-.5*.7,1-.5*.7)-- (1-1*.7,1-1*.7)-- (.5-1*.7,1-1*.7); \filldraw [fill=beans,draw=black] (1-.5*.7,1-.5*.7)--(1-1*.7,1-1*.7)--(1-1*.7,.5-1*.7)--(1-.5*.7,.5-.5*.7)--(1-.5*.7,1-.5*.7); \filldraw [fill=beans,draw=black] (1-1*.7,1-1*.7)rectangle(.5-1*.7,.5-1*.7); \filldraw [fill=apricot,draw=black](1-1*.7,.5-1*.7)-- (1-.5*.7,.5-.5*.7)-- (1.5-.5*.7,.5-.5*.7)-- (1.5-1*.7,.5-1*.7)-- (1-1*.7,.5-1*.7); \filldraw [fill=apricot,draw=black] (1.5-.5*.7,.5-.5*.7)--(1.5-1*.7,.5-1*.7)--(1.5-1*.7,-1*.7)--(1.5-.5*.7,-.5*.7)--(1.5-.5*.7,.5-.5*.7); \filldraw [fill=apricot,draw=black] (1.5-1*.7,.5-1*.7)rectangle(1-1*.7,-1*.7); \filldraw [fill=blueberry,draw=black] (.5,1.5)--(.5-.5*.7,1.5-.5*.7)--(1-.5*.7,1.5-.5*.7)--(1,1.5)--(.5,1.5); \filldraw [fill=blueberry,draw=black] (1,1.5)--(1-.5*.7,1.5-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1)--(1,1.5); \filldraw [fill=blueberry,draw=black] (.5-.5*.7,1.5-.5*.7)rectangle(1-.5*.7,1-.5*.7); \filldraw [fill=eggplant,draw=black] (1,1)--(1.5,1)--(1.5-.5*.7,1-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1); \filldraw [fill=eggplant,draw=black] (1.5,1)--(1.5,.5)--(1.5-.5*.7,.5-.5*.7)--(1.5-.5*.7,1-.5*.7)--(1.5,1); \filldraw [fill=eggplant,draw=black] (1-.5*.7,.5-.5*.7)rectangle(1.5-.5*.7,1-.5*.7); \filldraw [fill=cranberry,draw=black] (1.5,.5)--(2,.5)--(2-.5*.7,.5-.5*.7)--(1.5-.5*.7,.5-.5*.7)--(1.5,.5); \filldraw [fill=cranberry,draw=black] 
(2,.5)--(2,0)--(2-.5*.7,-.5*.7)--(2-.5*.7,.5-.5*.7)--(2,.5); \filldraw [fill=cranberry,draw=black] (1.5-.5*.7,.5-.5*.7)rectangle(2-.5*.7,-.5*.7); \end{tikzpicture} } \end{picture} } \bw{ \begin{picture}(8,3.5) \put(2,0){ \begin{tikzpicture} \draw(0,0)--(3,0); \draw(0,0)--(0,2); \draw(0,0)--(-1.5,-1.5); \filldraw [fill=lgr,draw=black](0,1.5)--(-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5,1.5)--(0,1.5); \draw (.45-.5*.7,1.68-.5*.7) node{\emph{0}}; \filldraw [fill=lgr,draw=black] (-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5-1*.7,-1*.7+1.5)--(-1*.7,-1*.7+1.5)--(-.5*.7,1.5-.5*.7); \filldraw [fill=dgr,draw=black] (.5-.5*.7,-.5*.7+1.5)--(.5-1*.7,-1*.7+1.5)--(.5-1*.7,-1*.7+1)--(.5-.5*.7,-.5*.7+1)--(.5-.5*.7,-.5*.7+1.5); \filldraw [fill=gr,draw=black] (-1*.7,-1*.7+1.5)rectangle(.5-1*.7,-1*.7+1); \draw (.45-1*.7,1.68-1*.7) node{\emph{3}}; \filldraw [fill=lgr,draw=black] (-1*.7,-1*.7+1)--(.5-1*.7,-1*.7+1)--(.5-1.5*.7,-1.5*.7+1)--(-1.5*.7,-1.5*.7+1)--(-1*.7,-1*.7+1); \filldraw [fill=dgr,draw=black] (.5-1*.7,-1*.7+1)--(.5-1.5*.7,-1.5*.7+1)--(.5-1.5*.7,-1.5*.7+.5)--(.5-1*.7,-1*.7+.5)-- (.5-1*.7,-1*.7+1); \filldraw [fill=gr,draw=black] (-1.5*.7,1-1.5*.7)rectangle (.5-1.5*.7,.5-1.5*.7) (.5-1.5*.7,.5-1.5*.7)rectangle ( -1.5*.7,-1.5*.7) ; \draw (.45-1.5*.7,1.18-1.5*.7) node{\emph{6}}; \filldraw [fill=lgr,draw=black] (.5-1.5*.7,.5-1.5*.7)-- (.5-1*.7,.5-1*.7)-- (1-1*.7,.5-1*.7)-- (1-1.5*.7,.5-1.5*.7)-- (.5-1.5*.7,.5-1.5*.7); \filldraw [fill=dgr,draw=black] (1-1*.7,.5-1*.7)-- (1-1.5*.7,.5-1.5*.7)--(1-1.5*.7,-1.5*.7)--(1-1*.7,-1*.7)--(1-1*.7,.5-1*.7); \filldraw [fill=gr,draw=black] (.5-1.5*.7,-1.5*.7)rectangle(1-1.5*.7,.5-1.5*.7); \draw (.95-1.5*.7,.67-1.5*.7) node{\emph{7}}; \filldraw [fill=lgr,draw=black](.5-1*.7,1-1*.7)-- (.5-.5*.7,1-.5*.7)-- (1-.5*.7,1-.5*.7)-- (1-1*.7,1-1*.7)-- (.5-1*.7,1-1*.7); \filldraw [fill=dgr,draw=black] (1-.5*.7,1-.5*.7)--(1-1*.7,1-1*.7)--(1-1*.7,.5-1*.7)--(1-.5*.7,.5-.5*.7)--(1-.5*.7,1-.5*.7); \filldraw [fill=gr,draw=black] (1-1*.7,1-1*.7)rectangle(.5-1*.7,.5-1*.7); \draw (.95-1*.7,1.18-1*.7) node{\emph{4}}; \filldraw [fill=lgr,draw=black](1-1*.7,.5-1*.7)-- (1-.5*.7,.5-.5*.7)-- (1.5-.5*.7,.5-.5*.7)-- (1.5-1*.7,.5-1*.7)-- (1-1*.7,.5-1*.7); \filldraw [fill=dgr,draw=black] (1.5-.5*.7,.5-.5*.7)--(1.5-1*.7,.5-1*.7)--(1.5-1*.7,-1*.7)--(1.5-.5*.7,-.5*.7)--(1.5-.5*.7,.5-.5*.7); \filldraw [fill=gr,draw=black] (1.5-1*.7,.5-1*.7)rectangle(1-1*.7,-1*.7); \draw (1.45-1*.7,.68-1*.7) node{\emph{5}}; \filldraw [fill=lgr,draw=black] (.5,1.5)--(.5-.5*.7,1.5-.5*.7)--(1-.5*.7,1.5-.5*.7)--(1,1.5)--(.5,1.5); \filldraw [fill=dgr,draw=black] (1,1.5)--(1-.5*.7,1.5-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1)--(1,1.5); \filldraw [fill=gr,draw=black] (.5-.5*.7,1.5-.5*.7)rectangle(1-.5*.7,1-.5*.7); \draw (.95-.5*.7,1.68-.5*.7) node{\emph{1}}; \filldraw [fill=lgr,draw=black] (1,1)--(1.5,1)--(1.5-.5*.7,1-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1); \filldraw [fill=dgr,draw=black] (1.5,1)--(1.5,.5)--(1.5-.5*.7,.5-.5*.7)--(1.5-.5*.7,1-.5*.7)--(1.5,1); \filldraw [fill=gr,draw=black] (1-.5*.7,.5-.5*.7)rectangle(1.5-.5*.7,1-.5*.7); \draw (1.45-.5*.7,1.18-.5*.7) node{\emph{2}}; \filldraw [fill=lgr,draw=black] (1.5,.5)--(2,.5)--(2-.5*.7,.5-.5*.7)--(1.5-.5*.7,.5-.5*.7)--(1.5,.5); \filldraw [fill=dgr,draw=black] (2,.5)--(2,0)--(2-.5*.7,-.5*.7)--(2-.5*.7,.5-.5*.7)--(2,.5); \filldraw [fill=gr,draw=black] (1.5-.5*.7,.5-.5*.7)rectangle(2-.5*.7,-.5*.7); \draw (1.95-.5*.7,.68-.5*.7) node{\emph{3}}; \end{tikzpicture} } \end{picture} } We can rearrange the boxes in a way so that all the \col{colors respectively }degrees only sit over the allowed 
spots dedicated by staircases. In every level the area should be filled out as large as possible and take the maximal height in this level, however the rows and columns should still decrease. In this process the boxes can be cut horizontally in every rational proportion, so the levels can have any positive rational height. In our example we get the following stack: \col{ \begin{picture}(10,3.8) \put(2,0){ \begin{tikzpicture} \draw(0,0)--(6,0); \draw(0,0)--(0,2.2); \draw(0,0)--(-2*.7,-2*.7); \filldraw [fill=plum,draw=black](0,1.5)--(-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5,1.5)--(0,1.5); \filldraw [fill=plum,draw=black](-.5*.7,1.5-.5*.7)rectangle(.5-.5*.7,1-.5*.7); \filldraw [fill=blueberry,draw=black] (.5,1.5)--(.5-.5*.7,1.5-.5*.7)--(1-.5*.7,1.5-.5*.7)--(1,1.5)--(.5,1.5); \filldraw [fill=blueberry,draw=black] (1,1.5)--(1-.5*.7,1.5-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1)--(1,1.5); \filldraw [fill=blueberry,draw=black] (.5-.5*.7,1.5-.5*.7)rectangle(1-.5*.7,1-.5*.7); \filldraw [fill=blueberry,draw=black] (.5-.5*.7,1-.5*.7)rectangle(1-.5*.7,.5-.5*.7); \filldraw [fill=eggplant,draw=black] (1,1)--(1.5,1)--(1.5-.5*.7,1-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1); \filldraw [fill=eggplant,draw=black] (1.5,1)--(1.5,.5)--(1.5-.5*.7,.5-.5*.7)--(1.5-.5*.7,1-.5*.7)--(1.5,1); \filldraw [fill=eggplant,draw=black] (1-.5*.7,.5-.5*.7)rectangle(1.5-.5*.7,1-.5*.7); \filldraw [fill=eggplant,draw=black] (1-.5*.7,-.5*.7)rectangle(1.5-.5*.7,.5-.5*.7); \filldraw [fill=cranberry,draw=black] (1.5,1)--(2,1)--(2-.5*.7,1-.5*.7)--(1.5-.5*.7,1-.5*.7)--(1.5,1); \filldraw [fill=cranberry,draw=black] (2,1)--(2,.5)--(2-.5*.7,.5-.5*.7)--(2-.5*.7,1-.5*.7)--(2,1); \filldraw [fill=cranberry,draw=black] (2,.5)--(2,0)--(2-.5*.7,-.5*.7)--(2-.5*.7,.5-.5*.7)--(2,.5); \filldraw [fill=cranberry,draw=black] (1.5-.5*.7,1-.5*.7)rectangle(2-.5*.7,.5-.5*.7); \filldraw [fill=cranberry,draw=black] (1.5-.5*.7,.5-.5*.7)rectangle(2-.5*.7,-.5*.7); \filldraw [fill=beans,draw=black] (2,.5)--(2.5,.5)--(2.5-.5*.7,.5-.5*.7)--(2-.5*.7,.5-.5*.7)--(2,.5); \filldraw [fill=beans,draw=black] (2.5,.5)--(2.5,0)--(2.5-.5*.7,-.5*.7)--(2.5-.5*.7,.5-.5*.7)--(2.5,.5); \filldraw [fill=beans,draw=black] (2-.5*.7,.5-.5*.7)rectangle(2.5-.5*.7,-.5*.7); \filldraw [fill=apricot,draw=black] (2.5,.25)--(3,.25)--(3-.5*.7,.25-.5*.7)--(2.5-.5*.7,.25-.5*.7)--(2.5,.25); \filldraw [fill=apricot,draw=black] (3,.25)--(3,0)--(3-.5*.7,-.5*.7)--(3-.5*.7,.25-.5*.7)--(3,.25); \filldraw [fill=apricot,draw=black] (2.5-.5*.7,.25-.5*.7)rectangle(3-.5*.7,-.5*.7); \filldraw [fill=corn,draw=black] (3,.25)--(3.5,.25)--(3.5-.5*.7,.25-.5*.7)--(3-.5*.7,.25-.5*.7)--(3,.25); \filldraw [fill=corn,draw=black] (3.5,.25)--(3.5,0)--(3.5-.5*.7,-.5*.7)--(3.5-.5*.7,.25-.5*.7)--(3.5,.25); \filldraw [fill=corn,draw=black] (3-.5*.7,.25-.5*.7)rectangle(3.5-.5*.7,-.5*.7); \filldraw [fill=llemon,draw=black] (3.5,.33*.5)--(4,.33*.5)--(4-.5*.7,.33*.5-.5*.7)--(3.5-.5*.7,.33*.5-.5*.7)--(3.5,.33*.5); \filldraw [fill=llemon,draw=black] (4,.33*.5)--(4,0)--(4-.5*.7,-.5*.7)--(4-.5*.7,.33*.5-.5*.7)--(4,.33*.5); \filldraw [fill=llemon,draw=black] (3.5-.5*.7,.33*.5-.5*.7)rectangle(4-.5*.7,-.5*.7); \draw [densely dotted](3.5-.5*.7,.33*.5-.5*.7)--(2-.5*.7,.33*.5-.5*.7) (2.5-.5*.7,.25-.5*.7)--(2-.5*.7,.25-.5*.7); \filldraw [fill=cranberry,draw=black] (-.5*.7,1-.5*.7)--(.5-.5*.7,1-.5*.7)--(.5-1*.7,1-1*.7)--(-1*.7,1-1*.7)--(-.5*.7,1-.5*.7); \filldraw[fill=cranberry,draw=black](.5-.5*.7,1-.5*.7)--(.5-1*.7,1-1*.7)--(.5-1*.7,.5-1*.7)--(.5-.5*.7,.5-.5*.7)--(.5-.5*.7,1-.5*.7); \filldraw [fill=cranberry,draw=black] 
(-1*.7,1-1*.7)rectangle(.5-1*.7,.5-1*.7); \filldraw [fill=cranberry,draw=black] (.5-.5*.7,.5-.5*.7)--(.5-1*.7,.5-1*.7)--(.5-1*.7,-1*.7)--(.5-.5*.7,-.5*.7)--(.5-.5*.7,.5-.5*.7); \filldraw [fill=cranberry,draw=black](-1*.7,.5-1*.7)rectangle(.5-1*.7,-1*.7); \filldraw [fill=beans,draw=black](.5-1*.7,.5-1*.7)-- (.5-.5*.7,.5-.5*.7)-- (1-.5*.7,.5-.5*.7)-- (1-1*.7,.5-1*.7)-- (.5-1*.7,.5-1*.7); \filldraw [fill=beans,draw=black] (1-.5*.7,.5-.5*.7)--(1-1*.7,.5-1*.7)--(1-1*.7,-1*.7)--(1-.5*.7,-.5*.7)--(1-.5*.7,.5-.5*.7); \filldraw [fill=beans,draw=black] (1-1*.7,.5-1*.7)rectangle(.5-1*.7,-1*.7); \filldraw [fill=apricot,draw=black](1-1*.7,.25-1*.7)-- (1-.5*.7,.25-.5*.7)-- (1.5-.5*.7,.25-.5*.7)-- (1.5-1*.7,.25-1*.7)-- (1-1*.7,.25-1*.7); \filldraw [fill=apricot,draw=black] (1.5-.5*.7,.25-.5*.7)--(1.5-1*.7,.25-1*.7)--(1.5-1*.7,-1*.7)--(1.5-.5*.7,-.5*.7)--(1.5-.5*.7,.25-.5*.7); \filldraw [fill=apricot,draw=black] (1.5-1*.7,.25-1*.7)rectangle(1-1*.7,-1*.7); \filldraw [fill=corn,draw=black](1.5-1*.7,.25-1*.7)-- (1.5-.5*.7,.25-.5*.7)-- (2-.5*.7,.25-.5*.7)-- (2-1*.7,.25-1*.7)-- (1.5-1*.7,.25-1*.7); \filldraw [fill=corn,draw=black] (2-.5*.7,.25-.5*.7)--(2-1*.7,.25-1*.7)--(2-1*.7,-1*.7)--(2-.5*.7,-.5*.7)--(2-.5*.7,.25-.5*.7); \filldraw [fill=corn,draw=black] (2-1*.7,.25-1*.7)rectangle(1.5-1*.7,-1*.7); \filldraw [fill=llemon,draw=black](2-1*.7,.66*.25-1*.7)-- (2-.5*.7,.66*.25-.5*.7)-- (2.5-.5*.7,.66*.25-.5*.7)-- (2.5-1*.7,.66*.25-1*.7)-- (2-1*.7,.66*.25-1*.7); \filldraw [fill=llemon,draw=black] (2.5-.5*.7,.66*.25-.5*.7)--(2.5-1*.7,.66*.25-1*.7)--(2.5-1*.7,-1*.7)--(2.5-.5*.7,-.5*.7)--(2.5-.5*.7,.66*.25-.5*.7); \filldraw [fill=llemon,draw=black] (2.5-1*.7,.66*.25-1*.7)rectangle(2-1*.7,-1*.7); \draw [densely dotted] (2-1*.7,.33*.5-1*.7)--(1-1*.7,.33*.5-1*.7) (1-1*.7,.25-1*.7)--(.5-1*.7,.25-1*.7) ; \filldraw [fill=corn,draw=black] (-1*.7,.5-1*.7)--(.5-1*.7,.5-1*.7)--(.5-1.5*.7,.5-1.5*.7)--(-1.5*.7,.5-1.5*.7)--(-1*.7,.5-1*.7); \filldraw [fill=corn,draw=black] (.5-1*.7,.5-1*.7)--(.5-1.5*.7,.5-1.5*.7)--(.5-1.5*.7,-1.5*.7)--(.5-1*.7,-1*.7)-- (.5-1*.7,.5-1*.7); \filldraw [fill=corn,draw=black] (.5-1.5*.7,.5-1.5*.7)rectangle ( -1.5*.7,-1.5*.7) ; \filldraw [fill=llemon,draw=black] (.5-1.5*.7,.33*.5-1.5*.7)-- (.5-1*.7,.33*.5-1*.7)-- (1-1*.7,.33*.5-1*.7)-- (1-1.5*.7,.33*.5-1.5*.7)-- (.5-1.5*.7,.33*.5-1.5*.7); \filldraw [fill=llemon,draw=black] (1-1*.7,.33*.5-1*.7)-- (1-1.5*.7,.33*.5-1.5*.7)--(1-1.5*.7,-1.5*.7)--(1-1*.7,-1*.7)--(1-1*.7,.33*.5-1*.7); \filldraw [fill=llemon,draw=black] (.5-1.5*.7,-1.5*.7)rectangle(1-1.5*.7,.33*.5-1.5*.7); \draw [densely dotted] (-1.5*.7,.33*.5-1.5*.7)--(.5-1.5*.7,.33*.5-1.5*.7) (-1.5*.7,.25-1.5*.7)--(.5-1.5*.7,.25-1.5*.7) (.5-1*.7,.25-1*.7)--(.5-1.5*.7,.25-1.5*.7); \draw [dotted] (4,.33*.5)--(6,.33*.5) (3.5,.25)--(6,.25) (2.5,.5)--(6,.5) (1.5,1)--(6,1) (1,1.5)--(6,1.5); \draw [dotted] (-1.5*.7,.33*.5-1.5*.7)--(-2*.7,.33*.5-2*.7) (-1.5*.7,.25-1.5*.7)--(-2*.7,.25-2*.7) (-1.5*.7,.5-1.5*.7)--(-2*.7,.5-2*.7) (-1*.7,1-1*.7)--(-2*.7,1-2*.7) (-.5*.7,1.5-.5*.7)--(-2*.7,1.5-2*.7); \draw (6.1,0.1) node{\tiny{$\frac{1}{3}$}} (6.25,0.2) node{\tiny{$\frac{1}{6}$}} (6.1,0.4) node{\tiny{$\frac{1}{2}$}} (6.1,.75) node{\small{1}} (6.1,1.25) node{\small{1}}; \end{tikzpicture} } \end{picture} } \bw{ \begin{picture}(10,3.8) \put(2,0){ \begin{tikzpicture} \draw(0,0)--(6,0); \draw(0,0)--(0,2.2); \draw(0,0)--(-2*.7,-2*.7); \filldraw [fill=lgr,draw=black](0,1.5)--(-.5*.7,1.5-.5*.7)--(.5-.5*.7,-.5*.7+1.5)--(.5,1.5)--(0,1.5); \filldraw [fill=gr,draw=black] (-.5*.7,1.5-.5*.7)rectangle(.5-.5*.7,1-.5*.7); \draw 
(.45-.5*.7,1.68-.5*.7) node{\emph{0}}; \filldraw [fill=lgr,draw=black] (.5,1.5)--(.5-.5*.7,1.5-.5*.7)--(1-.5*.7,1.5-.5*.7)--(1,1.5)--(.5,1.5); \filldraw [fill=dgr,draw=black] (1,1.5)--(1-.5*.7,1.5-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1)--(1,1.5); \filldraw [fill=gr,draw=black] (.5-.5*.7,1.5-.5*.7)rectangle(1-.5*.7,1-.5*.7); \filldraw [fill=gr,draw=black] (.5-.5*.7,1-.5*.7)rectangle(1-.5*.7,.5-.5*.7); \draw (.95-.5*.7,1.68-.5*.7) node{\emph{1}}; \filldraw [fill=lgr,draw=black] (1,1)--(1.5,1)--(1.5-.5*.7,1-.5*.7)--(1-.5*.7,1-.5*.7)--(1,1); \filldraw [fill=dgr,draw=black] (1.5,1)--(1.5,.5)--(1.5-.5*.7,.5-.5*.7)--(1.5-.5*.7,1-.5*.7)--(1.5,1); \filldraw [fill=gr,draw=black] (1-.5*.7,.5-.5*.7)rectangle(1.5-.5*.7,1-.5*.7); \filldraw [fill=gr,draw=black] (1-.5*.7,-.5*.7)rectangle(1.5-.5*.7,.5-.5*.7); \draw (1.45-.5*.7,1.18-.5*.7) node{\emph{2}}; \filldraw [fill=lgr,draw=black] (1.5,1)--(2,1)--(2-.5*.7,1-.5*.7)--(1.5-.5*.7,1-.5*.7)--(1.5,1); \filldraw [fill=dgr,draw=black] (2,1)--(2,.5)--(2-.5*.7,.5-.5*.7)--(2-.5*.7,1-.5*.7)--(2,1); \filldraw [fill=dgr,draw=black] (2,.5)--(2,0)--(2-.5*.7,-.5*.7)--(2-.5*.7,.5-.5*.7)--(2,.5); \filldraw [fill=gr,draw=black] (1.5-.5*.7,1-.5*.7)rectangle(2-.5*.7,.5-.5*.7); \filldraw [fill=gr,draw=black] (1.5-.5*.7,.5-.5*.7)rectangle(2-.5*.7,-.5*.7); \draw (1.95-.5*.7,1.18-.5*.7) node{\emph{3}}; \filldraw [fill=lgr,draw=black] (2,.5)--(2.5,.5)--(2.5-.5*.7,.5-.5*.7)--(2-.5*.7,.5-.5*.7)--(2,.5); \filldraw [fill=dgr,draw=black] (2.5,.5)--(2.5,0)--(2.5-.5*.7,-.5*.7)--(2.5-.5*.7,.5-.5*.7)--(2.5,.5); \filldraw [fill=gr,draw=black] (2-.5*.7,.5-.5*.7)rectangle(2.5-.5*.7,-.5*.7); \draw (2.45-.5*.7,.68-.5*.7) node{\emph{4}}; \filldraw [fill=lgr,draw=black] (2.5,.25)--(3,.25)--(3-.5*.7,.25-.5*.7)--(2.5-.5*.7,.25-.5*.7)--(2.5,.25); \filldraw [fill=gr,draw=black] (2.5-.5*.7,.25-.5*.7)rectangle(3-.5*.7,-.5*.7); \draw (2.95-.5*.7,.43-.5*.7) node{\emph{5}}; \filldraw [fill=lgr,draw=black] (3,.25)--(3.5,.25)--(3.5-.5*.7,.25-.5*.7)--(3-.5*.7,.25-.5*.7)--(3,.25); \filldraw [fill=dgr,draw=black] (3.5,.25)--(3.5,0)--(3.5-.5*.7,-.5*.7)--(3.5-.5*.7,.25-.5*.7)--(3.5,.25); \filldraw [fill=gr,draw=black] (3-.5*.7,.25-.5*.7)rectangle(3.5-.5*.7,-.5*.7); \draw (3.45-.5*.7,.43-.5*.7) node{\emph{6}}; \filldraw [fill=lgr,draw=black] (3.5,.33*.5)--(4,.33*.5)--(4-.5*.7,.33*.5-.5*.7)--(3.5-.5*.7,.33*.5-.5*.7)--(3.5,.33*.5); \filldraw [fill=dgr,draw=black] (4,.33*.5)--(4,0)--(4-.5*.7,-.5*.7)--(4-.5*.7,.33*.5-.5*.7)--(4,.33*.5); \filldraw [fill=gr,draw=black] (3.5-.5*.7,.33*.5-.5*.7)rectangle(4-.5*.7,-.5*.7); \draw (3.95-.5*.7,.33-.5*.7) node{\emph{7}}; \draw [densely dotted](3.5-.5*.7,.33*.5-.5*.7)--(2-.5*.7,.33*.5-.5*.7) (2.5-.5*.7,.25-.5*.7)--(2-.5*.7,.25-.5*.7); \filldraw [fill=lgr,draw=black] (-.5*.7,1-.5*.7)--(.5-.5*.7,1-.5*.7)--(.5-1*.7,1-1*.7)--(-1*.7,1-1*.7)--(-.5*.7,1-.5*.7); \filldraw [fill=dgr,draw=black] (.5-.5*.7,1-.5*.7)--(.5-1*.7,1-1*.7)--(.5-1*.7,.5-1*.7)--(.5-.5*.7,.5-.5*.7)--(.5-.5*.7,1-.5*.7); \filldraw [fill=gr,draw=black] (-1*.7,1-1*.7)rectangle(.5-1*.7,.5-1*.7); \draw (.45-1*.7,1.18-1*.7) node{\emph{3}}; \filldraw [fill=lgr,draw=black](.5-1*.7,.5-1*.7)-- (.5-.5*.7,.5-.5*.7)-- (1-.5*.7,.5-.5*.7)-- (1-1*.7,.5-1*.7)-- (.5-1*.7,.5-1*.7); \filldraw [fill=dgr,draw=black] (1-.5*.7,.5-.5*.7)--(1-1*.7,.5-1*.7)--(1-1*.7,-1*.7)--(1-.5*.7,-.5*.7)--(1-.5*.7,.5-.5*.7); \filldraw [fill=gr,draw=black] (1-1*.7,.5-1*.7)rectangle(.5-1*.7,-1*.7); \draw (.95-1*.7,.68-1*.7) node{\emph{4}}; \filldraw [fill=lgr,draw=black](1-1*.7,.25-1*.7)-- (1-.5*.7,.25-.5*.7)-- (1.5-.5*.7,.25-.5*.7)-- (1.5-1*.7,.25-1*.7)-- 
(1-1*.7,.25-1*.7); \filldraw [fill=dgr,draw=black] (1.5-.5*.7,.25-.5*.7)--(1.5-1*.7,.25-1*.7)--(1.5-1*.7,-1*.7)--(1.5-.5*.7,-.5*.7)--(1.5-.5*.7,.25-.5*.7); \filldraw [fill=gr,draw=black] (1.5-1*.7,.25-1*.7)rectangle(1-1*.7,-1*.7); \draw (1.45-1*.7,.43-1*.7) node{\emph{5}}; \filldraw [fill=lgr,draw=black](1.5-1*.7,.25-1*.7)-- (1.5-.5*.7,.25-.5*.7)-- (2-.5*.7,.25-.5*.7)-- (2-1*.7,.25-1*.7)-- (1.5-1*.7,.25-1*.7); \filldraw [fill=dgr,draw=black] (2-.5*.7,.25-.5*.7)--(2-1*.7,.25-1*.7)--(2-1*.7,-1*.7)--(2-.5*.7,-.5*.7)--(2-.5*.7,.25-.5*.7); \filldraw [fill=gr,draw=black] (2-1*.7,.25-1*.7)rectangle(1.5-1*.7,-1*.7); \draw (1.95-1*.7,.43-1*.7) node{\emph{6}}; \filldraw [fill=lgr,draw=black](2-1*.7,.66*.25-1*.7)-- (2-.5*.7,.66*.25-.5*.7)-- (2.5-.5*.7,.66*.25-.5*.7)-- (2.5-1*.7,.66*.25-1*.7)-- (2-1*.7,.66*.25-1*.7); \filldraw [fill=dgr,draw=black] (2.5-.5*.7,.66*.25-.5*.7)--(2.5-1*.7,.66*.25-1*.7)--(2.5-1*.7,-1*.7)--(2.5-.5*.7,-.5*.7)--(2.5-.5*.7,.66*.25-.5*.7); \filldraw [fill=gr,draw=black] (2.5-1*.7,.66*.25-1*.7)rectangle(2-1*.7,-1*.7); \draw (2.45-1*.7,.33-1*.7) node{\emph{7}}; \draw [densely dotted] (2-1*.7,.33*.5-1*.7)--(1-1*.7,.33*.5-1*.7) (1-1*.7,.25-1*.7)--(.5-1*.7,.25-1*.7) ; \filldraw [fill=lgr,draw=black] (-1*.7,.5-1*.7)--(.5-1*.7,.5-1*.7)--(.5-1.5*.7,.5-1.5*.7)--(-1.5*.7,.5-1.5*.7)--(-1*.7,.5-1*.7); \filldraw [fill=dgr,draw=black] (.5-1*.7,.5-1*.7)--(.5-1.5*.7,.5-1.5*.7)--(.5-1.5*.7,-1.5*.7)--(.5-1*.7,-1*.7)-- (.5-1*.7,.5-1*.7); \filldraw [fill=gr,draw=black] (.5-1.5*.7,.5-1.5*.7)rectangle ( -1.5*.7,-1.5*.7) ; \draw (.45-1.5*.7,.68-1.5*.7) node{\emph{6}}; \filldraw [fill=lgr,draw=black] (.5-1.5*.7,.33*.5-1.5*.7)-- (.5-1*.7,.33*.5-1*.7)-- (1-1*.7,.33*.5-1*.7)-- (1-1.5*.7,.33*.5-1.5*.7)-- (.5-1.5*.7,.33*.5-1.5*.7); \filldraw [fill=dgr,draw=black] (1-1*.7,.33*.5-1*.7)-- (1-1.5*.7,.33*.5-1.5*.7)--(1-1.5*.7,-1.5*.7)--(1-1*.7,-1*.7)--(1-1*.7,.33*.5-1*.7); \filldraw [fill=gr,draw=black] (.5-1.5*.7,-1.5*.7)rectangle(1-1.5*.7,.33*.5-1.5*.7); \draw (.95-1.5*.7,.33-1.5*.7) node{\emph{7}}; \draw [densely dotted] (-1.5*.7,.33*.5-1.5*.7)--(.5-1.5*.7,.33*.5-1.5*.7) (-1.5*.7,.25-1.5*.7)--(.5-1.5*.7,.25-1.5*.7) (.5-1*.7,.25-1*.7)--(.5-1.5*.7,.25-1.5*.7); \draw [dotted] (4,.33*.5)--(6,.33*.5) (3.5,.25)--(6,.25) (2.5,.5)--(6,.5) (1.5,1)--(6,1) (1,1.5)--(6,1.5); \draw [dotted] (-1.5*.7,.33*.5-1.5*.7)--(-2*.7,.33*.5-2*.7) (-1.5*.7,.25-1.5*.7)--(-2*.7,.25-2*.7) (-1.5*.7,.5-1.5*.7)--(-2*.7,.5-2*.7) (-1*.7,1-1*.7)--(-2*.7,1-2*.7) (-.5*.7,1.5-.5*.7)--(-2*.7,1.5-2*.7); \draw (6.1,0.1) node{\scalebox{.5}{$\frac{1}{3}$}} (6.25,0.2) node{\scalebox{.5}{$\frac{1}{6}$}} (6.1,0.4) node{\scalebox{.5}{$\frac{1}{2}$}} (6.1,.75) node{\small{1}} (6.1,1.25) node{\small{1}}; \end{tikzpicture} } \end{picture} } Running the algorithm coming up next we get the following decomposition: $$h\;=\;\frac{1}{3} \cdot s^7 \;+\; \frac{1}{6} \cdot s^6 \;+ \;\frac{1}{2}(t^6 * s^3)\; + \;s^3 \;+\; s^1$$ \end{exa} \pagebreak The largest area that can be filled out up to a fixed degree is given by the staircase of the maximal h-vector of the same degree. If that does not fit, the maximal h-vector of lower degree must be combined with a tower on the left and so on. So we get layers of staircases of extremal points in a total order. That is what the algorithm does. An explicit description of the algorithm as a flowchart can be found in the appendix. We now follow the algorithm and prove the accuracy of theorem (\ref{extr}) in several steps. 
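As an orientation for the steps that follow, the basic reduction step (subtracting multiples of maximal h-vectors, lemma (\ref{lem1}) below) can also be phrased in code. The following Python fragment is only an illustrative sketch under the convention, used throughout, that the maximal h-vector $s^d$ has the entries $s_i=\lfloor i/n\rfloor+1$; the names \texttt{s\_vector} and \texttt{greedy\_reduce} are ours and do not refer to any existing package. The function repeatedly subtracts the largest admissible multiple of $s^d$, lowers the degree, and stops either at $h=(0)$ or at a reduced h-vector, which is then handled by the tower and gluing steps of the full algorithm.
\begin{verbatim}
from fractions import Fraction

def s_vector(d, n):
    # maximal h-vector s^d with entries s_i = floor(i/n) + 1
    return [Fraction(i // n + 1) for i in range(d + 1)]

def greedy_reduce(h, n):
    # Repeatedly subtract q * s^d with q as large as possible while
    # the remainder stays non-negative, then lower the degree.
    h = [Fraction(x) for x in h]
    parts = []                      # pairs (degree, coefficient q)
    while h:
        d = len(h) - 1
        if h[d] == 0:               # top entry is zero: lower the degree
            h.pop()
            continue
        s = s_vector(d, n)
        q = min(hi / si for hi, si in zip(h, s))
        if q == 0:                  # a reduced h-vector is reached
            break
        parts.append((d, q))
        h = [hi - q * si for hi, si in zip(h, s)]
    return parts, h                 # empty remainder: full decomposition
\end{verbatim}
For the tower of example (\ref{notextr}) below with $n=2$ and $m=3$, i.e.\ $h=(1,1,1,1,1,1)$, the sketch returns the coefficients $\frac{1}{3}$, $\frac{1}{6}$ and $\frac{1}{2}$ for $s^5$, $s^3$ and $s^1$, respectively, in accordance with the formula given there.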
\\ Starting from an arbitrary element $h=(h_0,\dots,h_d)\in \mathbb{Q}_{\geq0}^{d+1}$, we want to decide whether it lies in the cone $\mathbb{H}(d)$ and, if so, to give a positive rational decomposition into extremal points. First we subtract as large a rational multiple of the maximal h-vector $s^d$ as possible while keeping the remainder non-negative. We continue, lowering the degree in every loop, until we either end up with $h=(0)$ or obtain a zero entry in a coordinate with index smaller than the degree while $h_d >0$. We call the latter a \textit{reduced h-vector}. \begin{lemma}\label{lem1} If $g \in \mathbb{H}(d)$, then $h=g-q \cdot s^d$ with $q \leq \underset{i \in \{0,\cdots,d\}}{\min}\{\frac{g_i}{s_i}\}$ is still in $\mathbb{H}$. \end{lemma} \begin{proof} We have a decomposition $g_i= \sum_{j=1}^{s_i} g_i^j$ with properties (1) and (2) of proposition (\ref{hcone}), and since $q\leq \min \{\frac{g_i}{s_i}\}$ we may assume that $g_i^j\geq q$ for all $i,j$. This can be achieved, for example, by replacing the decomposition in the highest degree $d$ by $\bar{g}_d^{s_d} =g_d-q(s_d-1)$ and $\bar{g}_d^j=q$ for all $j<s_d$. To fill the gaps $g_i^j<q$ in the lower degrees we take material from the other spots of the same degree, starting with the smallest row, never going below $q$, and always ensuring that the rows and columns remain decreasing when viewed upwards from degree $i$. Given such a decomposition we set $h_i^j=g_i^j-q$ for $h=(g_0-q, \ldots,g_d- q s_d)$. Then the computation $$\sum_{j=1}^{s_i}h_i^j=\sum_{j=1}^{s_i}g_i^j -q \cdot s_i = g_i - q \cdot s_i $$ shows that both conditions of proposition (\ref{hcone}) are fulfilled, hence $h\in\mathbb{H}$. \end{proof} \begin{exa}\label{notextr} Let $h=t^{n\cdot m-1}$ be the tower of degree $n\cdot m-1$. The staircase corresponding to this h-vector is a rectangle of size $n \times (m+1)$. The previous lemma gives the decomposition into extremal points $\sum_{\ell=1}^m q_\ell\cdot s^{n\ell-1}$ with $$q_m=\frac{1}{m} \quad \text{and} \quad q_\ell=\frac{1}{\ell}-\sum_{k=\ell+1}^m q_k.$$ \end{exa} \pagebreak \begin{lemma}\label{lem3} There is no reduced h-vector of degree $d=n\!\cdot\! m-1$. \end{lemma} \begin{proof} Let $h$ be reduced, so there exists $k < d$ with $ h_k=0$ but $h_d>0$. \linebreak Let $k=n\!\cdot\! m_k+r_k$ be maximal among the indices fulfilling these conditions. Since the columns of the h-diagram are decreasing we get $ h_{n\ell+r_k}=0 $ \, for all $\ell \geq m_k $, and since the rows are decreasing we obtain the contradiction \linebreak $0=h_{n(m-1)+r_k}\geq h_{nm-1}>0$. Therefore no such $k$ can exist. \end{proof} The central part of the proof of theorem (\ref{extr}) is given by the following lemma. \begin{lemma}\label{lem2} Every element of the cone $\mathbb{H}(d)$ decomposes into a positive rational linear combination of extremal points. \end{lemma} \begin{proof} The proof goes by induction over the degree $d$. \\Set $m= \lfloor\frac{d}{n} \rfloor$ and $r=d-mn$. The claim is clear up to $d=n-1$, since there is only one row in the h-diagram and the extremal points are just of the form $s^\ell=(1,\dots,1),$ \, with \,$\ell \leq d$, so we can use lemma (\ref{lem1}) to decompose. Let $h\in \mathbb{H}(d)$ with $h_d > 0$. By lemma (\ref{lem1}) and lemma (\ref{lem3}) we may assume that $h$ is reduced and $d\not\equiv n-1 \;\mathrm{mod}\; n$. For a given $h \in \mathbb{H}(d)$ there are many decompositions fulfilling proposition (\ref{hcone}).
We choose one that is maximal with respect to the truncated sum $\sum_{i,j} \bar{h}_i^j$, where $\bar{h}_i^j= h_d$ if $h_i^j \geq h_d$ and $\bar{h}_i^j =h_i^j$ otherwise. Equivalently, this can be achieved by shifting the boxes as far to the right as possible. As $h$ is reduced, there exists a $k<d$ with $h_k =0$. From the descending row and column conditions we get $h_{k+1}=0$ whenever $k \not\equiv n-1 \;\mathrm{mod}\; n$, and $h_{k+n}=0$ whenever $k+n<d$. Therefore $h_{mn-1}=0$ and $h_d^{s_d} =h_d$. Coming back to the h-diagram, we cut off everything to the right of column \linebreak $r+1$, that is, directly next to the tower $t^d$, and call the diagram on the right-hand side $h'$. A visualisation of this process and of the next steps is given in example (\ref{cutoff}). The boxes of the new diagram are still nested in the corner with decreasing rows and columns, so we again obtain an h-diagram, which by construction has at most $m$ rows and $d-2r-2$ columns. Hence the maximal degree of $h'$ is $d'= d-2r-3$ and we can use induction to decompose $h'$, but we do this only up to height $h_d$. Gluing this back to the first $r+1$ columns we again have an h-diagram. The layers of $h'$ up to height $h_d$ correspond to staircases of extremal points in $\text{Ex}(d')$, which are glued by the $*$-operator to towers $t^d$ of the corresponding height; if there is nothing left to glue, we simply take the tower itself. Next we cut off everything above height $h_d$ and obtain the upper h-diagram $h''$, whose degree is strictly less than $d$, so we can use induction again. The lowest level of $h''$ is, with respect to the partial ordering, smaller than the level adjacent below height $h_d$ in the last completed h-diagram, because we shifted as many boxes as possible below height $h_d$ and decomposed $h'$ only up to height $h_d$. Therefore we obtain a decomposition in a total order, as stated in corollary (\ref{cor}).
\end{proof} \pagebreak \begin{exa}\label{cutoff} Let $n=4$\; and \,$h=(3,3,2,2,3,3,2,0,1,1)$, a possible h-diagram could look like this: \unitlength1cm \begin{picture}(13,6.3) \put(0,3.8){ \put(.5,0){ \begin{tikzpicture} \draw (0,0)--(1.5,0) (0,0)--(0,1.3) (0,0)--(-.8,-.8); \filldraw [fill=lgr,draw=black] (0,.3)--(-.3*.7,.3-.3*.7)--(.3-.3*.7,.3-.3*.7)--(.3,.3)--(0,.3); \filldraw [fill=lgr,draw=black] (.3,.3)--(.3-.3*.7,.3-.3*.7)--(.6-.3*.7,.3-.3*.7)--(.6,.3)--(.3,.3); \filldraw [fill=lgr,draw=black] (-.3*.7,.3-.3*.7)--(-.6*.7,.3-.6*.7)--(.3-.6*.7,.3-.6*.7)--(.3-.3*.7,.3-.3*.7)--(-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (.3-.3*.7,.3-.3*.7)--(.3-.6*.7,.3-.6*.7)--(.6-.6*.7,.3-.6*.7)--(.6-.3*.7,.3-.3*.7)--(.3-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (-.6*.7,.3-.6*.7)--(-.9*.7,.3-.9*.7)--(.3-.9*.7,.3-.9*.7)--(.3-.6*.7,.3-.6*.7)--(-.6*.7,.3-.6*.7); \filldraw [fill=lgr,draw=black] (.3-.6*.7,.3-.6*.7)--(.3-.9*.7,.3-.9*.7)--(.6-.9*.7,.3-.9*.7)--(.6-.6*.7,.3-.6*.7)--(.3-.6*.7,.3-.6*.7); \filldraw [fill=dgr,draw=black] (.6,.3)--(.6-.3*.7,.3-.3*.7)--(.6-.3*.7,-.3*.7)--(.6,0)--(.6,.3); \filldraw [fill=dgr,draw=black] (.6-.3*.7,.3-.3*.7)--(.6-.6*.7,.3-.6*.7)--(.6-.6*.7,-.6*.7)--(.6-.3*.7,-.3*.7)--(.6-.3*.7,.3-.3*.7); \filldraw [fill=dgr,draw=black] (.6-.6*.7,.3-.6*.7)--(.6-.9*.7,.3-.9*.7)--(.6-.9*.7,-.9*.7)--(.6-.6*.7,-.6*.7)--(.6-.6*.7,.3-.6*.7); \filldraw [fill=gr,draw=black] (-.9*.7,.3-.9*.7)rectangle(.3-.9*.7,-.9*.7); \filldraw [fill=gr,draw=black] (.3-.9*.7,.3-.9*.7)rectangle(.6-.9*.7,-.9*.7); \filldraw [fill=lgr,draw=black] (.3,.9)--(.3-.6*.7,.9-.6*.7)--(.6-.6*.7,.9-.6*.7)--(.6,.9)--(.3,.9); \filldraw [fill=lgr,draw=black] (0,.9)--(-.6*.7,.9-.6*.7)--(.3-.6*.7,.9-.6*.7)--(.3,.9)--(0,.9); \filldraw [fill=lgr,draw=black] (0,.9)--(-.3*.7,.9-.3*.7)--(.3-.3*.7,.9-.3*.7)--(.3,.9)--(0,.9); \filldraw [fill=lgr,draw=black] (.3,.9)--(.3-.3*.7,.9-.3*.7)--(.6-.3*.7,.9-.3*.7)--(.6,.9)--(.3,.9); \filldraw [fill=dgr,draw=black] (.6,.9)--(.6-.3*.7,.9-.3*.7)--(.6-.3*.7,.6-.3*.7)--(.6,.6)--(.6,.9); \filldraw [fill=dgr,draw=black] (.6-.3*.7,.9-.3*.7)--(.6-.6*.7,.9-.6*.7)--(.6-.6*.7,.6-.6*.7)--(.6-.3*.7,.6-.3*.7)--(.6-.3*.7,.9-.3*.7); \filldraw [fill=gr,draw=black] (-.6*.7,.6-.6*.7)rectangle(.3-.6*.7,.3-.6*.7); \filldraw [fill=gr,draw=black] (.3-.6*.7,.6-.6*.7)rectangle(.6-.6*.7,.3-.6*.7); \filldraw [fill=gr,draw=black] (-.6*.7,.9-.6*.7)rectangle(.3-.6*.7,.6-.6*.7); \filldraw [fill=gr,draw=black] (.3-.6*.7,.9-.6*.7)rectangle(.6-.6*.7,.6-.6*.7); \filldraw [fill=lgr,draw=black] (.6,.6)--(.6-.3*.7,.6-.3*.7)--(.9-.3*.7,.6-.3*.7)--(.9,.6)--(.6,.6); \filldraw [fill=lgr,draw=black] (.9,.6)--(.9-.3*.7,.6-.3*.7)--(1.2-.3*.7,.6-.3*.7)--(1.2,.6)--(.9,.6); \filldraw [fill=lgr,draw=black] (.6-.3*.7,.6-.3*.7)--(.6-.6*.7,.6-.6*.7)--(.9-.6*.7,.6-.6*.7)--(.9-.3*.7,.6-.3*.7)--(.6-.3*.7,.6-.3*.7); \filldraw [fill=dgr,draw=black] (1.2,.3)--(1.2-.3*.7,.3-.3*.7)--(1.2-.3*.7,-.3*.7)--(1.2,0)--(1.2,.3); \filldraw [fill=dgr,draw=black] (1.2,.6)--(1.2-.3*.7,.6-.3*.7)--(1.2-.3*.7,.3-.3*.7)--(1.2,.3)--(1.2,.6); \filldraw [fill=dgr,draw=black] (.9-.3*.7,.3-.3*.7)--(.9-.6*.7,.3-.6*.7)--(.9-.6*.7,-.6*.7)--(.9-.3*.7,-.3*.7)--(.9-.3*.7,.3-.3*.7); \filldraw [fill=dgr,draw=black] (.9-.3*.7,.6-.3*.7)--(.9-.6*.7,.6-.6*.7)--(.9-.6*.7,.3-.6*.7)--(.9-.3*.7,.3-.3*.7)--(.9-.3*.7,.6-.3*.7); \filldraw [fill=gr,draw=black] (.9-.3*.7,.3-.3*.7)rectangle(1.2-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (.9-.3*.7,.6-.3*.7)rectangle(1.2-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (.6-.6*.7,.6-.6*.7)rectangle(.9-.6*.7,.3-.6*.7); \filldraw 
[fill=gr,draw=black] (.6-.6*.7,.3-.6*.7)rectangle(.9-.6*.7,-.6*.7); \end{tikzpicture} } \put(3.9,.7){$\longrightarrow$} \put(2.9,1.5){\scriptsize{shifting the boxes}} \put(4.8,0){ \begin{tikzpicture} \draw (0,0)--(2.5,0) (0,0)--(0,1.3) (0,0)--(-.8,-.8); \filldraw [fill=lgr,draw=black] (0,.3)--(-.3*.7,.3-.3*.7)--(.3-.3*.7,.3-.3*.7)--(.3,.3)--(0,.3); \filldraw [fill=lgr,draw=black] (.3,.3)--(.3-.3*.7,.3-.3*.7)--(.6-.3*.7,.3-.3*.7)--(.6,.3)--(.3,.3); \filldraw [fill=lgr,draw=black] (-.3*.7,.3-.3*.7)--(-.6*.7,.3-.6*.7)--(.3-.6*.7,.3-.6*.7)--(.3-.3*.7,.3-.3*.7)--(-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (.3-.3*.7,.3-.3*.7)--(.3-.6*.7,.3-.6*.7)--(.6-.6*.7,.3-.6*.7)--(.6-.3*.7,.3-.3*.7)--(.3-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (-.6*.7,.3-.6*.7)--(-.9*.7,.3-.9*.7)--(.3-.9*.7,.3-.9*.7)--(.3-.6*.7,.3-.6*.7)--(-.6*.7,.3-.6*.7); \filldraw [fill=lgr,draw=black] (.3-.6*.7,.3-.6*.7)--(.3-.9*.7,.3-.9*.7)--(.6-.9*.7,.3-.9*.7)--(.6-.6*.7,.3-.6*.7)--(.3-.6*.7,.3-.6*.7); \filldraw [fill=dgr,draw=black] (.6,.3)--(.6-.3*.7,.3-.3*.7)--(.6-.3*.7,-.3*.7)--(.6,0)--(.6,.3); \filldraw [fill=dgr,draw=black] (.6-.3*.7,.3-.3*.7)--(.6-.6*.7,.3-.6*.7)--(.6-.6*.7,-.6*.7)--(.6-.3*.7,-.3*.7)--(.6-.3*.7,.3-.3*.7); \filldraw [fill=dgr,draw=black] (.6-.6*.7,.3-.6*.7)--(.6-.9*.7,.3-.9*.7)--(.6-.9*.7,-.9*.7)--(.6-.6*.7,-.6*.7)--(.6-.6*.7,.3-.6*.7); \filldraw [fill=gr,draw=black] (-.9*.7,.3-.9*.7)rectangle(.3-.9*.7,-.9*.7); \filldraw [fill=gr,draw=black] (.3-.9*.7,.3-.9*.7)rectangle(.6-.9*.7,-.9*.7); \filldraw [fill=lgr,draw=black] (.3,.9)--(.3-.3*.7,.9-.3*.7)--(.6-.3*.7,.9-.3*.7)--(.6,.9)--(.3,.9); \filldraw [fill=lgr,draw=black] (0,.9)--(-.3*.7,.9-.3*.7)--(.3-.3*.7,.9-.3*.7)--(.3,.9)--(0,.9); \filldraw [fill=gr,draw=black] (-.3*.7,.6-.3*.7)rectangle(.3-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (.3-.3*.7,.6-.3*.7)rectangle(.6-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (-.3*.7,.9-.3*.7)rectangle(.3-.3*.7,.6-.3*.7); \filldraw [fill=gr,draw=black] (.3-.3*.7,.9-.3*.7)rectangle(.6-.3*.7,.6-.3*.7); \filldraw [fill=dgr,draw=black] (.6,.9)--(.6-.3*.7,.9-.3*.7)--(.6-.3*.7,.6-.3*.7)--(.6,.6)--(.6,.9); \filldraw [fill=lgr,draw=black] (.6,.6)--(.6-.3*.7,.6-.3*.7)--(.9-.3*.7,.6-.3*.7)--(.9,.6)--(.6,.6); \filldraw [fill=lgr,draw=black] (.9,.6)--(.9-.3*.7,.6-.3*.7)--(1.2-.3*.7,.6-.3*.7)--(1.2,.6)--(.9,.6); \filldraw [fill=lgr,draw=black] (1.2,.6)--(1.2-.3*.7,.6-.3*.7)--(1.5-.3*.7,.6-.3*.7)--(1.5,.6)--(1.2,.6); \filldraw [fill=lgr,draw=black] (1.5,.6)--(1.5-.3*.7,.6-.3*.7)--(1.8-.3*.7,.6-.3*.7)--(1.8,.6)--(1.5,.6); \filldraw [fill=lgr,draw=black] (1.8,.6)--(1.8-.3*.7,.6-.3*.7)--(2.1-.3*.7,.6-.3*.7)--(2.1,.6)--(1.8,.6); \filldraw [fill=gr,draw=black] (.6-.3*.7,.3-.3*.7)rectangle(.9-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (.6-.3*.7,.6-.3*.7)rectangle(.9-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (.9-.3*.7,.3-.3*.7)rectangle(1.2-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (.9-.3*.7,.6-.3*.7)rectangle(1.2-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (1.2-.3*.7,.3-.3*.7)rectangle(1.5-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.2-.3*.7,.6-.3*.7)rectangle(1.5-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (1.5-.3*.7,.3-.3*.7)rectangle(1.8-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.5-.3*.7,.6-.3*.7)rectangle(1.8-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (1.8-.3*.7,.3-.3*.7)rectangle(2.1-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.8-.3*.7,.6-.3*.7)rectangle(2.1-.3*.7,.3-.3*.7); \filldraw [fill=dgr,draw=black] (2.1,.3)--(2.1-.3*.7,.3-.3*.7)--(2.1-.3*.7,-.3*.7)--(2.1,0)--(2.1,.3); 
\filldraw [fill=dgr,draw=black] (2.1,.6)--(2.1-.3*.7,.6-.3*.7)--(2.1-.3*.7,.3-.3*.7)--(2.1,.3)--(2.1,.6); \end{tikzpicture} } \put(9,.7){$\longrightarrow$} \put(7,1.8){\scriptsize{cutting off $h'$ and decompose it}} \put(8,1.4){\scriptsize{ up to height $h_d$}} \put(10.6,0){ \begin{tikzpicture} \draw (0,0)--(2.4,0) (0,0)--(0,1.3) (0,0)--(-.8,-.8); \filldraw [fill=lgr,draw=black] (-.5,.3)--(-.5-.3*.7,.3-.3*.7)--(-.2-.3*.7,.3-.3*.7)--(-.2,.3)--(-.5,.3); \filldraw [fill=lgr,draw=black] (-.2,.3)--(-.2-.3*.7,.3-.3*.7)--(.1-.3*.7,.3-.3*.7)--(.1,.3)--(-.2,.3); \filldraw [fill=lgr,draw=black] (-.5-.3*.7,.3-.3*.7)--(-.5-.6*.7,.3-.6*.7)--(-.2-.6*.7,.3-.6*.7)--(-.2-.3*.7,.3-.3*.7)--(-.5-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (-.2-.3*.7,.3-.3*.7)--(-.2-.6*.7,.3-.6*.7)--(.1-.6*.7,.3-.6*.7)--(.1-.3*.7,.3-.3*.7)--(-.2-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (-.5-.6*.7,.3-.6*.7)--(-.5-.9*.7,.3-.9*.7)--(-.2-.9*.7,.3-.9*.7)--(-.2-.6*.7,.3-.6*.7)--(-.5-.6*.7,.3-.6*.7); \filldraw [fill=lgr,draw=black] (-.2-.6*.7,.3-.6*.7)--(-.2-.9*.7,.3-.9*.7)--(.1-.9*.7,.3-.9*.7)--(.1-.6*.7,.3-.6*.7)--(-.2-.6*.7,.3-.6*.7); \filldraw [fill=dgr,draw=black] (.1,.3)--(.1-.3*.7,.3-.3*.7)--(.1-.3*.7,-.3*.7)--(.1,0)--(.1,.3); \filldraw [fill=dgr,draw=black] (.1-.3*.7,.3-.3*.7)--(.1-.6*.7,.3-.6*.7)--(.1-.6*.7,-.6*.7)--(.1-.3*.7,-.3*.7)--(.1-.3*.7,.3-.3*.7); \filldraw [fill=dgr,draw=black] (.1-.6*.7,.3-.6*.7)--(.1-.9*.7,.3-.9*.7)--(.1-.9*.7,-.9*.7)--(.1-.6*.7,-.6*.7)--(.1-.6*.7,.3-.6*.7); \filldraw [fill=gr,draw=black] (-.5-.9*.7,.3-.9*.7)rectangle(-.2-.9*.7,-.9*.7); \filldraw [fill=gr,draw=black] (-.2-.9*.7,.3-.9*.7)rectangle(.1-.9*.7,-.9*.7); \filldraw [fill=lgr,draw=black] (-.2,.9)--(-.2-.3*.7,.9-.3*.7)--(.1-.3*.7,.9-.3*.7)--(.1,.9)--(-.2,.9); \filldraw [fill=lgr,draw=black] (-.5,.9)--(-.5-.3*.7,.9-.3*.7)--(-.2-.3*.7,.9-.3*.7)--(-.2,.9)--(-.5,.9); \filldraw [fill=gr,draw=black] (-.5-.3*.7,.6-.3*.7)rectangle(-.2-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (-.2-.3*.7,.6-.3*.7)rectangle(.1-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (-.5-.3*.7,.9-.3*.7)rectangle(-.2-.3*.7,.6-.3*.7); \filldraw [fill=gr,draw=black] (-.2-.3*.7,.9-.3*.7)rectangle(.1-.3*.7,.6-.3*.7); \filldraw [fill=dgr,draw=black] (.1,.9)--(.1-.3*.7,.9-.3*.7)--(.1-.3*.7,.6-.3*.7)--(.1,.6)--(.1,.9); \filldraw [fill=dgr,draw=black] (.1,.6)--(.1-.3*.7,.6-.3*.7)--(.1-.3*.7,.3-.3*.7)--(.1,.3)--(.1,.6); \filldraw [fill=lgr,draw=black] (.6,.6)--(.6-.3*.7,.6-.3*.7)--(.9-.3*.7,.6-.3*.7)--(.9,.6)--(.6,.6); \filldraw [fill=lgr,draw=black] (.9,.6)--(.9-.3*.7,.6-.3*.7)--(1.2-.3*.7,.6-.3*.7)--(1.2,.6)--(.9,.6); \filldraw [fill=lgr,draw=black] (1.2,.6)--(1.2-.3*.7,.6-.3*.7)--(1.5-.3*.7,.6-.3*.7)--(1.5,.6)--(1.2,.6); \filldraw [fill=lgr,draw=black] (1.5,.6)--(1.5-.3*.7,.6-.3*.7)--(1.8-.3*.7,.6-.3*.7)--(1.8,.6)--(1.5,.6); \filldraw [fill=lgr,draw=black] (1.8,.3)--(1.8-.3*.7,.3-.3*.7)--(2.1-.3*.7,.3-.3*.7)--(2.1,.3)--(1.8,.3); \filldraw [fill=gr,draw=black] (.6-.6*.7,.3-.6*.7)rectangle(.9-.6*.7,-.6*.7); \filldraw [fill=gr,draw=black] (.6-.3*.7,.6-.3*.7)rectangle(.9-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (.9-.3*.7,.3-.3*.7)rectangle(1.2-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (.9-.3*.7,.6-.3*.7)rectangle(1.2-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (1.2-.3*.7,.3-.3*.7)rectangle(1.5-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.2-.3*.7,.6-.3*.7)rectangle(1.5-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (1.5-.3*.7,.3-.3*.7)rectangle(1.8-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.5-.3*.7,.6-.3*.7)rectangle(1.8-.3*.7,.3-.3*.7); 
\filldraw [fill=gr,draw=black] (1.8-.3*.7,.3-.3*.7)rectangle(2.1-.3*.7,-.3*.7); \filldraw [fill=dgr,draw=black] (2.1,.3)--(2.1-.3*.7,.3-.3*.7)--(2.1-.3*.7,-.3*.7)--(2.1,0)--(2.1,.3); \filldraw [fill=dgr,draw=black] (1.8,.6)--(1.8-.3*.7,.6-.3*.7)--(1.8-.3*.7,.3-.3*.7)--(1.8,.3)--(1.8,.6); \filldraw [fill=dgr,draw=black] (.9-.3*.7,.3-.3*.7)--(.9-.6*.7,.3-.6*.7)--(.9-.6*.7,-.6*.7)--(.9-.3*.7,0-.3*.7)--(.9-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (.6-.3*.7,.3-.3*.7)--(.6-.6*.7,.3-.6*.7)--(.9-.6*.7,.3-.6*.7)--(.9-.3*.7,.3-.3*.7)--(.6-.3*.7,.3-.3*.7); \draw[->] (1.2,1.4)--(1.2,.9); \draw(1.25,1.65)node{$ h'$}; \end{tikzpicture} } \put(10.4,-.55){\scriptsize{glueing and cutting off $h''$}} \put(7.8,-3){ \begin{tikzpicture} \draw [->] (1.4,2)--(.9,1.5); \draw (0,0)--(2.4,0) (0,0)--(0,1.7) (0,0)--(-.8,-.8); \filldraw [fill=lgr,draw=black] (0,.3)--(-.3*.7,.3-.3*.7)--(.3-.3*.7,.3-.3*.7)--(.3,.3)--(0,.3); \filldraw [fill=lgr,draw=black] (.3,.3)--(.3-.3*.7,.3-.3*.7)--(.6-.3*.7,.3-.3*.7)--(.6,.3)--(.3,.3); \filldraw [fill=lgr,draw=black] (-.3*.7,.3-.3*.7)--(-.6*.7,.3-.6*.7)--(.3-.6*.7,.3-.6*.7)--(.3-.3*.7,.3-.3*.7)--(-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (.3-.3*.7,.3-.3*.7)--(.3-.6*.7,.3-.6*.7)--(.6-.6*.7,.3-.6*.7)--(.6-.3*.7,.3-.3*.7)--(.3-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (-.6*.7,.3-.6*.7)--(-.9*.7,.3-.9*.7)--(.3-.9*.7,.3-.9*.7)--(.3-.6*.7,.3-.6*.7)--(-.6*.7,.3-.6*.7); \filldraw [fill=lgr,draw=black] (.3-.6*.7,.3-.6*.7)--(.3-.9*.7,.3-.9*.7)--(.6-.9*.7,.3-.9*.7)--(.6-.6*.7,.3-.6*.7)--(.3-.6*.7,.3-.6*.7); \filldraw [fill=dgr,draw=black] (.6,.3)--(.6-.3*.7,.3-.3*.7)--(.6-.3*.7,-.3*.7)--(.6,0)--(.6,.3); \filldraw [fill=dgr,draw=black] (.6-.3*.7,.3-.3*.7)--(.6-.6*.7,.3-.6*.7)--(.6-.6*.7,-.6*.7)--(.6-.3*.7,-.3*.7)--(.6-.3*.7,.3-.3*.7); \filldraw [fill=dgr,draw=black] (.6-.6*.7,.3-.6*.7)--(.6-.9*.7,.3-.9*.7)--(.6-.9*.7,-.9*.7)--(.6-.6*.7,-.6*.7)--(.6-.6*.7,.3-.6*.7); \filldraw [fill=gr,draw=black] (-.9*.7,.3-.9*.7)rectangle(.3-.9*.7,-.9*.7); \filldraw [fill=gr,draw=black] (.3-.9*.7,.3-.9*.7)rectangle(.6-.9*.7,-.9*.7); \filldraw [fill=lgr,draw=black] (.6,.3)--(.6-.3*.7,.3-.3*.7)--(.9-.3*.7,.3-.3*.7)--(.9,.3)--(.6,.3); \filldraw [fill=lgr,draw=black] (.9,.3)--(.9-.3*.7,.3-.3*.7)--(1.2-.3*.7,.3-.3*.7)--(1.2,.3)--(.9,.3); \filldraw [fill=lgr,draw=black] (1.2,.3)--(1.2-.3*.7,.3-.3*.7)--(1.5-.3*.7,.3-.3*.7)--(1.5,.3)--(1.2,.3); \filldraw [fill=lgr,draw=black] (1.5,.3)--(1.5-.3*.7,.3-.3*.7)--(1.8-.3*.7,.3-.3*.7)--(1.8,.3)--(1.5,.3); \filldraw [fill=lgr,draw=black] (1.8,.3)--(1.8-.3*.7,.3-.3*.7)--(2.1-.3*.7,.3-.3*.7)--(2.1,.3)--(1.8,.3); \filldraw [fill=dgr,draw=black] (2.1,.3)--(2.1-.3*.7,.3-.3*.7)--(2.1-.3*.7,-.3*.7)--(2.1,0)--(2.1,.3); \filldraw [fill=gr,draw=black] (.9-.3*.7,.3-.3*.7)rectangle(1.2-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.2-.3*.7,.3-.3*.7)rectangle(1.5-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.5-.3*.7,.3-.3*.7)rectangle(1.8-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.8-.3*.7,.3-.3*.7)rectangle(2.1-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (.6-.6*.7,.3-.6*.7)rectangle(.9-.6*.7,-.6*.7); \filldraw [fill=dgr,draw=black] (.9-.3*.7,.3-.3*.7)--(.9-.6*.7,.3-.6*.7)--(.9-.6*.7,-.6*.7)--(.9-.3*.7,0-.3*.7)--(.9-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (.6-.3*.7,.3-.3*.7)--(.6-.6*.7,.3-.6*.7)--(.9-.6*.7,.3-.6*.7)--(.9-.3*.7,.3-.3*.7)--(.6-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (.3,1.4)--(.3-.3*.7,1.4-.3*.7)--(.6-.3*.7,1.4-.3*.7)--(.6,1.4)--(.3,1.4); \filldraw [fill=lgr,draw=black] 
(0,1.4)--(-.3*.7,1.4-.3*.7)--(.3-.3*.7,1.4-.3*.7)--(.3,1.4)--(0,1.4); \filldraw [fill=gr,draw=black] (-.3*.7,1.1-.3*.7)rectangle(.3-.3*.7,.8-.3*.7); \filldraw [fill=gr,draw=black] (.3-.3*.7,1.1-.3*.7)rectangle(.6-.3*.7,.8-.3*.7); \filldraw [fill=gr,draw=black] (-.3*.7,1.4-.3*.7)rectangle(.3-.3*.7,1.1-.3*.7); \filldraw [fill=gr,draw=black] (.3-.3*.7,1.4-.3*.7)rectangle(.6-.3*.7,1.1-.3*.7); \filldraw [fill=dgr,draw=black] (.6,1.4)--(.6-.3*.7,1.4-.3*.7)--(.6-.3*.7,1.1-.3*.7)--(.6,1.1)--(.6,1.4); \filldraw [fill=lgr,draw=black] (.6,1.1)--(.6-.3*.7,1.1-.3*.7)--(.9-.3*.7,1.1-.3*.7)--(.9,1.1)--(.6,1.1); \filldraw [fill=lgr,draw=black] (.9,1.1)--(.9-.3*.7,1.1-.3*.7)--(1.2-.3*.7,1.1-.3*.7)--(1.2,1.1)--(.9,1.1); \filldraw [fill=lgr,draw=black] (1.2,1.1)--(1.2-.3*.7,1.1-.3*.7)--(1.5-.3*.7,1.1-.3*.7)--(1.5,1.1)--(1.2,1.1); \filldraw [fill=lgr,draw=black] (1.5,1.1)--(1.5-.3*.7,1.1-.3*.7)--(1.8-.3*.7,1.1-.3*.7)--(1.8,1.1)--(1.5,1.1); \filldraw [fill=gr,draw=black] (.6-.3*.7,1.1-.3*.7)rectangle(.9-.3*.7,.8-.3*.7); \filldraw [fill=gr,draw=black] (.9-.3*.7,1.1-.3*.7)rectangle(1.2-.3*.7,.8-.3*.7); \filldraw [fill=gr,draw=black] (1.2-.3*.7,1.1-.3*.7)rectangle(1.5-.3*.7,.8-.3*.7); \filldraw [fill=gr,draw=black] (1.5-.3*.7,1.1-.3*.7)rectangle(1.8-.3*.7,.8-.3*.7); \filldraw [fill=dgr,draw=black] (1.8,1.1)--(1.8-.3*.7,1.1-.3*.7)--(1.8-.3*.7,.8-.3*.7)--(1.8,.8)--(1.8,1.1); \put(2.3,.9) {$\longleftarrow h''$} \end{tikzpicture} } \put(6.8,-1.95) {$\longleftarrow $} \put(4.8,-1.2) {\scriptsize{decompose $h''$ and glue}} \put(2.7,-3){ \begin{tikzpicture} \draw (0,0)--(2.5,0) (0,0)--(0,1.3) (0,0)--(-.8,-.8); \filldraw [fill=lgr,draw=black] (0,.3)--(-.3*.7,.3-.3*.7)--(.3-.3*.7,.3-.3*.7)--(.3,.3)--(0,.3); \filldraw [fill=lgr,draw=black] (.3,.3)--(.3-.3*.7,.3-.3*.7)--(.6-.3*.7,.3-.3*.7)--(.6,.3)--(.3,.3); \filldraw [fill=lgr,draw=black] (-.6*.7,.3-.6*.7)--(-.9*.7,.3-.9*.7)--(.3-.9*.7,.3-.9*.7)--(.3-.6*.7,.3-.6*.7)--(-.6*.7,.3-.6*.7); \filldraw [fill=lgr,draw=black] (.3-.6*.7,.3-.6*.7)--(.3-.9*.7,.3-.9*.7)--(.6-.9*.7,.3-.9*.7)--(.6-.6*.7,.3-.6*.7)--(.3-.6*.7,.3-.6*.7); \filldraw [fill=dgr,draw=black] (.6,.3)--(.6-.3*.7,.3-.3*.7)--(.6-.3*.7,-.3*.7)--(.6,0)--(.6,.3); \filldraw [fill=dgr,draw=black] (.6-.3*.7,.3-.3*.7)--(.6-.6*.7,.3-.6*.7)--(.6-.6*.7,-.6*.7)--(.6-.3*.7,-.3*.7)--(.6-.3*.7,.3-.3*.7); \filldraw [fill=dgr,draw=black] (.6-.6*.7,.3-.6*.7)--(.6-.9*.7,.3-.9*.7)--(.6-.9*.7,-.9*.7)--(.6-.6*.7,-.6*.7)--(.6-.6*.7,.3-.6*.7); \filldraw [fill=gr,draw=black] (-.9*.7,.3-.9*.7)rectangle(.3-.9*.7,-.9*.7); \filldraw [fill=gr,draw=black] (.3-.9*.7,.3-.9*.7)rectangle(.6-.9*.7,-.9*.7); \filldraw [fill=lgr,draw=black] (.3,.9)--(.3-.3*.7,.9-.3*.7)--(.6-.3*.7,.9-.3*.7)--(.6,.9)--(.3,.9); \filldraw [fill=lgr,draw=black] (0,.9)--(-.3*.7,.9-.3*.7)--(.3-.3*.7,.9-.3*.7)--(.3,.9)--(0,.9); \filldraw [fill=gr,draw=black] (-.3*.7,.6-.3*.7)rectangle(.3-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (.3-.3*.7,.6-.3*.7)rectangle(.6-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (-.3*.7,.9-.3*.7)rectangle(.3-.3*.7,.6-.3*.7); \filldraw [fill=gr,draw=black] (.3-.3*.7,.9-.3*.7)rectangle(.6-.3*.7,.6-.3*.7); \filldraw [fill=dgr,draw=black] (.6,.9)--(.6-.3*.7,.9-.3*.7)--(.6-.3*.7,.6-.3*.7)--(.6,.6)--(.6,.9); \filldraw [fill=lgr,draw=black] (.6,.6)--(.6-.3*.7,.6-.3*.7)--(.9-.3*.7,.6-.3*.7)--(.9,.6)--(.6,.6); \filldraw [fill=lgr,draw=black] (.9,.6)--(.9-.3*.7,.6-.3*.7)--(1.2-.3*.7,.6-.3*.7)--(1.2,.6)--(.9,.6); \filldraw [fill=lgr,draw=black] (1.2,.45)--(1.2-.3*.7,.45-.3*.7)--(1.5-.3*.7,.45-.3*.7)--(1.5,.45)--(1.2,.45); \filldraw [fill=lgr,draw=black] 
(1.5,.45)--(1.5-.3*.7,.45-.3*.7)--(1.8-.3*.7,.45-.3*.7)--(1.8,.45)--(1.5,.45); \filldraw [fill=lgr,draw=black] (1.8,.3)--(1.8-.3*.7,.3-.3*.7)--(2.1-.3*.7,.3-.3*.7)--(2.1,.3)--(1.8,.3); \filldraw [fill=gr,draw=black] (.6-.3*.7,.3-.3*.7)rectangle(.9-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (.6-.3*.7,.6-.3*.7)rectangle(.9-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (.9-.3*.7,.3-.3*.7)rectangle(1.2-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (.9-.3*.7,.6-.3*.7)rectangle(1.2-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (1.2-.3*.7,.3-.3*.7)rectangle(1.5-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.2-.3*.7,.45-.3*.7)rectangle(1.5-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (1.5-.3*.7,.3-.3*.7)rectangle(1.8-.3*.7,-.3*.7); \filldraw [fill=gr,draw=black] (1.5-.3*.7,.45-.3*.7)rectangle(1.8-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (1.8-.3*.7,.3-.3*.7)rectangle(2.1-.3*.7,-.3*.7); \filldraw [fill=dgr,draw=black] (2.1,.3)--(2.1-.3*.7,.3-.3*.7)--(2.1-.3*.7,-.3*.7)--(2.1,0)--(2.1,.3); \filldraw [fill=dgr,draw=black] (1.8,.45)--(1.8-.3*.7,.45-.3*.7)--(1.8-.3*.7,.3-.3*.7)--(1.8,.3)--(1.8,.45); \filldraw [fill=dgr,draw=black] (1.2,.45)--(1.2-.3*.7,.45-.3*.7)--(1.2-.3*.7,.6-.3*.7)--(1.2,.6)--(1.2,.45); \filldraw [fill=lgr,draw=black] (-.3*.7,.45-.3*.7)--(-.6*.7,.45-.6*.7)--(.3-.6*.7,.45-.6*.7)--(.3-.3*.7,.45-.3*.7)--(-.3*.7,.45-.3*.7); \filldraw [fill=lgr,draw=black] (.3-.3*.7,.45-.3*.7)--(.3-.6*.7,.45-.6*.7)--(.6-.6*.7,.45-.6*.7)--(.6-.3*.7,.45-.3*.7)--(.3-.3*.7,.45-.3*.7); \filldraw [fill=gr,draw=black] (-.6*.7,.45-.6*.7)rectangle(.3-.6*.7,.3-.6*.7); \filldraw [fill=gr,draw=black] (.3-.6*.7,.45-.6*.7)rectangle(.6-.6*.7,.3-.6*.7); \filldraw [fill=dgr,draw=black] (.6-.3*.7,.3-.3*.7)--(.6-.6*.7,.3-.6*.7)--(.6-.6*.7,.45-.6*.7)--(.6-.3*.7,.45-.3*.7)--(.6-.3*.7,.3-.3*.7); \filldraw [fill=gr,draw=black] (.6-.6*.7,.3-.6*.7)rectangle(.9-.6*.7,-.6*.7); \filldraw [fill=dgr,draw=black] (.9-.3*.7,.3-.3*.7)--(.9-.6*.7,.3-.6*.7)--(.9-.6*.7,-.6*.7)--(.9-.3*.7,0-.3*.7)--(.9-.3*.7,.3-.3*.7); \filldraw [fill=lgr,draw=black] (.6-.3*.7,.3-.3*.7)--(.6-.6*.7,.3-.6*.7)--(.9-.6*.7,.3-.6*.7)--(.9-.3*.7,.3-.3*.7)--(.6-.3*.7,.3-.3*.7); \end{tikzpicture} } } \end{picture} The algorithm leads to $h=t^9*s^4+ \frac{1}{2} s^5 + \frac{1}{2}s^3+s^1$, however without shifting the boxes we get $h=t^9*t^4*s^0 + \frac{1}{2}s^6+ \frac{1}{2}s^5+s^1$. The example shows that shifting the boxes is not really necessary to get a decomposition in extremal points and again that this decomposition is not unique. But that it may be important to get a totally ordered chain of extremal points. \end{exa} Now we know that $\text{Ex}(d)$ is a generating system of $\mathbb{H}$$(d)$. It remains to show that the points are extremal: \begin{lemma} Let $v\in\text{Ex}(d)$. Then $v$ is not in the convex hull of the remaining points in $\text{Ex}(d)$. 
\end{lemma} \begin{proof} Let $n \in \mathbb{N}$ \,and\,$d \geq1$ be fixed, $v=\sum_{i \geq 0} q_iv^i$ with $v^i \in \text{Ex}(d)$ and $q_i\geq 0.$ Since $v_0^i= 1$ for all $v^i \in \text{Ex}(d)$ we get $\sum q_i =1.$ \begin{itemize} \item [(i)] If $v=s^\ell$ with $\ell \leq d$ each $v^i$ belongs to $\text{Ex}(\ell).$\\ In the cases $\ell \leq n-1$ or $\ell=n\!\cdot \!k-1$ for any $k$, the only extremal point in $\text{Ex}(\ell)$ with nontrivial $(\ell+1)$-th component is $s^\ell$.\\ Otherwise $v_\ell=s_\ell = \lfloor\frac{\ell}{n}\rfloor+1 >1$ \,and the only extremal points of length \linebreak $\ell+1$ with nontrivial $(\ell+1)$-th component except $s^\ell$ are $t^\ell$ and $t^\ell*w$ for \linebreak $w\in\text{Ex}(\ell-3-2r)$ with $\ell=nm+r,\; r \in \{0,\dots, n-1\}$ and their entry is always 1. Assuming $v^i \neq s^\ell$ we get the contradiction $v_\ell=(\sum q_iv^i)_\ell \leq 1.$ \item [(ii)] Let $v=t^\ell*w$ with $\ell=nm+r, r \neq n-1$ and $w\in\text{Ex}(\ell-2r-3) \cup\{(0)\}$. We know $v_{nm-1}=0$ \,and\,$v_0=v_{nm}=\!...=v_{nm+r+1}=1,$ therefore all $v^i$ in the decomposition are of the form $t^\ell*w^i$ with $w^i\in\text{Ex}(\ell-2r-3)\cup \{(0)\}$.\\ As the $*$-operator is a shifted addition we get\\ $$v=t^\ell*w=\sum q_i(t^\ell*w^i)=(\sum q_i) t^\ell* \sum q_iw^i=t^\ell*\sum q_iw^i.$$ This gives $w=\sum q_iw^i $ and by induction over $d$ we are done.\qedhere \end{itemize} \end{proof} \newpage \section*{Appendix} Flowchart of the \textbf{ ALGORITHM:}\\ \col{ \tikzstyle{decision} = [shape aspect=2,diamond, draw, fill=blueberry!10, text width=4em, text centered, inner sep=0pt] \tikzstyle{block1} = [rectangle, draw, fill=blueberry!10, minimum size=9mm, text centered \tikzstyle{block2} = [rectangle, draw, fill=blueberry!10, text width=6em, text centered, minimum height=3em] \tikzstyle{block3} = [rectangle, draw, fill=blueberry!10, text width=52mm, text centered,minimum height=3em] \tikzstyle{line} = [draw, -latex'] \tikzstyle{para1} = [draw,trapezium,trapezium left angle=75,trapezium right angle=-75,fill=plum!10,minimum size=8mm \tikzstyle{para2} = [draw,trapezium,trapezium left angle=75,trapezium right angle=-75,fill=cranberry!20,minimum size=8mm \vspace{2em} \begin{tikzpicture}[node distance = 1cm, auto,scale=0.68, transform shape] \node [para1] (eingabe) {$ n \in \mathbb{N},\;\, d\!=\!d_0\geq 1,\; \underline{h}\!=\!h\!=\!(h_0,\dots,h_d) \in \mathbb{Q}_{\geq0}^{d+1},\; h_j\!=\!0 \:\, $and$ \;s_j\!=\!1\;$for$\; j<0, \;\, p_0=\infty,\;\, i=0$}; \node [block3, below right of=eingabe,node distance=30mm] (q) {\begin{minipage}{55mm}$m = \lfloor \frac{d}{n}\rfloor, \;r=d-m \cdot n \\q=\underset{j \in \{0,\cdots,d\}}{\min}\{\frac{h_j}{s_j}\}, d\!<\!0\!: \, q\!=\!0$\\ \end{minipage}}; \node [decision, below of=q, node distance=25mm] (diff) {$p_i\!-\!q\!>\!0$}; \node [block1, right of=diff, node distance=43mm] (qgross) {$q_{s^d}=p_i,\; p_i=0$}; \node [block3, below of=qgross, node distance=28mm] (iklein) {\begin{minipage}{54mm}$h\!=\!g^i\!-\!\sum\limits_{v \leq s^{d_i\!-\!2r_i\!-\!3}}q_{v}(\underbrace{0,\!..,0}_{r_i\!+\!1},v), \\ q_{t^{d_i}\! 
\ast v}\!=\!q_v \quad \text{and}\\ q_{v}\!=\!0 \quad \text{for}\; v\!\leq \!s^{d_i\!-\!2r_i\!-\!3},\\ q_{t^{d_i}}=p_i,\\ d=d_i-1,\quad i=i-1$\end{minipage}}; \node [block2, below of=diff, node distance=20mm] (neux) {$p_i=p_i-q$ $ h=h-q\cdot s^d$ $ q_{s^d}=q$}; \node [decision, below of=neux, node distance=20mm] (max) {$h_d=0$}; \node [decision, left of=max, node distance=29mm] (odd) {$r \!= \!n\!-\!1$}; \node [decision, above of=odd, node distance=16mm] (io) {$i=0$}; \node [block1, left of=io, node distance=43mm] (dmin2){\begin{minipage}{50.8mm}$h_j\!=\!\begin{cases} \! h_j\!-\!h_d & \text{if} \, j\!=\!r \,\text{mod}\, n \\ \! h_j & \text{else},\end{cases} \\ d=d-1 $\end{minipage}}; \node [decision,above of=dmin2, node distance=22mm](xjklein0){$\exists \!\!: h_j\!<\!0$}; \node [para2, above of=io, node distance=22mm] (xnotcone) {$\underline{h} \notin \mathbb{H}$}; \node [decision, below of=odd, node distance=16mm] (kleinpi){$h_d\leq p_i$}; \node [block1, left of=kleinpi, node distance= 25mm] (gleichpi) {$h_d=p_i$}; \node [block1, below right of=gleichpi, node distance= 25mm] (dmin3) {\begin{minipage}{30mm}$i=i+1, p_i=h_d\\g^i=h-h_d\cdot t^d$ \end{minipage}}; \node [decision, below of=dmin3, node distance= 18mm] (yjklein0) {$\exists\!\!: g_j^i\!<\!0$}; \node [decision, left of=yjklein0, node distance= 30mm] (igleich1) {$i=1$}; \node [para2, above of= igleich1, node distance=16mm] (nichthcone) {$\underline{h} \notin \mathbb{H}$}; \node [block1, left of= gleichpi, node distance= 29mm] (imin1) {$i=i-1$}; \node [block1, below left of=yjklein0, node distance= 39mm] (x) {\begin{minipage}{86.2mm}$ h_j\!= \! g_{j\!+\!r\!+\!1}^i\!-\!g_{mn\!+\!k}^i,\; \text{if}\, j\!+\!r\!+\!1 \,\text{mod}\; n \!=\! k \!\in \! \{0,\!..,r\!-\!1\}, \\ h_j\!=g_{j\!+\!r\!+\!1}^i,\hspace{11.5mm} \text{if}\, j\!+\!r\!+\!1 \,\text{mod}\; n \notin \{0,\!..,r\!-\!1\} \\ \text{and}\;\;h_j=0\;\; \text{if} \;\;j < 0 \;\;\text{or}\;\; j > d-2r-3, \\ d_i=d, \quad r_i = r,\\p_{i-1}=p_{i-1}-p_i, \quad d=d-2r-3$ \end{minipage}}; \node [block1, below of=max,node distance=48mm] (dim) {$d=d-1$}; \node [decision, below of=dim, node distance=52mm] (dklein) {$d< 1$}; \node [block1, below of=dklein, node distance=17mm] (x1) {$q_{s^0}=h_0$}; \node [decision, below of=x1, node distance=17mm] (i0) {$i=0$}; \node [decision, right of=i0, node distance=28mm] (x1kleinp1) {$h_0 \leq p_i$}; \node [block1, right of=x1kleinp1, node distance=32mm] (pi0) {$q_{s^0}=p_i, p_i=0$}; \node [block1, above of=x1kleinp1, node distance=17mm] (piminx1) {$p_i=p_i-h_0$}; \node [para1, below left of=i0, node distance=22mm] (qi) {$\qquad \qquad \qquad \qquad(q_{v}),v \in Ex(d_0)$\qquad with \qquad$\underline{h}=\sum q_{v}\cdot v$}; \draw[-latex'] ($(eingabe)+(21.6mm,-4mm)$)-- (q); \path [line] (q) -- (diff); \path [line] (diff) -- node {yes} (neux); \path [line] (diff) -- node {no} (qgross); \path [line] (qgross) -- (iklein); \draw[-latex'] ($(iklein)+(22mm,15.9mm)$)|-(q); \path [line] (neux) -- (max); \path [line] (max)-- node {no} (odd); \path [line] (odd) -- node {yes} (io); \path [line] (odd) -- node {no} (kleinpi); \path [line] (kleinpi) -- node {no} (gleichpi); \draw[-latex'] ($(gleichpi)+(5mm,-4.7mm)$) -- ($(dmin3)+(-12.7mm,5.6mm)$); \draw[-latex'] (kleinpi)-- node {yes} ($(dmin3)+(7.7mm,5.6mm)$); \draw[-latex'] ($(x)+(-40mm,13.7mm)$)|-(q); \path [line] (dmin3)--(yjklein0); \path [line] (yjklein0)--node {no}($(x)+(27.5mm,13.5mm)$); \path [line] (yjklein0)-- node {yes} (igleich1); \path [line] (igleich1)-- node {yes} (nichthcone); \draw[-latex'] (igleich1)-|node 
[near end]{no} (imin1); \path [line] (imin1)-- ($(dmin2)+(-11mm,-9.6mm)$); \path [line] (io)-- node {yes} (xnotcone); \path [line] (io)-- node {no} (dmin2); \path [line] (dmin2)--(xjklein0); \path [line] (xjklein0)-- node {yes} (xnotcone); \draw[-latex'] (xjklein0)|- node [near start] {no}($(q)+(-27.5mm,-3mm)$); \path [line] (max)-- node {yes} (dim); \draw[-latex'] (dklein)-- node [near start] {no} ++(-110mm,0) |- ($(q)+(-27.5mm,3mm)$); \path [line] (dim) -- (dklein); \path [line] (dklein) -- node {yes} (x1); \path [line] (x1) -- (i0); \path [line] (i0) -- node {no} (x1kleinp1); \path [line] (x1kleinp1) -- node {no} (pi0); \path [line] (x1kleinp1) -- node {yes} (piminx1); \draw[-latex'] (piminx1)--($ (iklein)+(-16.2mm,-16mm)$); \draw[-latex'] (pi0)--($(iklein)+(15.2mm,-16mm)$); \draw[-latex'] (i0) -- node [near start]{yes} ($(qi)+(15.4mm,4mm)$); \end{tikzpicture}} \bw{ \tikzstyle{decision} = [shape aspect=2,diamond, draw, fill=gr!7, text width=4em, text centered, inner sep=0pt] \tikzstyle{block1} = [rectangle, draw, fill=gr!7, minimum size=9mm, text centered \tikzstyle{block2} = [rectangle, draw, fill=gr!7, text width=6em, text centered, minimum height=3em] \tikzstyle{block3} = [rectangle, draw, fill=gr!7, text width=52mm, text centered,minimum height=3em] \tikzstyle{line} = [draw, -latex'] \tikzstyle{para1} = [draw,trapezium,trapezium left angle=75,trapezium right angle=-75,fill=gr!7,minimum size=8mm \tikzstyle{para2} = [draw,trapezium,trapezium left angle=75,trapezium right angle=-75,fill=gr!7,minimum size=8mm \vspace{2em} \begin{tikzpicture}[node distance = 1cm, auto,scale=0.68, transform shape] \node [para1] (eingabe) {$ n \in \mathbb{N},\;\, d\!=\!d_0\geq 1,\; \underline{h}\!=\!h\!=\!(h_0,\dots,h_d) \in \mathbb{Q}_{\geq0}^{d+1},\; h_j\!=\!0 \:\, $and$ \;s_j\!=\!1\;$for$\; j<0, \;\, p_0=\infty,\;\, i=0$}; \node [block3, below right of=eingabe,node distance=30mm] (q) {\begin{minipage}{55mm}$m = \lfloor \frac{d}{n}\rfloor, \;r=d-m \cdot n \\q=\underset{j \in \{0,\cdots,d\}}{\min}\{\frac{h_j}{s_j}\}, d\!<\!0\!: \, q\!=\!0$\\ \end{minipage}}; \node [decision, below of=q, node distance=25mm] (diff) {$p_i\!-\!q\!>\!0$}; \node [block1, right of=diff, node distance=43mm] (qgross) {$q_{s^d}=p_i,\; p_i=0$}; \node [block3, below of=qgross, node distance=28mm] (iklein) {\begin{minipage}{54mm}$h\!=\!g^i\!-\!\sum\limits_{v \leq s^{d_i\!-\!2r_i\!-\!3}}q_{v}(\underbrace{0,\!..,0}_{r_i\!+\!1},v), \\ q_{t^{d_i}\! \ast v}\!=\!q_v \quad \text{and}\\ q_{v}\!=\!0 \quad \text{for}\; v\!\leq \!s^{d_i\!-\!2r_i\!-\!3},\\ q_{t^{d_i}}=p_i,\\ d=d_i-1,\quad i=i-1$\end{minipage}}; \node [block2, below of=diff, node distance=20mm] (neux) {$p_i=p_i-q$ $ h=h-q\cdot s^d$ $ q_{s^d}=q$}; \node [decision, below of=neux, node distance=20mm] (max) {$h_d=0$}; \node [decision, left of=max, node distance=29mm] (odd) {$r \!= \!n\!-\!1$}; \node [decision, above of=odd, node distance=16mm] (io) {$i=0$}; \node [block1, left of=io, node distance=43mm] (dmin2){\begin{minipage}{50.8mm}$h_j\!=\!\begin{cases} \! h_j\!-\!h_d & \text{if} \, j\!=\!r \,\text{mod}\, n \\ \! 
h_j & \text{else},\end{cases} \\ d=d-1 $\end{minipage}}; \node [decision,above of=dmin2, node distance=22mm](xjklein0){$\exists \!\!: h_j\!<\!0$}; \node [para2, above of=io, node distance=22mm] (xnotcone) {$\underline{h} \notin \mathbb{H}$}; \node [decision, below of=odd, node distance=16mm] (kleinpi){$h_d\leq p_i$}; \node [block1, left of=kleinpi, node distance= 25mm] (gleichpi) {$h_d=p_i$}; \node [block1, below right of=gleichpi, node distance= 25mm] (dmin3) {\begin{minipage}{30mm}$i=i+1, p_i=h_d\\g^i=h-h_d\cdot t^d$ \end{minipage}}; \node [decision, below of=dmin3, node distance= 18mm] (yjklein0) {$\exists\!\!: g_j^i\!<\!0$}; \node [decision, left of=yjklein0, node distance= 30mm] (igleich1) {$i=1$}; \node [para2, above of= igleich1, node distance=16mm] (nichthcone) {$\underline{h} \notin \mathbb{H}$}; \node [block1, left of= gleichpi, node distance= 29mm] (imin1) {$i=i-1$}; \node [block1, below left of=yjklein0, node distance= 39mm] (x) {\begin{minipage}{86.2mm}$ h_j\!= \! g_{j\!+\!r\!+\!1}^i\!-\!g_{mn\!+\!k}^i,\; \text{if}\, j\!+\!r\!+\!1 \,\text{mod}\; n \!=\! k \!\in \! \{0,\!..,r\!-\!1\}, \\ h_j\!=g_{j\!+\!r\!+\!1}^i,\hspace{11.5mm} \text{if}\, j\!+\!r\!+\!1 \,\text{mod}\; n \notin \{0,\!..,r\!-\!1\} \\ \text{and}\;\;h_j=0\;\; \text{if} \;\;j < 0 \;\;\text{or}\;\; j > d-2r-3, \\ d_i=d, \quad r_i = r,\\p_{i-1}=p_{i-1}-p_i, \quad d=d-2r-3$ \end{minipage}}; \node [block1, below of=max,node distance=48mm] (dim) {$d=d-1$}; \node [decision, below of=dim, node distance=52mm] (dklein) {$d< 1$}; \node [block1, below of=dklein, node distance=17mm] (x1) {$q_{s^0}=h_0$}; \node [decision, below of=x1, node distance=17mm] (i0) {$i=0$}; \node [decision, right of=i0, node distance=28mm] (x1kleinp1) {$h_0 \leq p_i$}; \node [block1, right of=x1kleinp1, node distance=32mm] (pi0) {$q_{s^0}=p_i, p_i=0$}; \node [block1, above of=x1kleinp1, node distance=17mm] (piminx1) {$p_i=p_i-h_0$}; \node [para1, below left of=i0, node distance=22mm] (qi) {$\qquad \qquad \qquad \qquad(q_{v}),v \in Ex(d_0)$\qquad with \qquad$\underline{h}=\sum q_{v}\cdot v$}; \draw[-latex'] ($(eingabe)+(21.6mm,-4mm)$)-- (q); \path [line] (q) -- (diff); \path [line] (diff) -- node {yes} (neux); \path [line] (diff) -- node {no} (qgross); \path [line] (qgross) -- (iklein); \draw[-latex'] ($(iklein)+(22mm,15.9mm)$)|-(q); \path [line] (neux) -- (max); \path [line] (max)-- node {no} (odd); \path [line] (odd) -- node {yes} (io); \path [line] (odd) -- node {no} (kleinpi); \path [line] (kleinpi) -- node {no} (gleichpi); \draw[-latex'] ($(gleichpi)+(5mm,-4.7mm)$) -- ($(dmin3)+(-12.7mm,5.6mm)$); \draw[-latex'] (kleinpi)-- node {yes} ($(dmin3)+(7.7mm,5.6mm)$); \draw[-latex'] ($(x)+(-40mm,13.7mm)$)|-(q); \path [line] (dmin3)--(yjklein0); \path [line] (yjklein0)--node {no}($(x)+(27.5mm,13.5mm)$); \path [line] (yjklein0)-- node {yes} (igleich1); \path [line] (igleich1)-- node {yes} (nichthcone); \draw[-latex'] (igleich1)-|node [near end]{no} (imin1); \path [line] (imin1)-- ($(dmin2)+(-11mm,-9.6mm)$); \path [line] (io)-- node {yes} (xnotcone); \path [line] (io)-- node {no} (dmin2); \path [line] (dmin2)--(xjklein0); \path [line] (xjklein0)-- node {yes} (xnotcone); \draw[-latex'] (xjklein0)|- node [near start] {no}($(q)+(-27.5mm,-3mm)$); \path [line] (max)-- node {yes} (dim); \draw[-latex'] (dklein)-- node [near start] {no} ++(-110mm,0) |- ($(q)+(-27.5mm,3mm)$); \path [line] (dim) -- (dklein); \path [line] (dklein) -- node {yes} (x1); \path [line] (x1) -- (i0); \path [line] (i0) -- node {no} (x1kleinp1); \path [line] (x1kleinp1) -- node {no} (pi0); 
\path [line] (x1kleinp1) -- node {yes} (piminx1); \draw[-latex'] (piminx1)--($ (iklein)+(-16.2mm,-16mm)$); \draw[-latex'] (pi0)--($(iklein)+(15.2mm,-16mm)$); \draw[-latex'] (i0) -- node [near start]{yes} ($(qi)+(15.4mm,4mm)$); \end{tikzpicture}} \vspace{1cm} The $v$'s here are always elements of $\text{Ex}(d)$, with $d$ large enough. \pagebreak \section*{Acknowledgements} We would like to thank the organizers of the workshop P.R.A.G.Mat.I.C, in particular Prof. Alfio Ragusa and Giuseppe Zappal\`a, for providing an excellent environment for collaboration and research in Catania, Italy, in the summer of 2011. We would also like to thank Prof. Mats Boij and Prof. Ralf Fr\"oberg, as well as Dr. Alexander Engstr\"om, for their outstanding lectures, which provided interesting problems, and for their support, suggestions and ideas.\\ \nocite{*}
\section{Introduction} Control of robotic manipulators has been a fruitful research field for the last three decades. Due to the nonlinear nature and the presence of uncertain terms in their dynamics, the position tracking control problem for robotic devices has attracted numerous researchers. The output feedback control problem, that is, when only joint level position measurements are available for the control design, is among the most popular problems in the field. The reason behind this is mainly twofold: from the theoretical perspective, designing a controller without joint level velocity measurements is more challenging, and from the practical perspective, output feedback controllers allow manufacturers to remove one expensive sensor from each joint. Motivated by these considerations, researchers have proposed several output feedback type controllers for robotic manipulators. To name a few, Nicosia and Tomei in \cite{NicoTome90} proposed a model based observer in order to compensate for the lack of velocity measurements, in \cite{YuanStep91} the authors proposed a robust controller approach, and a passivity based controller-observer formulation was proposed in \cite{BergNilm93}. Recently, a model independent observer based controller was presented in \cite{ObserverOfb17}. Repetitive learning type controller approaches, where the desired trajectory needs to be periodic, were also extended to output feedback tracking control of robotic devices \cite{MerveACC15}. Filter based approaches, where a velocity surrogate signal generated by a filter formulation is used instead of the actual velocity signal, were also applied to the output feedback control of robotic manipulators \cite{BurgFilter97}. A composite adaptive scheme, where the uncertain parameters of the robotic device were estimated by a combination of gradient based and least squares based estimators in conjunction with a filter based velocity surrogate signal approach, was presented in \cite{CompEZ99}. Most output feedback controllers are semi-global in nature, while global output feedback controllers with uncertain dynamics term adaptations were also presented in \cite{FangOfb20} and \cite{zerCDC00}. Nearly all of the previous research mentioned above was backed up via Lyapunov type arguments for the stability and convergence of the tracking error term. One shortcoming of Lyapunov based analysis is that it does not limit the system's overshoot. For robotic systems, the tracking response should exhibit nearly no overshoot. In order to reduce or minimize the overshoot, engineers need to tweak the controller and estimator gains, and when the controller design cannot limit the tracking error or the initial overshoot theoretically, control gain adjustments rely mainly on experience. A solution to this research problem relies on barrier Lyapunov functions (BLFs) \cite{jiang05}, \cite{Tee09}. In BLF based designs, bounds for each entry of the system states can be imposed \textit{a priori}. Some past research focused on applying BLF based designs to nonlinear systems of various forms \cite{jiang05}, \cite{Tee09}, \cite{Tee11}, \cite{Liu16}, \cite{Wang17}, \cite{Afflitto18}, while the application of BLF type control techniques to mechatronic systems was also studied. \cite{kabzinski17} designed a systematic BLF based motion controller for servo systems. Doulgeri et al.
used prescribed performance criteria for regulation control of robot manipulators in \cite{Doulgeri09}, \cite{Doulgeri10icra}, and for tracking control in \cite{Doulgeri12RAS}, \cite{Doulgeri13ROB}, \cite{Doulgeri16RAL}. \cite{Hackl12} designed a position controller based on prescribed performance criteria for robot manipulators where partial dynamic model knowledge was assumed to be available. \cite{Zhang18} designed a neural network based controller for robot manipulators subject to model uncertainties. Robust fixed--time control of a biped robot based on a tangent BLF was presented in \cite{Rincon19}. Some task space control problems were researched in \cite{Doulgeri10iros}, \cite{Doulgeri16}, \cite{Doulgeri12}, while position/force control was addressed in \cite{Doulgeri12AUT}, \cite{Doulgeri10}. Neural network based adaptive methods in conjunction with BLFs were designed to address different control problems for Euler Lagrange systems \cite{Zhao18} and marine vessels \cite{SSGe14}, \cite{He17}, \cite{Xia19}. A review of the relevant literature reveals that, when full state feedback is available, several research problems were studied in conjunction with BLFs for different constraints, but when the output is the only available state, only a few results have been obtained \cite{Rincon19}, \cite{Xia19}. This work aims at tracking control of robot manipulators where the control problem is restricted by the unavailability of joint velocity measurements and by parametric uncertainties in the mathematical model. Guaranteeing \textit{a priori} limits for each entry of the joint tracking error, and thus for each joint position, is targeted as a secondary control task. To overcome the lack of velocity sensing, a filter based approach is preferred. In filter based approaches, via utilizing a bank of filters, a controller can be designed without needing velocity measurements. Parametric uncertainties are dealt with via the design of a desired compensation based adaptive component. To restrict the entries of the joint position tracking error, a tracking error--dependent gain matrix multiplying the tracking error is proposed as part of the controller. The final form of the controller is composed of a filter vector acting as a pseudo velocity tracking error, in conjunction with an error--dependent gain matrix multiplying the tracking error, fused with a desired compensation based adaptive component. Via the introduction of a BLF, in addition to proving asymptotic convergence of the tracking error to the origin, its entries are restricted to remain within user defined bounds. The simulation results obtained from a two degree of freedom robot manipulator are shown to be commensurate with the analysis. \section{Robot Dynamic Model and Model Properties} The mathematical model of robot manipulators along with standard model properties is presented in this section.
The dynamic model of an $n$ degree of freedom revolute joint robot manipulator has the following form \cite{lewis} \begin{equation} M\left(q\right)\ddot{q} + V_m\left(q ,\dot{q}\right)\dot{q} + G\left(q\right) + F_{d}\dot{q} = \tau \label{model} \end{equation} in which $q\left(t\right)$, $\dot{q}\left(t\right)$, $\ddot{q}\left(t\right) \in \Re^{n}$ are the joint position, velocity and acceleration vectors, respectively, $M\left(q\right) \in \Re^{n\times n}$ is the inertia matrix, $V_m\left(q,\dot{q}\right) \in \Re^{n\times n}$ stands for the centripetal Coriolis matrix, $G\left(q\right) \in \Re^{n}$ models gravitational effects, $F_d \in \Re^{n\times n}$ denotes diagonal viscous frictional effects, and $\tau\left(t\right) \in \Re^{n} $ is the control input torque. The dynamic modeling terms in \eqref{model} satisfy the following commonly utilized properties. \begin{property}\label{P1} The inertia matrix is positive definite and symmetric and satisfies \cite{lewis} \begin{equation} m_1 I_{n} \leq M \left(q\right) \leq m_2 I_{n} \label{prop1} \end{equation} in which $m_1$ and $m_2$ are known positive bounding constants, and $I_n$ stands for the $n$--by--$n$ identity matrix. \end{property} \begin{property}\label{P2} The inertia and centripetal Coriolis matrices satisfy \cite{lewis} \begin{equation} \xi^T \left(\dot{M}-2V_m\right) \xi = 0 \quad \forall \xi \in \Re^{n}. \label{prop2} \end{equation} \end{property} \begin{property}\label{P3} The centripetal Coriolis matrix satisfies \cite{lewis} \begin{equation} V_m\left(\xi ,\nu\right)\eta = V_m\left(\xi ,\eta\right) \nu \quad \forall \xi, \nu, \eta \in \Re^{n}. \label{prop3} \end{equation} \end{property} \begin{property}\label{prop4} The following bounds can be proven to hold for the dynamic modeling terms in \eqref{model} \cite{lewis} \begin{eqnarray} \Vert M\left( \xi \right) - M\left( \nu \right) \Vert_{i\infty} & \leq & \zeta_{m1} \Vert \xi - \nu \Vert \label{prop4a} \\ \Vert V_m\left( \xi ,\nu \right) \Vert_{i\infty} & \leq & \zeta_{c1} \Vert \nu \Vert \label{prop4b} \\ \Vert V_m\left( \xi ,\nu \right) - V_m\left( \eta , \nu\right) \Vert_{i\infty} &\leq & \zeta_{c2} \Vert \xi - \eta \Vert \Vert \nu \Vert \label{prop4c} \\ \Vert G\left( \xi \right) - G\left( \nu\right) \Vert & \leq & \zeta_{g} \Vert \xi - \nu \Vert \label{prop4d} \end{eqnarray} $\forall \xi $, $\nu$, $\eta \in \Re^{n}$ with $\zeta_{m1}$, $\zeta_{c1}$, $\zeta_{c2}$, $\zeta_{g}\in \Re$ being known, positive bounding constants. \end{property} \begin{property}\label{P5} The left hand side of \eqref{model} can be reconfigured into the linearly parameterized form \begin{equation} Y\left(q,\dot{q},\ddot{q}\right)\theta = M\left( q\right)\ddot{q} + V_m\left( q,\dot{q}\right)\dot{q} + G\left( q\right) + F_{d}\dot{q} \label{prop5} \end{equation} with $Y\left(q,\dot{q},\ddot{q}\right)\in \Re^{n\times p}$ being a regression matrix and $\theta \in \Re^{p}$ containing constant model parameters that depend on the physical properties of the robot manipulator.
A desired form of the structure in \eqref{prop5} can be written as \begin{equation} Y_d \theta = M\left( q_{d}\right)\ddot{q}_{d} + V_m\left( q_{d},\dot{q}_{d}\right)\dot{q}_{d} + G\left( q_{d}\right) + F_{d}\dot{q}_{d} \label{prop5d} \end{equation} in which $Y_d\left(q_{d},\dot{q}_{d},\ddot{q}_{d}\right)\in \Re^{n\times p}$ is the desired version of the regression matrix, a function of the desired joint position, velocity and acceleration vectors, denoted respectively by $q_{d}\left( t\right)$, $\dot{q}_{d}\left( t\right)$, $\ddot{q}_{d}\left( t\right) \in \Re^{n}$. \end{property} \section{Control Objective, Error System Development and Design} In this section, the control objective along with its restrictions will be presented first, the error system will then be introduced, and the control input torque will be designed. Ensuring tracking of a sufficiently smooth\footnote{The desired joint position trajectory along with its first two time derivatives are to be designed as bounded functions of time.} desired joint position vector $q_{d}\left( t\right)$ via designing the control input torque $\tau \left( t\right)$ is the primary control objective. In addition to the tracking objective, each joint's position is required to remain within a predefined neighborhood $\Delta_{i}$ of its desired joint position. Guaranteeing stability of the closed loop system via ensuring boundedness of all the signals is also essential. The control design is restricted by the unavailability of joint velocity measurements (\textit{i.e.}, only the joint position $q\left( t\right) $ is available for use in the control design) and the mathematical model in \eqref{model} is subject to structured uncertainties in the sense that $\theta$ in Property \ref{P5} is unknown. To quantify the main control objective, the joint position tracking error, denoted by $e\left( t\right) \in \Re^{n}$, is introduced as \begin{equation} e \triangleq q_{d} - q \label{e} \end{equation} and in view of the above definition, the secondary control objective can be formulated as \begin{equation} \vert e_{i} \left(t\right) \vert < \Delta_{i} \quad \forall t>0 \text{ , } i \in \lbrace 1, \cdots, n \rbrace . \label{obj} \end{equation} Since joint velocity measurements are not available for use in the control design, a filter based approach is to be preferred. Specifically, $e_f\left(t\right) \in \Re^n$, which will be used instead of the actual velocity error, is designed as the output of the following filter \begin{equation} e_{f} = -ke + w \label{ef} \end{equation} in which $k$ is a positive gain and $w\left(t\right) \in \Re^n$ is updated according to \begin{equation} \dot{w} = -\left(k+1 \right)e_{f} - k e + K_{e}e \quad w\left(0\right) = k e\left(0\right) \label{wdot} \end{equation} where $K_{e} \left(e\right) \in \Re^{n\times n}$ is an error--dependent gain matrix designed as \begin{equation} K_{e} = \text{diag} \left\{ \frac{K_{i}}{\Delta_{i}^{2} - e_{i}^{2}} \right\} \label{Keln} \end{equation} with $K_{i}$, $i \in \lbrace 1, \cdots ,n \rbrace$, being constant gains. To obtain the dynamics of $e_f\left(t\right)$, the time derivative of \eqref{ef} is taken and \eqref{wdot} is substituted into it to deduce \begin{equation} \dot{e}_{f} = - e_{f} - k\eta + K_{e}e \label{efdot} \end{equation} with $\eta\left(t\right) \in \Re^n$ being another filter term defined as \begin{equation} \eta \triangleq \dot{e} + e + e_{f} .
\label{eta} \end{equation} Since its definition contains the time derivative of the tracking error, $\eta\left(t\right)$ is not an available quantity. The dynamics of the position tracking error is obtained after rearranging \eqref{eta} as \begin{equation} \dot{e} = - e - e_{f} + \eta . \label{edot} \end{equation} To obtain the dynamics of $\eta\left(t\right)$, the time derivative of \eqref{eta} is pre--multiplied with $M\left(q\right)$ to reach \begin{eqnarray} M\left(q\right)\dot{\eta} &=& M\left(q\right) \left(\ddot{q}_{d} + \dot{e} + \dot{e}_{f} \right) \nonumber \\ &+& V_m\left(q ,\dot{q}\right) \dot{q} + G\left(q\right) + F_{d}\dot{q} - \tau \nonumber \end{eqnarray} where \eqref{model} was substituted. Adding and subtracting the desired dynamics in Property \ref{P5} to the right hand side of the above expression and making use of \eqref{efdot} and \eqref{edot} yields \begin{equation} M\left(q\right) \dot{\eta} = - V_m\left(q,\dot{q}\right)\eta - k M\left(q\right)\eta + Y_d \theta + \chi - \tau \label{Metadot} \end{equation} in which $\chi\left(e,e_f,\eta,t\right) \in \Re^n$ has the following structure \begin{eqnarray} \chi & \triangleq& M\left(q\right) \left( \ddot{q}_d + \eta - 2 e_f + K_e e - e \right) \nonumber \\ &+& V_m\left(q,\dot{q}\right)\left(\dot{q}_d + e_f + e\right) \nonumber \\ &+& G\left(q\right) + F_d\dot{q} - Y_d\theta . \label{chi} \end{eqnarray} Making use of the boundedness properties of the dynamic modeling terms, an upper bound can be deduced for $\chi$ as \begin{equation} \Vert \chi \Vert \leq \zeta_1 \Vert x \Vert + \zeta_2 \Vert x \Vert^2 \label{chiBound} \end{equation} in which $\zeta_1$ and $\zeta_2$ are known positive bounding constants and $x\left(t\right) \in \Re^{3n}$ is the combined error vector of the form \begin{equation} x \triangleq \left[ \eta^T \quad e_f^T \quad e^T \right]^T . \label{xdef} \end{equation} The control input torque is designed as \begin{equation} \tau = Y_{d}\hat{\theta} + K_e e - k e_f \label{tau} \end{equation} where $\hat{\theta}\left(t\right) \in \Re^{p}$ is the adaptive estimate of the uncertain parameter vector $\theta$ that was introduced in Property \ref{P5}, and its update law is designed as \begin{eqnarray} \hat{\theta} &=& \Gamma \int_{0}^{t} Y_d^T \left(\sigma\right) \left(e_f\left(\sigma\right) + e\left(\sigma\right)\right) d\sigma \nonumber \\ &+& \Gamma Y_d^T e - \Gamma \int_{0}^{t} \frac{d\lbrace Y_d^T\left(\sigma\right) \rbrace}{d\sigma} e\left(\sigma\right) d\sigma \label{hattheta} \end{eqnarray} in which $\Gamma \in \Re^{p\times p}$ is a constant, diagonal, positive definite adaptation gain matrix. In view of \eqref{eta}, the time derivative of \eqref{hattheta} yields \begin{equation} \dot{\hat{\theta}} = \Gamma Y_d^T \eta . \label{uRule} \end{equation} Substituting the control input torque in \eqref{tau} into \eqref{Metadot} gives \begin{eqnarray} M\left(q\right)\dot{\eta} &=& - V_m\left(q,\dot{q}\right)\eta - k M\left(q\right)\eta \nonumber \\ &-& K_e e + k e_f + Y_d\tilde{\theta} + \chi \label{MetadotCL} \end{eqnarray} with $\tilde{\theta}\left(t\right) \in \Re^{p}$ representing the parameter estimation error defined as \begin{equation} \tilde{\theta} \triangleq \theta - \hat{\theta} . \label{tildetheta} \end{equation} \section{Analysis} The stability analysis of the filter based output feedback control strategy is presented in this section. The following theorem frames the stability analysis.
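Before stating it, it is worth emphasizing that the control law \eqref{tau}, the filter \eqref{ef}, \eqref{wdot}, the gain matrix \eqref{Keln} and the update law \eqref{hattheta} are implementable from joint position measurements alone. The following Python fragment is a minimal, forward Euler discretized sketch of one control cycle; it is illustrative only, the sampling based integration is our assumption, and the desired regression matrix \texttt{Yd} together with its time derivative \texttt{Yd\_dot} (both computable offline from the desired trajectory) are placeholders to be supplied for a specific manipulator.
\begin{verbatim}
import numpy as np

def control_step(e, w, acc, Yd, Yd_dot, k, K, Delta, Gamma, dt):
    # error-dependent barrier gain matrix K_e,
    # with diagonal entries K_i / (Delta_i^2 - e_i^2)
    Ke = np.diag(K / (Delta**2 - e**2))
    # velocity-free filter output e_f = -k e + w
    ef = -k * e + w
    # accumulate the integral part of the update law:
    # acc ~ int_0^t [ Yd^T (e_f + e) - d(Yd^T)/dt e ] dsigma
    acc = acc + dt * (Yd.T @ (ef + e) - Yd_dot.T @ e)
    th_hat = Gamma @ (acc + Yd.T @ e)    # adaptive estimate
    tau = Yd @ th_hat + Ke @ e - k * ef  # control input torque
    # propagate the filter state, initialized as w(0) = k e(0)
    w = w + dt * (-(k + 1.0) * ef - k * e + Ke @ e)
    return tau, w, acc
\end{verbatim}
Note that only the position error $e$ and desired trajectory quantities enter the computation, in line with the output feedback restriction.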
\section{Analysis} The stability analysis for the filter--based output feedback control strategy will be investigated in this section. The following theorem frames the stability analysis. \begin{theorem} \label{Thm} The control input torque in \eqref{tau} in conjunction with the parameter update law in \eqref{hattheta} and the filter design of \eqref{ef} and \eqref{wdot} ensures boundedness of all the signals in the closed--loop system and semi--global asymptotic tracking provided that $\min\lbrace K_i \rbrace \geq \max \lbrace \Delta_i^2 \rbrace$ is satisfied and the control gain $k$ is designed as \begin{equation} k = \frac{1}{m_1} \left(1 + \zeta_1^2 k_n + \zeta_2^2 k_n\right) \label{k} \end{equation} with $k_n$ being a nonlinear damping gain designed to satisfy \begin{equation} k_n > 1 + \frac{\lambda_2}{\lambda_1} \Vert z\left(0\right) \Vert^2 \label{kn} \end{equation} in which $z\left(t\right) \triangleq \left[x^T \quad \tilde{\theta}^T \right]^T \in \Re^{3n+p}$, and $\lambda_1$ and $\lambda_2$ are positive constants defined as\footnote{Below, the notations $\lambda_{\min}\lbrace \cdot \rbrace$ and $\lambda_{\max}\lbrace \cdot \rbrace$ denote, respectively, the minimum and maximum eigenvalue of a matrix.} \begin{eqnarray*} &&\lambda_1 \triangleq \frac{1}{2} \min \lbrace m_1, 1, \frac{\min \lbrace K_i \rbrace}{ \max \lbrace \Delta_{i}^{2}\rbrace }, \lambda_{\min} \lbrace \Gamma^{-1} \rbrace \rbrace , \\ &&\lambda_2 \triangleq \frac{1}{2} \max \lbrace m_2, 1, \max \lbrace K_e \rbrace, \lambda_{\max} \lbrace \Gamma^{-1} \rbrace \rbrace . \end{eqnarray*} \end{theorem} The proof is initiated via the definition of a barrier Lyapunov function denoted by $V\left(\eta,e_f,e,\tilde{\theta}\right)$ as \begin{eqnarray} V &\triangleq & \frac{1}{2}\eta^{T}M\left( q\right)\eta + \frac{1}{2}e_f^{T}e_f \nonumber \\ &+& \sum_{i=1}^{n}\frac{K_{i}}{2}\ln\left( \frac{\Delta_{i}^{2}}{\Delta_{i}^{2}-e_{i}^{2}}\right) + \frac{1}{2}\tilde{\theta}^{T}\Gamma^{-1}\tilde{\theta} \label{Vl} \end{eqnarray} which can be bounded via \begin{equation} \lambda_1 \Vert x \Vert^2 \leq \lambda_1 \Vert z \Vert^2 \leq V \leq \lambda_2 \Vert z \Vert^2 . \label{VlBound} \end{equation} Provided that the initial error values of all joints satisfy $\vert e_{i}\left( 0\right) \vert < \Delta_{i}$, $V\left(\eta,e_f,e,\tilde{\theta}\right)$ is positive definite and radially unbounded, and thus qualifies as a barrier Lyapunov function. Taking the time derivative of $V$ gives \begin{eqnarray} \dot{V} &=& \frac{1}{2}\eta^{T}\dot{M}\left(q\right)\eta + \eta^T M\left(q\right)\dot{\eta} + e_f^{T} \dot{e}_f \nonumber \\ &+& \sum_{i=1}^{n} \frac{e_i K_{i} \dot{e}_i}{\Delta_{i}^2 - e_i^2} + \tilde{\theta}^{T} \Gamma^{-1}\dot{\tilde{\theta}} \label{dotVlBound1} \end{eqnarray} and after noting the structure of \eqref{Keln}, the summation can be reformulated as \begin{equation} \sum_{i=1}^{n} \frac{e_i K_{i} \dot{e}_i}{\Delta_{i}^2 - e_i^2} = e^T K_e \dot{e} . \label{Keln2} \end{equation} Substituting \eqref{efdot}, \eqref{edot} (in view of \eqref{Keln2}), \eqref{uRule} (in view of the time derivative of \eqref{tildetheta}) and \eqref{MetadotCL} into $\dot{V}$ and making use of the skew--symmetry of the robot dynamics in Property \ref{P2} gives \begin{eqnarray} \dot{V} &=& \eta^T \left[ - k M\left(q\right)\eta - K_e e + k e_f + Y_d\tilde{\theta} + \chi \right] \nonumber \\ &+& e_f^{T} \left(- e_{f} - k\eta + K_{e}e\right) \nonumber \\ &+& e^T K_e \left( - e - e_{f} + \eta\right) - \tilde{\theta}^{T} Y_d^T\eta\label{dotVlBound2} \end{eqnarray} which, after canceling common terms, reduces to \begin{equation} \dot{V} = - e_f^{T} e_{f} - e^T K_e e - k \eta^T M\left(q\right)\eta + \eta^T\chi .
\label{dotVlBound3} \end{equation} In view of Property \ref{P1}, \begin{equation} - k \eta^T M\left(q\right)\eta \leq - k m_1 \Vert \eta \Vert^2 \label{Bound1} \end{equation} and from the structure of \eqref{Keln} and the condition $\min\lbrace K_i \rbrace \geq \max \lbrace \Delta_i^2 \rbrace$, \begin{equation} - e^T K_e e \leq -\frac{\min\lbrace K_i \rbrace}{\max\lbrace \Delta_{i}^2 \rbrace} \Vert e \Vert^2 \leq - \Vert e \Vert^2 \label{Bound2} \end{equation} are satisfied. After substituting \eqref{k} and the bounds of \eqref{chiBound}, \eqref{Bound1} and \eqref{Bound2}, the right--hand side of \eqref{dotVlBound3} becomes \begin{eqnarray} \dot{V} &\leq &- \Vert x \Vert^2 + \left[\zeta_1 \Vert \eta \Vert \Vert x \Vert - \zeta_1^2 k_n \Vert \eta \Vert^{2}\right] \nonumber \\ & + & \left[\zeta_2 \Vert \eta \Vert \Vert x \Vert^2 - \zeta_2^2 k_n \Vert \eta \Vert^{2}\right] \label{dotVlBound4} \end{eqnarray} to which applying the nonlinear damping argument in \cite{Kokotovic92} yields \begin{equation} \dot{V} \leq - \left[1 - \frac{1}{4k_n} \left(1+\Vert x \Vert^2\right)\right] \Vert x \Vert^2 . \label{dotVlBound5} \end{equation} The sign of the upper bound on $\dot{V}$ is determined by the bracketed term of the preceding inequality, and when it is positive, negative semi--definiteness of $\dot{V}$ is ensured. Mathematically, \begin{equation} 1 - \frac{1}{4k_n} \left(1+\Vert x \Vert^2\right) > 0 \label{dotVlBound5a} \end{equation} is required for $\dot{V}$ to be negative semi--definite. In view of \eqref{VlBound}, a more conservative bound can be derived as \begin{equation} 1-\frac{1}{4k_n}\left(1+\frac{V}{\lambda_1}\right) > 0 \label{dotVlBound5b} \end{equation} which yields \begin{equation} \dot{V} \leq - \beta \Vert x \Vert^2 \text{ for } k_n > \frac{1}{4}\left(1+\frac{V}{\lambda_1}\right) \label{dotVlBound6} \end{equation} where $0<\beta<1$. From the structures of \eqref{Vl} and \eqref{dotVlBound6}, it is clear that $V$ is non--increasing, which allows us to have \begin{equation} \dot{V} \leq - \beta \Vert x \Vert^2 \text{ for } k_n > \frac{1}{4}\left(1+\frac{V\left(0\right)}{\lambda_1}\right) \label{dotVlBound7} \end{equation} and utilizing \eqref{VlBound} gives \begin{equation} \dot{V} \leq -\beta \Vert x \Vert^2 \text{ for } k_n > \frac{1}{4}\left(1+\frac{\lambda_2}{\lambda_1}\Vert z\left(0\right) \Vert^2\right) .\label{dotVlBound8} \end{equation} The direct implication of \eqref{VlBound} and \eqref{dotVlBound8} is that $z\left(t\right)$ is bounded and thus $e\left(t\right)$, $e_f\left(t\right)$, $\eta\left(t\right)$, $\tilde{\theta}\left(t\right) \in \mathcal{L}_{\infty}$. The boundedness of $e\left(t\right)$ along with the desired joint trajectory vector being bounded implies that $q\left(t\right) \in \mathcal{L}_{\infty}$. After using the above boundedness statements with \eqref{efdot} and \eqref{edot}, $\dot{e}_f\left(t\right)$, $\dot{e}\left(t\right) \in \mathcal{L}_{\infty}$ can be proven; hence, $\dot{q}\left(t\right)$ is bounded since $\dot{q}_d\left(t\right)$ is bounded. Since $\tilde{\theta}\left(t\right)$ is bounded and $\theta$ is constant, $\hat{\theta}\left(t\right) \in \mathcal{L}_\infty$. From \eqref{ef} and \eqref{wdot}, it is clear that $w\left(t\right)$ and $\dot{w}\left(t\right)$ are bounded. In view of the above boundedness statements, it follows from the design in \eqref{tau} that $\tau\left(t\right)$ is bounded. From \eqref{MetadotCL}, it can be proven that $\dot{\eta}\left(t\right)\in \mathcal{L}_\infty$.
All the remaining signals can be guaranteed to be bounded via the above boundedness results. After integrating \eqref{dotVlBound8} in time from $0$ to $+\infty$, $x\left(t\right) \in \mathcal{L}_2$ is obtained. Since $x\left(t\right) \in \mathcal{L}_{2} \cap \mathcal{L}_\infty$ and $\dot{x}\left(t\right) \in \mathcal{L}_\infty$, in view of Barbalat's Lemma \cite{khalil}, $x \to 0$ as $t \to +\infty$ is proven and thus asymptotic tracking is ensured. We would like to note that, due to the gain condition \eqref{kn}, one of the controller gains relies on the initial conditions of the system (semi--global stability), so one might conclude that, in order to extend the stability region, the value of $k_n$ would have to be selected high. However, since the initial value of the error signal is required to start inside a predefined region, the maximum value of $k_n$ can be calculated a priori and does not necessarily have to be high. \begin{remark} It is highlighted that, instead of the $K_e$ design in \eqref{Keln}, the tangent function based design below could have also been utilized \begin{equation} K_{e} = \text{diag} \left\{ 1+\tan^{2}\left( \frac{\pi}{2} \frac{e_{i}^{2}}{\Delta_{i}^{2}}\right) \right\} \label{Ketr} \end{equation} which would have resulted in a similar stability result after changing the third term in \eqref{Vl} to \begin{equation} \sum_{i=1}^{n}\frac{\Delta_i^2}{\pi} \tan\left( \frac{\pi}{2} \frac{e_{i}^{2}}{\Delta_i^2}\right) . \label{Vt} \end{equation} \end{remark} \section{Simulation}\label{expResults} In order to illustrate the feasibility and performance of the proposed position constrained adaptive output feedback controller, and to see the effect of the $K_e$ term defined in \eqref{Keln} on the control input torque, a simulation with a position constraint of 7 degrees was performed. In the simulation, the following desired joint trajectories are applied \begin{equation} q_d \left(t\right) = \left[ \begin{array}{c} 0.7 \sin\left(t\right)\left(1-e^{-0.3t^3}\right) \\ 1.2\sin\left(t\right)\left(1-e^{-0.3t^3}\right) \end{array} \right] \left[\text{rad} \right] \label{expdesiredtraj} \end{equation} where the exponential term is applied to give a smooth start to the system. The links of the robot arm were started with $2.9$ degrees of initial position error, the initial parameter estimates were set to zero, and the torque outputs were saturated at $\pm 10$ newton--meters. The selected control and adaptation gains were \[ k_{e}=\text{diag}\lbrace 2,\ 2\rbrace ,\qquad k=\text{diag}\lbrace 80,\ 20\rbrace ,\qquad \Gamma =\text{diag}\lbrace 50,\ 0.5,\ 1,\ 80,\ 2.5\rbrace . \] The results of the simulation are presented in Figure 1, where the top two sub--figures show the tracking error performances, the middle sub--figure shows the parameter estimates, and the bottom two sub--figures show the corresponding control input torques. As can be seen from Figure 1, after around 30 seconds the parameter estimates converge to some values and the tracking errors for links 1 and 2 converge to values below $\pm 0.05$ degrees. We can conclude that the tracking performance of the proposed controller is quite satisfactory.
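For completeness, the closed--loop simulation can be organized as in the following Python sketch, which reuses the \texttt{controller\_step} routine given after the control design. The plant model \texttt{forward\_dynamics} and the regression matrix \texttt{Yd} below are only stand-ins to be replaced by a two--link arm model of the form \eqref{model}, and a scalar $k$ is used in line with the theoretical design, so the gain values are merely indicative.
\begin{verbatim}
import numpy as np

def qd(t):  # desired trajectories (expdesiredtraj) with a smooth start
    s = np.sin(t) * (1.0 - np.exp(-0.3 * t**3))
    return np.array([0.7 * s, 1.2 * s])

def Yd(t):                           # stand-in regression matrix
    return np.zeros((2, 5))          # replace with the model-based Yd

def forward_dynamics(q, qdot, tau):  # stand-in plant: unit inertia
    return tau                       # replace with the two-link model

dt, T_end = 1e-3, 60.0
k = 20.0                                    # scalar control gain
K = np.array([2.0, 2.0])                    # barrier gains K_i in (Keln)
Delta = np.deg2rad(np.array([7.0, 7.0]))    # 7 degree position constraint
Gamma = np.diag([50.0, 0.5, 1.0, 80.0, 2.5])

q = qd(0.0) - np.deg2rad(2.9) * np.ones(2)  # 2.9 deg initial error
qdot = np.zeros(2)
state = {'w': k * (qd(0.0) - q), 'I1': np.zeros(5), 'I2': np.zeros(5)}

for step in range(int(T_end / dt)):
    t = step * dt
    tau = controller_step(q, t, state, dt, k, K, Delta, Gamma, qd, Yd)
    tau = np.clip(tau, -10.0, 10.0)         # +/- 10 Nm saturation
    qddot = forward_dynamics(q, qdot, tau)  # plant acceleration
    qdot += dt * qddot                      # explicit Euler integration
    q += dt * qdot
\end{verbatim}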
\begin{figure*} \setlength{\fboxsep}{0pt} \setlength{\fboxrule}{0pt} \begin{center} \includegraphics[width=0.53\linewidth]{errors_7deg.eps}\\ \includegraphics[width=0.53\linewidth]{param7deg.eps}\\ \includegraphics[width=0.53\linewidth]{tau7deg.eps} \end{center} \caption{Simulation results of the output feedback controller with the 7 degree constraint} \end{figure*} \section{Conclusions} In this work, the design of a joint position tracking error constrained output feedback controller for robot manipulators with uncertain dynamical parameters has been presented. The proposed controller ensures that the joint tracking error of each link stays inside a predefined bound and eventually converges to zero, despite the lack of velocity measurements and the presence of dynamical uncertainties. A simulation is also presented to illustrate the effectiveness and performance of the proposed output feedback controller. Future work will concentrate on extending the results to output feedback learning type controllers and to task space position constrained control of robotic manipulators.
\section{Introduction} Intersection types were originally developed as an extension of simple types, but they can also be used for refining simple types. In this survey we concentrate on the latter option; more precisely, on the use of intersection types for describing quantitative properties of simply typed lambda-terms. We consider lambda-terms as generators of trees. To this end, we assume a unique ground sort\footnote{% Following the convention in this area, we use the word ``sort'' for simple types, and the word ``type'' for intersection types refining them.} $\mathsf{o}$ describing trees, and we assume some uninterpreted constants, which are functions of order at most $1$. Then, a beta-normal form of a closed lambda-term of the ground sort does not contain any lambda-binders---it is just an applicative term composed of the uninterpreted constants, and thus can be seen as a tree. In other words, as the effect of calling a function $a$ with some trees as arguments, we obtain a new tree with a root labeled by $a$, and with the arguments attached as the children of the root. Suppose now that we have a closed lambda-term $M$ of the ground sort, and we want to estimate some quantities concerning its beta-normal form $T$. As a first example of such a quantity we can take the number of appearances of some fixed constant $a$ in $T$. How can we read this number from the original lambda-term $M$? As a first approach, we can look at the number of appearances of the constant $a$ in $M$. This can be completely misleading, though, for two reasons. First, we can have in $M$ some appearances of the constant $a$ that will be removed during beta-reductions. Second, maybe the constant $a$ appears in $M$ only once, but it will be replicated many times during beta-reductions. In order to take into account these two phenomena we design an appropriate type system; a type derivation for the lambda-term $M$ identifies the places in $M$ that are really responsible for producing some constants $a$ in the beta-normal form $T$, so that these places can be counted. The type system realizing this goal is presented in Section~\ref{sec:deterministic}. Another quantity of the tree $T$ is the largest number of appearances of some fixed constant $a$ on a single branch of $T$. While the quantity of the first kind can be called deterministic, this one is slightly more complicated, and can be called nondeterministic. The justification of such a name is that while looking locally at some fragment of $T$ we do not know whether the constants $a$ appearing in this fragment should be counted or not (i.e., whether they are located on the branch of $T$ containing the largest number of constants $a$). We thus have to non-locally (nondeterministically) choose some branch of $T$ on which the constants $a$ should be counted. A type system that allows us to estimate the above quantity is presented in Section~\ref{sec:nondeterministic}. The following quantity is even more involved: what is the largest number $n$ such that the binary tree with all nodes labeled by $a$ and all branches of length $n$ embeds homeomorphically in the considered tree $T$? In a sense, this quantity combines three elements: taking the maximum, taking the minimum, and counting. Indeed, we take here the maximum, over all embeddings of trees with all nodes labeled by $a$, of the minimum of the lengths of paths in the embedded tree (and internally, we count the number of constants $a$ on the chosen path).
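To make the first two quantities concrete, consider a small illustration (in Python, with trees encoded as nested tuples; the encoding is ours and serves only as an example):
\begin{verbatim}
# Trees are nested tuples: ("a", t), ("b", t1, t2), or ("e",).
def count_a(t):
    # first quantity: the number of a-labeled nodes in the whole tree
    return (t[0] == "a") + sum(count_a(c) for c in t[1:])

def max_a_on_branch(t):
    # second quantity: the maximal number of a's on a single branch
    return (t[0] == "a") + max((max_a_on_branch(c) for c in t[1:]),
                               default=0)

example = ("b", ("a", ("e",)), ("a", ("a", ("e",))))
print(count_a(example), max_a_on_branch(example))   # prints: 3 2
\end{verbatim}
Both functions traverse the tree $T$ itself; the difficulty addressed in this survey is to estimate such numbers directly on a lambda-term $M$ generating $T$, without constructing $T$.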
Unfortunately, the presented methods do not allow us to estimate this third quantity; it is an open problem to construct a type system concerning it. One may wonder why we want to have the aforementioned type systems, instead of just expanding the lambda-term $M$ into its beta-normal form $T$, and computing the quantity there. The answer is: compositionality. Suppose that $M$ is an application $K\,L$. If we know types derivable for $K$ and for $L$, we can determine types derivable for $K\,L$. Moreover, knowing the quantities assigned to type derivations for $K$ and for $L$ we can determine the quantity assigned to type derivations for $K\,L$. We thus have a composable abstraction of every lambda-term: a set of its types, and a tuple of numbers (with a bound on the size of this set and on the length of this tuple that depends only on the sort of the lambda-term, not on its size). Existence of such an abstraction has some interesting implications. In particular, the research presented here is motivated by applications in the area of higher-order recursion schemes. Recursion schemes, or equivalently terms of the $\lambda Y$-calculus, form an extension of the simply typed lambda-calculus by a fixed-point operator $Y$~\cite{Damm82,KNU-hopda,Ong-hoschemes,kobayashiOng2009type}. Trees generated by recursion schemes can be used to faithfully represent the control flow of programs in languages with higher-order functions~\cite{Kobayashi-types}. We remark that the same class of trees can be generated by collapsible pushdown systems~\cite{collapsible} and ordered tree-pushdown systems~\cite{DBLP:conf/fsttcs/ClementePSW15}. Intersection type systems were intensively used in the context of recursion schemes, for several purposes like model-checking~\cite{Kobayashi-types,kobayashiOng2009type,DBLP:conf/csl/BroadbentK13,DBLP:conf/popl/RamsayNO14}, pumping~\cite{koba-pumping,koba-pumping-new}, transformations of HORSes~\cite{context-sensitive-2,word2tree,downward-closure}, etc. Interestingly, constructions very similar to intersection types were used also on the side of collapsible pushdown systems, namely alternating stack automata~\cite{saturation} and types of stacks~\cite{ho-new,Kar-Par-pumping}. The type systems are also closely connected to linear logic~\cite{linear-logic-1,linear-logic-2}. The type system of Section~\ref{sec:deterministic} is based on the type system from Parys~\cite{numbers-journal}. A similar type system was used to prove that some trees generated by recursion schemes cannot be generated by so-called safe recursion schemes~\cite{ho-new}. The type system of Section~\ref{sec:nondeterministic} comes from Parys~\cite{itrs,diagonal-types}. It implies decidability of the model-checking problem for trees generated by recursion schemes against formulae of the WMSO+U logic~\cite{wmsou-schemes}. It also allows us to solve the simultaneous unboundedness problem (a.k.a.~the diagonal problem) for recursion schemes, which was first solved in a different way~\cite{downward-closure}. \section{Preliminaries} The set of \emph{sorts} is constructed from a unique basic sort $\mathsf{o}$ using a binary operation $\mathbin{\to}$. Thus $\mathsf{o}$ is a sort and if $\alpha,\beta$ are sorts, so is $(\alpha\mathbin{\to}\beta)$.
The \emph{order} of a sort is defined by: $\mathit{ord}(\mathsf{o})=0$, and $\mathit{ord}(\alpha\mathbin{\to}\beta)=\max(1+\mathit{ord}(\alpha),\mathit{ord}(\beta))$; in other words, $\mathit{ord}(\alpha_1\mathbin{\to}\dots\mathbin{\to}\alpha_k\mathbin{\to}\mathsf{o})=1+\max_{i\in\{1,\dots,k\}}\mathit{ord}(\alpha_i)$ whenever $k\geq 1$. A \emph{signature} is a set of constants, that is, symbols with associated sorts. For simplicity, in this paper we use a signature consisting of three constants: $a$ of sort $\mathsf{o}\mathbin{\to}\mathsf{o}$, $\mathit{b}$ of sort $\mathsf{o}\mathbin{\to}\mathsf{o}\mathbin{\to}\mathsf{o}$, and $e$ of sort $\mathsf{o}$ (it is easy to generalize the methods to an arbitrary signature, assuming that sorts of constants are of order at most $1$). The set of \emph{(simply typed) lambda-terms} is defined by induction as follows: \begin{compactitem} \item constants (node constructors)---a constant of sort $\alpha$ is a lambda-term of sort $\alpha$; \item variables---for each sort $\alpha$ there is a countable set of variables $x^\alpha,y^\alpha,\dots$ that are also lambda-terms of sort $\alpha$; \item lambda-binders---if $K$ is a lambda-term of sort $\beta$ and $x^\alpha$ a variable of sort $\alpha$ then $\lambda x^\alpha.K$ is a lambda-term of sort $\alpha\mathbin{\to}\beta$; \item applications---if $K$ is a lambda-term of sort $\alpha\mathbin{\to}\beta$ and $L$ is a lambda-term of sort $\alpha$ then $K\,L$ is a lambda-term of sort $\beta$. \end{compactitem} As usual, we identify lambda-terms up to alpha-conversion (renaming of bound variables). We often omit the sort annotation of variables, but please keep in mind that every variable is implicitly sorted. A lambda-term is called \emph{closed} when it does not have free variables. The \emph{order} of a lambda-term $M$, denoted $\mathit{ord}(M)$, is defined as the order of the sort of $M$, while the \emph{complexity} of $M$ is defined as the maximum of the orders of subterms of $M$. A sort $\alpha_1\mathbin{\to}\dots\mathbin{\to}\alpha_k\mathbin{\to}\mathsf{o}$ is \emph{homogeneous} if $\mathit{ord}(\alpha_1)\geq\dots\geq\mathit{ord}(\alpha_k)$ and all $\alpha_1,\dots,\alpha_k$ are homogeneous (defined by induction). A lambda-term is homogeneous if all its subterms have homogeneous sorts. In order to avoid some technicalities, in this paper we only consider homogeneous lambda-terms. This is without loss of generality, since there is a simple syntactic transformation converting every closed lambda-term of sort $\mathsf{o}$ into a homogeneous lambda-term having the same beta-normal form~\cite{homogeneity}. We use the usual notion of beta-reduction: we have $M\to_\beta N$ if $N$ can be obtained from $M$ by replacing some of its subterms of the form $(\lambda x.K)\,L$ by $K[L/x]$. We recall that the simply typed lambda-calculus has the properties of strong normalization and confluence, that is, every sequence of beta-reductions from a lambda-term $M$ eventually terminates in a unique lambda-term $N$ such that no more beta-reductions can be performed from $N$; the lambda-term $N$ is called the \emph{beta-normal form} of $M$. Observe that the beta-normal form of a closed lambda-term of sort $\mathsf{o}$ is an applicative term built of constants (it does not contain variables nor lambda-binders), and thus can be seen as a tree (generated by the lambda-term). In this paper we are interested in two particular reduction strategies (i.e., strategies of choosing a redex that should be reduced next).
In the \emph{OI strategy}, we always reduce an \emph{outermost redex}, that is, a redex that is not located inside another redex. Notice that if $M$ is closed and of sort $\mathsf{o}$, then every outermost redex in $M$ is also closed. A redex $(\lambda x.K)\,L$ is a \emph{redex of order $m$} if $\mathit{ord}(\lambda x.K)=m$. Assuming that the lambda-term is homogeneous, we have $\mathit{ord}(\lambda x.K)=m$ if and only if $\mathit{ord}(x)=m-1$. In the \emph{RMF strategy} we always reduce a \emph{rightmost redex of the maximal order}, that is, a redex $(\lambda x.K)\,L$ of some order $m$ such that in the lambda-term there is no redex of a higher order, and in $L$ there are no redexes of order $m$,\label{pag:rmf} and the redex is not located inside $K'$ for some order-$m$ redex $(\lambda x'.K')\,L'$. In other words, whenever we see an order-$m$ redex $(\lambda x.K)\,L$, we first reduce all order-$m$ redexes in $L$, then the redex itself, and then we continue reducing the resulting lambda-term. We also write RMF$(m)$ to make it explicit that the order of the considered redex is $m$. When a closed lambda-term $M$ of sort $\mathsf{o}$ has complexity $m$ (and is not in the beta-normal form), then an RMF$(m)$ reduction always exists; thus following the RMF strategy we first reduce all redexes of order $m$ (until reaching a term of complexity $m-1$), then all redexes of order $m-1$, and so on. Moreover, for an RMF$(m)$ redex $(\lambda x.K)\,L$ in such a lambda-term, all variables appearing in $L$ are of order at most $m-2$. Suppose that we have two functions $f,g\colon X\to\mathbb{N}$ over some domain $X$. We want to define when $f$ estimates $g$. To this end, we say that $f$ is \emph{dominated} by $g$, written $f\preceq g$, if there exists a function $\eta\colon\mathbb{N}\to\mathbb{N}$ such that $f(x)\leq\eta(g(x))$ for all $x\in X$, and we say that $f$ \emph{estimates} $g$, written $f\approx g$, if $f\preceq g$ and $g\preceq f$. It is easy to see that $f$ estimates $g$ if and only if on every subset $Y$ of the domain $X$, the functions $f$ and $g$ are either both bounded or both unbounded. The above relation between functions is widely used in the area of regular cost functions (see, e.g., Colcombet~\cite{regular-cost-functions}). One may also consider infinite lambda-terms. Clearly they do not reduce to a normal form in a finite number of steps, but we can consider the (unique) normal form reached in the limit, called the \emph{B\"ohm tree}. As in the finite case, the B\"ohm tree of a closed lambda-term of sort $\mathsf{o}$ is a (potentially infinite) tree built out of constants. A \emph{recursion scheme} is a finite description of a regular (i.e., having finitely many different subterms) infinite lambda-term. \section{Deterministic Quantities}\label{sec:deterministic} In this section we present a type system that allows us to estimate the number of appearances of the constant $a$ in the beta-normal form of a lambda-term. The type system should be such that a type derivation for a closed lambda-term $M$ of sort $\mathsf{o}$ identifies the places in $M$ that are responsible for producing some $a$-labeled nodes in the beta-normal form $T$ of $M$. To this end, we extend the notion of sorts by a \emph{productivity flag}, which can be ${\mathsf{pr}}$ (standing for productive) and ${\mathsf{np}}$ (standing for nonproductive).
It may happen that a single lambda-term $K$ has multiple types; for example, $\lambda y.y\,(a\,e)$ is productive when the function (substituted for) $y$ uses its argument, and nonproductive otherwise. Because of that, we need intersection types (i.e., the ability to assign multiple types to the same lambda-term). In effect, our types differ from sorts in that on the left side of $\mathbin{\to}$, instead of a single type, we have a set of pairs $(f,\tau)$, where $\tau$ is a type, and $f$ is a flag from $\{{\mathsf{pr}},{\mathsf{np}}\}$. The unique atomic type is denoted $\r$. More precisely, for each sort $\alpha$ we define the set $\mathcal{T}^\alpha$ of types of sort $\alpha$ as follows: \begin{align*} \mathcal{T}^\mathsf{o}=\{\r\},\qquad \mathcal{T}^{\alpha\mathbin{\to}\beta}=\mathcal{P}(\{{\mathsf{pr}},{\mathsf{np}}\}\times\mathcal{T}^\alpha)\times\mathcal{T}^\beta, \end{align*} where $\mathcal{P}$ denotes the powerset. A type $(T,\tau)\in\mathcal{T}^{\alpha\mathbin{\to}\beta}$ is denoted as $\bigwedge T\mathbin{\to}\tau$, or $\bigwedge_{i\in I}(f_i,\tau_i)\mathbin{\to}\tau$ when $T=\{(f_i,\tau_i)\mid i\in I\}$. The empty intersection is denoted by $\top$. To a lambda-term of sort $\alpha$ we assign not only a type $\tau\in\mathcal{T}^\alpha$, but also a flag $f\in\{{\mathsf{pr}},{\mathsf{np}}\}$ (which together form a pair $(f,\tau)$). Intuitively, a lambda-term has type $\bigwedge T\mathbin{\to}\tau$ when it can return a result of type $\tau$, while taking an argument for which we can derive all pairs (of a flag and a type) from $T$; simultaneously, while having such a type, the lambda-term is obligated to use its argument in all ways described by the type pairs from $T$. We assign the flag ${\mathsf{pr}}$ (productive) when this term (while being a subterm of a closed term of sort $\mathsf{o}$) increases the number of constants $a$ in the resulting tree. To be more precise, a term is productive in two cases. First, when it uses the constant $a$. Notice, however, that this $a$ has to be really used: there exist terms which syntactically contain $a$, but the result of this $a$ is then ignored, like in $(\lambda x.e)\,a$. Second, a term which takes a productive argument and uses it at least twice is also productive (for example, the productive argument may be a function that creates an $a$-labeled node; when a lambda-term uses such an argument twice, the lambda-term is itself responsible for increasing the number of constants $a$ in the resulting tree). A \emph{type judgment} is of the form $\Gamma\vdash M:(f,\tau)$, where we require that the type $\tau$ and the term $M$ are of the same sort. The \emph{type environment} $\Gamma$ is a set of bindings of variables of the form $x^\alpha:(f,\tau)$, where $\tau\in\mathcal{T}^\alpha$. In $\Gamma$ we may have multiple bindings for the same variable. By $\mathit{dom}(\Gamma)$ we denote the set of variables $x$ that are bound by $\Gamma$, and by $\Gamma{\restriction}_{\mathsf{pr}}$ we denote the set of those bindings from $\Gamma$ that use the flag ${\mathsf{pr}}$. We now gradually present the rules of the type system. We begin with the rules for node constructors: \begin{mathpar} \inferrule{}{\vdash a:({\mathsf{pr}},(f,\r)\mathbin{\to}\r)}\and \inferrule{}{\vdash \mathit{b}:({\mathsf{np}},(f_1,\r)\mathbin{\to}(f_2,\r)\mathbin{\to}\r)}\and \inferrule{}{\vdash e:({\mathsf{np}},\r)} \end{mathpar} Since we aim at counting constants $a$, we say here that $a$ is productive, while $\mathit{b}$ and $e$ are nonproductive.
Notice that productivity of a node constructor does not depend on the productivity of the argument; the flags of the arguments ($f, f_1, f_2$) can be arbitrary. Then we have a rule for a variable: \begin{mathpar} \inferrule{}{x:(f,\tau)\vdash x:({\mathsf{np}},\tau)} \end{mathpar} The type of the variable is taken from the environment. The flag is always ${\mathsf{np}}$, though; by just using a variable we are not productive at all (and in the productivity flag we want to cover productivity of the lambda-term itself, not of lambda-terms that may be potentially substituted for free variables). The rule that talks about lambda-binders is very natural; it just moves type pairs from the argument to the environment: \begin{mathpar} \inferrule*[right=($\lambda$)]{\Gamma\cup\{x:(f_i,\tau_i)\mid i\in I\}\vdash K:(f,\tau)\\x\not\in \mathit{dom}(\Gamma)} {\Gamma\vdash\lambda x.K:(f,\bigwedge\nolimits_{i\in I}(f_i,\tau_i)\mathbin{\to}\tau)} \end{mathpar} Finally, we have the most complicated rule, for application: \begin{mathpar} \inferrule*[right=$(@)$]{\Gamma\vdash K:(f',\bigwedge\nolimits_{i\in I}(f_i^\bullet,\tau_i)\mathbin{\to}\tau)\\ \Gamma_i\vdash L:(f_i^\circ,\tau_i)\mbox{ for each }i\in I} {\Gamma\cup\bigcup\nolimits_{i\in I}\Gamma_i\vdash K\,L:(f,\tau)} \end{mathpar} where we assume that \begin{itemize} \item every pair $(f_i^\bullet,\tau_i)$ is different (where $i\in I$), \item for each $i\in I$, $f_i^\bullet={\mathsf{pr}}$ if and only if $f_i^\circ={\mathsf{pr}}$ or $\Gamma_i{\restriction}_{\mathsf{pr}}\neq\emptyset$, and \item $f={\mathsf{pr}}$ if and only if $f'={\mathsf{pr}}$, or $f_i^\circ={\mathsf{pr}}$ for some $i\in I$, or $|\Gamma{\restriction}_{\mathsf{pr}}|+\sum_{i\in I}|\Gamma_i{\restriction}_{\mathsf{pr}}|>|(\Gamma\cup\bigcup_{i\in I}\Gamma_i){\restriction}_{\mathsf{pr}}|$. \end{itemize} Let us explain the above conditions. The first condition is technical: we need to provide exactly one derivation for every needed type pair. The second condition says that when $K$ requires a ``productive'' argument, either we can apply an argument $L$ that is itself productive, or we can apply a nonproductive $L$ that uses a productive variable; in the latter case, after substituting something for the variable, $L$ will become productive. The third condition says that $K\,L$ is productive if $K$ is productive, or if $L$ is productive, or if some productive free variable is duplicated (i.e., used in at least two subderivations simultaneously). Notice that weakening of type environments is disallowed: $\Gamma\vdash M:(f,\tau)$ does not necessarily imply $\Gamma,x:(g,\sigma)\vdash M:(f,\tau)$; in other words, every binding $x:(g,\sigma)$ in the type environment (and thus every pair $(g,\sigma)$ assigned to an argument) has to be really used somewhere in the type derivation. This property of the type system is exactly what we should expect, if we recall that we want to distinguish lambda-terms that really use their (productive) arguments from those in which the arguments are discarded. On the other hand, contraction is allowed: we may say that $\Gamma,x:(g,\sigma),x:(g,\sigma)\vdash M:(f,\tau)$ implies $\Gamma,x:(g,\sigma)\vdash M:(f,\tau)$, since a type environment is a set of type bindings. As we see in the \TirName{(\!@\!)}\xspace rule, such contractions (for productive type bindings) cause productivity of lambda-terms. A \emph{derivation} is defined as usual: it is a tree labeled by type judgments, such that each node together with its children fits one of the rules of the type system.
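The side conditions of the \TirName{(\!@\!)}\xspace rule are purely combinatorial, and it may help to see them spelled out operationally. In the following Python sketch (a toy encoding of ours, with flags as the strings \texttt{"pr"}/\texttt{"np"} and type environments as sets of bindings of the form $(x,f,\tau)$), the flags $f_i^\bullet$ and $f$ are computed from the data of the premises:
\begin{verbatim}
def productive(env):
    # the restriction of a type environment to its productive bindings
    return {b for b in env if b[1] == "pr"}

def application_flags(f_K, env_K, arg_derivs):
    # f_K, env_K: flag and environment of the premise concerning K;
    # arg_derivs: one (flag, environment) pair per required type pair
    f_bullet = ["pr" if f == "pr" or productive(env) else "np"
                for (f, env) in arg_derivs]        # second condition
    envs = [env_K] + [env for (_, env) in arg_derivs]
    duplicated = (sum(len(productive(env)) for env in envs)
                  > len(productive(set().union(*envs))))
    f = ("pr" if f_K == "pr"
         or any(f == "pr" for (f, _) in arg_derivs)
         or duplicated else "np")                  # third condition
    return f_bullet, f
\end{verbatim}
The quantity compared in the computation of \texttt{duplicated} reappears just below as the \emph{value} of a node.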
We now define a \emph{value} of every node of a derivation, saying how productive this node is. In a node using the rule for the constant $a$, the value is $1$. In a node using the \TirName{(\!@\!)}\xspace rule with type environments $\Gamma$ and $\Gamma_i$ for $i\in I$, the value is \begin{align*} |\Gamma{\restriction}_{\mathsf{pr}}|+\sum\nolimits_{i\in I}|\Gamma_i{\restriction}_{\mathsf{pr}}|-|(\Gamma\cup\bigcup\nolimits_{i\in I}\Gamma_i){\restriction}_{\mathsf{pr}}|\,. \end{align*} Spelling this out, the value in such a node equals the number of productive type bindings together in all the type environments $\Gamma$, $(\Gamma_i)_{i\in I}$, minus the number of such type bindings in their union. In other words, it says how many times we have to duplicate some productive type bindings before splitting them between the type environments of the subderivations. In all other nodes the value is $0$. For a derivation $D$, the \emph{value} of $D$, denoted $\mathit{val}(D)$, is the sum of the values of all nodes in $D$. We can easily see that the value of a derivation $D$ is positive if and only if $D$ is productive (i.e., the flag in the derived type judgment is ${\mathsf{pr}}$). The main theorem says that $\mathit{val}(D)$ can be used to estimate the number of constants $a$ in normal forms of lambda-terms. \begin{theorem}\label{thm:det} The following holds for the type system introduced above: \begin{compactenum}[(D1)] \item\label{it:d1} for every $m\in\mathbb{N}$ there is a function $\eta_m\colon\mathbb{N}\to\mathbb{N}$ such that if $M$ is a homogeneous and closed lambda-term of sort $\mathsf{o}$ and complexity at most $m$, and $D$ is a derivation for $\vdash M:(f,\r)$, then the number of constants $a$ in the normal form of $M$ is \begin{compactenum} \item[(D1A)]\label{it:d1a} at least $\mathit{val}(D)$, and \item[(D1B)]\label{it:d1b} at most $\eta_m(\mathit{val}(D))$; \end{compactenum} \item\label{it:d2} for every closed lambda-term $M$ of sort $\mathsf{o}$ one can derive $\vdash M:(f,\r)$ (for some $f\in\{{\mathsf{pr}},{\mathsf{np}}\}$).\footnote{% Actually, one can even prove that there is a unique derivation concerning $M$ (assuming that $M$ is closed and of sort $\mathsf{o}$).} \end{compactenum} \end{theorem} \begin{example}\label{ex:1} Observe how the type system behaves for the lambda-term $M=(\lambda y.N\,(N\,(N\,y))\,(a\,e))\,a$, where $N=\lambda y.\lambda x.y\,(y\,x)$.
We start with a derivation concerning $N$, where we write $\tau_y^{\mathsf{pr}}$ for $({\mathsf{pr}},\r)\mathbin{\to}\r$: \begin{mathpar} \inferrule*[Right=$(\lambda)$,leftskip=-7.7em,rightskip=-13em]{ \inferrule*[Right=$(\lambda)$,leftskip=1.1em,rightskip=1.1em]{ \inferrule*[Right=$(@)$,leftskip=6.6em,rightskip=6.6em]{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash y:({\mathsf{np}},\tau_y^{\mathsf{pr}}) \and \inferrule*[right=$(@)$,leftskip=1em,rightskip=5.3em]{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash y:({\mathsf{np}},\tau_y^{\mathsf{pr}}) \and x:({\mathsf{pr}},\r)\vdash x:({\mathsf{np}},\r) }{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}}),\,x:({\mathsf{pr}},\r)\vdash y\,x:({\mathsf{np}},\r) } }{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}}),\,x:({\mathsf{pr}},\r)\vdash y\,(y\,x):({\mathsf{pr}},\r) } }{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash\lambda x.y\,(y\,x):({\mathsf{pr}},\tau_y^{\mathsf{pr}}) } }{ \vdash N:({\mathsf{pr}},({\mathsf{pr}},\tau_y^{\mathsf{pr}})\mathbin{\to}\tau_y^{\mathsf{pr}}) } \end{mathpar} Notice that the type $\tau_y^{\mathsf{pr}}$ requires a productive argument, but (in both the \TirName{(\!@\!)}\xspace rules above) we apply an argument that is not productive itself. This is possible, because the type judgments for the arguments have productive type bindings in the type environments (and hence for the purposes of the \TirName{(\!@\!)}\xspace rule they are assumed to be productive). The lower use of the \TirName{(\!@\!)}\xspace rule has value $1$ (and in effect the productivity flag is set to ${\mathsf{pr}}$), because the productive type binding $y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})$ is taken to both children. Below, we have another derivation concerning $N$, where we write $\tau_y^{\mathsf{np}}$ for $({\mathsf{np}},\r)\mathbin{\to}\r$: \begin{mathpar} \inferrule*[Right=$(\lambda)$,leftskip=-5.2em,rightskip=-10.5em]{ \inferrule*[Right=$(\lambda)$,leftskip=1.1em,rightskip=1.1em]{ \inferrule*[Right=$(@)$,leftskip=4.1em,rightskip=4.1em]{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash y:({\mathsf{np}},\tau_y^{\mathsf{pr}}) \and \inferrule*[right=$(@)$,leftskip=1em,rightskip=5.3em]{ y:({\mathsf{pr}},\tau_y^{\mathsf{np}})\vdash y:({\mathsf{np}},\tau_y^{\mathsf{np}}) \and x:({\mathsf{np}},\r)\vdash x:({\mathsf{np}},\r) }{ y:({\mathsf{pr}},\tau_y^{\mathsf{np}}),\,x:({\mathsf{np}},\r)\vdash y\,x:({\mathsf{np}},\r) } }{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}}),\,y:({\mathsf{pr}},\tau_y^{\mathsf{np}}),\,x:({\mathsf{np}},\r)\vdash y\,(y\,x):({\mathsf{np}},\r) } }{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}}),\,y:({\mathsf{pr}},\tau_y^{\mathsf{np}})\vdash\lambda x.y\,(y\,x):({\mathsf{np}},\tau_y^{\mathsf{np}}) } }{ \vdash N:({\mathsf{np}},({\mathsf{pr}},\tau_y^{\mathsf{pr}})\wedge({\mathsf{pr}},\tau_y^{\mathsf{np}})\mathbin{\to}\tau_y^{\mathsf{np}}) } \end{mathpar} This time the value of all nodes is $0$, because every type binding is used in exactly one place. 
Likewise, it is possible to derive five other type pairs for the lambda-term $N$: \begin{align*} &({\mathsf{np}},({\mathsf{pr}},\tau_y^{\mathsf{pr}})\wedge({\mathsf{np}},\tau_y^{\mathsf{pr}})\mathbin{\to}\tau_y^{\mathsf{pr}})\,,& &({\mathsf{np}},({\mathsf{np}},\tau_y^{\mathsf{pr}})\mathbin{\to}\tau_y^{\mathsf{pr}})\,,\\ &({\mathsf{np}},({\mathsf{pr}},\tau_y^{\mathsf{np}})\wedge({\mathsf{np}},\tau_y^{\mathsf{np}})\mathbin{\to}\tau_y^{\mathsf{np}})\,,& &({\mathsf{np}},({\mathsf{np}},\tau_y^{\mathsf{np}})\mathbin{\to}\tau_y^{\mathsf{np}})\,,\\ &({\mathsf{np}},({\mathsf{np}},\tau_y^{\mathsf{pr}})\wedge({\mathsf{pr}},\tau_y^{\mathsf{np}})\mathbin{\to}\tau_y^{\mathsf{np}})\,.& \end{align*} While deriving a type for $M$, we only need one type pair for $N$: the type pair $({\mathsf{pr}},({\mathsf{pr}},\tau_y^{\mathsf{pr}})\mathbin{\to}\tau_y^{\mathsf{pr}})$ derived at the beginning. But we remark that if the lambda-term were $M'=(\lambda y.N\,(N\,(N\,y))\,e)\,a$ (we have replaced here $a\,e$ by $e$, and thus the first call to $N$ receives a nonproductive argument as $x$), it would be necessary to use both the above derivations for $N$. Denoting the type $({\mathsf{pr}},\tau_y^{\mathsf{pr}})\mathbin{\to}\tau_y^{\mathsf{pr}}$ as $\tau_N$, we continue the derivation for $M$: \begin{mathpar} \inferrule*[Right=$(@)$,rightskip=-9.5em]{ \vdash N:({\mathsf{pr}},\tau_N) \and \inferrule*[Right=$(@)$,leftskip=1em,rightskip=4.2em]{ \vdash N:({\mathsf{pr}},\tau_N) \and \inferrule*[right=$(@)$,leftskip=1em,rightskip=5.3em]{ \vdash N:({\mathsf{pr}},\tau_N) \and y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash y:({\mathsf{np}},\tau_y^{\mathsf{pr}}) }{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash N\,y:({\mathsf{pr}},\tau_y^{\mathsf{pr}}) } }{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash N\,(N\,y):({\mathsf{pr}},\tau_y^{\mathsf{pr}}) } }{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash N\,(N\,(N\,y)):({\mathsf{pr}},\tau_y^{\mathsf{pr}}) } \and \inferrule*[right=$(@)$,leftskip=-3.5em]{ \inferrule*[right=$(\lambda)$,rightskip=1em]{ \inferrule*[Right=$(@)$,leftskip=5em,rightskip=5em]{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash N\,(N\,(N\,y)):({\mathsf{pr}},\tau_y^{\mathsf{pr}}) \and \inferrule*[right=$(@)$,leftskip=1em,rightskip=5.5em]{ \vdash a:({\mathsf{pr}},\tau_y^{\mathsf{np}}) \and \vdash e:({\mathsf{np}},\r) }{ \vdash a\,e:({\mathsf{pr}},\r) } }{ y:({\mathsf{pr}},\tau_y^{\mathsf{pr}})\vdash N\,(N\,(N\,y))\,(a\,e):({\mathsf{pr}},\r) } }{ \vdash\lambda y.N\,(N\,(N\,y))\,(a\,e):({\mathsf{pr}},({\mathsf{pr}},\tau^{\mathsf{pr}}_y)\mathbin{\to}\r) } \and \vdash a:({\mathsf{pr}},\tau_y^{\mathsf{pr}}) }{ \vdash M:({\mathsf{pr}},\r) } \end{mathpar} The total value of this derivation is $5$ ($2$ in the two nodes concerning $a$, and $3$ in the three subderivations concerning $N$), while the normal form of $M$ contains $9$ appearances of the constant $a$. Notice that by adding a further $N$ to the sequence $N\,(N\,(N\,y))$ we increase the value only by $1$, while almost doubling the number of $a$'s in the normal form. \end{example} \paragraph*{Proofs.} Let us now sketch the proof of Theorem~\ref{thm:det}. While proving Condition~\hyperref[it:d1]{(D1)}, it is convenient to consider the RMF strategy of reductions (defined on Page~\pageref{pag:rmf}). We have the following subject-reduction lemma for reductions of this kind.
\begin{lemma}\label{lem:subj-red-det} If $D_0$ is a derivation for $\vdash M_0:(f,\r)$, where $M_0$ is homogeneous, closed, and of complexity $m$ (and of sort $\mathsf{o}$), and $M_0\to_\beta M_1\to_\beta\dots\to_\beta M_n$ is a sequence of RMF$(m)$ beta-reductions, then there exists a derivation $D_n$ for $\vdash M_n:(f,\r)$ such that $\mathit{val}(D_0)\leq\mathit{val}(D_n)$ and $\mathit{val}(D_n)\leq 2^{\mathit{val}(D_0)}$. \end{lemma} Because the maximal complexity $m$ of the lambda-term $M$ considered in Theorem~\ref{thm:det} is fixed, using Lemma~\ref{lem:subj-red-det} $m$ times (for complexities $m,m-1,\dots,1$) we obtain a derivation $D_T$ for the normal form $T$ of $M$ such that $\mathit{val}(D)\leq\mathit{val}(D_T)$ and $\mathit{val}(D_T)$ is bounded by a function of $\mathit{val}(D)$, that is, $\mathit{val}(D)$ estimates $\mathit{val}(D_T)$. It remains to notice that $\mathit{val}(D_T)$ is exactly the number of $a$-labeled nodes in the tree $T$. \begin{proof}[Proof sketch (Lemma~\ref{lem:subj-red-det})] We proceed by induction: for every $i\in\{1,\dots,n\}$ out of the derivation $D_{i-1}$ for $\vdash\nobreak M_{i-1}:(f,\r)$ we construct a derivation $D_i$ for $\vdash M_i:(f,\r)$. To this end, we consider every subderivation $D$ of $D_{i-1}$ starting with a type judgment $\Gamma\vdash(\lambda x.K)\,L:(g,\tau)$ concerning the redex involved in the reduction $M_{i-1}\to_\beta M_i$; we need to replace it by a derivation $D'$ for $\Gamma\vdash K[L/x]:(g,\tau)$. We obtain $D'$ by a surgery on $D$: we take the subderivation of $D$ concerning $K$, we replace every leaf deriving a type $\sigma$ for $x$ by the subderivation of $D$ deriving this type $\sigma$ for $L$, and we update type environments and productivity flags appropriately. Notice that every subderivation concerning $L$ is moved to at least one leaf concerning $x$ (nothing can disappear). The only reason why the value of the derivation can decrease is that potentially a productive type binding $x:({\mathsf{pr}},\sigma)$ was duplicated (say, $k$ times) in the derivation concerning $K$. In $D'$ this binding is no longer present (in $K[L/x]$ there is no $x$), so the value gets decreased by $k$, but in this situation the subderivation deriving $\sigma$ for $L$ becomes inserted at $k+1$ leaves. This subderivation is either productive itself, or uses a productive type binding in the environment; in both cases by creating $k$ additional copies of this subderivation we increase the value at least by $k$, compensating the loss caused by the elimination of $x$. This implies that $\mathit{val}(D)\leq\mathit{val}(D')$, hence $\mathit{val}(D_{i-1})\leq\mathit{val}(D_i)$ (and, in effect, $\mathit{val}(D_0)\leq\mathit{val}(D_n)$). Conversely, the only reason why the value can grow is that some derivation concerning $L$ (that is either productive itself or uses some productive type bindings for its free variables) becomes inserted at $k+1$ leaves, for some $k\geq 1$. In the worst case, this may cause the value (of the whole derivation for $M$) to get multiplied by $k+1$. But, simultaneously, in the subderivation concerning $K$, the productive type bindings for $x$ are removed, which decreases the value by $k$ in some nodes of this subderivation. The point is now that these nodes were never copied in the reduction sequence from $D_0$ to the considered $D_{i-1}$; this is because all the reductions are RMF$(m)$ reductions.
Indeed, looking from the other side, all variables appearing in (the copied subderivation for) $L$ are of order at most $m-2$---as observed on Page~\pageref{pag:rmf}---but all variables involved in future order-$m$ reductions (i.e., all variables that we remove from type environments) are of order $m-1$, because of the homogeneity of the lambda-term. Thus, whenever we multiply the value of the current derivation by at most $k+1$, we subtract $k$ from the value of the original derivation $D_0$. The worst case is when we decrease the value by $1$ a total of $\mathit{val}(D_0)$ times, and multiply it by $2$ a total of $\mathit{val}(D_0)$ times. It follows that $\mathit{val}(D_n)\leq\mathit{val}(D_0)\cdot2^{\mathit{val}(D_0)}$; a slightly more careful analysis shows that actually $\mathit{val}(D_n)\leq 2^{\mathit{val}(D_0)}$. \end{proof} In the proof of Condition~\hyperref[it:d2]{(D2)}, saying that we can derive a type for every closed lambda-term $M$ of sort $\mathsf{o}$, we proceed backwards: it is easy to derive a type for a tree (i.e., for the normal form of $M$), and thus it is enough to have a subject expansion lemma saying that out of a derivation for a lambda-term after a beta-reduction we can construct a derivation for the lambda-term before the beta-reduction. This time we follow the OI reduction strategy. Because outermost redexes are closed, it is enough to have the following lemma. \begin{lemma}\label{lem:subj-exp} If we can derive $\vdash K[L/x]:(g,\tau)$, then we can also derive $\vdash (\lambda x.K)\,L:(g,\tau)$. \end{lemma} \begin{proof}[Proof sketch] In the derivation $D$ for $K[L/x]$ we replace every subderivation concerning $L$ by a leaf rule for the variable $x$, and we correct type environments and productivity flags in the rest of the derivation. This way we obtain a derivation for $K$ with a type environment requesting some types for $x$. Simultaneously, each of these types was derived for $L$ in some subderivation of $D$ (there may be multiple such subderivations, because $L$ may appear in many places in $K[L/x]$, but we choose only one subderivation for every type). It is not difficult to combine these derivations into a derivation concerning $(\lambda x.K)\,L$. \end{proof} We remark that by applying the above surgery to a derivation for $\Gamma\vdash K[L/x]:(g,\tau)$ (i.e., for an arbitrary redex, having some free variables) we only obtain a derivation for $\Gamma'\vdash (\lambda x.K)\,L:(g,\tau)$ with some $\Gamma'\subseteq\Gamma$, but not necessarily with $\Gamma'=\Gamma$. The reason is that we remove some subderivations concerning $L$ (we leave only one for every type), and possibly some type bindings from $\Gamma$ were used only in the removed subderivations. \paragraph*{Bibliographic Note.} As already mentioned in the introduction, the idea of the type system presented above originates from Parys~\cite{ho-new}. In that paper, a similar type system was introduced for configurations of collapsible pushdown systems. It was then used to prove that a restricted variant of these systems (systems without the so-called collapse operation) is less powerful than general collapsible pushdown systems. The type system was then transferred to the setting of lambda-terms in Parys~\cite{numbers-journal}. The type system of that paper is slightly more complicated than ours, and allows one to obtain a stronger version of Condition~\hyperref[it:d1]{(D1B)}, where the function $\eta_m$ does not depend on the complexity $m$ of the considered lambda-terms.
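Before moving on, let us note that the computation in Example~\ref{ex:1} can be checked mechanically. In the following toy Python encoding (ours, and only illustrative), node constructors are closures building nested tuples, so that evaluating the encoded term carries out the beta-reductions:
\begin{verbatim}
a = lambda t: ("a", t)                 # node constructor a
e = ("e",)                             # node constructor e
N = lambda y: (lambda x: y(y(x)))      # N = \y.\x. y (y x)
M = (lambda y: N(N(N(y)))(a(e)))(a)    # M = (\y. N (N (N y)) (a e)) a

def count_a(t):
    return (t[0] == "a") + sum(count_a(c) for c in t[1:])

print(count_a(M))   # prints 9, while the derivation has value 5
\end{verbatim}
Adding a fourth $N$ to the sequence makes the program print $17$, while the value of the corresponding derivation grows only to $6$; this is the exponential gap permitted by the function $\eta_m$ in Condition~\hyperref[it:d1]{(D1B)}.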
\section{Nondeterministic Quantities}\label{sec:nondeterministic} Suppose now that we want to estimate another quantity: the maximal number of appearances of the constant $a$ on a single branch in the beta-normal form $T$ of a lambda-term $M$. It seems that in order to describe this quantity, it is enough to take the type system from Section~\ref{sec:deterministic}, and replace the rule for the constant $b$ by two rules: \begin{mathpar} \inferrule{}{\vdash \mathit{b}:({\mathsf{np}},(f,\r)\mathbin{\to}\top\mathbin{\to}\r)}\and \inferrule{}{\vdash \mathit{b}:({\mathsf{np}},\top\mathbin{\to}(f,\r)\mathbin{\to}\r)} \end{mathpar} In these rules we ignore one of the arguments, and we descend only to the other one. This way, every type derivation $D$ for a tree $T$ follows one branch in $T$, and in effect $\mathit{val}(D)$ equals the number of constants $a$ on that branch. By arguments like in the previous section we obtain the following, rather useless, properties of the modified type system: \begin{compactenum}[(N1)] \item\label{it:n1} for every $m\in\mathbb{N}$ there is a function $\eta_m\colon\mathbb{N}\to\mathbb{N}$ such that if $M$ is a homogeneous and closed lambda-term of sort $\mathsf{o}$ and complexity at most $m$, and $D$ is a derivation for $\vdash M:(f,\r)$, then the number of constants $a$ on some branch of the normal form of $M$ is \begin{compactenum} \item[(N1A)]\label{it:n1a} at least $\mathit{val}(D)$, and \item[(N1B)]\label{it:n1b} at most $\eta_m(\mathit{val}(D))$; \end{compactenum} \item\label{it:n2} for every closed lambda-term $M$ of sort $\mathsf{o}$ one can derive $\vdash M:(f,\r)$ (for some $f\in\{{\mathsf{pr}},{\mathsf{np}}\}$). \end{compactenum} These properties are not satisfactory for us, because they only say that there exists a branch with the number of constants $a$ estimated by $\mathit{val}(D)$, for some derivation $D$. We, however, are interested in the branch on which the number of constants $a$ is maximal. In other words: if in the beta-normal form $T$ of $M$ there are two branches, one with just a few constants $a$, and the other with a lot of them, we expect to have two derivations $D$ and $D'$, where $\mathit{val}(D)$ is small (corresponding to the first branch), and $\mathit{val}(D')$ is large (corresponding to the second branch). But Condition~\hyperref[it:n2]{(N2)} gives us only one derivation, and we do not know which one. Thus, we rather need to have the following property: \begin{compactenum} \item[(N2$'$)\!]\label{it:n2p} for every $m\in\mathbb{N}$ there is a function $\eta_m\colon\mathbb{N}\to\mathbb{N}$ such that if $M$ is a homogeneous and closed lambda-term of sort $\mathsf{o}$ and complexity at most $m$ and on some branch of the beta-normal form of $M$ there are $n$ appearances of the constant $a$, then there is a derivation $D$ for $\vdash M:(f,\r)$ such that $n\leq\eta_m(\mathit{val}(D))$. \end{compactenum} In the light of Condition~\hyperref[it:n2p]{(N2$'$)}, Condition~\hyperref[it:n1b]{(N1B)} becomes redundant, and thus we can restate Condition~\hyperref[it:n1a]{(N1A)} as follows: \begin{compactenum} \item[(N1$'$)\!]\label{it:n1p} if $M$ is a homogeneous and closed lambda-term of sort $\mathsf{o}$, and $D$ is a derivation for $\vdash M:(f,\r)$, then the number of constants $a$ on some branch of the normal form of $M$ is at least $\mathit{val}(D)$. \end{compactenum} It is, though, an open problem whether Condition~\hyperref[it:n2p]{(N2$'$)} holds. \begin{openpr} Does the modified type system satisfy Condition~\hyperref[it:n2p]{(N2$'$)}?
\end{openpr} In order to prove Condition~\hyperref[it:n2p]{(N2$'$)}, we should probably proceed backward: we should start with a derivation concerning (the branch with the maximal number of constants $a$ in) the normal form of $M$, and then, successively, from a derivation for a lambda-term after a beta-reduction obtain a derivation for the lambda-term before the beta-reduction. We have a subject expansion lemma (Lemma~\ref{lem:subj-exp}) only for redexes without free variables (and it seems difficult to generalize it to arbitrary redexes, as explained at the end of the previous section); we should thus assume that we always reduce the outermost redex. In effect, in the considered sequence of beta-reductions from $M$ to its normal form we have to mix reductions concerning redexes of different orders. For such a sequence of reductions it is not clear how to estimate the value of the derivation for the beta-normal form $T$ by the value of the derivation for $M$. We remark that a modified type system, in which one allows weakening of type environments, satisfies a subject expansion lemma (like Lemma~\ref{lem:subj-exp}). But with unrestricted weakening of type environments Condition~\hyperref[it:n1p]{(N1$'$)} no longer holds. Indeed, if weakening were allowed, we could use a derivation $D$ (with an arbitrarily large value) for a lambda-term $M$ as a part of a derivation for a lambda-term like $(\lambda x.e)\,M$, whose normal form contains no $a$. The reason why weakening is forbidden is exactly this: we want to have subderivations only for subterms that really contribute to the normal form. Life is thus not so simple: because we want both Conditions~\hyperref[it:n1p]{(N1$'$)} and~\hyperref[it:n2p]{(N2$'$)}, we have to introduce a more complicated type system. In this type system, instead of one kind of values of nodes, we have \emph{values of order $k$} (or \emph{$k$-values}) for every $k\in\{1,\dots,m+1\}$ (where $m$ is the complexity of the considered lambda-term). We also mark some nodes as belonging to a \emph{zone of order $k$} (or \emph{$k$-zone}) for every order $k\in\{0,\dots,m\}$. Before defining the type system, let us first give some idea of how Condition~\hyperref[it:n2p]{(N2$'$)} can be shown. Then, we give the details of a type system motivated by this idea. Consider thus a lambda-term $M_m$ that is of complexity $m$, and reduces to a tree $M_0$. Following the RMF reduction strategy, we can find lambda-terms $M_{m-1},M_{m-2},\dots,M_1$ such that every $M_i$ is of complexity $i$ and all reductions between $M_i$ and $M_{i-1}$ are of order $i$. Our aim is to estimate the number of constants $a$ located on some branch in $M_0$. We thus mark all nodes of this branch as the $0$-zone, and we say that the order-$1$ value is $1$ in all nodes of the $0$-zone that are labeled by $a$. Next, we proceed back to $M_1$. Every node constructor in the $0$-zone in $M_0$ originates from some particular node constructor appearing already in $M_1$. We thus mark these node constructors in $M_1$ as belonging to the $0$-zone (notice that in $M_1$ they no longer form a branch); and again those of them that are $a$-labeled get $1$-value $1$. The crucial observation is that no two node constructors from the $0$-zone in $M_0$ can originate from a single node constructor of $M_1$. Indeed, all the beta-reductions between $M_1$ and $M_0$ are RMF$(1)$. In such a beta-reduction we take a whole subtree (i.e., a lambda-term of sort $\mathsf{o}$) of $M_1$, and we substitute it somewhere, possibly replicating it.
But since the considered nodes of $M_0$ lie on a single branch, they may belong to at most one copy of the replicated subtree. In effect, the total $1$-value in $M_1$ is the same as in $M_0$. We cannot directly repeat the same reasoning to move $1$-values from $M_1$ back to $M_2$, since now there is a problem: a single node constructor in $M_2$ may result in multiple (uncontrollably many) node constructors with a $1$-value in $M_1$. We rescue ourselves in the following way. We choose some branch of $M_1$ (included in the $0$-zone) as the $1$-zone. Then, for every node of $M_1$ with positive $1$-value, we look for the closest ancestor of this node that lies in the $1$-zone, and in this ancestor we set the $2$-value to $1$. Notice that for multiple nodes with positive $1$-value, their closest ancestor lying in the $1$-zone may be the same (and then we set its $2$-value to $1$, not to the number of these nodes). Thus, in general, the total $2$-value may be smaller than the total $1$-value. We can, however, ensure that it is smaller only logarithmically; to do so, we choose the branch forming the $1$-zone in a clever way: starting from the root, we always proceed to the subtree with the largest total $1$-value. In effect, the total $2$-value of $M_1$ estimates the total $1$-value of $M_1$. Once all nodes of $M_1$ with positive $2$-value lie on a single branch (which is chosen as the $1$-zone), we can transfer them back to $M_2$ without changing their number: because the reductions between $M_2$ and $M_1$ are RMF$(2)$, every node of the $1$-zone in $M_1$ originates from a different node of $M_2$. Then in $M_2$ we again choose a branch as the $2$-zone, we assign $3$-values to some of its nodes, and so on. At the end we obtain some labeling of $M_m$ by zones and values of particular orders. The goal of the type system presented below is, roughly speaking, to ensure that a labeling of $M_m$ is actually obtainable by the process described above. In fact, we do not label nodes of $M_m$ itself, but rather nodes of a type derivation for $M_m$. We now come to a formal definition of the type system. \paragraph*{Type Judgments.} For every sort $\alpha$ we define the set $\mathcal{T}^\alpha$ of \emph{types} of sort $\alpha$, and the set $\mathcal{\widehat{T}}^\alpha_m$ of \emph{type triples} of sort $\alpha$. This is done as follows, where $\mathcal{P}$ denotes the powerset: \begin{align*} &\mathcal{T}^{\alpha\mathbin{\to}\beta}=\mathcal{P}(\mathcal{\widehat{T}}_{\mathit{ord}(\alpha)}^\alpha)\times\mathcal{T}^\beta\,,\qquad \mathcal{T}^\mathsf{o}=\{\r\}\,,\\ &\mathcal{\widehat{T}}_m^\alpha= \{(Z,F,\tau)\in\{0,\dots,m\}^2\times\mathcal{T}^\alpha\mid F\leq Z+1\}\,. \end{align*} Notice that the sets $\mathcal{T}^\alpha$ and $\mathcal{\widehat{T}}_m^\alpha$ are finite. A type $(T,\tau)\in\mathcal{T}^{\alpha\mathbin{\to}\beta}$ is denoted as $\bigwedge T\mathbin{\to}\tau$. A type triple $\hat\tau=(Z,F,\tau)\in\mathcal{\widehat{T}}_m^\alpha$ consists of a zone order $Z$, a productivity order $F$, and a type $\tau$. In order to distinguish types from type triples, the latter are denoted by letters with a hat, like $\hat\tau$.
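Since all these sets are defined by induction on the sort, they can be enumerated mechanically, which also makes their finiteness evident. The following Python sketch uses an assumed toy encoding (sorts are \texttt{"o"} or pairs; types are \texttt{"r"} or pairs of a frozen set of type triples and a type):
\begin{verbatim}
def ord_(s):                  # order of a sort; s is "o" or (s1, s2)
    return 0 if s == "o" else max(1 + ord_(s[0]), ord_(s[1]))

def types(s):                 # the finite set of types of sort s
    if s == "o":
        return ["r"]
    s1, s2 = s
    trips = triples(ord_(s1), s1)
    subsets = [frozenset(t for i, t in enumerate(trips) if mask >> i & 1)
               for mask in range(2 ** len(trips))]
    return [(T, tau) for T in subsets for tau in types(s2)]

def triples(m, s):            # type triples (Z, F, tau) with F <= Z + 1
    return [(Z, F, tau) for Z in range(m + 1) for F in range(m + 1)
            if F <= Z + 1 for tau in types(s)]

print(len(triples(1, ("o", "o"))))  # prints 8: the triples for m = 1
                                    # and the sort o -> o
\end{verbatim}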
A \emph{type judgment} is of the form $\Gamma\vdash_m M:\hat\tau$, where $\Gamma$, called a \emph{type environment}, is a set of bindings of the form $x^\alpha:\hat\sigma$ with $\hat\sigma\in\widehat{\mathcal{T}}^\alpha_{\mathit{ord}(\alpha)}$, $M$ is a lambda-term, and $\hat\tau$ is a type triple of the same sort as $M$ (i.e., $\hat\tau\in\widehat{\mathcal{T}}^\beta_m$ when $M$ is of sort $\beta$). We assume that $M$ is homogeneous. As previously, the intuitive meaning of a type $\bigwedge T\mathbin{\to}\tau$ is that a lambda-term having this type can return a lambda-term having type $\tau$, while taking an argument for which we can derive all type triples from $T$. Moreover, in $\mathcal{T}^\mathsf{o}$ there is just one type $\mathsf{o}$, which can be assigned to every lambda-term of sort $\mathsf{o}$. Suppose that a node of a type derivation for a closed and homogeneous lambda-term $M_m$ of sort $\mathsf{o}$ is labeled by a type judgment $\Gamma\vdash_m M:\hat\tau$ with $\hat\tau=(Z,F,\tau)$. Then \begin{itemize} \item $\tau$ is the type derived for $M$; \item $\Gamma$ contains type triples that could be used for free variables of $M$ in the derivation; \item $m$ is an upper bound for the complexity of $M$ (this bound is not strict: in the proofs, it is useful to temporarily also allow lambda-terms $M$ of complexity $m+1$), and simultaneously for orders of considered zones and values; \item $Z\in\{0,\dots,m\}$ is the largest number such that for every $k\in\{0,\dots,Z\}$, the considered node of the derivation belongs to the $k$-zone; \item $F\in\{0,\dots,m\}$ is the largest number such that for every $k\in\{1,\dots,F\}$, in the imaginary lambda-term $M_k$ obtained from $M_m$ by reducing all redexes of order greater than $k$, the order-$k$ value will be positive in the subderivation starting in the considered node. \end{itemize} Notice that we always have $Z\geq 0$, which means that every node of every derivation belongs at least to the $0$-zone. We choose zones in a derivation in such a way that for every node the set of orders $k$ of zones to which the node belongs is always of the form $\{0,\dots,Z\}$. For this reason in a type triple it is enough to have a number $Z$ (representing the set $\{0,\dots,Z\}$), instead of an arbitrary set of orders of zones. Moreover, if a node of a derivation belongs to a $k$-zone, then so does its parent; in effect, the zone order in the type triple labeling a parent cannot be smaller than in its child. Likewise, the set of orders $k$ for which the $k$-value is positive (after appropriate reductions) is always of the form $\{1,\dots,F\}$, so it is enough to remember its maximum. Moreover, if the $k$-value is positive in some subderivation, then it is also positive in any larger subderivation; hence the productivity order in the type triple labeling a parent also cannot be smaller than in a child. \paragraph*{Type System.} We now give the first four rules, concerning node constructors: \begin{align*} &\inferrule{}{\vdash_m \mathit{b}:(Z,0,(0,0,\r)\mathbin{\to}\top\mathbin{\to}\r)}& &\inferrule{}{\vdash_m a:(Z,\min(Z+1,m),(0,0,\r)\mathbin{\to}\r)}\\ &\inferrule{}{\vdash_m \mathit{b}:(Z,0,\top\mathbin{\to}(0,0,\r)\mathbin{\to}\r)}& &\inferrule{}{\vdash_m e:(Z,0,\r)} \end{align*} We say that the $k$-value in a node using the rule for the constant $a$ is $1$ for all $k\in\{1,\dots,Z+1\}$; for $k>Z+1$, and for the other constants, the $k$-value is $0$.
In the above rules we can choose $Z$ arbitrarily (from the set $\{0,\dots,m\}$), which amounts to deciding to which zones the node constructor should belong: it belongs to the $k$-zone for $k\in\{0,\dots,Z\}$. For the constant $b$ we descend only to one argument (because we want to count constants $a$ only on a single branch of the normal form). For the constant $a$ we have set the $k$-value to $1$ for all $k\in\{1,\dots,Z+1\}$, hence we set the productivity order to $Z+1$. There is an exception for $Z=m$: by definition of $\widehat{\mathcal{T}}_m^\alpha$, the productivity order can be at most $m$, so although the $(m+1)$-value is $1$ as well, this information is not covered by the productivity order. Notice that the type $(0,0,\r)$ assigned to arguments of node constructors is the only element of $\widehat{\mathcal{T}}^\mathsf{o}_{\mathit{ord}(\mathsf{o})}$; node constructors do not receive information about zones or values from their arguments. Next, we have a rule for a variable (in nodes using this rule, the $k$-value is $0$ for all $k$): \begin{mathpar} \inferrule*[right=(Var)]{(Z'=Z) \lor (Z'\geq\mathit{ord}(x)=Z)}{x:(Z,F,\tau)\vdash_m x:(Z',F,\tau)} \end{mathpar} In order to understand this rule, suppose that it labels a node of a type derivation for a closed lambda-term $M$ of sort $\mathsf{o}$. Take some $k\in\{0,\dots,m\}$, and consider the lambda-term $M_k$ obtained from our lambda-term by reducing all redexes of orders greater than $k$. According to the proof idea presented above, we create the $k$-zone as a branch of $M_k$ (and then we transfer it back to $M$). Moreover, as the productivity order we should take at least $k$ if in $M_k$ the $k$-value is positive in the subtree starting in the considered node. If $k\leq\mathit{ord}(x)$, the variable $x$ will no longer be present in $M_k$, and some lambda-term (described by the type environment) will be substituted for it. For this reason, the information about the $k$-zone and about positivity of the $k$-value is taken from the type environment. Conversely, if $k>\mathit{ord}(x)$, the node (leaf) concerning $x$ will still be present in $M_k$, and thus we can start the branch forming the $k$-zone there. But this is possible only if the node belongs to the $(k-1)$-zone; in particular for $k=\mathit{ord}(x)+1$ we need to be in the $\mathit{ord}(x)$-zone, which is the case if $Z=\mathit{ord}(x)$. Moreover, the total $k$-value in (the subtree starting in) the considered leaf is $0$, and thus the productivity order is taken from the environment (unlike in the previous type system). The rule for lambda-binders realizes a restricted variant of type weakening: we may ignore arguments that do not contain leaves of zones. This is formalized in the notion of balanced and unbalanced type triples, defined by induction on their structure. For $k\in\{0,\dots,m\}$, a type triple $(Z,F,\bigwedge T_1\mathbin{\to}\dots\mathbin{\to}\bigwedge T_n\mathbin{\to}\mathsf{o})$ is \emph{$k$-unbalanced} if $Z\geq k$ and all elements of the sets $T_1,\dots,T_n$ are $k$-balanced; otherwise, the type triple is \emph{$k$-balanced}. A type triple is \emph{unbalanced} if it is $k$-unbalanced for some $k\in\{0,\dots,m\}$; otherwise it is \emph{balanced}. Intuitively, a subderivation derives a $k$-unbalanced type triple if the unique leaf of the $k$-zone is contained either in this subderivation, or in an imaginary subderivation that will be substituted for a free variable.
Indeed, the subderivation contains the leaf of the $k$-zone if it belongs to the $k$-zone, but none of the arguments provides the leaf. We can now give the rule; for nodes using this rule, the $k$-value is $0$ for all $k$. \begin{mathpar} \inferrule*[right=($\lambda$)]{ \Gamma\cup\{x:\hat\sigma\mid \hat\sigma\in T'\}\vdash_m K:(Z,F,\tau) \\ \{\hat\sigma\in T\mid \hat\sigma\mbox{ unbalanced}\}\subseteq T'\subseteq T \\ x\not\in\mathit{dom}(\Gamma) } {\Gamma\vdash_m\lambda x.K:(Z,F,\bigwedge T\mathbin{\to}\tau)} \end{mathpar} As previously, the rule for application is the most complicated one: \begin{mathpar} \inferrule*[right=$(@)$]{ \Gamma_0\vdash_m K:(Z_0,F_0,\tau_0) \\ \Gamma_i\vdash_m L:(Z_i,F_i,\tau_i)\mbox{ for each }i\in I \\ \tau_0=\bigwedge\nolimits_{i\in I}(\min(Z_i,\mathit{ord}(L)),\min(F_i,\mathit{ord}(L)),\tau_i)\mathbin{\to}\tau \and Z = \max\nolimits_{i\in\{0\}\cup I}Z_i \\ \forall k\in\{0,\dots,m\}.\,|\{i\in\{0\}\cup I\mid (Z_i,F_i,\tau_i)\mbox{ $k$-unbalanced}\}|\leq 1 } {\bigcup\nolimits_{i\in \{0\}\cup I}\Gamma_i\vdash_m K\,L:(Z,F,\tau)} \end{mathpar} where \begin{itemize} \item we assume that $0\not\in I$; \item if there is $i\in \{0\}\cup I$ such that $\mathit{ord}(L)\leq Z_i<F_i\leq Z$, then we set $F$ to $\min(Z+1,m)$, and we say that the $k$-value in the node using such a rule is $1$ for all $k\in\{F_i+1,\dots,Z+1\}$ (if there are multiple such $i$, we consider the one for which $F_i$ is the smallest); \item otherwise we set $F$ to $\max_{i\in\{0\}\cup I}F_i$, and the $k$-value to $0$ for all $k$. \end{itemize} Let us comment on the above conditions. First, notice that to the subderivation concerning $K$ we pass the information about $k$-values and $k$-zones from the subderivations concerning $L$ only for $k\leq\mathit{ord}(L)$ (i.e., we write $\min(Z_i,\mathit{ord}(L))$ and $\min(F_i,\mathit{ord}(L))$ instead of simply $Z_i$ and $F_i$). This is because, while thinking about $k$-values and about the $k$-zone, we should imagine the lambda-term $M_k$ obtained from the lambda-term under consideration by reducing all redexes of orders greater than $k$. If $k\leq\mathit{ord}(L)$, the application (for which we write the rule) is no longer present in $M_k$ (it gets reduced in some of the reductions leading to $M_k$), so we should pass the information from $L$ to $K$. Conversely, if $k>\mathit{ord}(L)$, the application is still present in $M_k$; this means $K$ and $L$ are independent subterms there, and hence the information from $L$ should not be passed to $K$. This is complementary to what we said about the \TirName{(Var)}\xspace rule. Second, we also require, for every $k$, that at most one child is $k$-unbalanced. Under the intuitive meaning that a conclusion of a subderivation is $k$-unbalanced if the subderivation contains the leaf of the $k$-zone (that remains a leaf in $M_k$), this condition ensures that the $k$-zone has at most one leaf, and thus forms a branch in $M_k$. Third, observe that the $(k+1)$-value in our node is set to $1$ if, in $M_k$, it is the closest ancestor of some node with positive $k$-value that lies in the $k$-zone. Indeed, suppose that the current node is still present in $M_k$ (i.e., that $k>\mathit{ord}(L)$), and that it belongs to the $k$-zone (i.e., that $k\leq Z$). Moreover, suppose that in $M_k$ the $k$-value is positive in some node of the subderivation number $i$ (i.e., that $k\leq F_i$), where $i\in\{0\}\cup I$.
If $k\leq Z_i$, then the closest ancestor being in the $k$-zone is already in the subderivation (because its root belongs to the $k$-zone). Conversely, if $k>Z_i$, the closest ancestor being in the $k$-zone is in our node. Recall that (by definition of type triples) we always have $F_i\leq Z_i+1$. All the inequalities hold when $\mathit{ord}(L)+1\leq Z_i+1=k=F_i\leq Z$, and this is exactly the situation when we set the $(k+1)$-value of the current node to $1$. If the node is also in the $(k+1)$-zone (i.e., if $k+1\leq Z$), then the closest ancestor being in the $(k+1)$-zone is in the node itself. It thus makes sense that we also set the $(k+2)$-value of the current node to $1$. Repeating this again, we should set to $1$ the values of all orders in $\{k+1,\dots,Z+1\}$. Denoting the $k$-value of a derivation $D$ by $\mathit{val}^k(D)$, we can state the desired properties of our type system. \begin{theorem}\label{thm:nondet} The following holds for the type system introduced above: \begin{compactenum}[(N1$''$)] \item\label{it:n1pp} if $M$ is a homogeneous and closed lambda-term of sort $\mathsf{o}$, and $D$ is a derivation for $\vdash_m M:(m,m,\r)$, then the number of constants $a$ on some branch of the normal form of $M$ is at least $\mathit{val}^{m+1}(D)$; \item\label{it:n2pp} for every $m\in\mathbb{N}$ there is a function $\eta_m\colon\mathbb{N}\to\mathbb{N}$ such that if $M$ is a homogeneous and closed lambda-term of sort $\mathsf{o}$ and complexity at most $m$, and on some branch of the beta-normal form of $M$ there are $n\geq 1$ appearances of the constant $a$, then there is a derivation $D$ for $\vdash_m M:(m,m,\r)$ such that $n\leq\eta_m(\mathit{val}^{m+1}(D))$. \end{compactenum} \end{theorem} \begin{example}\label{ex:2} Let us consider the same lambda-term as in Example~\ref{ex:1}, namely $M=(\lambda y.N\,(N\,(N\,y))\,(a\,e))\,a$ with $N=\lambda y.\lambda x.y\,(y\,x)$. As $m$ we take its complexity, that is, $2$. Notice that after performing all beta-reductions of order $2$, we obtain the lambda-term $M_1=(\lambda x.N_2\,(N_2\,x))\,(a\,e)$ with $N_2=\lambda x.N_1\,(N_1\,x)$ and $N_1=\lambda x.a\,(a\,x)$. In this term, the $1$-zone, which has to be a branch, can descend into one of the subterms $N_2$, then into one of the subterms $N_1$, and then it can finish in one of the constants $a$. In effect, while typing $M$, we need two derivations for $N$, one where the lambda-term belongs to the $1$-zone, and one where it does not. Denote $\tau_y=(0,0,\r)\mathbin{\to}\r$. Outside of the $1$-zone, we only pass (from the argument) the information that the $1$-value is positive: \begin{mathpar} \inferrule*[Right=$(\lambda)$,leftskip=-8.5em,rightskip=-13.8em]{ \inferrule*[Right=$(\lambda)$,leftskip=1.6em,rightskip=1.6em]{ \inferrule*[Right=$(@)$,leftskip=6.9em,rightskip=6.9em]{ y:(0,1,\tau_y)\vdash_2 y:(0,1,\tau_y) \and \inferrule*[right=$(@)$,leftskip=1em,rightskip=5.3em]{ y:(0,1,\tau_y)\vdash_2 y:(0,1,\tau_y) \and x:(0,0,\r)\vdash_2 x:(0,0,\r) }{ y:(0,1,\tau_y),\,x:(0,0,\r)\vdash_2 y\,x:(0,1,\r) } }{ y:(0,1,\tau_y),\,x:(0,0,\r)\vdash_2 y\,(y\,x):(0,1,\r) } }{ y:(0,1,\tau_y)\vdash_2\lambda x.y\,(y\,x):(0,1,\tau_y) } }{ \vdash_2 N:(0,1,(0,1,\tau_y)\mathbin{\to}\tau_y) } \end{mathpar} Notice that in the second (i.e., lower) node using the \TirName{(\!@\!)}\xspace rule, the function of type $\tau_y$, that is $(0,0,\r)\mathbin{\to}\r$, accepts an argument with type triple $(0,1,\r)$. 
This is correct, because according to the \TirName{(\!@\!)}\xspace rule, the function receives the information only about zones and values of order not greater than the order of the argument, which is $0$ in our case, and indeed we have $(\min(0,0),\min(1,0),\r)=(0,0,\r)$. Let us now see what happens inside the $1$-zone: \begin{mathpar} \inferrule*[Right=$(\lambda)$,leftskip=-6.1em,rightskip=-11.4em]{ \inferrule*[Right=$(\lambda)$,leftskip=1.6em,rightskip=1.6em]{ \inferrule*[Right=$(@)$,leftskip=4.5em,rightskip=4.5em]{ y:(0,1,\tau_y)\vdash_2 y:(0,1,\tau_y) \and \inferrule*[right=$(@)$,leftskip=1em,rightskip=5.3em]{ y:(1,1,\tau_y)\vdash_2 y:(1,1,\tau_y) \and x:(0,0,\r)\vdash_2 x:(0,0,\r) }{ y:(1,1,\tau_y),\,x:(0,0,\r)\vdash_2 y\,x:(1,1,\r) } }{ y:(0,1,\tau_y),\,y:(1,1,\tau_y),\,x:(0,0,\r)\vdash_2 y\,(y\,x):(1,2,\r) } }{ y:(0,1,\tau_y),\,y:(1,1,\tau_y)\vdash_2\lambda x.y\,(y\,x):(1,2,\tau_y) } }{ \vdash_2 N:(1,2,(0,1,\tau_y)\wedge(1,1,\tau_y)\mathbin{\to}\tau_y) } \end{mathpar} In the second (i.e., lower) node using the \TirName{(\!@\!)}\xspace rule, the information about a positive $1$-value (coming from the left subderivation) meets the $1$-zone (coming from the right subderivation), and thus the $2$-value of this node is $1$. Denoting the type $(0,1,\tau_y)\mathbin{\to}\tau_y$ as $\tau^0_N$ and $(0,1,\tau_y)\wedge(1,1,\tau_y)\mathbin{\to}\tau_y$ as $\tau_N^1$, we continue the derivation for $M$. We choose to start the $2$-zone in a leaf concerning $y$. \begin{mathpar} \inferrule*[right=$(@)$]{ \vdash_2 N:(1,2,\tau_N^1) \and y:(0,1,\tau_y)\vdash_2 y:(0,1,\tau_y) \and y:(1,1,\tau_y)\vdash_2 y:(2,1,\tau_y) }{ y:(0,1,\tau_y),\,y:(1,1,\tau_y)\vdash_2 N\,y:(2,2,\tau_y) } \end{mathpar} This results in having a node with $3$-value $1$. As we want to continue in the same way with $N\,(N\,y)$ and $N\,(N\,(N\,y))$, we need to derive $(0,1,\tau_y)$ for $N\,y$ and $N\,(N\,y)$ (which describes the situation outside of the $1$-zone): \begin{mathpar} \inferrule*[Right=$(@)$,rightskip=-5.3em]{ \vdash_2 N:(0,1,\tau_N^0) \and \inferrule*[right=$(@)$,leftskip=1em,rightskip=5.3em]{ \vdash_2 N:(0,1,\tau_N^0) \and y:(0,1,\tau_y)\vdash_2 y:(0,1,\tau_y) }{ y:(0,1,\tau_y)\vdash_2 N\,y:(0,1,\tau_y) } }{ y:(0,1,\tau_y)\vdash_2 N\,(N\,y):(0,1,\tau_y) } \end{mathpar} We continue as follows, obtaining two more nodes with $3$-value $1$: \begin{mathpar} \inferrule*[right=$(@)$]{ \vdash_2 N:(1,2,\tau_N^1) \and y:(0,1,\tau_y)\vdash_2 N\,y:(0,1,\tau_y) \and y:(0,1,\tau_y),\,y:(1,1,\tau_y)\vdash_2 N\,y:(2,2,\tau_y) }{ y:(0,1,\tau_y),\,y:(1,1,\tau_y)\vdash_2 N\,(N\,y):(2,2,\tau_y) } \and \inferrule*[right=$(@)$]{ \vdash_2 N:(1,2,\tau_N^1) \and\hspace{-10.4pt} y:(0,1,\tau_y)\vdash_2 N\,(N\,y):(0,1,\tau_y) \and\hspace{-10.4pt} y:(0,1,\tau_y),\,y:(1,1,\tau_y)\vdash_2 N\,(N\,y):(2,2,\tau_y) }{ y:(0,1,\tau_y),\,y:(1,1,\tau_y)\vdash_2 N\,(N\,(N\,y)):(2,2,\tau_y) } \end{mathpar} Next we apply the argument $a\,e$, obtaining one more node with $3$-value $1$: \begin{mathpar} \inferrule*[Right=$(@)$,leftskip=0em,rightskip=-5.5em]{ y:(0,1,\tau_y),\,y:(1,1,\tau_y)\vdash_2 N\,(N\,(N\,y)):(2,2,\tau_y) \and \inferrule*[right=$(@)$,leftskip=1em,rightskip=5.5em]{ \vdash_2 a:(0,1,\tau_y) \and \vdash_2 e:(0,0,\r) }{ \vdash_2 a\,e:(0,1,\r) } }{ y:(0,1,\tau_y),\,y:(1,1,\tau_y)\vdash_2 N\,(N\,(N\,y))\,(a\,e):(2,2,\tau_y) } \end{mathpar} In the last part of the derivation we also have a node with $3$-value $1$: \begin{mathpar} \inferrule*[right=$(@)$]{ \inferrule*[right=$(\lambda)$,rightskip=1em]{ y:(0,1,\tau_y),\,y:(1,1,\tau_y)\vdash_2 
N\,(N\,(N\,y))\,(a\,e):(2,2,\tau_y) }{ \vdash_2\lambda y.N\,(N\,(N\,y))\,(a\,e):(2,2,(0,1,\tau_y)\wedge(1,1,\tau_y)\mathbin{\to}\r) } \and \vdash_2 a:(0,1,\tau_y) \and \vdash_2 a:(1,2,\tau_y) }{ \vdash_2 M:(2,2,\r) } \end{mathpar} As in Example~\ref{ex:1}, the total $3$-value of the derivation is $5$, and by adding any further $N$ to the sequence $N\,(N\,(N\,y))$, we can increase the $3$-value by $1$. \end{example} \begin{example}\label{ex:br} Let us also illustrate on a very simple example how the rule for the constant $b$ behaves: \begin{mathpar} \inferrule*[right=$(@)$,leftskip=-5.8em,rightskip=-7.7em]{ \inferrule*[right=$(@)$,leftskip=5.8em,rightskip=7.7em]{ \vdash_2 b:(0,0,(0,0,\r)\to\top\to\r) \and \vdash_2 M:(2,2,\r) }{ \vdash_2 b\,M:(2,2,\top\to\r) } }{ \vdash_2 b\,M\,e:(2,2,\r) } \end{mathpar} We thus simply ignore one of the arguments of $b$. Notice that the second use of the application rule does not require any subderivations for the argument. \end{example} \paragraph*{Proofs.} Let us now sketch the proof of Theorem~\ref{thm:nondet}. Condition~\hyperref[it:n1pp]{(N1$''$)} is based on the following two lemmata. \begin{lemma}\label{lem:decrease-m} Let $M$ be a closed lambda-term of sort $\mathsf{o}$ and complexity at most $m+1$. If $D_{m+1}$ is a derivation for $\vdash_{m+1} M:(m+1,m+1,\mathsf{o})$, then there exists a derivation $D_m$ for $\vdash_m M:(m,m,\mathsf{o})$ with $\mathit{val}^{m+1}(D_m)\geq\mathit{val}^{m+2}(D_{m+1})$. \end{lemma} \begin{lemma}\label{lem:s-step} Let $M$ be a homogeneous and closed lambda-term of sort $\mathsf{o}$, and let $M\to_\beta N$ be an RMF$(m+1)$ reduction. If $D$ is a derivation for $\vdash_m M:(m,m,\mathsf{o})$, then there exists a derivation $E$ for $\vdash_m N:(m,m,\mathsf{o})$ with the same $(m+1)$-value. \end{lemma} Condition~\hyperref[it:n2pp]{(N2$''$)} is based on two symmetric lemmata. \begin{lemma}\label{lem:c-step} Let $M$ be a homogeneous and closed lambda-term of sort $\mathsf{o}$, and let $M\to_\beta N$ be an RMF$(m+1)$ reduction. If $E$ is a derivation for $\vdash_m N:(m,m,\mathsf{o})$, then there exists a derivation $D$ for $\vdash_m M:(m,m,\mathsf{o})$ with the same $(m+1)$-value. \end{lemma} \begin{lemma}\label{lem:increase-m} If $D_m$ is a derivation for $\vdash_m M:(m,m,\mathsf{o})$, then there exists a derivation $D_{m+1}$ for $\vdash_{m+1} M:(m+1,m+1,\mathsf{o})$ with $\mathit{val}^{m+2}(D_{m+1})\geq\log_3(\mathit{val}^{m+1}(D_m))$. \end{lemma} Theorem~\ref{thm:nondet} follows easily from these lemmata. Indeed, consider a homogeneous and closed lambda-term $M_m=M$ of sort $\mathsf{o}$ and complexity at most $m$, its normal form $M_0$, and lambda-terms $M_{m-1},M_{m-2},\dots,M_1$ such that all reductions between $M_i$ and $M_{i-1}$ are RMF$(i)$. In Condition~\hyperref[it:n1pp]{(N1$''$)} we start with a derivation $D_m$ for $\vdash_m M_m:(m,m,\r)$. Then, repeatedly for every $i=m-1,m-2,\dots,0$ we first apply Lemma~\ref{lem:decrease-m} to $D_{i+1}$ (with conclusion $\vdash_{i+1} M_{i+1}:(i+1,i+1,\mathsf{o})$), obtaining a derivation $D_i'$ for $\vdash_i M_{i+1}:(i,i,\mathsf{o})$ with $(i+1)$-value not smaller than the $(i+2)$-value of $D_{i+1}$, and next we apply Lemma~\ref{lem:s-step} to every RMF$(i+1)$-reduction between $M_{i+1}$ and $M_i$, obtaining a derivation $D_i$ for $\vdash_i M_i:(i,i,\mathsf{o})$ with the same $(i+1)$-value as $D_i'$. In effect, we obtain a derivation $D_0$ for $\vdash_0 M_0:(0,0,\r)$ with $1$-value not smaller than the $(m+1)$-value of the original derivation $D_m$.
We conclude by observing that $D_0$ simply follows some branch of $M_0$, and that its $1$-value equals the number of constants $a$ on that branch. Conversely, while proving Condition~\hyperref[it:n2pp]{(N2$''$)}, at the beginning we construct a derivation $D_0$ for $\vdash_0 M_0:(0,0,\r)$, following some branch of $M_0$; the $1$-value of this derivation equals the number of constants $a$ on the considered branch. Then, repeatedly for every $i\in\{0,\dots,m-1\}$ we first apply Lemma~\ref{lem:c-step} for every RMF$(i+1)$-reduction between $M_{i+1}$ and $M_i$, obtaining a derivation $D_i'$ for $\vdash_i M_{i+1}:(i,i,\mathsf{o})$ with the same $(i+1)$-value as $D_i$, and next we apply Lemma~\ref{lem:increase-m} to $D_i'$, obtaining a derivation $D_{i+1}$ for $\vdash_{i+1} M_{i+1}:(i+1,i+1,\mathsf{o})$ with $(i+2)$-value at most logarithmically smaller than the $(i+1)$-value of $D_i$. In effect, we obtain a derivation $D_m$ for $\vdash_m M_m:(m,m,\r)$ with an $(m+1)$-value $v$ such that $\eta_m(v)$ dominates the number of constants $a$ on the selected branch of the beta-normal form $M_0$. It remains to prove the lemmata. In Lemma~\ref{lem:decrease-m} we are given a derivation $D_{m+1}$ (of order $m+1$) concerning a lambda-term of complexity at most $m+1$. In such a derivation, a node has positive $(m+2)$-value (equal $1$) if it is the closest ancestor of a node with positive $(m+1)$-value that is in the $(m+1)$-zone (because all variables are of order at most $m$, the information about positive $(m+1)$-values is not passed through type environments). Of course every node has only one closest ancestor that is in the $(m+1)$-zone, thus the total $(m+2)$-value is not greater than the total $(m+1)$-value. Having this, we decrease the order of the derivation to $m$, by simply forgetting about $(m+2)$-values and about the $(m+1)$-zone; the total $(m+1)$-value remains unchanged. Lemmata~\ref{lem:s-step} and~\ref{lem:c-step} can be shown by performing appropriate surgeries on the derivations, like in Section~\ref{sec:deterministic}. One has to observe there that if a subderivation (for a lambda-term of order $m$) derives a balanced type triple, then its $(m+1)$-value is $0$, and its type environment can contain only bindings with balanced type triples. In effect, we can treat subderivations deriving balanced and unbalanced type triples differently. Namely, subderivations deriving balanced type triples can be harmlessly removed or duplicated. Indeed, on the one hand, these operations do not change the total $(m+1)$-value. On the other hand, while removing such a subderivation, only bindings with balanced type triples are removed from type environments; this does not cause problems in nodes using the \TirName{($\lambda$)}\xspace rule, because this rule allows dropping some balanced type triples. Finally, for every $k$ the surgery needs to move at most one subderivation deriving a $k$-unbalanced type triple, so no removal or duplication is needed for such subderivations. In Lemma~\ref{lem:increase-m}, we have to add an $(m+1)$-zone to a derivation of order $m$. Starting from the root of the derivation, we repeatedly descend to the subderivation in which the total $(m+1)$-value is the greatest (arbitrarily in the case of a tie); the branch created in this way is taken as the $(m+1)$-zone (see the sketch below).
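The following minimal Python sketch (the tree representation and all names are ours, purely for illustration) implements this branch selection on an abstract derivation tree; it returns the number of nodes that receive a positive $(m+2)$-value along the chosen branch:

\begin{verbatim}
class Node:
    def __init__(self, value, children=()):
        self.value = value            # own (m+1)-value of this node
        self.children = list(children)

def total(node):
    # total (m+1)-value of the subderivation rooted at 'node'
    # (recomputed on every call, for clarity rather than efficiency)
    return node.value + sum(total(c) for c in node.children)

def mark_zone(root):
    # Descend to the child with the greatest total (m+1)-value; the
    # visited nodes form the (m+1)-zone.  A node gets a positive
    # (m+2)-value exactly when the running total strictly drops there,
    # i.e., when the node itself carries a value or a sibling subtree
    # has a positive total.
    marked, node = 0, root
    while node.children:
        heavy = max(node.children, key=total)
        if total(heavy) < total(node):
            marked += 1
        node = heavy
    return marked
\end{verbatim}

As argued next, each such descent step shrinks the total by a factor of at most three (except a final drop from $1$ to $0$), so \texttt{mark\_zone} returns a value that is at least logarithmic in the total $(m+1)$-value.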
If, while descending from some node to its child, the total $(m+1)$-value decreases (i.e., either the node itself has $(m+1)$-value $1$, or a subderivation starting in some other child also has a positive $(m+1)$-value), then the node gets a positive $(m+2)$-value: it is the closest ancestor of some node with positive $(m+1)$-value that is in the $(m+1)$-zone. This can happen only in the case of the \TirName{(\!@\!)}\xspace rule. In the \TirName{(\!@\!)}\xspace rule one may observe that if a subderivation derives an $m$-balanced type triple for the argument, then its total $(m+1)$-value is $0$. We can thus have at most two subderivations (among those starting in children) with positive $(m+1)$-value: one for the operator $K$, and one concerning an $m$-unbalanced type triple for the argument. In consequence, while descending to a subderivation, the total $(m+1)$-value decreases by a factor of at most three (except that it can decrease from $1$ to $0$). It follows that the total $(m+2)$-value is at least logarithmic in the total $(m+1)$-value. \paragraph*{Extension to Recursion Schemes.} Theorem~\ref{thm:nondet} can also be stated for infinite lambda-terms (hence, in particular, for regular infinite lambda-terms represented in a finite way by recursion schemes). The caveat is that we then consider only finite type derivations, and only finite branches of the generated tree (i.e., branches ending in a leaf). Notice that a type derivation for an infinite lambda-term can be finite, because a derivation does not need to descend to every subterm of the lambda-term. We claim that, under these assumptions, Theorem~\ref{thm:nondet} is true for infinite lambda-terms. To see this, consider a new constant $\bot$ of sort $\mathsf{o}$; it differs from $e$ in that we do not have a typing rule for $\bot$. A \emph{cut} of a lambda-term $M$ is a lambda-term obtained from $M$ by replacing some of its subterms with lambda-terms of the form $\lambda x_1.\cdots{}.\lambda x_k.\bot$ (the number of the variables and their sorts are chosen so that the sort of the subterm does not change). It is easy to see that there is a finite derivation for $\vdash_m M:\hat\tau$ if and only if there is a derivation for $\vdash_m M':\hat\tau$, with the same values, for some finite cut $M'$ of $M$ (we can cut off subterms not involved in the derivation). Likewise, the tree generated by a closed lambda-term $M$ of sort $\mathsf{o}$ contains some finite branch $B$ if and only if the tree generated by some finite cut $M'$ of $M$ contains the same branch $B$ (the finite branch is generated after finitely many beta-reductions, concerning only a top part of $M$, and subterms located deeper in $M$ can be cut off). In this way, the infinitary version of Theorem~\ref{thm:nondet} can be reduced to the original statement concerning finite lambda-terms. Because in a single infinite tree we can have branches with arbitrarily many constants $a$, it makes sense to give the following direct corollary of Theorem~\ref{thm:nondet}. \begin{corollary}\label{cor:diag} The following conditions are equivalent for a homogeneous and closed (potentially infinite) lambda-term $M$ of sort $\mathsf{o}$: \begin{compactitem} \item for every $n\in\mathbb{N}$, in the tree generated by $M$ there exists a branch with at least $n$ appearances of the constant $a$, and \item for every $n\in\mathbb{N}$, there exists a derivation for $\vdash_m M:(m,m,\r)$ with $(m+1)$-value at least $n$.
\end{compactitem} \end{corollary} Because the latter condition is easily decidable for lambda-terms represented by recursion schemes, the corollary implies decidability of the former condition. \paragraph*{Bibliographic Note.} The type system presented in this section is essentially taken from Parys~\cite{itrs}; we have applied some cosmetic changes, though. In Parys~\cite{diagonal-types} the type system is extended to the task of counting multiple constants: the $(m+1)$-value is not a number, but a tuple, where each coordinate of the tuple estimates the number of appearances of a particular constant. In particular, Corollary~\ref{cor:diag} is extended there to the property ``for every $n\in\mathbb{N}$, in the tree generated by $M$ there exists a branch with at least $n$ appearances of every constant from a set $A$'', giving its decidability. Deciding this property is known under the names \emph{simultaneous unboundedness problem} (SUP) and \emph{diagonal problem} (these are two different names for the same problem). SUP for recursion schemes was first solved in Clemente, Parys, Salvati, and Walukiewicz~\cite{downward-closure}, in a different way. The advantage of solving SUP using the type system presented here is twofold. First, the solution via the type system allows one to obtain the optimal complexity, while the complexity of the original solution was much worse. Second, using the type system we can obtain so-called \emph{SUP reflection}: we can solve SUP simultaneously for all subtrees of the generated tree. More precisely, out of a recursion scheme we can create a new recursion scheme that generates a tree of the same shape as the original one, but such that the label of every node additionally contains the answer to SUP in the subtree starting in that node (i.e., the information whether branches with arbitrarily many appearances of every constant from a set $A$ start in that node). SUP reflection made it possible to solve the model-checking problem for trees generated by recursion schemes against formulae of the WMSO+U logic~\cite{wmsou-schemes}. This logic extends WMSO (a fragment of MSO in which one can quantify only over finite sets) by the unbounding quantifier, $\mathsf U$. A formula using this quantifier, $\mathsf U X.\,\varphi$, says that $\varphi$ holds for arbitrarily large finite sets $X$. Let us also remark that decidability of SUP implies that given a language defined by a nondeterministic recursion scheme, it is possible to compute its downward closure~\cite{Zetzsche-down-clo}, and given two such languages, it is possible to decide whether they can be separated by a piecewise testable language~\cite{Czerwinski-piecewise}. The type system presented here is also used by Asada and Kobayashi~\cite{koba-pumping-new} in their work on a pumping lemma for recursion schemes. The type system was inspired by the previous solution of SUP by Clemente et al.~\cite{downward-closure}. The idea of having balanced and unbalanced type triples, and treating them differently in type environments, comes from Asada and Kobayashi~\cite{word2tree}. \section{Branching Quantities} Finally, we briefly mention one more quantity to be considered. In this part, suppose that the constant $a$ is of sort $\mathsf{o}\mathbin{\to}\mathsf{o}\mathbin{\to}\mathsf{o}$, that is, nodes with label $a$ have two children. For $n\in\mathbb{N}$, let $A_n$ be the full binary tree of height $n$, with all internal nodes labeled by $a$, and all leaves labeled by $e$.
We say that $A_n$ \emph{embeds homeomorphically} in a tree $T$ if $T$ has a subtree of the form $a\,T_1\,T_2$ such that $A_{n-1}$ embeds homeomorphically in both $T_1$ and $T_2$ (the definition proceeds by induction); $A_0=e$ embeds homeomorphically in every tree having a leaf labeled by $e$. Given a tree $T$, one may want to find the maximal height $n$ of a tree $A_n$ that embeds homeomorphically in $T$. It is an open problem how to estimate this quantity using a type system (or in any other way). \begin{openpr}\label{op:2} Design a type system such that the maximal value (appropriately defined) of a type derivation for a closed lambda-term $M$ of sort $\mathsf{o}$ estimates the maximal number $n$ such that $A_n$ embeds homeomorphically in the beta-normal form of $M$. \end{openpr} As in Section~\ref{sec:nondeterministic} (cf.\ Corollary~\ref{cor:diag}), existence of such a type system would solve the following problem concerning infinite lambda-terms represented by recursion schemes. \begin{openpr}\label{op:3} Given a recursion scheme, decide whether for every $n$ the tree $A_n$ embeds homeomorphically in the (infinite) tree generated by the scheme. \end{openpr} A naive idea is to take the type system from Section~\ref{sec:nondeterministic}, and to change the rule for the constant $a$ into $\vdash_m a:(Z,\min(Z+1,m),(0,0,\r)\mathbin{\to}(0,0,\r)\mathbin{\to}\r)$. Notice, though, that if we derive a type for a tree $T$ using such a type system, the value of the derivation counts the maximal number of constants $a$ in a tree that embeds homeomorphically in $T$. This is not what we want since, for example, if all $a$ are located on a single branch, then their number can be arbitrarily large while only $A_1$ can be embedded. In other words, we add values from the two children of an $a$-labeled node, while we should take their minimum. It seems that Open Problems~\ref{op:2} and~\ref{op:3} are closely related to the problem of computing the downward closures of languages of finite trees generated by nondeterministic recursion schemes (we remark that the downward closure of every language of finite trees is a regular language, due to Kruskal's tree theorem). If we want to compute the downward closure of a language, we have to decide in particular whether it contains trees $A_n$ for all $n\in\mathbb{N}$, that is, whether all $A_n$ embed homeomorphically in trees from the language. As in the case of words, downward closures are also related to the problem of deciding whether two languages can be separated by a piecewise testable language. Goubault-Larrecq and Schmitz~\cite{schmitz-kruskal} derive a general framework for solving piecewise testable separability for languages of trees. It is highly probable that Open Problem~\ref{op:3} can be solved for a subclass of recursion schemes, called safe recursion schemes, using methods from Blumensath, Colcombet, Kuperberg, Parys, and Vanden Boom~\cite{quasi-weak}. This requires further investigation. \bibliographystyle{eptcs}
\section{Introduction} \label{sec:intro} Virtual assistants allow users to interact with devices such as speakers, phones, watches and headphones via voice commands. Typically, voice commands from a user are prefixed with a \emph{trigger phrase}. The presence of the trigger phrase at the beginning of an utterance helps distinguish audio that is directed towards the assistant from background speech. The problem of accurately detecting a trigger phrase is known as voice trigger detection \cite{MLBlogHS,bridle1973efficient}, wake-word detection \cite{kumatani2017direct,jose2020accurate}, or keyword spotting \cite{266505,fernandez2007application,chen2014small,lin2020training,rose1990hidden}. This work is motivated by the observation that audio following the trigger phrase can contain a strong signal about whether an utterance was directed towards the assistant or not. We found that the top 10 most popular words that follow the trigger phrase \textbf{account for 80\% of the distribution}, while the top 20 words account for 90\% of the distribution. Although it is clear that audio following the trigger phrase can help in determining whether an utterance is directed towards the assistant, this improved accuracy comes at the cost of latency. Therefore, to design a practical voice trigger detector that runs on-device, simply waiting to listen to more audio is not sufficient. In this study, we first present a model and experiments that demonstrate that detection accuracy does indeed improve as we add more audio after the trigger phrase, i.e. the \emph{same model} is able to \emph{progressively} improve its estimates as we add more audio context after the trigger phrase. We then devise a two-stage architecture where the model produces an early and a late score (Figure \ref{windows}). We show that the early score is sufficient for a majority of examples in the test set, while the late score allows us to make better decisions for more difficult/marginal cases. This two-stage design allows us to achieve a favourable balance between accuracy and latency. We show that by delaying triggering for only 3\% of true utterances in the test set, we are able to reduce the number of false rejects by 66\% for the same FA rate, while incurring a 17\% relative increase in expected latency for accepted true triggers. Note that a similar idea was recently proposed in \cite{wangaudio,kumarbuilding}; however, the model in their work is used to \emph{verify} whether a given segment contains the trigger phrase \emph{on the server}, whereas our proposal is for an on-device voice trigger detector. \begin{figure}[t] \begin{minipage}[]{1\linewidth} \centering \centerline{\includegraphics[width=0.8\textwidth]{spectrogram_v2.png}} \caption{Two-stage design: An example of early and late decision boundaries used in this paper. Early decisions result in a faster response, but later decisions provide greater accuracy.} \label{windows} \end{minipage} \end{figure} \vspace{-2.5mm} \section{Model} \label{sec:format} We employ a two-stage architecture for voice trigger detection \cite{MLBlogHS}. The first stage comprises a low-power detector that processes streaming audio and is \emph{always-on} \cite{sigtia2018,higuchi2020s1dcnn}. If a detection is made at the first stage, the detector marks the start and end points of the purported keyword segment (Figure \ref{windows}) and the segment is then re-scored by larger, more complex models \cite{sigtia2020mtl,adya2020hybrid}.
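As a rough sketch of this cascade (all names and the streaming interface below are hypothetical placeholders, not the actual implementation), the control flow is:

\begin{verbatim}
def run_detector(stream, first_pass, second_pass, thr1, thr2):
    # First stage: low-power detector, always on over streaming audio.
    for window in stream.windows():
        if first_pass.score(window) < thr1:
            continue                        # no candidate trigger here
        # First stage marks the purported keyword segment.
        start, end = first_pass.keyword_segment(window)
        # Second stage: a larger model re-scores the marked segment.
        if second_pass.score(stream.audio(start, end)) >= thr2:
            return (start, end)             # trigger accepted
    return None
\end{verbatim}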
Note that this paper is concerned only with the larger models in the second pass. \subsection{Architecture} Following previous work \cite{sigtia2020mtl,adya2020hybrid,sigtia2020mtlspk}, the input to the model is a 40-dimensional Mel filterbank, computed at a rate of 100 frames per second (FPS) from the input audio. We stack 7 contiguous frames together to form an input window and down-sample the sequence of windows by a factor of 3. These features are input to a stack of 4 bidirectional LSTM layers with 256 units in each of the forward and backward layers. The network is trained using multi-task learning (MTL) \cite{caruana1997multitask}, by minimising 2 different objectives simultaneously \cite{sigtia2020mtl,sigtia2020mtlspk}. The first objective is to assign the highest score to the correct sequence of context-independent phonetic labels. This is done by minimising the connectionist temporal classification (CTC) loss \cite{ctc_graves}. The output layer or \emph{head} corresponding to this objective comprises an affine transformation followed by a softmax non-linearity \cite{bridle1990probabilistic} over 54 output units. These units cover the set of context-independent phonemes, word and sentence boundaries, and the blank symbol used by the CTC loss. We refer to this loss as the \emph{phonetic loss} in the rest of the paper. The second objective used to train the model is a binary sequence classification loss. The positive class corresponds to utterances that are intended for the assistant. These examples are of the type ``\textbf{$\langle$trigger phrase$\rangle$, $\langle$payload$\rangle$}''. The negative class corresponds to difficult examples that result in false detections. The output head for this loss contains an affine transformation and a softmax non-linearity over 2 units, the positive and negative classes. The sequence classification loss is defined as follows: \begin{equation*} \begin{aligned} C_{pos} = C_{\text{CTC}}(\text{Trigger} | \mathbf{X}), \\ C_{neg} = -\log \sum_t y_t^{n}, \end{aligned} \end{equation*} where $C_{\text{CTC}}$ represents the CTC loss function for \emph{positive examples}, i.e. the input features $\mathbf{X}$ contain the trigger phrase, $y_t^{n}$ represents the network output for the negative class at time $t$, and $C_{pos}$, $C_{neg}$ represent the losses for positive and negative classes respectively. We refer to this as the \emph{discriminative loss}. Given that the network architecture comprises bidirectional layers, the sum over time in the objective ignores \emph{when} the network produces a large output, since the output at \emph{every time-step} is conditioned on the entire input sequence. The negative loss on the other hand encourages the network to produce a large value for the negative label at \emph{every} time-step. \subsection{Inference} Although the 2 branches of the network (phonetic and discriminative) are trained jointly with all the weights in the biLSTM layers shared, they learn to perform very different tasks. The phonetic branch learns to assign high probabilities to the correct sequence of output labels. During inference, the phonetic branch can be used to compute probabilities for a given phone sequence, e.g. $p(\text{VoiceTriggerPhoneSeq }|\text{ audio})$. Although useful as demonstrated in previous work \cite{sigtia2020mtl,adya2020hybrid}, the phonetic branch cannot be used to score the payload, where we are not sure what the phonetic content in the audio will be.
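Scoring a fixed phone sequence with the phonetic branch amounts to a standard CTC forward pass over the per-frame posteriors. A minimal numpy sketch is given below (illustration only; a practical implementation would work in log space for numerical stability):

\begin{verbatim}
import numpy as np

def ctc_score(probs, seq, blank=0):
    # probs: (frames x labels) softmax outputs; seq: phone labels of
    # the trigger phrase.  Interleave the labels with blanks.
    ext = [blank]
    for label in seq:
        ext += [label, blank]
    T, S = probs.shape[0], len(ext)
    alpha = np.zeros((T, S))                # forward variables
    alpha[0, 0] = probs[0, ext[0]]
    alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]    # skip over a blank
            alpha[t, s] = a * probs[t, ext[s]]
    # Valid paths end in the last label or in the final blank.
    return alpha[T - 1, S - 1] + alpha[T - 1, S - 2]
\end{verbatim}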
The discriminative branch, on the other hand, is trained to perform sequence classification. There are only 2 output classes and there is no concept of a phone sequence. During training, the network must learn which cues in the audio are useful predictors of the correct label. This design is compatible with the task at hand, where we do not know what follows the trigger phrase, but we hope that as we input more audio to the network it is able to more accurately predict the correct binary label. Therefore, in all experiments presented in this paper, we use the discriminative branch of the \emph{same model} for inference. The phonetic branch can be regarded as a \emph{regulariser} since it is only present during training. \subsection{Training Data} \label{sec:pagestyle} We use exactly the same phonetic training data as described in \cite{sigtia2020mtl,adya2020hybrid}. This dataset comprises 8000 hours of transcribed audio obtained from intended invocations of the voice assistant. We start with a clean dataset of near-field examples recorded on mobile phones. We then reverberate the clean dataset by mixing with a set of 3000 room impulse responses. Finally, we mix this dataset with echo residuals to simulate the effects of echo cancellation algorithms running on the device \cite{MLBlogFrontEnd}. The discriminative dataset, on the other hand, is significantly smaller. The positive set contains 140,000 examples while the negative set contains 40,000 utterances. In previous experiments \cite{sigtia2020mtl,adya2020hybrid}, we only used audio segments that correspond to the trigger phrase. However, in this study, we are also interested in making use of the audio \emph{following} the trigger phrase. Therefore, for each example in the training set, we form the following segments: trigger phrase, trigger phrase + 0.5 seconds, trigger phrase + 1 second, trigger phrase + 1.5 seconds, trigger phrase + 2 seconds and the whole utterance. Given that the training set is relatively small, it is possible that the network could overfit to spurious acoustic cues in the training data. By showing the network multiple \emph{views} of the same utterance, we hope to reduce the chances of overfitting. \subsection{Model Training} We use large-batch synchronous stochastic gradient descent \cite{chen2016revisiting} for training the models. Each mini-batch per GPU comprises 128 training examples and we use 32 GPUs in parallel. We use the Adam optimiser for weight updates and an initial learning rate of 0.0008. We use gradient clipping to avoid gradient explosion in the early stages of training, clipping the norm of the gradient to a value of 20. \section{Experiments} \begin{figure}[t] \begin{minipage}[]{1.0\linewidth} \centering \centerline{\includegraphics[width=0.8\textwidth]{DETstt_v2.png}} \caption{DET curves as a function of post-trigger audio context. Secondary axes labels show False Alarm (FA) and False Reject (FR) counts.} \label{post_trigger} \end{minipage} \end{figure} We use the same evaluation dataset as described in \cite{sigtia2020mtl,adya2020hybrid,sigtia2020mtlspk} without any changes. The test set is the result of a structured data collection where subjects directed a series of voice commands towards the device. There were 100 subjects, approximately balanced between male and female adults.
There are over 13k utterances overall, evenly divided among four acoustic conditions: (a) quiet room, (b) external noise from a TV or kitchen appliance in the room, (c) music playback from the recording device at medium volume, and (d) music playback from the recording device at loud volume, the most challenging condition. These examples are used to measure the proportion of false rejections or the False Rejection Rate (FRR). In addition to these recordings, we also use 2,000 hours of continuous audio recordings from TV, radio, and podcasts to measure the false-alarm (FA) rate in terms of the number of hours of playback per FA. As a first experiment, we investigate the effect of adding more audio after the trigger phrase on overall accuracies. As described before, we obtain start and end points for the trigger phrase from a first-pass DNN-HMM model. We then vary the amount of audio added after the end of the purported trigger phrase. We add segments of lengths \{0.3, 0.5, 1, 1.5, 2\} seconds after the end of the trigger phrase. These segments are then input to the discriminative branch of the \emph{same model} described in Section 2 to obtain a score $p(\text{true} | \text{audio\_segment})$. These scores are then used to produce the modified detection error trade-off (DET) curves in Figure \ref{post_trigger}. The x-axis represents the number of hours per FA, while the y-axis represents the proportion of falsely rejected examples. Figure \ref{delay} provides an alternative view where we compare the false rejection rate at a fixed point on the x-axis in Figure \ref{post_trigger}. From Figures \ref{post_trigger} and \ref{delay}, it is clear that accuracies do improve as we include more audio context after the trigger phrase. For example, by adding 2 seconds of post-trigger audio we are able to more than halve the errors compared to adding only 0.3 seconds of audio. However, an additional latency of 2 seconds for every trigger is unacceptable for a practical on-device system. These results suggest that we can get \emph{progressively} more accurate predictions as we listen to more audio. In the extreme case, we could compute scores with every new frame of audio and use a per-frame streaming signal to make decisions. In the next section, we describe a simple two-stage approach where the model produces outputs at only two predefined time-steps. \begin{figure}[t] \begin{minipage}[]{1.0\linewidth} \centering \centerline{\includegraphics[width=0.8\textwidth]{frrbyD1_v2.png}} \caption{False Reject Rate at 100 hours per FA as a function of mean latency of accepted true triggers.} \label{delay} \end{minipage} \end{figure} \subsection{Two-Stage Design} We consider a model that produces scores at two intervals, an early score and a late score (Figure \ref{windows}). We choose 0.3 seconds of post-trigger audio for the early score and 2 seconds for the late score for this paper, though these intervals can be chosen differently for different devices and latency budgets. For a two-stage design, we need to consider the joint distribution of an early score and a later one. This joint distribution is presented in Figure \ref{scatter_plot} as a scatter plot. The x-axis represents the early score, while the y-axis represents the late score for every example in the test set. Green circles represent true examples, while red circles represent negative examples. The idea is to choose a threshold on the early score, and accept all candidate triggers if that early threshold is exceeded.
We can represent this as a vertical line on the scatter plot. \begin{figure}[t!] \begin{minipage}[]{1.0\linewidth} \centering \centerline{\includegraphics[width=0.8\textwidth]{scat1_v2.png}} \caption{Scatter plot of early and late scores. Secondary axes labels show False Rejection Rate (FRR)\%. Green circles represent true triggers, red circles represent false triggers.} \label{scatter_plot} \end{minipage} \end{figure} The solid vertical line in Figure \ref{scatter_plot} represents a threshold where 3\% of true triggers are rejected (marked on secondary axes). Therefore, for these 3\% of true triggers, we delay making a decision until we can compute a late score, which is 2 seconds after the initial trigger. We also need to decide a threshold on the late score, which is represented by the horizontal line in Figure \ref{scatter_plot}. We set the late score threshold such that only 1\% of true triggers are rejected. From the scatter plot it is clear that with the two-stage design, we are able to recover all of the true triggers in the top-left quadrant of the figure, which were previously rejected when we used only the early score. In Figure \ref{scatter_plot}, for comparison, the dashed vertical line represents a threshold on the early score where only 1\% of true triggers are rejected. It is clear that there are more false (red) points to the right of the vertical 1\% line than there are to the right of the 3\% vertical line and above the 1\% horizontal line, although the number of true (green) points is about the same. Therefore, the two-stage approach provides a way to recover a large number of false rejections \emph{without} significantly increasing the number of false alarms. Although the scatter plots explain what we propose to do, a more useful way to see the effects on accuracy is in terms of DET curves. In Figure \ref{two_stage_det} we see the DETs for the early and late scores by themselves (red and blue curves respectively). The accuracy of the late curve is clearly better (by a factor of two in regions of interest), but we do not want to wait that long before starting streaming audio. The magenta curve, on the other hand, shows the accuracy of the proposed two-stage system. It starts on the early curve and departs from it at the chosen FRR of 3\%. Note that the two-stage DET curve turns out to be slightly more accurate than using only the late score. \begin{figure}[t] \begin{minipage}[]{1.0\linewidth} \centering \centerline{\includegraphics[width=0.7\textwidth]{DETs2_v2.png}} \caption{DET curves for two-stage model.} \label{two_stage_det} \end{minipage} \end{figure} \begin{figure}[t] \begin{minipage}[]{1.0\linewidth} \centering \centerline{\includegraphics[width=0.7\textwidth]{frrbyD3_v4.png}} \caption{X-axis plots the mean latency of accepted true triggers in seconds. Y-axis plots the proportion of false rejections. Bar colours correspond to DET curves in Figure \ref{two_stage_det}. } \label{latency_plot} \end{minipage} \end{figure} The effect on latency (for true triggers) of the proposed system can be seen in Figure \ref{latency_plot}. Here, we plot the False Rejection rate at 100 hours per false alarm. The mean latency for the early score is 0.3 seconds (red) while that for the late score is 2 seconds (blue). The mean latency for the two-stage model is only slightly greater (17\% relative increase) than the early score, while the FR rate is improved by 66\%.
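In pseudocode, the decision rule behind the magenta curve is simply the following (the thresholds are illustrative placeholders; the actual operating points are chosen as described above):

\begin{verbatim}
def two_stage_decision(early_score, compute_late_score,
                       early_thr, late_thr):
    # Early threshold set so that ~3% of true triggers fall below it.
    if early_score >= early_thr:
        return True                  # accept after ~0.3 s of audio
    # Otherwise wait for 2 s of post-trigger audio and re-score.
    return compute_late_score() >= late_thr
\end{verbatim}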
We listened to a handful of examples that are accepted by the late score but rejected based on the early score (top left quadrant in Figure \ref{scatter_plot}). We found that the late score helps in the following cases: a) when there is a lot of background noise present, processing longer segments results in better scores; b) when the user does not articulate the trigger phrase clearly but the payload is clearly directed towards the assistant; c) when the user repeats the trigger phrase after a failed first attempt. \vspace{-2.5mm} \section{Conclusions} We have presented an architecture for voice trigger detection that uses information in the speech/words that immediately follow the trigger phrase. We first showed how to train a model that progressively yields better estimates as we add more audio to the end of the trigger phrase. We then presented a two-stage design, where the model produces an early and a late score. Our analysis shows that we are able to obtain a 66\% relative reduction in false rejections by delaying the decision for only 3\% of true examples in the test set. An obvious shortcoming of the model presented here is the fact that we need to recompute the representations for \emph{all} time-steps for both early and late segments, due to the fact that we use bidirectional layers. In the future, we would like to design a model that can share computation between the early and late segments. \bibliographystyle{IEEEbib}
\section{Introduction}\label{SecIntro} With the growing prevalence of Internet of things (IoT) devices, constantly collecting information about various physical phenomena, and the growth in the number and processing capability of mobile edge devices (phones, tablets, smart watches and activity monitors), there is a growing interest in enabling distributed machine learning (ML) to learn from data distributed across mobile devices. Centralized ML techniques are often developed assuming that the datasets are offloaded to a central processor. In the case of wireless edge devices, centralized ML techniques are not desirable, since offloading such massive amounts of data to a central cloud may be too costly in terms of both energy and privacy. In many ML problems, the goal is to minimize a loss function, $F \left( \boldsymbol{\theta} \right)$, where $\boldsymbol{\theta} \in \mathbb{R}^d$ captures the model parameters to be optimized. The loss function $F \left( \boldsymbol{\theta} \right)$ represents the average of empirical loss functions computed at different data samples with respect to model parameter $\boldsymbol{\theta}$, $F \left( \boldsymbol{\theta} \right) = \frac{1}{\left| \mathcal{B} \right|} \sum\nolimits_{\boldsymbol{u} \in \mathcal{B}} f \left(\boldsymbol{\theta}, \boldsymbol{u} \right)$, where $\mathcal{B}$ is the set of available data points, and $\boldsymbol{u}$ represents a data sample and its label. We assume that an iterative stochastic gradient descent (SGD) algorithm is used to minimize the loss function $F \left( \boldsymbol{\theta} \right)$, in which the model parameter vector at iteration $t$, $\boldsymbol{\theta}_t$, is updated according to the stochastic gradient $\boldsymbol{g} \left( \boldsymbol{\theta}_t \right)$. SGD allows parallelization across multiple mobile devices. In distributed SGD (DSGD), devices process data locally with respect to a globally consistent parameter vector, and send their gradient estimates to the parameter server (PS). To be more precise, at iteration $t$, device $m$ computes the gradient estimate $\boldsymbol{g}_m \left( \boldsymbol{\theta}_t \right) \triangleq \frac{1}{\left| \mathcal{B}_{m} \right|} \sum\nolimits_{\boldsymbol{u} \in \mathcal{B}_{m}} \nabla f \left(\boldsymbol{\theta}_t, \boldsymbol{u} \right)$ with respect to its local dataset $\mathcal{B}_m$ and model parameter $\boldsymbol{\theta}_t$, and sends the result to the PS. With $M$ devices in the system, the PS updates the model parameter vector according to \begin{align}\label{ParallelSGDModelUpdate} \boldsymbol{\theta}_{t+1} =\boldsymbol{\theta}_{t} - \eta_t \frac{1}{M} \sum\nolimits_{m=1}^{M} \boldsymbol{g}_m \left(\boldsymbol{\theta}_{t} \right), \end{align} where $\eta_t$ denotes the learning rate at iteration $t$, and shares the result with the devices for the computations at the following iterations. Although parallelism reduces the computation load at each device, communication from the devices to the PS becomes the main performance bottleneck \cite{DCAlistarhQSGD,DCOneBitQuan,DCLimitedPrecisionGupta,ScalableDNNStorm,DCMohammadDenizScheduling}, particularly for wireless edge learning due to limited bandwidth and power. Several architectures have been proposed in recent years to employ the computational capabilities of edge devices, and train an ML model collaboratively with the help of a remote PS.
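For concreteness, a single synchronous update of \eqref{ParallelSGDModelUpdate}, as performed by such architectures, can be sketched in a few lines of Python (a minimal numpy illustration; \texttt{local\_gradient} is a hypothetical placeholder for the per-device computation of $\boldsymbol{g}_m \left(\boldsymbol{\theta}_t \right)$):

\begin{verbatim}
import numpy as np

def dsgd_step(theta, local_datasets, local_gradient, lr):
    # Each device m computes g_m(theta) on its own dataset B_m ...
    grads = [local_gradient(theta, B_m) for B_m in local_datasets]
    # ... and the PS averages the estimates and updates the model.
    return theta - lr * np.mean(grads, axis=0)
\end{verbatim}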
However, the works mentioned above ignore the physical characteristics of the communication channel from the devices to the PS, and consider interference-and-error-free links with a fixed capacity, which is hard to guarantee in most wireless environments. Collaborative ML taking into account the physical layer channel characteristics has recently been studied in \cite{MohammadDenizDSGDCS, KaibinParallelWork, YangFedLearOverAirComp, MohammadDenizSpawc19}. These works consider a wireless multiple access channel (MAC) from the edge devices to the PS, and propose over-the-air computation to average gradient vectors or estimated model parameters at the PS. In \cite{MohammadDenizDSGDCS} the authors focus on bandwidth efficient learning, and employ gradient sparsification followed by linear projection to design a communication efficient DSGD algorithm. This scheme has been extended to the fading MAC model in \cite{MohammadDenizSpawc19}. Distributed ML over a wireless fading MAC is studied in \cite{KaibinParallelWork}, where the wireless devices employ power allocation with perfect channel state information (CSI) to align the received signals at the PS. A single-input multiple-output (SIMO) wireless fading MAC is studied in \cite{YangFedLearOverAirComp}, where a beamforming technique is designed to maximize the number of devices participating in each iteration, while keeping the quality of the received signal at the PS above the specified threshold level. Our goal in this paper is to enable distributed learning over a wireless fading MAC, while removing the requirement of CSI at the transmitters (CSIT). This will be achieved by employing multiple antennas at the PS. Similarly to \cite{MohammadDenizDSGDCS, KaibinParallelWork, YangFedLearOverAirComp, MohammadDenizSpawc19}, we consider uncoded transmission of gradient estimates and over-the-air computation. We design a receive beamformer at the PS in order to mitigate the fading effect and align the desired signals. We analytically show that the proposed scheme alleviates the destructive effects of interference and noise terms at the PS thanks to the utilization of multiple antennas, and, in the limit, due to channel hardening, it boils down to a deterministic channel with identical gains from all the devices. This result is validated by numerical experiments, where we investigate the impact of the number of antennas on the performance of the proposed scheme with no CSIT. It is worth noting that the CSI requirements of over-the-air computation with a multi-antenna receiver were also studied in \cite{ComOverAirNoCSIT}. The authors proposed a scheme that encodes the information on the energy of the transmitter signals, and is hence limited only to positive values, but requires CSI neither at the transmitters nor at the PS. Performance of this no-CSI scheme for DSGD will be studied in the extended version of this paper. \textit{Notations}: $\mathbb{R}$ and $\mathbb{C}$ represent the sets of real and complex values, respectively. We denote the entry-wise complex conjugate of a vector $\boldsymbol{x}$ by $\left( \boldsymbol{x} \right)^*$, and ${\rm{Re}} \{ \boldsymbol{x} \}$ and ${\rm{Im}} \{ \boldsymbol{x} \}$ return entry-wise real and imaginary components of $\boldsymbol{x}$, respectively. For $\boldsymbol{x}$ and $\boldsymbol{y}$ with the same dimension, $\boldsymbol{x} \cdot \boldsymbol{y}$ returns their entry-wise product.
We denote a zero-mean normal distribution with variance $\sigma^2$ by $\mathcal{N} \left( 0,\sigma^2 \right)$, and $\mathcal{C N} \left( 0,\sigma^2 \right)$ represents a circularly symmetric complex normal distribution with real and imaginary terms each distributed according to $\mathcal{N} \left( 0,\sigma^2 / 2 \right)$. We let $[i] \triangleq \{ 1, \dots, i \}$. We denote the cardinality of set $\cal X$ by $\left| \mathcal{X} \right|$, and the $l_2$ norm of vector $\boldsymbol{x}$ by $\left\| \boldsymbol{x} \right\|_2$. \section{System Model}\label{SecProbFormul} We consider $M$ devices, where device $m$ has access to a local dataset $\mathcal{B}_m$, and employs SGD to compute the gradient estimate $\boldsymbol{g}_m \left( \boldsymbol{\theta}_t \right) \in \mathbb{R}^d$ at iteration $t$, $m \in [M]$. These local gradient estimates are transmitted to the PS, equipped with $K$ antennas, through a shared wireless medium. The PS updates the model parameter based on its received signal, and shares it with all the devices over an error-free shared link, so that all the devices have a globally consistent model parameter. We model the shared wireless channel from the edge devices to the PS as a wireless fading MAC, where orthogonal frequency-division multiplexing (OFDM) is used to divide the available bandwidth into $s$ subchannels, $s \le d$ (in practice, we typically have $s \ll d$). We assume that $N$ OFDM symbols can be transmitted over each subchannel at each iteration of the DSGD algorithm. The received vector corresponding to the $n$-th OFDM symbol in iteration $t$ at the $k$-th antenna of the PS is given by \begin{align}\label{ReceivedVectorPSGenAntennak} \boldsymbol{y}^n_k (t) = \sum\nolimits_{m = 1}^{M} \boldsymbol{h}^n_{m,k} (t) \cdot \boldsymbol{x}^n_{m} (t) + \boldsymbol{z}^n_k (t), \quad \mbox{$k \in [K]$}, \end{align} where $\boldsymbol{x}^n_{m} (t)$ is the $n$-th symbol of dimension $s$ transmitted by the $m$-th device, $\boldsymbol{h}^n_{m,k} (t) \in \mathbb{C}^s$ denotes the vector of channel gains from device $m$ to the $k$-th PS antenna, $m \in [M]$, and $\boldsymbol{z}^n_{k} (t) \in \mathbb{C}^s$ represents the circularly symmetric complex white Gaussian noise at the $k$-th antenna of the PS, $n \in [N]$. The $i$-th entry of channel vector $\boldsymbol{h}^n_{m,k} (t)$, denoted by $h^n_{m,k,i} (t)$, is distributed according to $\mathcal{C N} \left( 0, \sigma_h^2 \right)$, $i \in [s]$, and different entries of $\boldsymbol{h}^n_{m,k} (t)$ can be correlated, while the channel gains are assumed to be independent and identically distributed (i.i.d.) across PS antennas, OFDM symbols, and wireless devices, $k \in [K]$, $n \in [N]$, $m \in [M]$. Similarly, different entries of noise vector $\boldsymbol{z}^n_k (t)$ can be correlated, and its $i$-th entry, denoted by $z^n_{k,i} (t)$, is distributed according to $\mathcal{C N} \left( 0, \sigma_z^2 \right)$, $i \in [s]$, $k \in [K]$, $n \in [N]$. Noise vectors are also assumed to be i.i.d. across PS antennas and OFDM symbols. We consider the following average power constraint imposed at each wireless device assuming a total of $T$ iterations of the DSGD algorithm: \begin{align}\label{AvePowerConsGen} \frac{1}{NT} \sum\nolimits_{t=1}^{T} \sum\nolimits_{n=1}^{N} \mathbb{E} \left[ ||\boldsymbol{x}^n_{m} (t)||^2_2 \right] \le \bar{P}, \quad \forall m \in [M], \end{align} where the expectation is taken with respect to the randomness of the communication channel. We assume that the PS has perfect CSI, while there is no CSI at the wireless devices.
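As a concrete illustration of the model in \eqref{ReceivedVectorPSGenAntennak}, the following sketch (our own; all parameter values are placeholders) generates one OFDM symbol of the fading MAC, with the `$\cdot$' operation implemented as the entry-wise product over subchannels:
\begin{verbatim}
# Sketch of the uplink model y_k = sum_m h_{m,k} . x_m + z_k for a
# single OFDM symbol; h ~ CN(0, sigma_h^2), z ~ CN(0, sigma_z^2).
import numpy as np

rng = np.random.default_rng(1)
M, K, s = 20, 40, 64                     # devices, PS antennas, subchannels
sigma_h2, sigma_z2 = 1.0, 20.0

def cn(shape, var):                      # circularly symmetric complex normal
    return rng.normal(scale=np.sqrt(var / 2), size=shape) \
        + 1j * rng.normal(scale=np.sqrt(var / 2), size=shape)

h = cn((M, K, s), sigma_h2)              # channel gains h_{m,k}
x = cn((M, s), 1.0)                      # transmitted symbols (placeholder)
z = cn((K, s), sigma_z2)                 # receiver noise
y = np.einsum('mks,ms->ks', h, x) + z    # entry-wise product, summed over m
\end{verbatim}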
At each iteration, the goal at the PS is to estimate the average of the gradient estimates, $\frac{1}{M} \sum\nolimits_{m=1}^{M} \boldsymbol{g}_m \left(\boldsymbol{\theta}_{t} \right)$, denoted by $\hat{\boldsymbol{g}} \left(\boldsymbol{\theta}_{t} \right)$, and to update the model parameter as in \eqref{ParallelSGDModelUpdate} based on the received symbols $\boldsymbol{y}^1_k (t), \ldots, \boldsymbol{y}^N_k (t)$, $\forall k$, and its knowledge of the CSI $\boldsymbol{h}^n_{m,k} (t)$, $\forall k, n, m$. We note that the PS is interested in the average of the gradient estimates computed by the devices rather than each individual estimate. Motivated by the additive nature of the wireless MAC, we consider an analog approach similar to \cite{MohammadDenizDSGDCS,KaibinParallelWork,YangFedLearOverAirComp,MohammadDenizSpawc19}, where the devices transmit their gradient estimates simultaneously without employing any channel coding. \section{Analog DSGD without CSIT}\label{SecPeoposedAnalog} At iteration $t$ of the DSGD algorithm, device $m$ transmits its gradient estimate ${\boldsymbol{g}}_m \left(\boldsymbol{\theta}_{t} \right) \in \mathbb{R}^{d}$ over $N = \left\lceil {d/2s} \right\rceil$ OFDM symbols across $s$ subchannels in an uncoded manner, $m \in [M]$. We denote the $i$-th entry of ${\boldsymbol{g}}_m \left(\boldsymbol{\theta}_{t} \right)$ by $g_{m,i} \left(\boldsymbol{\theta}_{t} \right)$, $i \in [d]$, and define, for $n \in [N]$, $m \in [M]$, \begin{subequations} \label{gnmESADSGDDef} \begin{align}\label{gnmRealESADSGDDef} {\boldsymbol{g}}^n_{m, {\rm{re}}} \left(\boldsymbol{\theta}_{t} \right)& \triangleq [ g_{m,2(n-1)s+1} \left(\boldsymbol{\theta}_{t} \right), \cdots, g_{m,(2n-1)s} \left(\boldsymbol{\theta}_{t} \right)]^T,\\ {\boldsymbol{g}}^n_{m, {\rm{im}}} \left(\boldsymbol{\theta}_{t} \right)& \triangleq [ g_{m,(2n-1)s+1} \left(\boldsymbol{\theta}_{t} \right), \cdots, g_{m,2ns} \left(\boldsymbol{\theta}_{t} \right)]^T, \label{gnmImagESADSGDDef}\\ {\boldsymbol{g}}^n_{m} \left(\boldsymbol{\theta}_{t} \right) & \triangleq {\boldsymbol{g}}^n_{m, {\rm{re}}} \left(\boldsymbol{\theta}_{t} \right) + j {\boldsymbol{g}}^n_{m, {\rm{im}}} \left(\boldsymbol{\theta}_{t} \right), \label{gnmRealImagESADSGDDef} \end{align} \end{subequations} where $j \triangleq \sqrt{-1}$, and we zero-pad ${\boldsymbol{g}}_m \left(\boldsymbol{\theta}_{t} \right)$ to have length $2sN$. The $i$-th entry of ${\boldsymbol{g}}^n_{m} \left(\boldsymbol{\theta}_{t} \right)$ is then given by \begin{align}\label{ithgmn} g^n_{m,i} \left(\boldsymbol{\theta}_{t} \right) = g_{m,2(n-1)s+i} & \left(\boldsymbol{\theta}_{t} \right) + j g_{m,(2n-1)s+i} \left(\boldsymbol{\theta}_{t} \right), \nonumber\\ & \mbox{for $i \in [s]$, $n \in [N]$, $m \in [M]$}. \end{align} According to \eqref{gnmESADSGDDef}, we have \begin{align}\label{gmwrtgmnReIm} {\boldsymbol{g}}_m \left(\boldsymbol{\theta}_{t} \right) = & \big[ {\boldsymbol{g}}^1_{m, {\rm{re}}} \left(\boldsymbol{\theta}_{t} \right), {\boldsymbol{g}}^1_{m, {\rm{im}}} \left(\boldsymbol{\theta}_{t} \right), \cdots, \big. \nonumber\\ & \qquad \qquad \qquad \quad \big. {\boldsymbol{g}}^N_{m, {\rm{re}}} \left(\boldsymbol{\theta}_{t} \right), {\boldsymbol{g}}^N_{m, {\rm{im}}} \left(\boldsymbol{\theta}_{t} \right) \big]^T, \end{align} with $N = \left\lceil {d/2s} \right\rceil$. At the $n$-th OFDM symbol of iteration $t$, device $m$ sends \begin{align}\label{workermSends} \boldsymbol{x}^n_{m} (t) = \alpha_t \boldsymbol{g}^n_{m} (t), \quad n \in [N], m \in [M].
\end{align} Accordingly, the average transmit power depends on $\alpha_t$, and is evaluated as follows: \begin{align}\label{AvePowerConsWorkerm} \frac{1}{NT} \sum\nolimits_{t=1}^{T} \alpha_t^2 \sum\nolimits_{n=1}^{N} ||\boldsymbol{g}^n_{m} (t)||^2_2 \le \bar{P}. \end{align} The PS observes the following signal at its $k$-th antenna, for $k \in [K], n \in [N]$: \begin{align}\label{ReceivedVectorPSScheAntennak} \boldsymbol{y}^n_k (t) = \alpha_t \sum\nolimits_{m = 1}^{M} \boldsymbol{h}^n_{m,k} (t) \cdot \boldsymbol{g}^n_{m} (t) + \boldsymbol{z}^n_k (t). \end{align} Given its knowledge of the CSI, the PS combines the signals at different antennas in the following form: \begin{align}\label{ReceivedVectorPSScheCombAntennas} \boldsymbol{y}^n (t) \triangleq \frac{1}{K} \sum\nolimits_{k=1}^{K} \left( \sum\nolimits_{m = 1}^{M} \boldsymbol{h}^n_{m,k} (t) \right)^{*} \cdot \boldsymbol{y}^n_k (t), \end{align} whose $i$-th entry is given by \begin{align}\label{ReceivedVectorPSScheCombAntennasith} y^n_i (t) = \frac{1}{K} \sum\nolimits_{k=1}^{K} \sum\nolimits_{m = 1}^{M} \left( {h}^n_{m,k,i} (t) \right)^{*} {y}^n_{k,i} (t), \end{align} where ${y}^n_{k,i} (t)$ denotes the $i$-th entry of $\boldsymbol{y}^n_{k} (t)$, $i \in [s]$, $n \in [N]$. By substituting ${y}^n_{k,i} (t)$, given in \eqref{ReceivedVectorPSScheAntennak}, it follows that \begin{align}\label{ReceivedVectorPSScheCombAntennasReWrith} &{y}^n_i (t) = \underbrace{\alpha_t \sum\limits_{m=1}^{M} \left( \frac{1}{K} \sum\limits_{k=1}^{K} \left| {h}^n_{m,k,i} (t) \right|^2 \right) {g}^n_{m,i} (\boldsymbol{\theta}_t)}_{\text{\normalfont signal term}} \nonumber\\ & \quad + \underbrace{\frac{\alpha_t}{K} \sum\limits_{k=1}^{K} \sum\limits_{m=1}^{M} \sum\limits_{m'=1, m' \ne m}^{M} \left( {h}^n_{m,k,i} (t) \right)^{*} {h}^n_{m',k, i} (t) {g}^n_{m', i} (\boldsymbol{\theta}_t)}_{\text{\normalfont interference term}} \nonumber\\ & \quad + \underbrace{\frac{1}{K} \sum\limits_{k=1}^{K} \left( \sum\limits_{m=1}^{M} \left( {h}^n_{m,k, i} (t) \right)^{*} \right) z_{k,i}^n (t)}_{\text{\normalfont noise term}}. \end{align} The decomposition in \eqref{ReceivedVectorPSScheCombAntennasReWrith} identifies the signal, interference, and noise contributions to ${y}^n_i (t)$. By the law of large numbers, as the number of antennas at the PS $K \to \infty$, the signal term approaches \begin{align}\label{SigTermApproaches} y_{i, {\rm{sig}}}^n (t) \triangleq \alpha_t \sigma_h^2 \sum\nolimits_{m=1}^{M} {g}^n_{m,i} (\boldsymbol{\theta}_t), \quad i \in [s], n \in [N], \end{align} from which the PS can recover \begin{subequations}\label{PSSigTermRecReIm} \begin{align}\label{PSSigTermRecRe} \frac{1}{M} \sum\nolimits_{m=1}^{M} g_{m,2(n-1)s+i} \left(\boldsymbol{\theta}_{t} \right) &= \frac{ {\rm{Re}} \left\{ y_{i, {\rm{sig}}}^n (t) \right\} }{\alpha_t M \sigma_h^2},\\ \frac{1}{M} \sum\nolimits_{m=1}^{M} g_{m,(2n-1)s+i} \left(\boldsymbol{\theta}_{t} \right) &= \frac{ {\rm{Im}} \left\{ y_{i, {\rm{sig}}}^n (t) \right\} }{\alpha_t M \sigma_h^2}.\label{PSSigTermRecIm} \end{align} \end{subequations} However, the interference term in \eqref{ReceivedVectorPSScheCombAntennasReWrith} does not allow the exact recovery of $\frac{1}{M} \sum\nolimits_{m=1}^{M} g_{m,2(n-1)s+i} \left(\boldsymbol{\theta}_{t} \right)$ and $\frac{1}{M} \sum\nolimits_{m=1}^{M} g_{m,(2n-1)s+i} \left(\boldsymbol{\theta}_{t} \right)$ from ${y}^n_i (t)$, which is observed at the PS.
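A compact numerical sketch of this receive processing (our own illustration; a single OFDM symbol, i.e. $N=1$, and placeholder parameter values are assumed) shows the resulting estimate approaching the true gradient average as $K$ grows:
\begin{verbatim}
# Sketch: pack real gradients into complex symbols, transmit over the
# fading MAC, combine with (sum_m h_{m,k})^*, and read off the estimate.
import numpy as np

rng = np.random.default_rng(2)
M, K, s = 20, 400, 64
sigma_h2, sigma_z2, alpha = 1.0, 20.0, 1.0

def cn(shape, var):
    return rng.normal(scale=np.sqrt(var / 2), size=shape) \
        + 1j * rng.normal(scale=np.sqrt(var / 2), size=shape)

g = rng.normal(size=(M, 2 * s))                  # real gradient estimates
g_c = g[:, :s] + 1j * g[:, s:]                   # s complex entries each
h = cn((M, K, s), sigma_h2)
y_k = alpha * np.einsum('mks,ms->ks', h, g_c) + cn((K, s), sigma_z2)
w = h.sum(axis=0).conj()                         # (sum_m h_{m,k})^*
y = np.einsum('ks,ks->s', w, y_k) / K            # combined signal
g_hat = np.concatenate([y.real, y.imag]) / (alpha * M * sigma_h2)
err = np.linalg.norm(g_hat - g.mean(axis=0)) / np.linalg.norm(g.mean(axis=0))
print(err)                                       # shrinks as K increases
\end{verbatim}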
To analyze the interference term, we first define, for $i \in [s]$, $n \in [N]$, \begin{align}\label{IntTerAnDef} \mathfrak{h}_i^n (t) \triangleq \frac{1}{K} \sum\limits_{k=1}^{K} \sum\limits_{m=1}^{M} \sum\limits_{m'=1, m' \ne m}^{M} \left( {h}^n_{m,k,i} (t) \right)^{*} {h}^n_{m',k, i} (t). \end{align} It is easy to verify that the mean and the variance of $\mathfrak{h}_i^n (t)$ are given by \begin{subequations}\label{MeanVarMathFrakh} \begin{align}\label{MeanMathFrakh} \mathbb{E} \left[ \mathfrak{h}_i^n (t) \right] =& 0,\\ \mathbb{E} \left[ \left| \mathfrak{h}_i^n (t) \right|^2 \right] =& \frac{M(M-1) \sigma_h^4}{K},\label{VarMathFrakh} \end{align} \end{subequations} respectively. We note that the gradient values computed at each iteration are independent of the channel realizations experienced during the same iteration. Accordingly, by fixing the gradient values, we conclude from the analysis in \eqref{MeanVarMathFrakh} that the interference term in \eqref{ReceivedVectorPSScheCombAntennasReWrith} has zero mean and a variance that scales with $M^2 / K$. Thus, for a fixed number of wireless devices $M$, the variance of the interference term in \eqref{ReceivedVectorPSScheCombAntennasReWrith} approaches zero as $K \to \infty$. In practice, it is feasible to employ a sufficiently large number of antennas at the PS by exploiting massive multiple-input multiple-output (MIMO) systems \cite{RusekScaleUpMassiveMIMO}. \iffalse \begin{algorithm}[t!] \caption{Analog DSGD without CSIT} \label{MUltiAntennaA_DSGD} \begin{algorithmic}[1] \Statex \For {$t = 1, \ldots, T$} \Statex \begin{itemize} \item \textbf{Devices do:} \end{itemize} \For{$m = 1, \ldots, M$ in parallel} \State{Compute $\boldsymbol{g}_m \left( \boldsymbol{\theta}_t \right)$ with respect to $\mathcal{B}_{m}$} \For{$n=1, \ldots, N$} \State{${\boldsymbol{g}}^n_{m} \left(\boldsymbol{\theta}_{t} \right) = {\boldsymbol{g}}^n_{m, {\rm{re}}} \left(\boldsymbol{\theta}_{t} \right) + j {\boldsymbol{g}}^n_{m, {\rm{im}}} \left(\boldsymbol{\theta}_{t} \right)$} \State{$\boldsymbol{x}^n_{m} (t) = \alpha_t \boldsymbol{g}^n_{m} (t)$} \EndFor \EndFor \Statex \begin{itemize} \item \textbf{PS does:} \end{itemize} \For{$n=1, \ldots, N$} \State{$\boldsymbol{y}^n (t) = \frac{1}{K} \sum\nolimits_{k=1}^{K} \left( \sum\nolimits_{m = 1}^{M} \boldsymbol{h}^n_{m,k} (t) \right)^{*} \cdot \boldsymbol{y}^n_k (t)$} \State{$\hat{\boldsymbol{g}}_{\rm{re}}^n \left(\boldsymbol{\theta}_{t} \right) = \frac{ {\rm{Re}} \left\{ \boldsymbol{y}^n (t) \right\} }{\alpha_t M \sigma_h^2}$} \State{$\hat{\boldsymbol{g}}_{\rm{im}}^n \left(\boldsymbol{\theta}_{t} \right) = \frac{ {\rm{Im}} \left\{ \boldsymbol{y}^n (t) \right\} }{\alpha_t M \sigma_h^2}$} \EndFor \State{$\hat{\boldsymbol{g}} \left(\boldsymbol{\theta}_{t} \right) = \left[ \hat{\boldsymbol{g}}_{\rm{re}}^1 \left(\boldsymbol{\theta}_{t} \right), \hat{\boldsymbol{g}}_{\rm{im}}^1 \left(\boldsymbol{\theta}_{t} \right), \cdots, \hat{\boldsymbol{g}}_{\rm{re}}^N \left(\boldsymbol{\theta}_{t} \right), \hat{\boldsymbol{g}}_{\rm{im}}^N \left(\boldsymbol{\theta}_{t} \right) \right]^T$} \State{$\boldsymbol{\theta}_{t+1} =\boldsymbol{\theta}_{t} - \eta_t \hat{\boldsymbol{g}} \left(\boldsymbol{\theta}_{t} \right)$} \EndFor \vspace{.2cm} \end{algorithmic} \end{algorithm} \fi
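A quick Monte Carlo check of \eqref{MeanVarMathFrakh} (our own illustration; the sample sizes are arbitrary):
\begin{verbatim}
# Verify E[frak_h] = 0 and E[|frak_h|^2] = M(M-1) sigma_h^4 / K
# for i.i.d. h ~ CN(0, sigma_h^2).
import numpy as np

rng = np.random.default_rng(3)
M, K, sigma_h2, trials = 20, 100, 1.0, 2000
h = (rng.normal(size=(trials, M, K))
     + 1j * rng.normal(size=(trials, M, K))) * np.sqrt(sigma_h2 / 2)
cross = np.einsum('tmk,tnk->tmn', h.conj(), h)   # sum_k h_m^* h_m'
cross[:, np.arange(M), np.arange(M)] = 0         # drop the m = m' terms
frak_h = cross.sum(axis=(1, 2)) / K
print(np.mean(np.abs(frak_h) ** 2))              # Monte Carlo estimate
print(M * (M - 1) * sigma_h2 ** 2 / K)           # analytical variance
\end{verbatim}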
\begin{figure*}[t!] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.61,trim={20pt 7pt 36pt 40pt},clip]{Fig_HB_noise10_perfectCSI_2.ps} \caption{Noise variance, $\sigma_z^2 = 20$} \label{FigTestAccNoise10Perfect_2} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale=0.61,trim={20pt 7pt 36pt 40pt},clip]{Fig_HB_noise50_perfectCSI_2.ps} \caption{Noise variance, $\sigma_z^2 = 100$} \label{FigTestAccNoise50Perfect_2} \end{subfigure} \caption{Test accuracy of the proposed multi-antenna analog DSGD algorithm without CSIT for different numbers of antennas $\left( K \in \{ 1,5,2M,2M^2 \} \right)$ and noise variances $\sigma_z^2$.} \label{FigTestAccNoise1050Perfect_2} \end{figure*} According to the above analysis, the PS estimates $\frac{1}{M} \sum\nolimits_{m=1}^{M} g_{m,2(n-1)s+i} \left(\boldsymbol{\theta}_{t} \right)$ and $\frac{1}{M} \sum\nolimits_{m=1}^{M} g_{m,(2n-1)s+i} \left(\boldsymbol{\theta}_{t} \right)$, for $i \in [s]$, $n \in [N]$, through \begin{subequations}\label{PSSigTermRecReImEst} \begin{align}\label{PSSigTermRecReEst} \hat{g}_{2(n-1)s+i} \left(\boldsymbol{\theta}_{t} \right) &= \frac{ {\rm{Re}} \left\{ y_{i}^n (t) \right\} }{\alpha_t M \sigma_h^2},\\ \hat{g}_{(2n-1)s+i} \left(\boldsymbol{\theta}_{t} \right) &= \frac{ {\rm{Im}} \left\{ y_{i}^n (t) \right\} }{\alpha_t M \sigma_h^2},\label{PSSigTermRecImEst} \end{align} \end{subequations} respectively. It then utilizes the estimated vector $\hat{\boldsymbol{g}} (\boldsymbol{\theta}_t) \triangleq \left[ \hat{g}_{1} \left(\boldsymbol{\theta}_{t} \right), \cdots, \hat{g}_{d} \left(\boldsymbol{\theta}_{t} \right) \right]^T$, which can provide a good estimate of the actual average of the gradients if a sufficiently large number of PS antennas is employed, to update the model parameters. \begin{remark}\label{RemPowerDecay} We note that with SGD the empirical variances of the gradient estimates decay over time and approach zero asymptotically \cite{BottouLargeScaleSGD,ScalableDNNStorm,DCLimitedPrecisionGupta,MohammadDenizDSGDCS,UseLocalSGDLin}. Thus, for robust communication of the gradient estimates against noise at each iteration of the DSGD algorithm, it is reasonable to increase the power allocation factor $\alpha_t$ over time. \end{remark} \begin{remark}\label{RemCompression} We remark that the main focus of this paper is to develop techniques for performing DSGD at the wireless edge with no CSIT. We propose to employ multiple antennas at the PS, which can help mitigate the effect of fading, and, in the limit, align the received signals at the PS.
We can further employ some of the existing schemes in the literature that provide more efficient communication over the bandwidth-limited wireless MAC, such as the linear projection idea proposed in \cite{MohammadDenizDSGDCS}. We leave the analysis of such combined techniques to future work. \end{remark} \section{Numerical Experiments}\label{SecExperiments} Here we evaluate the performance of the proposed analog DSGD algorithm with no CSI available at the wireless devices. We are particularly interested in investigating the impact of the number of PS antennas on the performance of the proposed scheme. We run experiments on the MNIST dataset \cite{LeCunMNIST} with $60000$ training and $10000$ test samples, and train a single-layer neural network with $d=7850$ parameters utilizing the ADAM optimizer \cite{ADAMDC}. We train the network for $T=800$ iterations. We consider $M=20$ wireless devices in the system. To model the data distribution across the devices realistically for the wireless edge learning setting, we assume that each device has access to $1000$ training data samples selected at random from the training dataset. Thus, some of the training data samples are not assigned to any device, and the data samples across different devices may not be independent. For simplicity, we assume that the $s$ channel gains associated with each OFDM symbol from each device to each PS antenna are i.i.d., and $\sigma_h^2 = 1$. The performance is measured as the accuracy with respect to the test samples based on the updated model parameters at each DSGD iteration. For numerical comparison, we also consider the benchmark scenario in which the PS receives the actual average of the gradient estimates $\frac{1}{M} \sum\nolimits_{m=1}^{M} \boldsymbol{g}_m \left(\boldsymbol{\theta}_{t} \right)$, and updates the parameter vector according to this noiseless observation at each DSGD iteration. We refer to this as the error-free shared link scenario, and its accuracy serves as an upper bound on the performance of the proposed analog DSGD scheme. In Fig. \ref{FigTestAccNoise1050Perfect_2} we illustrate the performance of the proposed analog DSGD scheme with no CSIT for different $K$ values and different noise levels. We consider $K \in \{ 1, 5, 2M, 2M^2 \}$, and investigate the performance of the proposed scheme for $\sigma_z^2 = 20$ and $\sigma_z^2 = 100$ in Figures \ref{FigTestAccNoise10Perfect_2} and \ref{FigTestAccNoise50Perfect_2}, respectively. We also include the performance of the error-free shared link scenario. We set the power allocation factor $\alpha_t = 1 + t/1000$, $t \in [T]$, and for simplicity, we assume that $s = d/2$, resulting in $N=1$. We note that, for a fixed power allocation $\alpha_t$, $\forall t$, the value of $s$ does not have any impact on the accuracy of the considered schemes; instead, any change in $s$ scales the average transmit power, whose value is proportional to $N$. As can be seen, employing more antennas at the PS results in a higher accuracy, with the improvement more pronounced when the noise level is higher. This is due to the fact that increasing $K$ mitigates the effects of both the interference and noise terms, as can be inferred from \eqref{ReceivedVectorPSScheCombAntennasReWrith}. Thus, the advantage of having more PS antennas is more pronounced when the channel is noisier.
For example, even when $\sigma_z^2 = 100$, the proposed scheme with $K = 2M^2$ PS antennas and average power $\bar{P}=0.21$ provides only a slightly smaller accuracy than the error-free shared link scenario; this result indicates the success of the proposed scheme in mitigating the noise term even when the ratio $\bar{P}/{\sigma_{z}^2}$ is relatively small. We further observe that, compared to having a single-antenna PS, the accuracy improves by exploiting even a few antennas at the PS, e.g., $K=5$, and the improvement is much larger when the channel is noisier, i.e., in the $\sigma_z^2 = 100$ case. We note that, with all the other parameters fixed, the required average transmit power decreases with $K$; this is consistent with a faster convergence rate at higher $K$, which results in a faster reduction of the empirical gradient variances over time. The same observation is made by reducing $\sigma_z^2$ from $100$ to $20$ while keeping all the other parameters fixed. \section{Conclusions}\label{SecConc} We have studied DSGD at the wireless edge, where wireless devices compute gradient estimates based on their available limited datasets, and transmit these estimates to the PS over a wireless fading MAC. To make the model more realistic, we have assumed that the devices do not have CSI for the underlying fast fading channel. With the goal of recovering the average of the gradient estimates at the PS, we have developed an analog DSGD technique, in which the effect of fading, which cannot be cancelled at the transmitters due to the lack of CSIT, is alleviated by employing multiple antennas at the PS. Theoretical analysis, corroborated by numerical results, indicates that, with the proposed approach, increasing the number of PS antennas provides a better estimate of the average of the gradients through a better alignment of the desired signals, as well as the elimination of the interference and noise terms. Asymptotically, the proposed DSGD scheme guarantees, despite the lack of CSIT, that the wireless MAC becomes deterministic, and both the fading and noise effects disappear. \bibliographystyle{IEEEtran}
\section{Introduction} Besides conventional hadrons, QCD suggests the existence of states containing gluonic excitations, such as glueballs and hybrid hadrons. Although conventional hadrons are reasonably well described by the constituent quark model, states with excited glue are still poorly understood. As experiments begin to focus on the search for glueballs and hybrid mesons, a better understanding of these states from theory is needed. Due to the highly nonperturbative nature of the gluonic excitations in these states, lattice simulations offer at present the best means of theoretically probing glueballs and hybrid mesons. A great advantage in studying hybrid mesons composed of heavy quarks is that such systems can be studied not only by direct numerical simulation, but also using the Born-Oppenheimer (BO) expansion. In this approach, the hybrid meson is treated analogously to a diatomic molecule: the slow heavy quarks correspond to the nuclei and the fast gluon field corresponds to the electrons\cite{hasenfratz}. First, one treats the quark $Q$ and antiquark $\overline{Q}$ as spatially-fixed colour sources and determines the energy levels of the glue as a function of the $Q\overline{Q}$ separation $r$; each of these energy levels adiabatically defines a potential $V_{Q\overline{Q}}(r)$. The quark motion is then restored by solving the Schr\"odinger equation in each of these static potentials. Conventional quarkonia arise from the lowest-lying static potential; hybrid quarkonium states emerge from the excited potentials. Once the static potentials have been determined (via lattice simulations), it is a simple matter to determine the complete conventional and hybrid quarkonium spectrum in the leading Born-Oppenheimer (LBO) approximation. This is a distinct advantage over meson simulations, which yield only the very lowest-lying states, often with large statistical uncertainties. Here, we present results for the spectrum of gluonic excitations in the presence of a static quark-antiquark pair. Some of these potentials have been studied before\cite{michael}. This study is the first to comprehensively survey the spectrum in SU(3) gauge theory. Due to our use of anisotropic lattices and an improved action, we have been able to determine the static potentials for much larger values of $Q\overline{Q}$ separation than previously studied. Using our potentials, we also determine the hybrid quarkonium spectrum. Results from a preliminary nonrelativistic lattice QCD simulation are also presented. \section{Computation of the potentials} The first step in the Born-Oppenheimer expansion is the determination of the rich spectrum of energy levels of the gluons in the presence of the quark and antiquark, fixed in space some distance $r$ apart. At this point in the approximation, the quark and antiquark simply act as static colour sources. The gluonic energies (or static potentials) may be labelled by the magnitude (denoted by $\Lambda$) of the projection of the total angular momentum of the gluons onto the molecular axis, by the sign of this projection (chirality or handedness), and by the behaviour under the combined operations of charge conjugation and spatial inversion about the midpoint between the quark and the antiquark. States with $\Lambda=0,1,2,\dots$ are typically denoted by the capital Greek letters $\Sigma, \Pi, \Delta, \dots$, respectively. States which are even (odd) under the above-mentioned parity--charge-conjugation operation are denoted by the subscripts $g$ ($u$).
The energy of the gluons is unaffected by reflections in a plane containing the molecular axis; since such a reflection interchanges states of opposite handedness, such states must necessarily be degenerate ($\Lambda$ doubling). However, this doubling does not apply to the $\Sigma$ states; $\Sigma$ states which are even (odd) under a reflection in a plane containing the molecular axis are denoted by a superscript $+$ $(-)$. Hence, the low-lying levels are labelled $\Sigma_g^+$, $\Sigma_g^-$, $\Sigma_u^+$, $\Sigma_u^-$, $\Pi_g$, $\Pi_u$, $\Delta_g$, $\Delta_u$, and so on. For convenience, we use $\Gamma$ to denote these labels in general. \begin{table} \setlength{\tabcolsep}{3mm} \caption{Simulation parameters, including the coupling $\beta$, input aspect ratio $\xi$, approximate lattice spacing $a_s$, lattice size, and spatial link smearing parameters ($\lambda$ and $n_\lambda$ are defined in Ref.~\protect\cite{peardon}).} \begin{center} \begin{tabular}{ccccc} \hline $\beta$ & $\xi$ & $a_s$ (fm) & Lattice & $(\lambda, n_\lambda)$ \\ \hline $2.2$ & 5 & 0.27 & $12^3\times48$ & $(0.10,4)$ \\ & & & & $(0.20,4)$ \\ & & & & $(0.30,4)$ \\ \hline $2.4$ & 5 & 0.23 & $14^3\times56$ & $(0.10,8)$ \\ & & & & $(0.15,8)$ \\ & & & & $(0.20,8)$ \\ & & & & $(0.25,8)$ \\ \hline $2.6$ & 3 & 0.19 & $10^3\times30$ & $(0.15,8)$ \\ & & & & $(0.30,8)$ \\ \hline \end{tabular} \end{center} \label{table:simparams} \end{table} \epsfverbosetrue \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2.9in\epsfbox[0 80 585 576]{hybrid_ops.eps} \end{center} \caption[figops]{Examples of the paths from the quark to the antiquark used to construct the gauge field operators. \label{fig:ops}} \end{figure} Static potentials were extracted from Monte Carlo estimates of generalized Wilson loops. On a starting time slice $t_0$, the quark and antiquark were fixed at lattice sites a distance $r$ apart. Several paths along the links of the lattice connecting the quark and the antiquark were then chosen, and our gluonic operators $O_i^\Gamma(t_0)$ were defined as linear combinations of the path-ordered exponentials of the gauge field along these paths. Examples of these paths are shown in Fig.~\ref{fig:ops}. The linear combinations were chosen such that the operators transformed irreducibly under all symmetry operations. To reduce the mixings of our operators with excited states, the gluonic operators were constructed from iteratively-smeared spatial links. We used the single-link smearing algorithm described in Ref.~\cite{peardon}. The quark and antiquark then evolved in time, remaining fixed at their original spatial locations. To reduce statistical noise, the static quark propagators, which are simply temporal Wilson lines, were constructed from thermally-averaged temporal links\cite{thermal}, whenever possible. The thermal averaging was done using the pseudo-heat-bath method (40 updates). At some final time slice $t_0\!+\!\tau$, evaluation of the gluonic operators $O^{\Gamma\dagger}_i(t_0\!+\!\tau)$ then completed the construction of the Wilson loops $W^\Gamma_{ij}(r,\tau)$. 
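Since $W^\Gamma(r,\tau)\sim A(r)\,e^{-V_\Gamma(r)\tau}$ at large $\tau$, the static potentials follow from the exponential decay of these loops in the source-sink separation, as described below. A toy illustration of this extraction on synthetic (not simulation) data:
\begin{verbatim}
# Toy illustration with synthetic data: extract a static potential from
# the exponential decay of Wilson loops, W(r,tau) ~ A(r) exp(-V(r) tau),
# via the effective energy aE(r,tau) = log[W(r,tau)/W(r,tau+1)].
import numpy as np

rng = np.random.default_rng(4)
r = np.arange(1, 9)                        # separations (lattice units)
tau = np.arange(1, 21)                     # source-sink separations
V = -0.3 / r + 0.15 * r + 0.6              # toy Coulomb-plus-linear input
W = 0.8 * np.exp(-np.outer(V, tau))        # ideal loops, shape (r, tau)
W *= 1 + 0.002 * rng.normal(size=W.shape)  # mock statistical noise
E_eff = np.log(W[:, :-1] / W[:, 1:])       # effective energies
V_est = E_eff[:, 10:].mean(axis=1)         # plateau average at large tau
print(np.max(np.abs(V_est - V)))           # recovers the input potential
\end{verbatim}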
\epsfverbosetrue \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2.9in\epsfbox[18 244 592 690]{plotSigma.ps} \end{center} \caption[figsig]{The static quark potential $V_{\Sigma_g^+}(r)$ and some of its gluonic excitations in terms of the hadronic scale parameter $r_0$ against the quark-antiquark separation $r$.} \label{fig:sigma} \end{figure} \epsfverbosetrue \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2.9in\epsfbox[18 244 592 690]{plotExc.ps} \end{center} \caption[figexc]{The static quark potential $V_{\Sigma_g^+}(r)$ and selected gluonic excitations (see Fig.~\protect\ref{fig:sigma}). \label{fig:excited}} \end{figure} \epsfverbosetrue \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2.9in\epsfbox[18 244 592 690]{plotDelta.ps} \end{center} \caption[figdelta]{The static quark potential $V_{\Sigma_g^+}(r)$ and selected gluonic excitations (see Fig.~\protect\ref{fig:sigma}). \label{fig:delta}} \end{figure} Monte Carlo estimates of the correlation matrices $W^\Gamma_{ij}(r,\tau)$ were obtained in three simulations. In each simulation, several spatial link smearing schemes were used: one scheme was typically chosen to work well for small $r$, another was optimized for large $r$. Various run parameters for the three simulations are given in Table~\ref{table:simparams}; other relevant parameters can be found in Ref.~\cite{peardon}. We used the improved action described in Ref.~\cite{peardon}. Our use of anisotropic lattices in which the temporal lattice spacing $a_t$ was much smaller than the spatial spacing $a_s$ was crucial. Configuration ensembles were generated using a mixture of Cabibbo-Marinari (CM) pseudo-heat-bath and SU(2) subgroup over-relaxation (OR) methods. The matrices $W^\Gamma_{ij}(r,\tau)$ were reduced in the data fitting phase to single correlators and $2\times 2$ correlation matrices using the variational method described in Ref.~\cite{peardon}. These reduced correlators were fit using a single exponential and a sum of two exponentials in various ranges $t_{\rm min}$ to $t_{\rm max}$ of the source-sink separation. The two-exponential fits were used to check for consistency with the single-exponential fits, and in cases of favourable statistics, to extract the first-excited state energy. Best fit values were obtained using the correlated $\chi^2$ method. Error estimates were calculated using a $1024$-point bootstrap procedure. Our results for the static potential and its gluonic excitations are shown in Figs.~\ref{fig:sigma}-\ref{fig:delta}. Results from the $\beta=2.2$, $\beta=2.4$, and $\beta=2.6$ runs are shown using solid circles, squares, and triangles, respectively. The results are expressed in terms of the hadronic scale parameter $r_0$. The definition of this parameter and a description of its calculation are given in Ref.~\cite{peardon}. The familiar static potential is shown as $\Sigma_g^+$; the solid curve is a fit to the data using a Coulomb plus linear form $V_0+e_c/r+\kappa r$. The curves for all other potentials are fits using $c_0+\sqrt{b_0 + b_1 r + b_2 r^2}$. For all $r$ studied, the first-excited potential is the $\Pi_u$; hence, the lowest lying hybrid mesons must emerge from this potential. As $r$ becomes very large, the linearly rising $\Sigma_g^+$ potential suggests that the ground state of the glue may be modelled as a fluctuating tube or string of colour flux; in this picture, the gluonic excitations are expected to be phonon-like with energy gaps proportional to $1/r$. 
However, it appears that for $r$ below about $1.5$ fm, the gluonic spectrum cannot be explained in terms of a simple string model. In Ref.~\cite{hasenfratz}, a QCD-motivated bag model was successfully used to describe both the $\Sigma_g^+$ and $\Pi_u$ potentials for a large range of $r$. In this picture, the strong chromoelectric fields of the quark and antiquark repel the physical vacuum (dual Meissner effect), creating a bubble inside which perturbation theory is applicable. In the ground state, the inward pressure on the bubble from the physical vacuum balances the outward chromostatic force in such a way as to produce a linearly confining potential. The addition of one or more gluons into the bag produces the excited potentials; the kinetic energy of the gluons inside the bubble is a key factor in determining the form of these potentials. This model has recently been revisited and results (in the ellipsoidal approximation) for almost all of the potentials studied here are in remarkable agreement with our findings from the lattice simulations (see Ref.~\cite{kuti}). \section{Hybrid quarkonium} The next step in the BO expansion is to restore the quark motion by solving the radial Schr\"odinger equation, \begin{equation} \frac{d^2u(r)}{dr^2}+2\mu [E-V_{\rm eff}(r)]\ u(r)=0, \label{eqn:schrodinger} \end{equation} where $V_{\rm eff}(r) = V_{Q\overline{Q}}(r) + \langle {\bf L}_{Q\overline{Q}}^2\rangle / (2\mu r^2)$, $\mu$ is the reduced mass, and $\varphi(r)=u(r)/r$ is the radial wavefunction. The total angular momentum of the meson is given by ${\bf J}={\bf L}+{\bf S}$, where ${\bf S}$ is the sum of the spins of the quark and antiquark, and the orbital factor ${\bf L}= {\bf L}_{Q\overline{Q}} + {\bf J}_g$, where ${\bf J}_g$ is the total angular momentum of the glue and ${\bf L}_{Q\overline{Q}}$ is the orbital angular momentum of the quark and antiquark. In the LBO approximation, the eigenvalues $L(L+1)$ and $S(S+1)$ of ${\bf L}^2$ and ${\bf S}^2$ are good quantum numbers. The centrifugal factor is then written as \begin{equation} \langle {\bf L}_{Q\overline{Q}}^2\rangle = L(L+1) - 2\Lambda^2 + \langle {\bf J}_g^2 \rangle. \end{equation} For the $\Sigma_g^+$ potential, $\langle {\bf J}_g^2 \rangle=0$. For the $\Pi_u$ and $\Sigma_u^-$ potentials, we assumed that the one-gluon state was dominant with $\langle {\bf J}_g^2 \rangle=2$. Mesonic eigenstates of parity and charge-conjugation are linear combinations of left- and right-handed glue states: $\vert {\rm left} \rangle + \epsilon\ \vert {\rm right}\rangle$, where $\epsilon=\pm 1$. Let $\eta=\pm 1$ denote the $PC$ quantum number of the glue. Then in the LBO approximation, the parity (P) and charge conjugation (C) of each meson are given in terms of $L$ and $S$ according to \begin{eqnarray} P &=& \epsilon\ (-1)^{L+\Lambda+1},\\ C &=& \epsilon\ \eta\ (-1)^{L+\Lambda+S}. \end{eqnarray} \epsfverbosetrue \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2.95in\epsfbox[18 244 550 680]{spectrum.ps} \end{center} \caption[figspectrum]{Spin-averaged $b\bar b$ spectrum in the leading Born-Oppenheimer and quenched approximations. Solid lines indicate experimental measurements. Short dashed lines indicate the $S$ and $P$ state masses obtained by solving the appropriate Schr\"odinger equation in the $\Sigma_g^+$ potential using $r_0^{-1}=0.430$ GeV and $M_b=4.60$ GeV for the heavy quark mass.
Long dashed and dashed-dotted lines indicate the hybrid quarkonium states obtained from the $\Pi_u$ $(L=1,2)$ and $\Sigma_u^-$ $(L=0,1,2)$ potentials, respectively. \label{fig:spectrum}} \end{figure} \epsfverbosetrue \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2.9in\epsfbox[18 244 592 690]{wavefunc.ps} \end{center} \caption[figwf]{Static potentials and radial probability densities against quark-antiquark separation $r$. The $\Sigma_g^+$ static potential and the $\Pi_u$ and $\Sigma_u^-$ excitations are indicated by the dashed-dotted lines. The solid and short-dashed curves indicate the radial probability densities for the $1S$ and $1P$ states, respectively, corresponding to the results shown in Fig.~\protect\ref{fig:spectrum}. The extended nature of the lowest-lying $\Pi_u$ hybrid state is shown by the radial probability density indicated by the long-dashed curve. \label{fig:wf}} \end{figure} The potentials computed from Wilson loops in lattice simulations contain the self-energies of the temporal Wilson lines. These self-energy contributions are common to all of the static potentials and must be removed in order to obtain the $V_{Q\overline{Q}}(r)$ appearing in Eq.~\ref{eqn:schrodinger}. Ideally, this can be done by measuring the $\Sigma_g^+$ potential for very small $r$ and comparing with the running Coulomb law as predicted from perturbation theory. In practice, this is difficult to do. Instead, we fit our results for the $\Sigma_g^+$ potential to the form $V_0+e_c/r+\kappa r$; the constant $V_0$ is then our estimate of the self-energy contributions to be removed. Results for the $b\bar b$ spectrum are shown in~Fig.~\ref{fig:spectrum}. The scale was set using $r_0^{-1}\!=\!430$ MeV as suggested by NRQCD simulations of $b\bar b$ and $c\bar c$ mesons (see Table XX of Ref.~\cite{peardon}). The heavy quark mass $M_b$ was tuned in order to reproduce the experimentally-known $\Upsilon(1S)$ mass: $M_\Upsilon =2M_b+E_0$, where $E_0$ is the energy of the lowest-lying state in the $\Sigma_g^+$ potential. In the LBO approximation, many mesons are degenerate: the $J^{PC}=0^{-+}$,$1^{--}$ $S$-waves from the $\Sigma_g^+$ potential are degenerate; the $0^{++}$,$1^{++}$,$2^{++}$,$1^{+-}$ $P$-waves from the $\Sigma_g^+$ potential have equal masses; states such as $0^{-+}$,$0^{+-}$,$1^{++}$,$1^{-+}$,$1^{--}$,$1^{+-}$ from the $\Pi_u$ potential are also degenerate. Below the $B\overline{B}$ threshold, the LBO results are in very good agreement with the spin-averaged experimental measurements. Note that these results make use of the quenched potentials (which ignore the light quarks) and do not include spin, retardation, and other relativistic effects. Above the threshold, agreement with experiment is lost, suggesting significant corrections from the light quarks, relativistic effects, or possibly mixings between the states from the different adiabatic potentials. Note that the mass of the lowest-lying hybrid (from the $\Pi_u$ potential) is about 10.8~GeV. Hybrid mesons from all other static potentials are significantly higher lying. Above 11 GeV, the LBO approximation based on the quenched $V_{Q\overline{Q}}$ potentials predicts a very dense population of hybrid states. The radial probability densities for the $1S$ and $1P$ conventional states are compared with that of the lowest-lying $\Pi_u$ hybrid state in Fig.~\ref{fig:wf}.
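To make the LBO step concrete, the following sketch (our own illustration; the Cornell-type parameters $e_c$ and $\kappa$ are toy values, not the fits of this work) diagonalizes a finite-difference radial Hamiltonian for the $\Sigma_g^+$ potential in units of $r_0$:
\begin{verbatim}
# Sketch of the LBO step: solve -u''/(2 mu) + V_eff(r) u = E u by
# diagonalizing a finite-difference radial Hamiltonian (r0 = 1 units).
import numpy as np
from scipy.linalg import eigh_tridiagonal

mu = 0.5 * 4.60 / 0.430          # reduced b-quark mass in units of 1/r0
e_c, kappa = 0.26, 1.39          # toy Coulomb and string coefficients
L, Lam, Jg2 = 0, 0, 0            # S-wave in the Sigma_g^+ potential
r = np.linspace(1e-3, 8.0, 4000)
h = r[1] - r[0]
Veff = -e_c / r + kappa * r \
    + (L * (L + 1) - 2 * Lam**2 + Jg2) / (2 * mu * r**2)
diag = 1.0 / (mu * h**2) + Veff              # central-difference kinetic term
off = np.full(r.size - 1, -0.5 / (mu * h**2))
E, u = eigh_tridiagonal(diag, off, select='i', select_range=(0, 2))
print(E * 0.430)                 # lowest radial levels in GeV
\end{verbatim}
Hybrid levels follow by replacing $V_{\rm eff}$ with the fitted $\Pi_u$ or $\Sigma_u^-$ potential and the appropriate $\langle {\bf J}_g^2 \rangle$.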
\epsfverbosetrue \begin{figure}[t] \begin{center} \leavevmode \epsfxsize=2.9in\epsfbox[18 244 592 690]{T1mp.ps} \end{center} \caption[fignrqcd]{Effective mass plot showing the results of a single-exponential fit to the correlation function of the hybrid $T_1^{-+}$ quarkonium state. The heavy quark propagates according to a spin-independent NRQCD action. \label{fig:nrqcd}} \end{figure} \section{NRQCD simulations} Hybrid quarkonium states may also be studied directly in numerical simulations. The nonrelativistic formulation of lattice QCD (NRQCD)\cite{nrqcd} is a particularly efficient means of carrying out such simulations. We have recently begun an investigation of the hybrid quarkonium states using a spin-independent version of the lattice NRQCD action. The action included only a covariant temporal derivative and the leading kinetic energy operator (with two other operators to remove $O(a_t)$ and $O(a_s^2)$ errors); relativistic corrections depending on spin, ${\bf E}$ and ${\bf B}$, and higher derivatives were not included. The action was chosen to correspond as closely as possible to the LBO approximation. In so doing, our treatments of the hybrid mesons using the Born-Oppenheimer expansion and using NRQCD simulations differed primarily in two aspects: lattice spacing errors and retardation effects. The NRQCD simulations included retardation effects since the covariant Laplacian of the quark kinetic energy operator was treated exactly. In contrast, the LBO approximation ignored all retardation effects by keeping only the leading term in the $1/c$ expansion of the covariant Laplacian. A comparison of results from the two approaches should afford a test of the adiabatic approximation. Because the signal-to-noise ratio was expected to be poor and the masses of the hybrid mesons were expected to be large, it was crucial to use an anisotropic lattice in which the temporal lattice spacing was much smaller than the spatial spacing. The operators used to calculate the static potentials were incorporated into our NRQCD simulation code. We constructed large sets of gauge-invariant operators in order to minimize excited-state contamination of our hybrid meson correlators using variational techniques. We have completed an initial run on a $12^3\times 60$ lattice using an aspect ratio $\xi=5$ (that is, $a_s=5a_t$). As the purpose of this run was merely to assess the attainable signal quality, the quark mass was not tuned and a value $a_sM_b=4.50$ was used. The effective mass from this run for the $1^{-+}$ hybrid meson is shown in Fig.~\ref{fig:nrqcd}. A convincing plateau was observed from 457 measurements, and we were able to extract the mass with a $3\%$ statistical uncertainty. \section{Conclusion} A first comprehensive survey of the spectrum of quenched SU(3) gluonic excitations in the presence of a static $Q\overline{Q}$ pair was presented. The hybrid quarkonium states were calculated in the leading Born-Oppenheimer approximation. The effective mass for the $1^{-+}$ hybrid from a preliminary NRQCD simulation was also shown. This work was supported by the U.S.~DOE, Grant No.\ DE-FG03-90ER40546.
\section{Introduction} One of the fascinating realizations in the interplay of gravitational scattering amplitudes and the dynamics of compact binary systems is the equivalence of minimally coupled spinning particles and rotating black holes. In the analysis of three-point amplitudes of particles with general spin, a unique amplitude for a massive spin-$s$ particle emitting a massless graviton was defined kinematically in~\cite{Arkani-Hamed:2017jhn} and termed \textit{minimal coupling}. The term reflects its matching to minimal derivative coupling when taking the high energy limit for $s\leq 2$. Since massless particles have spins bounded by 2 in flat space, the role of minimal coupling with $s>2$ was initially not clear. Through a series of subsequent analyses~\cite{Guevara:2017csg, Chung:2018kqs, Guevara:2018wpp, Arkani-Hamed:2019ymq,Aoude:2020onz}, it was understood that the spin multipoles generated by minimal coupling are exactly those of a spinning black hole, i.e. the spin moments in the effective stress-energy tensor of the linearized Kerr solution. This was verified by reproducing the Wilson coefficients of the one-particle effective theory (EFT)~\cite{Goldberger:2004jt, Porto:2008tb} for the Kerr black hole~\cite{Chung:2018kqs}, and the classical scattering angle at leading order in the Newton constant $G$ to all orders in spin~\cite{Guevara:2018wpp}. While the equivalence can be established through various direct matchings, the principle that underlies such a correspondence remains unclear. In this letter, we seek to answer this by studying the spin entanglement entropy. We will use the action of the $2\rightarrow 2$ $\mathcal{S}$-matrix in the Eikonal limit on two-particle spin states. By measuring the relative entanglement entropy for the final state, defined as \begin{equation}\label{Relative} \Delta S\equiv - {\rm tr} \left[\rho^{\rm out} \log \rho^{\rm out} \right] {+} {\rm tr} \left[\rho^{\rm in} \log \rho^{\rm in} \right] \,, \end{equation} where $\rho^{\rm in,out}$ is the reduced density matrix for the in- and out-state, remarkably we find that $\Delta S \approx 0$ when the $\mathcal{S}$-matrix is the one associated with minimal coupling, or equivalently, when the EFT Wilson coefficients are set to the black hole value, unity. Any deviation from unity significantly increases the relative entropy. \section{Entanglement via S-matrix} \label{sec:entSmatrix}\vspace*{-1mm} The study of entanglement in scattering events has a long history; for recent developments we refer to~\cite{Cervera-Lierta:2017tdt, Beane:2018oxh, Afik:2020onf, Bose:2020shm}. Denote the two-particle Hilbert space by $\mathcal{H}=\mathcal{H}_a \otimes \mathcal{H}_b$; for each subsystem we can further divide into spin and momentum degrees of freedom, e.g. $\mathcal{H}_{\rm a}=\mathcal{H}_{s_a} \otimes \mathcal{H}_{p_a}$. In computing the entanglement from scattering, there are two sources of difficulty. First, the trace over momentum states leads to divergences due to the infinite space-time volume, and introducing a cut-off leads to regulator-dependent results, see e.g.~\cite{Peschanski:2016hgk, Peschanski:2019yah}. Second, under Lorentz rotations, the spin undergoes Thomas-Wigner rotation and one does not have a Lorentz-invariant definition of the reduced density matrix~\cite{PhysRevLett.88.230402, Lindner:2003ve} (see \cite{He:2007du} for further discussions). On the other hand, the same difficulty also appears in the extraction of the conservative Hamiltonian of binary systems from relativistic scattering amplitudes.
In particular, in a $2\rightarrow 2$ scattering process, the spin (little group) space of the incoming particles is invariably distinct from that of the outgoing ones, as their momenta are distinct. However, by augmenting the $\mathcal{S}$-matrix with Thomas-Wigner rotation factors, the final state can be mapped back into the spin Hilbert space of the incoming state. Indeed, such a \emph{Hilbert space matching} procedure was used heavily in the computation of the spin-dependent part of the conservative Hamiltonian~\cite{Chung:2019duq, Chung:2020rrz, Bern:2020buy}. We thus consider elastic scattering in the spin Hilbert space $\mathcal{H}=\mathcal{H}_{s_a} \otimes \mathcal{H}_{s_b}$. With a given in-state, we can obtain the out-state via the amplitude as: \begin{equation}\label{e:out} |{\rm out} \rangle=(U_a\otimes U_b )\,M\,|{\rm in}\rangle\,, \end{equation} where $U_{a,b}$ are the Hilbert space matching factors, which will be discussed in the next section~\cite{Fan}. The total density matrix of the out-state is then simply $\rho^{{\rm out}}_{a,b} = |{\rm out}\rangle \langle {\rm out}|$ and the reduced density matrix is given by $\rho_{\rm a} = {\rm tr}_b \rho_{\rm a,b}$. Equipped with $\rho_{\rm a}$, we can consider a variety of entanglement quantifiers. A canonical choice is the entanglement entropy, i.e. the Von Neumann entropy of the reduced density matrix $S_{\rm VN} = - {\rm tr}_a \left[\rho_a \log \rho_a \right]$. Note that here $S_{\rm VN}$ in principle depends on the in-state. For a quantifier that is independent of the in-state, we can consider the entanglement power \cite{nielsen_chuang_2010}, given by \begin{align} \label{e:EP} \mathcal{E}_a= 1- \int \frac{d\Omega_a}{4\pi}\frac{d\Omega_b}{4\pi} {\rm tr}_a \rho_a^2\,, \end{align} where $\Omega$ represents the spin-$s$ phase space. In the following, we will consider the elastic $\mathcal{S}$-matrix acting on $|{\rm in}\rangle=|s_a\rangle\otimes|s_b\rangle$, i.e. the in-state is set up as a pure state. Thus, by computing the entanglement entropy of the out-state, we obtain the entanglement enhancement of the scattering process. \section{The Eikonal Amplitude in Spin Space} In this section, we compute the leading order amplitude $a b \rightarrow a' b'$ for general massive spinning particles in the Eikonal limit. Working in the center of mass frame where $p_a = (E_a, 0, 0, \vec{p})$, $p_b = (E_b, 0, 0, -\vec{p})$, and the momentum transfer $q = p_a - p_a^{\prime} = (0, \,\vec{q})$, the Eikonal limit corresponds to $q^2 \rightarrow 0$. After a Fourier transform to impact parameter space, we obtain the Eikonal phase, whose exponentiation yields the $\mathcal{S}$-matrix in the Eikonal limit. \subsection{Spin-$s$ Amplitudes and Hilbert Space Matching} \label{s:hilbertM} We begin with the scattering of spinning particles induced by gravitational interactions. At leading order in the Newton constant $G$, the four-point amplitude for the process $a b\rightarrow a' b'$ illustrated in fig.
\ref{Sfig}, can be written as~\cite{Chung:2020rrz}: \begin{align}\label{eq:4pt} &M_{\rm{tree}}(q^2) = {-}8\pi G \frac{m_a^2 m_b^2}{q^2} \times \\ & \times \sum_{\eta = \pm 1} e^{2\eta\Theta} [\varepsilon_{a^{\prime}}^* W_a(\eta \tau_a) \varepsilon_{a}] [\varepsilon_{b^{\prime}}^{*}W_b(\eta \tau_b)\varepsilon_{b}] + \mathcal{O}(q^0)\, ,\nonumber \end{align} where $q^\mu$ is the transfer momentum, $\varepsilon_i$ is the polarization tensor of the spinning particle, $\tau_{a,b} = \frac{q\cdot S}{m_{a,b}}$, the exponential parameter is defined via $\cosh\Theta \equiv \frac{p_a \cdot p_b}{m_a m_b}$, and $\eta = \pm 1$ labels the exchanged graviton's helicity. The function $W(\eta \tau)$ is defined as: \begin{equation} \label{e:Wi} W_{a,b}(\eta \tau_{a,b})=\left[ \sum_{n=0}^{2s_{a,b}} \frac{C_n}{n!} \left(\eta \frac{q\cdot S}{m_{a,b}} \right)^n\right]\,, \end{equation} where $S$ is the Pauli-Lubanski spin vector and $C_{a,n},\,C_{b,n}$ parametrize the possible distinct couplings for particles $a,b$. These are the $2s$ multipole moments carried by a spin-$s$ particle, and can be directly matched to the Wilson coefficients of the one-particle effective action (see \cite{Levi:2015msa} for the all-orders-in-spin action). For rotating black holes $C_{a,n}=C_{b,n}=1$, and the classical limit is recovered by taking $s\rightarrow \infty$, $\hbar\rightarrow 0$ while keeping the classical spin $S\equiv s \hbar$ fixed (see \cite{Maybee:2019jus} for a more detailed discussion). \begin{figure} \begin{center} \includegraphics[scale=0.5]{ScatteringGraph} \caption{We consider the $2\rightarrow 2$ scattering of two spinning objects exchanging gravitons. (I) Process at leading order in the Newton constant $G$. (II) Eikonal approximation, which re-sums the ladder diagrams.} \label{Sfig} \end{center} \end{figure} As shown in ref.~\cite{Chung:2019duq}, we can transform the spin vector $S$ into an operator acting in the little group space through the insertion of a complete set of polarization tensors associated to the incoming particles: \begin{align} \mathbb{S}^{\mu}_{a,b} \equiv \varepsilon_{a,b, \{I_s\}}^* S^{\mu} \varepsilon_{a,b}^{\{J_s\}}\, , \end{align} where $\{I_s\},\,\{J_s\}$ are the $SU(2)$ indices of particles $a,\,b$. In components, we have that \begin{align}\label{eq:S component} \mathbb{S}^{\mu}_{a,b} = \left( \frac{\vec{p}_{a,b} \cdot \vec{\Sigma}}{m_{a,b}},~ \vec{\Sigma} + \frac{\vec{p}_{a,b}\cdot \vec{\Sigma}}{m_{a,b}(m_{a,b}+E_{a,b})}\vec{p}_{a,b}\right), \end{align} where $\vec{\Sigma}$ is the spin-$s$ rest-frame spin operator satisfying the commutation relation $[\Sigma_i, \Sigma_j] = i \epsilon_{ijk}\Sigma_k$. Then the operator $\tau$ in the little group space is given by \begin{align} \mathbb{T}_{a,b} \equiv \varepsilon_{a,b, \{I_s\}}^* \frac{q\cdot S}{m_{a,b}} \varepsilon_{a,b}^{\{J_s\}} \equiv \frac{q\cdot \mathbb{S}_{a,b}}{m_{a,b}}. \end{align} Writing eq.~\eqref{eq:4pt} in terms of $\mathbb{T}$ leads to an amplitude that corresponds to an operator acting on states in distinct little group spaces, as the momenta of $a,b$ are distinct from those of $a',b'$. This can be rectified by the so-called \textit{Hilbert space matching} procedure, which utilizes the Lorentz transformation that relates the momenta of the in-states to the out-states to convert the out-state Hilbert space back to the in-state one~\cite{Chung:2019duq, Chung:2020rrz}. The result is an additional Thomas-Wigner rotation factor for each of the two particles.
For example, for particle $a$ this factor, in leading order in $q^2$, is written as \begin{equation} U_a = \exp\left[-i \frac{m_a m_b \,\mathbb{E}_a }{(m_a + E)E} \right], \end{equation} where $\mathbb{E}_a \equiv \epsilon(q, u_a, u_b, a_a) = \epsilon_{\mu\nu\rho\sigma}q^{\mu}u_a^{\nu}u_b^{\rho}a_a^{\sigma}$, $a_a = \mathbb{S}_a/m_a$, $u_{a,b} = p_{a,b}/m_{a,b}$ and $E = E_a + E_b$. In summary, the amplitude after the Hilbert space matching, denoted by $\overline{M}$, is given by \begin{align}\label{eq:q Amplitude} \overline{M}_{\rm tree}&(q^2) = -8\pi G \frac{m_a^2 m_b^2}{q^2}\times \\ &\times \sum_{\eta = \pm 1} e^{2\eta\Theta} W_a(\eta \mathbb{T}_a) W_b(\eta \mathbb{T}_b) U_a U_b + \mathcal{O}(q^0)\,. \nonumber \end{align} Expanding eq.~\eqref{eq:q Amplitude} up to order $\mathcal{O}(\mathbb{S}^{2s_i})$ gives \begin{align}\label{eq:A basis} &\overline{M}_{{\rm tree}}(q^2) = -\frac{16\pi Gm_a^2 m_b^2}{q^2} \\ &\times \left\lbrace \sum_{m=0}^{\floor{s_a}} \sum_{n=0}^{\floor{s_b}} A_{2m,2n} \left( \mathbb{T}_{a}^{2m} \otimes \mathbb{T}_{b}^{2n}\right) \right. \nonumber \\ \quad &+ \frac{m_a^2 m_b}{E} \sum_{m=0}^{\ceil{s_a}-1} \sum_{n=0}^{\floor{s_b}} A_{2m+1, 2n} \left(\text{Sym} \left[\mathbb{E}_a \mathbb{T}_{a}^{2m} \right]\otimes \mathbb{T}_{b}^{2n} \right)\nonumber \\ \quad &+ \frac{m_a m_b^2}{E} \sum_{m=0}^{\floor{s_a}}\sum_{n=0}^{\ceil{s_b}-1} A_{2m, 2n+1} \left(\mathbb{T}_{a}^{2m} \otimes \text{Sym} \left[\mathbb{E}_b \mathbb{T}_{b}^{2n} \right] \right) \nonumber \\ &+ \left.\sum_{m=0}^{\ceil{s_a}-1} \sum_{n=0}^{\ceil{s_b}-1} A_{2m+1, 2n+1} \left(\mathbb{T}_{a}^{2m+1} \otimes \mathbb{T}_{b}^{2n+1}\right) \right\rbrace \nonumber \, , \end{align} where we used the shorthand notation $\mathbb{T}_{a,b} \equiv \left(q\cdot a_{a,b}\right)$ and \begin{align} &\text{Sym} \left[\mathbb{E}_{i} \mathbb{T}_{i}^{2n} \right] \equiv \\ &\frac{1}{2n+1} \left[\mathbb{E}_i\mathbb{T}_{i}^{2n} + \mathbb{T}_{i}\mathbb{E}_i \mathbb{T}_{i}^{2n-1} + \cdots + \mathbb{T}_{i}^{2n} \mathbb{E}_i \right] \nonumber \,, \end{align} for $i=a,b$. The explicit form of the coefficients $A_{m,n}$ in eq.~\eqref{eq:A basis}, up to $m,n=2$, is given by \begingroup \allowdisplaybreaks \begin{align} A_{0,0} &= c_{2\Theta}\,, \quad A_{1,0} = \frac{i(2E r_a c_{\Theta} - m_b c_{2\Theta})}{m_a^2 m_b r_a} \,, \\ A_{1,1} &= \frac{c_{2 \Theta } s_{\Theta }^2 m_a m_b}{E^2 r_a r_b}+c_{2 \Theta }-\frac{2 m_b c_{\Theta } s_{\Theta }^2}{E r_a}-\frac{2 m_a c_{\Theta }s_{\Theta }^2}{E r_b}\,, \nonumber \\ A_{2,0} &=\frac{C_{a,2} c_{2 \Theta }}{2}+\frac{m_b^2 c_{2 \Theta } s_{\Theta }^2}{2 E^2 r_a^2}-\frac{2 m_b c_{\Theta } s_{\Theta }^2}{E r_a}\,,\nonumber \\ \begin{split} A_{2,1} &= i \left( \frac{E C_{a,2} c_{\Theta }}{m_a m_b^2} -\frac{C_{a,2} c_{2 \Theta }}{2 m_b^2 r_b} +\frac{c_{2 \Theta }}{4E^2 r_a^2r_b} \right. \\ & \left. -\frac{c_{4 \Theta }}{8 E^2 r_a^2 r_b} -\frac{c_{\Theta }}{2 E r_a m_b r_b} +\frac{c_{3 \Theta }}{2 E r_a m_br_b} \right. \\ & \left. 
-\frac{c_{2 \Theta }}{m_a r_a m_b} -\frac{1}{8 E^2 r_a^2 r_b} -\frac{c_{\Theta }}{4 E m_a r_a^2} +\frac{c_{3 \Theta }}{4 E m_a r_a^2} \right)\,, \end{split} \nonumber\\ \begin{split} A_{2,2} &= \frac{C_{a,2} C_{b,2} c_{2 \Theta }}{4} -\frac{C_{a,2} c_{\Theta } s_{\Theta }^2 m_a}{E r_b} -\frac{C_{b,2} c_{\Theta }s_{\Theta }^2 m_b}{E r_a }\\ & +\frac{c_{2 \Theta } s_{\Theta }^4m_a^2 m_b^2}{4 E^4 r_a^2 r_b^2} -\frac{c_{\Theta } s_{\Theta }^4m_a m_b^2}{E^3 r_a^2r_b} -\frac{c_{\Theta } s_{\Theta }^4m_a^2 m_b}{E^3 r_a r_b^2} \\ & +\frac{c_{2 \Theta } s_{\Theta }^2m_a m_b}{E^2 r_a r_b} +\frac{C_{b,2} c_{2 \Theta }s_{\Theta }^2 m_b^2}{4E^2 r_a^2} +\frac{C_{a,2} c_{2 \Theta } s_{\Theta }^2 m_a^2 }{4 E^2 r_b^2}\,, \nonumber \end{split} \end{align} \endgroup where $C_{a,2}$ and $C_{b,2}$ are the Wilson coefficients for each particle, $(c_{\Theta}, s_{\Theta} )\equiv (\cosh \Theta, \sinh \Theta)$ and $r_{a,b} \equiv 1+ E_{a,b}/m_{a,b}$. We can see that the Wilson coefficients $C_{a,n}$ and $C_{b,n}$ start to appear in $A_{2,0}$, which means that we need to go at least to spin-1 to compare the difference between black holes and other objects. \subsection{Eikonal Phase} The Eikonal phase, at order $\mathcal{O}(G)$, is given simply by the Fourier transform of the tree-level amplitude in eq.~\eqref{eq:q Amplitude} to the impact parameter space: \begin{equation}\label{eq:Eikonal phase} \chi(b)= \frac{1}{4|\vec{p}|E}\int \frac{d^2\vec{q}}{(2\pi)^2}\;\; e^{i \vec{q}\cdot \vec{b}}\overline{M}_{{\rm tree}}(q^2)\,. \end{equation} Since $q^2 \rightarrow 0$ in the Eikonal limit, we have $\vec{q}\cdot\vec{p} = q^2/2 \rightarrow 0$. This orthogonality between $\vec{q}$ and $\vec{p}$ defines the impact parameter space, which is the plane perpendicular to the incoming momentum, i.e. $\vec{b} = (b_x, b_y, 0)$. Note that, in this limit, we can simply replace all $\vec{\mathbb{S}}$ in eq.~\eqref{eq:A basis} by $\vec{\Sigma}$, which is the rest frame spin operator. The $\mathcal{S}$-matrix in the Eikonal approximation is then the exponential of the phase: \begin{equation} \mathcal{S}_{{\rm Eikonal}}= e^{i\chi(b)}\,. \end{equation} This allows us to write the out-state in the Eikonal approximation, replacing the matrix elements of $U_a U_b\,M$ by those of $\mathcal{S}_{\rm Eikonal}$ in eq.~\eqref{e:out}: \begin{equation} \label{e:outEik} |{\rm out}\rangle =\mathcal{S}_{\rm Eikonal}| {\rm in}\rangle\,. \end{equation} \section{The entanglement entropy of binary systems} We now have all the ingredients necessary to compute the entanglement entropy and the entanglement power for the out-state in the Eikonal approximation. We first compute the entanglement entropy for spin-$1$ particles, which corresponds to keeping spin operators up to the second power for each particle in the Eikonal phase. Starting with a pure state $\lvert{\rm in}\rangle=\lvert\upuparrows\rangle$, the entanglement entropy of the resulting out-state directly yields the relative entropy $\Delta S$ in eq.~(\ref{Relative}). The result is plotted in fig.~\ref{fig:spin1} against the Wilson coefficient pair $(C_{a,2}, C_{b,2})$. Remarkably, the minimum is exactly at the Kerr black hole value $C_{a,2}=C_{b,2}=1$, and deviating from this point raises the entropy of the system. This is unchanged for different choices of in-states, as illustrated by the computation of the entanglement power in eq.~\eqref{e:EP}, shown in fig.~\ref{fig:spin1}. We have also obtained similar results with mixed in-states.
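The computation behind fig.~\ref{fig:spin1} reduces to a partial trace and an eigenvalue sum. A minimal numerical sketch (our own illustration; a small random Hermitian matrix stands in for the actual spin-operator expansion of $\chi(b)$):
\begin{verbatim}
# Sketch: act with S = exp(i chi) on |up,up>, trace out particle b,
# and evaluate the Von Neumann entropy of the reduced density matrix.
import numpy as np
from scipy.linalg import expm

def von_neumann(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-15]
    return float(-(lam * np.log(lam)).sum())

n = 3                                    # spin-1: 2s + 1 = 3 states each
rng = np.random.default_rng(5)
X = rng.normal(size=(n * n, n * n)) + 1j * rng.normal(size=(n * n, n * n))
chi = 0.01 * (X + X.conj().T)            # stand-in Hermitian Eikonal phase
S = expm(1j * chi)                       # unitary Eikonal S-matrix
psi = (S @ np.eye(n * n)[:, 0]).reshape(n, n)   # out-state, |in> = |up,up>
rho_a = psi @ psi.conj().T               # reduced density matrix tr_b
print(von_neumann(rho_a))                # equals Delta S for a pure in-state
\end{verbatim}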
\begin{figure} \centering \begin{tabular}{@{}c@{}} \includegraphics[width=8cm]{spin1} \\[\abovecaptionskip] \small (I) Relative von Neumann entropy. \end{tabular} \vspace{\floatsep} \begin{tabular}{@{}c@{}} \includegraphics[width=8cm]{spin1EP} \\[\abovecaptionskip] \small (II) Entanglement power. \end{tabular} \caption{ (I) Relative entanglement entropy $\Delta S$ and (II) the entanglement power $\mathcal{E}_a$ for massive spin-1 particles. The initial state is set to $\lvert{\rm in}\rangle=\lvert\upuparrows\rangle$ and the kinematic parameters are given by $|\vec{p}_a| = |\vec{p}_b| = |\vec{p}|$, $m_a = m_b = m$, $\vec{b} = (b, 0,0)$, $Gm^2 = 10^{-4}$, $|\vec{p}|b = 1000$, $|\vec{p}|/m = 100$. The minimum, represented by the black point, corresponds to the Wilson coefficient value $(C_{a,2}, C_{b,2})=(1,1)$, $\Delta S \approx 1.54\times10^{-9}$ and $\mathcal{E}_a \approx 1.10\times10^{-10}$. }\label{fig:spin1} \end{figure} In order to show that this is indeed a robust result, we also consider higher spins. Using the same setup, we calculate the relative von Neumann entropy for spin-3 massive particles, which have a total of $5+5=10$ Wilson coefficients. In our extensive scan, we find that the black hole value, $C_{a,i}=C_{b,i}=1$ for $i=2,\cdots,6$, is the unique point that gives the minimum value. As an illustrative example, we set all Wilson coefficients to one except the pair $(C_{a,2}, C_{b,2})$ and plot $\Delta S$ with respect to $(C_{a,2}, C_{b,2})$ in fig.~\ref{fig:spin3}. The results show the minimum at $(1,1)$, while the two orthogonal valleys represent keeping only one of the coefficients at one. In fig.~\ref{fig:spin3ca2ca3}, we vary $C_{a,2}=C_{b,2}=C_2$ and $C_{a,3}=C_{b,3}=C_3$, while keeping all remaining coefficients at one. Once again the corresponding black hole point gives near-zero entanglement. While the deformation of each Wilson coefficient away from unity raises the entanglement entropy, the effect of $C_{2}$ is comparatively dominant. This is illustrated in fig.~\ref{fig:spin2}, which compares $\Delta S$ for deformations of the three different pairs of Wilson coefficients in the spin-2 system. We can observe that deforming $(C_{a, 2},C_{b,2})$ has the dominant effect in generating entanglement. Finally, we expect that including higher spins does not change the main result. The minimum of the relative entropy is always at the Kerr black hole Wilson coefficient point. Moving away from this point quickly increases the entanglement entropy. A comparison between the spin-1, spin-2 and spin-3 cases, keeping all Wilson coefficients at one except $C_{a,2}$, can be seen in fig.~\ref{fig:1d}.\\ \begin{figure} \includegraphics[width=8cm]{spin2ci} \centering \caption{Relative entanglement entropy for massive spin-2 particles. The initial state is set to $\lvert{\rm in}\rangle=\lvert\upuparrows\rangle$ and the kinematic parameters are given by $|\vec{p}_a| = |\vec{p}_b| = |\vec{p}|$, $m_a = m_b = m$, $\vec{b} = (b, 0,0)$, $Gm^2 = 10^{-4}$, $|\vec{p}|b = 1000$, $|\vec{p}|/m = 100$. The planes correspond, respectively, to $(C_{a,2},C_{b,2})$, $(C_{a,3},C_{b,3})$ and $(C_{a,4},C_{b,4})$, while all other Wilson coefficients are set to one. In each case, the minimum, represented by the black point, corresponds to the Wilson coefficients set to one and $\Delta S \approx 5.84\times10^{-9}$.} \label{fig:spin2} \end{figure} \begin{figure} \includegraphics[width=8cm]{spin3} \centering \caption{Relative entanglement entropy for massive spin-3 particles.
The initial state is set to $\lvert{\rm in}\rangle=\lvert\upuparrows\rangle$ and the kinematic parameters are given by $|\vec{p}_a| = |\vec{p}_b| = |\vec{p}|$, $m_a = m_b = m$, $\vec{b} = (b, 0,0)$, $Gm^2 = 10^{-4}$, $|\vec{p}|b = 1000$, $|\vec{p}|/m = 100$. The Wilson coefficients $(C_{a, i \neq 2},C_{b, j \neq 2})$ are set to one. The minimum, represented by the black point, is at $\Delta S \approx 1.26\times10^{-8}$ and corresponds to the Wilson coefficient value $(C_{a,2},C_{b,2})=(1,1)$.} \label{fig:spin3} \end{figure} \begin{figure} \includegraphics[width=8cm]{Spin3C2C3} \centering \caption{Relative entanglement entropy for massive spin-3 particles. The initial state is set to $\lvert{\rm in}\rangle=\lvert\upuparrows\rangle$ and the kinematic parameters are given by $|\vec{p}_a| = |\vec{p}_b| = |\vec{p}|$, $m_a = m_b = m$, $\vec{b} = (b, 0,0)$, $Gm^2 = 10^{-4}$, $|\vec{p}|b = 1000$, $|\vec{p}|/m = 100$, $C_{a,2}=C_{b,2}=C_2$, $C_{a,3}=C_{b,3}=C_3$. All other Wilson coefficients are set to one. The minimum is at $\Delta S \approx 1.26\times10^{-8}$ and corresponds to the Wilson coefficient value $(C_{2},C_{3})=(1,1)$. } \label{fig:spin3ca2ca3} \end{figure} \begin{figure} \includegraphics[width=8cm]{plot1d} \centering \caption{Comparison between the relative entanglement entropies for spin-1, spin-2 and spin-3. All Wilson coefficients are set to one except $C_{a,2}$. } \label{fig:1d} \end{figure} \section{Conclusions and outlook} In this letter, we consider the entanglement entropy generated by gravitationally coupled binary systems. By considering the Hilbert space of spin states, we demonstrate that minimal coupling for massive particles of arbitrary spin has the unique feature of generating nearly zero entanglement in the scattering process. Given the correspondence between minimal coupling and rotating black holes, the result suggests that such a feature can also be attributed to the entanglement properties of spinning black holes. Note that such a phenomenon is reminiscent of what was found in strong interactions, where entanglement suppression is associated with symmetry enhancement~\cite{Beane:2018oxh}. While the relative entropy is near zero, it is not zero, which may be an artifact of confining ourselves to leading order in the Eikonal approximation. This makes clear that an investigation at NLO is desirable. As mentioned in the introduction, there is a general correspondence between minimal coupling and black-hole-like solutions in four dimensions. This includes Reissner-Nordstr\"{o}m, Kerr-Newman~\cite{Moynihan:2019bor, Chung:2019yfs}, Taub-NUT~\cite{Huang:2019cja} and Kerr-Taub-NUT~\cite{NewPaper}. Furthermore, gravitationally induced spin multipoles have also been studied recently in the context of fuzzball microstates~\cite{Bena:2020see, Bena:2020uup, Bianchi:2020bxa}. For Kerr-Newman there are additional electromagnetic spin multipoles, while for Kerr-Taub-NUT and fuzzballs, the minimal couplings are dressed with additional complex phase factors. It will be fascinating to explore their features through the prism of spin entanglement. Finally, it will also be interesting to understand quantum corrections, in particular whether or not they generate anomalous gravitational multipole moments. \section*{Acknowledgements} We would especially like to thank Jung-Wook Kim for discussions on the computation of the Eikonal phase for spin effects. We also thank Bo-Ting Chen, Tzu-Chen Huang, Jun-Yu Liu and Andreas Helset for enlightening discussions. C.S.M.
thanks Hugo Marrochio for the encouragement in the early stages of the project. The work of C.S.M.\ and R.A.\ is supported by the Alexander von Humboldt Foundation, in the framework of the Sofja Kovalevskaja Award 2016, endowed by the German Federal Ministry of Education and Research, and also supported by the Cluster of Excellence ``Precision Physics, Fundamental Interactions, and Structure of Matter'' (PRISMA$^+$ EXC 2118/1) funded by the German Research Foundation (DFG) within the German Excellence Strategy (Project ID 39083149). Mz C, Yt H, and Mk T are supported by MoST Grant No. 106-2628-M-002-012-MY3. Yt H is also supported by the Golden Jade fellowship. \vspace{1.8cm} \bibliographystyle{apsrev4-1_title}
\section{Introduction \label{sec:intro}} Precision electroweak measurements suggest that the mass of the Higgs boson in the standard model (SM) should be of ${\cal O}(100)$ GeV. However, this leads to a serious problem, the so-called hierarchy problem, because the Higgs boson mass generally suffers from quadratic divergence at the quantum level. It is thus unnatural for the Higgs boson to be so light if the theory cutoff scale is high, unless some mechanism is introduced for stabilization. Such a problem generally calls for some symmetry ({\it e.g.}, supersymmetry) to control the scalar sector and leads to physics beyond the SM. In the late 1970s, an alternative method to stabilize the Higgs boson mass was proposed. The basic idea was to embed the Higgs field as the extra-dimensional components of a gauge field in a higher dimensional space, with an enlarged gauge symmetry broken down to the SM gauge group in 4D spacetime \cite{Manton:1979kb,Forgacs:1979zs,Fairlie:1979at}. This idea of gauge-Higgs unification has recently been revived \cite{Dvali:2001qr,ArkaniHamed:2001nc,Csaki:2002ur,Scrucca:2003ra,Antoniadis:2001cv,Hall:2001zb,Burdman:2002se,Haba:2002vc,Choi:2003kq, Sakamura:2007qz, Medina:2007hz, Lim:2007jv, Hosotani:2008tx}. A desirable feature of such models is that the gauge origin ensures that the Higgs mass in the bulk is protected from quadratic divergence. Moreover, by compactifying the theory on orbifolds, unwanted fields can be projected out from the low-energy spectrum. The compactification scale is taken to be around the TeV scale \cite{Antoniadis:1990ew, Antoniadis:1993jp}. A simple implementation of the idea in 5D, however, encounters the difficulty of a small Higgs mass due to the absence of a tree-level Higgs potential. One is then led to consider 6D models because a quartic Higgs interaction term can arise from the gauge kinetic term \cite{Scrucca:2003ut}. The Higgs mass can also be enhanced through the introduction of a warped spacetime \cite{Contino:2003ve, Hosotani:2005nz} or by choosing a suitable bulk matter content \cite{Haba:2004qf}. However, the quadratic mass term here is still radiatively generated and possibly divergent. A more successful 6D model based on the SO(12) gauge group was proposed, where a monopole background exists to break the higher dimensional symmetry and results in a negative squared mass \cite{Nomura:2008sx}. Nevertheless, a set of symmetries relating the SU(2) isometry transformation on $S^2$ to the gauge transformation of the gauge fields has to be imposed in order to carry out dimensional reduction of the gauge sector. This approach of dimensional reduction is known as the coset space dimensional reduction, and leads to a stronger constraint on the four dimensional Lagrangian after dimensional reduction \cite{Manton:1979kb, Forgacs:1979zs,Kapetanakis:1992hf}. We consider a gauge-Higgs unification model defined on the 6D spacetime where the extra spatial dimensions are compactified on a 2-sphere $S^2$. The gauge symmetry in the model constructed here is assumed to be $E_6$. With a background field configuration and suitable boundary conditions on $S^2$ for the fields, we obtain the full SM particle content as the zero modes in the model. In particular, no relation between extra-dimensional isometry and gauge symmetries is needed. We are able to identify a Higgs boson doublet coming from the two extra-spatial components of the gauge fields in the adjoint representation.
Unwanted modes are either projected out by compactification or given masses due to the interaction with the background field. The Higgs potential in the effective 4D theory has the desired form to break the electroweak symmetry. The compactification scale is fixed with the input of the $W$ boson mass. A mass relation between the Higgs and $W$ bosons is obtained. The Weinberg angle is the same as in the usual SU(5) grand unified theory (GUT). Moreover, the Higgs particle is a Kaluza-Klein (KK) mode with an odd KK-parity. It is stable under a $Z_2$ symmetry and thus a potential dark matter candidate. Discussions of dark matter candidates are also given in other gauge-Higgs unification models \cite{Panico:2008bx, Carena:2009yt, Hosotani:2009jk, Haba:2009xu}. This paper is organized as follows. In Section~\ref{sec:model}, we describe the 6D model compactified on the $S^2/Z_2$ orbifold. A consistent set of parity assignments of fields in both representations is given, followed by a review of the branching of the $E_6$ group and the reduction of its fundamental and adjoint representations. We then work out the details of obtaining the SM particle content as the zero modes of gauge and fermion fields in the model. In Section~\ref{sec:higgs}, we identify a KK mode of an appropriate representation of the extra-dimensional components of the gauge field as the Higgs field in the SM. After obtaining the required commutation relations of gauge generators, we compute the tree-level Higgs potential. The result is then used to obtain a relation between the Higgs mass and the $W$ boson mass. In Section~\ref{KKmass}, we discuss the KK mode mass spectra for fermions and gauge bosons in the presence of the background gauge field. We find that the Higgs boson in the model is a potential dark matter candidate due to its odd KK-parity. Our findings are summarized in Section~\ref{sec:summary}. \section{Model \label{sec:model}} In this section, we develop the model based on E$_6$ gauge symmetry in six-dimensional spacetime with $S^2/Z_2$ extra space. On the orbifold $S^2/Z_2$, a set of non-trivial boundary conditions is imposed to restrict the gauge symmetry and massless particle contents in four-dimensional spacetime. We also introduce in this model a background gauge field, which corresponds to a Dirac monopole configuration, to obtain chiral fermions in four dimensions. We then show how the E$_6$ gauge symmetry is reduced to the SM gauge symmetry with some extra U(1)'s, {\it i.e.}, SU(3) $\times$ SU(2) $\times$ U(1)$_Y$ $\times$ U(1)$_X$ $\times$ U(1)$_Z$, and how the massless gauge bosons and the SM Higgs boson in four dimensions are obtained in the model. We note in passing that all gauge groups of lower ranks ({\it e.g.}, SO(10), SO(11), SU(6), etc.) either cannot give a Higgs field in the right representation or do not support SM chiral fermions in four dimensions. \subsection{Action in six-dimensional spacetime} We start by considering the E$_6$ gauge symmetry group in six-dimensional spacetime, which is assumed to be a direct product of the four-dimensional Minkowski spacetime $M^4$ and the compactified two-sphere orbifold $S^2/Z_2$, {\it i.e.}, $M^4 \times S^2/Z_2$. The two-sphere has a radius of $R$. We denote the six-dimensional spacetime coordinates by $X^M = (x^{\mu}, y^{\theta}=\theta, y^{\phi}=\phi)$, where $x^{\mu}$ and $\{ \theta, \phi \}$ are the $M^4$ coordinates and spherical coordinates of $S^2$, respectively.
The spacetime index $M$ runs over $\mu \in \{0,1,2,3 \}$ and $\alpha \in \{ \theta, \phi \}$. The orbifold $S^2/Z_2$ is defined by the identification of $(\theta,\phi)$ and $(\pi - \theta,-\phi)$ \cite{Maru:2009wu}. The two fixed points are $(\pi/2,0)$ and $(\pi/2,\pi)$. The spacetime metric of $M^6$ is \begin{equation} g_{MN} = \begin{pmatrix} \eta_{\mu \nu} & 0 \\ 0 & -g_{\alpha \beta} \end{pmatrix} ~, \end{equation} where $\eta_{\mu \nu} = \mbox{diag}(1,-1,-1,-1)$ and $g_{\alpha \beta} = R^2 \mbox{diag}(1,\sin^2 \theta)$ are the metrics associated with $M^4$ and $S^2$, respectively. The action in six-dimensional spacetime is then \begin{equation} S_6 = \int dx^4 \, R^2 d\Omega \biggl[ \bar{\Psi} i \Gamma^{\mu} D_{\mu} \Psi + \bar{\Psi} i \Gamma^{a} e^{\alpha}_{a} D_{\alpha} \Psi - \frac{1}{4g^2} Tr[F_{MN}F^{MN}] \biggr] \end{equation} where $D_{M}$ ($M=0,1,2,3,\theta,\phi$) are covariant derivatives, $\Gamma^{\mu,a}$ are the Dirac gamma matrices in six dimensions, and $e^{\alpha}_a$ are the vielbeins on the two-sphere. Explicitly, \begin{equation}\begin{array}{lll} D_{\mu} = \partial_{\mu} - iA_{\mu}, & D_{\theta} = \partial_{\theta} -i A_{\theta}, & D_{\phi} = \partial_{\phi} -i \frac{\Sigma_3}{2} \cos \theta -iA_{\phi}, \\ \Gamma_{\mu} = \gamma_{\mu} \otimes \mathbf{I}_2, & \Gamma_4 = \gamma_{5} \otimes \sigma_1, & \Gamma_5 = \gamma_{5} \otimes \sigma_2, \\ e^1_{\theta} = R, & e^2_{\phi} = R \sin \theta, & e^1_{\phi} = e^2_{\theta} = 0, \end{array}\end{equation} where $\sigma_i \, (i=1,2,3)$ are the Pauli matrices, $\mathbf{I}_d$ is the $d \times d$ identity matrix, and $\Sigma_3$ is defined as $\Sigma_3=\mathbf{I}_4 \otimes \sigma_3$. The gauge field strength is $F_{MN}=\partial_M A_N -\partial_N A_M -i[A_M,A_N]$. Note that the covariant derivative $D_{\phi}$ has a spin connection term $i \frac{\Sigma_3}{2} \cos \theta$ for fermions because of the nonzero curvature of the two-sphere. This term generally induces a fermion mass in the four-dimensional effective action after integrating out the extra space. This mass term, as we will see in Section~\ref{sec:fermions}, can be avoided by introducing a background gauge field $A^B_{\phi} \equiv {\tilde A}^B_\phi \sin\theta$ that corresponds to a Dirac monopole \cite{RandjbarDaemi:1982hi} \begin{equation} \label{background} {\tilde A}^B_{\phi} = - Q \frac{\cos \theta \mp 1}{\sin \theta} ~, \quad (-: 0 \leq \theta< \frac{\pi}{2} ~, \quad +: \frac{\pi}{2} \leq \theta \leq \pi) \end{equation} where $Q$ is proportional to the generator of a U(1) subgroup of the original gauge group E$_6$. \subsection{Boundary conditions on the two-sphere orbifold \label{sec:bc}} On the two-sphere orbifold, one can consider two parity operations $P_1: \, (\theta,\phi) \to (\pi-\theta,-\phi)$ and $P_2: \, (\theta,\phi) \to (\pi - \theta,2\pi-\phi)$, which are related to each other by an azimuthal translation $\phi \to \phi+2\pi$.
We impose the following boundary conditions on both gauge and fermion fields under the two parity operations: \begin{eqnarray} \label{boundary-condition1} A_{\mu} (x,\pi-\theta,-\phi) &=& P_1 A_{\mu}(x,\theta,\phi) P_1 ~, \\ \label{boundary-condition2} A_{\theta,\phi}(x,\pi-\theta,-\phi) &=& - P_1 A_{\theta,\phi}(x,\theta,\phi) P_1 ~, \\ \label{boundary-condition3} \Psi (x,\pi-\theta,-\phi) &=& \pm \gamma_5 P_1 \Psi(x,\theta,\phi) ~, \\ \label{boundary-condition4} A_{\mu} (x,\pi-\theta,2\pi-\phi) &=& P_2 A_{\mu}(x,\theta,\phi) P_2 ~, \\ \label{boundary-condition5} A_{\theta,\phi}(x,\pi-\theta,2\pi-\phi) &=& - P_2 A_{\theta,\phi}(x,\theta,\phi) P_2 ~, \\ \label{boundary-condition6} \Psi (x,\pi-\theta,2\pi-\phi) &=& \pm \gamma_5 P_2 \Psi(x,\theta,\phi) ~. \end{eqnarray} These boundary conditions are determined by requiring the invariance of the six-dimensional action under the transformation $(\theta,\phi) \rightarrow (\pi-\theta,-\phi)$. The projection matrices $P_{1,2}$ act on the gauge group representation space and have eigenvalues $\pm 1$. They assign different parities for different representation components. For fermion boundary conditions, the sign in front of $\gamma_5$ can be either $+$ or $-$ since the fermions always appear in bilinear forms in the action. The 4-dimensional action is then restricted by these parity assignments and our choice of the background gauge field. \subsection{Gauge group reduction \label{sec:group}} We consider the following gauge group reduction \begin{eqnarray} \label{group-red} E_6 &\supset& SO(10) \times U(1)_Z \nonumber \\ &\supset& SU(5) \times U(1)_X \times U(1)_Z \nonumber \\ &\supset& SU(3) \times SU(2) \times U(1)_Y \times U(1)_X \times U(1)_Z ~. \end{eqnarray} The background gauge field in Eq.~(\ref{background}) is chosen to belong to the U(1)$_Z$ group. This choice is needed in order to obtain chiral SM fermions in four dimensions, as will be discussed later. There are two other symmetry reduction schemes. One can prove that the results in those two schemes are effectively the same as the one considered here once we require the correct U(1) combinations for the hypercharge and the background field. We then impose the parity assignments with respect to the fixed points, Eqs.~(\ref{boundary-condition1})-(\ref{boundary-condition6}). The parity assignments for the fundamental representation of E$_6$ are chosen to be \begin{eqnarray} \label{d27} {\bf 27} &=& (1,2)(-3,-2,-2)^{(+,+)}+(1,2)(3,2,-2)^{(-,-)}+(1,2)(-3,3,1)^{(+,-)} \nonumber \\ && + (1,1)(6,-1,1)^{(+,+)}+(1,1)(0,-5,1)^{(-,-)}+(1,1)(0,0,4)^{(-,+)} \nonumber \\ && + (3,2)(1,-1,1)^{(-,+)}+(3,1)(-2,2,-2)^{(+,-)} + (\bar{3},1)(-4,-1,1)^{(+,+)} \nonumber \\ && +(\bar{3},1)(2,3,1)^{(+,+)}+(\bar{3},1)(2,-2,2)^{(-,+)}, \end{eqnarray} where, for example, $(+,-)$ means that the parities under $P_1$ and $P_2$ are (even,odd).
By the requirement of consistency, we find that the components of $A_{\mu}$ in the adjoint representation have the parities under $A_{\mu} \rightarrow P_1 A_{\mu} P_1$ $(P_2 A_{\mu} P_2)$ as follows: \begin{eqnarray} \label{d78} {\bf 78}|_{A_{\mu}} &=& \underline{(8,1)(0,0,0)^{(+,+)}+(1,3)(0,0,0)^{(+,+)}} \nonumber \\ && + \underline{(1,1)(0,0,0)^{(+,+)} +(1,1)(0,0,0)^{(+,+)}+(1,1)(0,0,0)^{(+,+)}} \nonumber \\ && + (3,2)(-5,0,0)^{(-,+)}+(\bar{3},2)(5,0,0)^{(-,+)}+ (3,2)(1,4,0)^{(+,-)}+ (\bar{3},2)(-1,-4,0)^{(+,-)} \nonumber \\ && +(3,1)(4,-4,0)^{(-,-)}+(\bar{3},1)(-4,4,0)^{(-,-)}+(1,1)(-6,-4,0)^{(-,-)}+(1,1)(6,4,0)^{(-,-)} \nonumber \\ && + (3,2)(1,-1,-3)^{(+,+)}+ (\bar{3},2)(-1,1,3)^{(+,+)}+(3,1)(4,1,3)^{(-,+)}+(\bar{3},1)(-4,-1,-3)^{(-,+)} \nonumber \\ && + (3,1)(-2,-3,3)^{(+,-)}+ (\bar{3},1)(2,3,-3)^{(+,-)}+(1,2)(-3,3,-3)^{(-,-)}+(1,2)(3,-3,3)^{(-,-)} \nonumber \\ && +(1,1)(-6,1,3)^{(-,+)}+(1,1)(6,-1,-3)^{(-,+)}+(1,1)(0,-5,-3)^{(+,-)}+(1,1)(0,5,3)^{(+,-)}, \nonumber \\ \end{eqnarray} where the underlined components correspond to the adjoint representations of SU(3) $\times$ SU(2) $\times$ U(1)$_Y$ $\times$ U(1)$_X$ $\times$ U(1)$_Z$, respectively. We note that the components with parity $(+,+)$ can have massless zero modes in four dimensions. Such components include the adjoint representations of SU(3) $\times$ SU(2) $\times$ U(1)$^3$, $(3,2)(1,-1,-3)$ and its conjugate. The latter components seem problematic since they do not appear in the low-energy spectrum. In fact, these components acquire masses due to the background field from the term proportional to $F_{\mu \phi} F^{\mu}_{\ \phi}$ \begin{eqnarray} && Tr\left[ -\frac{1}{4} F_{\mu \nu}F^{\mu \nu} + \frac{1}{2R^2 \sin^2 \theta} F_{\mu \phi}F^{\mu}_{\ \phi} \right] \nonumber \\ && \quad \rightarrow Tr\left[ -\frac{1}{4} (\partial_{\mu} A_{\nu}-\partial_{\nu} A_{\mu})(\partial^{\mu} A^{\nu}-\partial^{\nu} A^{\mu}) - \frac{1}{2R^2 \sin^2 \theta} [A_{\mu},A^B_{\phi}][A^{\mu},A^B_{\phi}] \right] ~. \end{eqnarray} For the components of $A_{\mu}$ with nonzero U(1)$_Z$ charge, we have \begin{equation} A_{\mu}^i Q_i+ A_{i\mu} Q^{i} \in A_{\mu} ~, \end{equation} where $Q_i \, (Q^i = Q_i^{\dagger})$ are generators corresponding to distinct components in Eq.~(\ref{d78}) that have nonzero U(1)$_Z$ charges, and $A_{i\mu} \, (A_{\mu}^i = A_{i \mu}^{\dagger})$ are the corresponding components of $A_{\mu}$. We then find the term \begin{eqnarray} \frac{1}{\sin^2 \theta}Tr[[A_{\mu},A^B_\phi][A^{\mu},A^B_\phi]] &=& \frac{(\cos \theta \mp 1)^2}{\sin^2 \theta} Tr[[A_{\mu}^i Q_i+A_{i \mu} Q^i,Q][ A^{i \mu} Q_i+A_{i}^{\mu} Q^i,Q]] \nonumber \\ &=& -2 |q|^2 \frac{(\cos \theta \mp 1)^2}{\sin^2 \theta} A^{i \mu} A_{i \mu} ~, \end{eqnarray} where $q$ is the $Q$ charge of the relevant component. Use of the facts that $A_{\phi}^B$ belongs to U(1)$_Z$ and that $Tr[Q_i Q^i]=2$ has been made in the above equation. 
A mass is thus associated with the lowest modes of those components of $A_{\mu}$ with nonzero U(1)$_Z$ charges: \begin{eqnarray} && \int d \Omega Tr\left.\left[ -\frac{1}{4}(\partial_{\mu} A_{\nu}-\partial_{\nu} A_{\mu})(\partial^{\mu} A^{\nu}-\partial^{\nu} A^{\mu}) - \frac{1}{2R^2 \sin^2 \theta} [A_{\mu},A^B_{\phi}][A^{\mu},A^B_{\phi}] \right] \right|_{\rm lowest} \nonumber \\ && \quad \rightarrow -\frac{1}{2} \left[ \partial_{\mu} A_{i \nu}(x) - \partial_{\nu} A_{i \mu}(x) \right] \left[ \partial^{\mu} A^{i \nu}(x) - \partial^{\nu} A^{i \mu}(x) \right] + m^2_B A_{\mu}^i(x) A^{i\mu}(x) ~, \end{eqnarray} where the subscript `lowest' means that only the lowest KK modes are kept. Here the lowest KK modes of $A_{\mu}$ correspond to the term $A_{\mu}(x)/\sqrt{4 \pi}$ in the KK expansion. In summary, any representation of $A_\mu$ carrying a nonzero U(1)$_Z$ charge acquires a mass $m_B$ from the background field contribution after one integrates over the extra spatial coordinates. More explicitly, \begin{equation} \label{eq:nonSMgaugeMass} m^2_B = \frac{|q|^2}{4 \pi R^2} \int d \Omega \frac{(\cos \theta \mp 1)^2}{\sin^2 \theta} \simeq 0.39 \frac{ |q|^2}{R^2} \end{equation} for the zero mode. Therefore, the $(3,2)(1,-1,-3)$ representation and its conjugate are elevated in mass to disappear from the low-energy spectrum. In the end, the correct symmetry reduction is achieved since only the components of 4-dimensional gauge field $A_{\mu}$ in the adjoint representation of SU(3) $\times$ SU(2) $\times$ U(1)$_Y$ $\times$ U(1)$_X$ $\times$ U(1)$_Z$ are allowed to have zero modes. A general discussion about the KK mode masses of $A_{\mu}$ will be given in Section~\ref{KKmass}. \subsection{Scalar field contents in four dimensions \label{sec:scalar}} The scalar contents in four dimensions are obtained from the extra-dimensional components of the gauge field $\{ A_{\theta}, A_{\phi} \}$ after integrating out the extra spatial coordinates. The kinetic term and potential term of $\{A_{\theta}, A_{\phi} \}$ are obtained from the gauge sector containing these components \begin{eqnarray} \label{action-scalar} S_{\rm scalar} &=& \int dx^4 d \Omega \Bigl( \frac{1}{2 g^2} Tr[ F_{\mu \theta} F^{\mu}_{\ \theta} ] + \frac{1}{2 g^2 \sin^2 \theta} Tr[ F_{\mu \phi} F^{\mu}_{\ \phi} ] \nonumber \\ && \qquad \qquad -\frac{1}{2 g^2 R^2 \sin^2 \theta} Tr[ F_{\theta \phi} F_{\theta \phi} ] \Bigr) \nonumber \\ &\rightarrow& \int dx^4 d \Omega \Bigl( \frac{1}{2 g^2} Tr[(\partial_{\mu} A_{\theta}-i[A_{\mu},A_{\theta}])^2] + \frac{1}{2 g^2} Tr[(\partial_{\mu} \tilde{A}_{\phi}-i[A_{\mu},\tilde{A}_{\phi}])^2 ] \nonumber \\ && \qquad \qquad -\frac{1}{2 g^2 R^2} Tr \biggl[ \biggl( \frac{1}{\sin \theta} \partial_{\theta} (\sin \theta \tilde{A}_{\phi} + \sin \theta \tilde{A}^B_{\phi}) -\frac{1}{\sin \theta} \partial_{\phi} A_{\theta} - i[A_{\theta},\tilde{A}_{\phi}+\tilde{A}^B_{\phi}] \biggr)^2 \biggr] ~, \nonumber \\ \end{eqnarray} where we have taken $A_{\phi} = \tilde{A}_{\phi} \sin \theta + \tilde{A}_{\phi}^B \sin \theta$. In the second step indicated by the arrow in Eq.~(\ref{action-scalar}), we have omitted terms which do not involve $A_{\theta}$ and $\tilde{A}_{\phi}$ from the right-hand side of the first equality. It is known that one generally cannot obtain massless modes for physical scalar components in four dimensions \cite{Maru:2006, Dohi:2010vc}. One can see this by noting that the eigenfunction of the operator $\frac{1}{ \sin \theta } \partial_{\theta} \sin \theta$ with zero eigenvalue is not normalizable \cite{Maru:2006}.
In other words, these fields have only KK modes. However, an interesting feature is that it is possible to obtain a negative squared mass when taking into account the interactions between the background gauge field $\tilde{A}_{\phi}^B$ and $\{A_{\theta}, \tilde{A}_{\phi} \}$. This happens when the component carries a nonzero U(1)$_Z$ charge, as the background gauge field belongs to U(1)$_Z$. In this case, the $(\ell=1,m=1)$ modes of these real scalar components are found to have a negative squared mass in four dimensions. They can be identified as the Higgs fields once they are shown to belong to the correct representation under the SM gauge group. Here the numbers $(\ell,m)$ are the angular momentum quantum numbers on $S^2/Z_2$, and each KK mode is characterized by these numbers. One can show that the $(\ell=1,m=0)$ mode has a positive squared mass and is not considered as the Higgs field. A discussion of the KK masses with general $(\ell,m)$ will be given in Section~\ref{KKmass}. With the parity assignments with respect to the fixed points, Eqs.~(\ref{boundary-condition2}) and (\ref{boundary-condition5}), we have for the $A_{\theta}$ and $A_{\phi}$ fields \begin{eqnarray} \label{78scalar} {\bf 78}|_{A_{\theta,\phi}} & = & (8,1)(0,0,0)^{(-,-)}+(1,3)(0,0,0)^{(-,-)} \nonumber \\ && +(1,1)(0,0,0)^{(-,-)} +(1,1)(0,0,0)^{(-,-)}+(1,1)(0,0,0)^{(-,-)} \nonumber \\ && + (3,2)(-5,0,0)^{(+,-)}+(\bar{3},2)(5,0,0)^{(+,-)}+ (3,2)(1,4,0)^{(-,+)}+ (\bar{3},2)(-1,-4,0)^{(-,+)} \nonumber \\ && +(3,1)(4,-4,0)^{(+,+)}+(\bar{3},1)(-4,4,0)^{(+,+)}+(1,1)(-6,-4,0)^{(+,+)}+(1,1)(6,4,0)^{(+,+)} \nonumber \\ && + (3,2)(1,-1,-3)^{(-,-)}+ (\bar{3},2)(-1,1,3)^{(-,-)}+(3,1)(4,1,3)^{(+,-)}+(\bar{3},1)(-4,-1,-3)^{(+,-)} \nonumber \\ && + (3,1)(-2,-3,3)^{(-,+)}+ (\bar{3},1)(2,3,-3)^{(-,+)}+(1,2)(-3,3,-3)^{(+,+)}+(1,2)(3,-3,3)^{(+,+)} \nonumber \\ && +(1,1)(-6,1,3)^{(+,-)}+(1,1)(6,-1,-3)^{(+,-)}+(1,1)(0,-5,-3)^{(-,+)}+(1,1)(0,5,3)^{(-,+)} ~. \nonumber \\ \end{eqnarray} Components with $(+,-)$ or $(-,+)$ parity do not have KK modes since they are odd under $\phi \rightarrow \phi+2\pi$ and the KK modes of gauge field are specified by integer angular momentum quantum numbers $\ell$ and $m$ on the two-sphere. We then concentrate on the components which have either $(+,+)$ or $(-,-)$ parity and nonzero U(1)$_Z$ charges as candidates for the Higgs field. These include $\{ (1,2)(3,-3,3) + {\rm h.c.} \}$ and $\{(3,2)(1,-1,-3) + {\rm h.c.} \}$ with parities $(+,+)$ and $(-,-)$, respectively. The representations $(1,2)(-3,3,-3)$ and $(1,2)(3,-3,3)$ have the correct quantum numbers for the SM Higgs doublet. Therefore, we identify the $(1,1)$ mode of these components as the SM Higgs fields in four dimensions. \subsection{Chiral fermions in four dimensions \label{sec:fermions}} We introduce fermions as the Weyl spinor fields of the six-dimensional Lorentz group SO(1,5). They can be written in terms of the SO(1,3) Weyl spinors as \begin{eqnarray} \label{chiralR} \Psi_+ = \begin{pmatrix} \psi_R \\ \psi_L \end{pmatrix} ~, \\ \label{chiralL} \Psi_- = \begin{pmatrix} \psi_L \\ \psi_R \end{pmatrix} ~. \end{eqnarray} In general, fermions on the two-sphere do not have massless KK modes because of the positive curvature of the two-sphere. The massless modes can nevertheless be obtained by incorporating the background gauge field (\ref{background}), for it can cancel the contribution from the positive curvature.
In this case, the condition for obtaining a massless fermion mode is \begin{equation} \label{massless-condition} Q \Psi = \pm \frac{1}{2} \Psi ~, \end{equation} where $Q$ comes from the background gauge field and is proportional to the U(1)$_Z$ generator \cite{RandjbarDaemi:1982hi,Maru:2009wu,Dohi:2010vc}. We observe that the upper [lower] component on the RHS of Eq.~(\ref{chiralR}) [(\ref{chiralL})] has a massless mode for the $+$ $(-)$ sign on the RHS of Eq.~(\ref{massless-condition}). In our model, we choose the fermions as the Weyl fermions $\Psi_-$ belonging to the {\bf 27} representation of E$_6$. The {\bf 27} representation is decomposed as in Eq.~(\ref{d27}) under the group reduction, Eq.~(\ref{group-red}). In this decomposition, we find that our choice of the background gauge field of U(1)$_Z$ is suitable for obtaining massless fermions since all the components that contain SM fermions have U(1)$_Z$ charge 1. In the fundamental representation, the U(1)$_Z$ generator is \begin{eqnarray} Q_Z = \frac{1}{6} \mbox{diag} (-2,-2,-2,-2,1,1,1,1,4,1,1,1,1,1,1,-2,-2,-2,1,1,1,1,1,1,-2,-2,-2) ~, \nonumber \\ \end{eqnarray} according to the decomposition Eq.~(\ref{d27}). By identifying $Q=3Q_Z$, we readily obtain the condition \begin{equation} Q \Psi_- = \frac{1}{2} \Psi_-. \end{equation} Therefore, the chiral fermions $\psi_L$ in four dimensions have zero modes. Next, we consider the parity assignments for the fermions with respect to the fixed points of $S^2/Z_2$. The boundary conditions are given by Eqs.~(\ref{boundary-condition3}) and (\ref{boundary-condition6}). It turns out that four ${\bf 27}$ fermion copies with different boundary conditions are needed in order to obtain an entire generation of massless SM fermions. They are denoted by $\Psi^{(1,2,3,4)}$ with the following parity assignments \begin{eqnarray} \Psi_{\pm}^{(i)} (x,\pi-\theta,-\phi) &=& \xi \gamma_5 P_1 \Psi_{\pm}^{(i)}(x,\theta,\phi) ~, \\ \Psi_{\pm}^{(i)} (x,\pi-\theta,2\pi-\phi) &=& \eta \gamma_5 P_2 \Psi_{\pm}^{(i)}(x,\theta,\phi) ~, \end{eqnarray} where $\gamma_5$ is the chirality operator, and $(\xi,\eta) = (+,+)$, $(-,-)$, $(+,-)$ and $(-,+)$ for $i = 1,2,3,4$, respectively.
From these fermions we find that $\psi_L^{(1,2,3,4)}$ have the parity assignments \begin{eqnarray} {\bf 27}_{\psi_L^{(1)}} &=& (1,2)(-3,-2,-2)^{(-,-)}+(1,2)(3,2,-2)^{(+,+)}+(1,2)(-3,3,1)^{(-,+)} \nonumber \\ && + (1,1)(6,-1,1)^{(-,-)}+ \underline{(1,1)(0,-5,1)^{(+,+)}}+(1,1)(0,0,4)^{(+,-)} \nonumber \\ && + (3,2)(1,-1,1)^{(+,-)}+(3,1)(-2,2,-2)^{(-,+)} + (\bar{3},1)(-4,-1,1)^{(-,-)} \nonumber \\ && +(\bar{3},1)(2,3,1)^{(-,-)}+(\bar{3},1)(2,-2,2)^{(+,-)} \\ {\bf 27}_{\psi_L^{(2)}} &=& (1,2)(-3,-2,-2)^{(+,+)}+(1,2)(3,2,-2)^{(-,-)}+(1,2)(-3,3,1)^{(+,-)} \nonumber \\ && + \underline{(1,1)(6,-1,1)^{(+,+)} } + (1,1)(0,-5,1)^{(-,-)}+(1,1)(0,0,4)^{(-,+)} \nonumber \\ && + (3,2)(1,-1,1)^{(-,+)}+(3,1)(-2,2,-2)^{(+,-)} + \underline{ (\bar{3},1)(-4,-1,1)^{(+,+)} } \nonumber \\ && + \underline{(\bar{3},1)(2,3,1)^{(+,+)} }+(\bar{3},1)(2,-2,2)^{(-,+)} \\ {\bf 27}_{\psi_L^{(3)}} &=& (1,2)(-3,-2,-2)^{(-,+)}+(1,2)(3,2,-2)^{(+,-)}+(1,2)(-3,3,1)^{(-,-)} \nonumber \\ && + (1,1)(6,-1,1)^{(-,+)}+(1,1)(0,-5,1)^{(+,-)}+(1,1)(0,0,4)^{(+,+)} \nonumber \\ && + \underline{ (3,2)(1,-1,1)^{(+,+)} } +(3,1)(-2,2,-2)^{(-,-)} + (\bar{3},1)(-4,-1,1)^{(-,+)} \nonumber \\ && +(\bar{3},1)(2,3,1)^{(-,+)}+(\bar{3},1)(2,-2,2)^{(+,+)} \\ {\bf 27}_{\psi_L^{(4)}} &=& (1,2)(-3,-2,-2)^{(+,-)}+(1,2)(3,2,-2)^{(-,+)}+ \underline{ (1,2)(-3,3,1)^{(+,+)} } \nonumber \\ && + (1,1)(6,-1,1)^{(+,-)}+(1,1)(0,-5,1)^{(-,+)}+(1,1)(0,0,4)^{(-,-)} \nonumber \\ && + (3,2)(1,-1,1)^{(-,-)}+(3,1)(-2,2,-2)^{(+,+)} + (\bar{3},1)(-4,-1,1)^{(+,-)} \nonumber \\ && +(\bar{3},1)(2,3,1)^{(+,-)}+(\bar{3},1)(2,-2,2)^{(-,-)} ~, \end{eqnarray} where the underlined components have even parities and U(1)$_Z$ charge 1. One can readily identify one generation of SM fermions, including a right-handed neutrino, as the zero modes of these components. A long-standing problem in the gauge-Higgs unification framework is the Yukawa couplings of the Higgs boson to the matter fields. This is because the couplings here all arise from gauge interactions. It is therefore extremely difficult to derive the observed rich fermion mass spectrum purely from the gauge coupling. In order to have flavor-dependent Yukawa couplings, one promising solution is to consider SM matter fields localized at orbifold fixed points and make use of nonlocal interactions with Wilson lines \cite{Csaki:2002ur}. \section{Higgs potential \label{sec:higgs}} \subsection{Higgs sector \label{sec:Higgssector}} The Lagrangian for the Higgs sector is derived from the gauge sector that contains the extra-dimensional components of the gauge field $\{A_{\theta}, \tilde{A}_{\phi} \}$, as given in Eq.~(\ref{action-scalar}), by considering their lowest KK modes. The kinetic term and potential term are, respectively, \begin{eqnarray} L_{K} &=& \frac{1}{2 g^2} \int d \Omega \left. \Bigl( Tr[(\partial_{\mu} A_{\theta}-i[A_{\mu},A_{\theta}])^2] + Tr[(\partial_{\mu} \tilde{A}_{\phi}-i[A_{\mu},\tilde{A}_{\phi}])^2 ] \Bigr) \right|_{\textrm{lowest}} ~, \\ V &=& \frac{1}{2 g^2 R^2} \int d \Omega \left. Tr \biggl[ \biggl( \frac{1}{\sin \theta} \partial_{\theta} (\sin \theta \tilde{A}_{\phi} + \sin \theta \tilde{A}^B_{\phi}) -\frac{1}{\sin \theta} \partial_{\phi} A_{\theta} - i[A_{\theta},\tilde{A}_{\phi}+\tilde{A}^B_{\phi}] \biggr)^2 \biggr] \right|_{\textrm{lowest}} ~. \nonumber \\ \end{eqnarray} Consider the $(1,1)$ mode of the $ \{ (1,2)(3,-3,3) + {\rm h.c.} \}$ representation in Eq.~(\ref{78scalar}) as argued in the previous section.
The gauge fields are given by the following KK expansions \begin{eqnarray} \label{expansion1} A_{\theta} &=& - \frac{1}{\sqrt{2}} [ \Phi_1(x) \partial_{\theta} Y_{11}^-( \theta, \phi) + \Phi_2(x) \frac{1}{\sin \theta} \partial_{\phi} Y_{11}^-( \theta, \phi) ] + \cdots ~, \\ \label{expansion2} \tilde{A}_{\phi} &=& \frac{1}{\sqrt{2}}[ \Phi_2(x) \partial_{\theta} Y_{11}^-( \theta,\phi)-\Phi_1(x) \frac{1}{\sin \theta} \partial_{\phi} Y_{11}^-( \theta,\phi)] + \cdots ~, \end{eqnarray} where $\cdots$ represents higher KK mode terms \cite{Maru:2009wu}. The function $Y_{11}^- = -1/\sqrt{2} [Y_{11}+Y_{1-1}]$ is odd under $(\theta,\phi) \rightarrow (\pi-\theta,-\phi)$. We will discuss their higher KK modes and masses in the presence of the background gauge field in Section~\ref{KKmass}. With Eqs.~(\ref{expansion1}) and (\ref{expansion2}), the kinetic term becomes \begin{eqnarray} L_{K}(x) = \frac{1}{2 g^2} \Bigl( Tr[D_{\mu} \Phi_1(x) D^{\mu} \Phi_1(x)] + Tr[D_{\mu} \Phi_2(x) D^{\mu} \Phi_2(x)] \Bigr) , \end{eqnarray} where $D_{\mu} \Phi_{1,2} = \partial_{\mu} \Phi_{1,2} -i[A_{\mu},\Phi_{1,2}]$ is the covariant derivative acting on $\Phi_{1,2}$. The potential term, on the other hand, is \begin{eqnarray} V &=& \frac{1}{2 g^2 R^2} \int d \Omega Tr \biggl[ \biggl( -\sqrt{2} Y_{11}^- \Phi_2(x) + Q + \frac{i}{2} [\Phi_1(x), \Phi_2(x)] \{ \partial_{\theta} Y_{11}^- \partial_{\theta} Y_{11}^- + \frac{1}{\sin^2 \theta} \partial_{\phi} Y_{11}^- \partial_{\phi} Y_{11}^- \} \nonumber \\ && \qquad \qquad +\frac{ i}{\sqrt{2}} [\Phi_1(x), \tilde{A}^B_{\phi}] \partial_{\theta} Y_{11}^- +\frac{ i}{\sqrt{2}} [\Phi_2(x), \tilde{A}^B_{\phi}] \frac{1}{\sin \theta} \partial_{\phi} Y_{11}^- \biggr)^2 \biggr] ~, \end{eqnarray} where $\partial_{\theta} (\sin \theta \tilde{A}_{\phi}^B) = Q \sin \theta$ from Eq.~(\ref{background}) is used. Expanding the square in the trace, we get \begin{eqnarray} \label{potential} V &=& \frac{1}{2 g^2 R^2} \int d \Omega Tr \biggl[ 2 (Y_{11}^-)^2 \Phi_2^2(x) + Q^2 - \frac{1}{4} [\Phi_1(x),\Phi_2(x)]^2 \left( \partial_{\theta} Y_{11}^- \partial_{\theta} Y_{11}^- + \frac{1}{\sin^2 \theta} \partial_{\phi} Y_{11}^- \partial_{\phi} Y_{11}^- \right)^2 \nonumber \\ && \qquad \qquad \qquad -\frac{1}{2} [\Phi_1(x),\tilde{A}^B_{\phi}]^2 (\partial_{\theta} Y_{11}^- )^2 -\frac{1}{2} [\Phi_2(x),\tilde{A}^B_{\phi}]^2 \left( \frac{1}{\sin \theta} \partial_{\phi} Y_{11}^- \right)^2 \nonumber \\ && \qquad \qquad \qquad -2 i \Phi_2(x) [\Phi_1(x), \tilde{A}_{\phi}^B] Y_{11}^- \partial_{\theta} Y_{11}^- - [\Phi_1(x),\tilde{A}_{\phi}^B] [\Phi_2(x),\tilde{A}_{\phi}^B] \partial_{\theta} Y_{11}^- \frac{1}{\sin \theta} \partial_{\phi} Y_{11}^- \nonumber \\ && \qquad \qquad \qquad + i Q [\Phi_1(x), \Phi_2(x)] \left( \partial_{\theta} Y_{11}^- \partial_{\theta} Y_{11}^- + \frac{1}{\sin^2 \theta} \partial_{\phi} Y_{11}^- \partial_{\phi} Y_{11}^- \right) ~\biggr] ~, \end{eqnarray} where terms that vanish after the $d\Omega$ integration are directly omitted. In the end, the potential is simplified to \begin{eqnarray} V = \frac{1}{2 g^2 R^2} Tr \biggl[ 2 \Phi_2^2(x) + 4 \pi Q^2 - \frac{3}{10 \pi} [\Phi_1(x),\Phi_2(x)]^2 + \frac{5i}{2} Q [\Phi_1(x), \Phi_2(x)] \nonumber \\ + \mu_1 [Q, \Phi_1(x)]^2 + \mu_2 [Q, \Phi_2(x)]^2 \biggr] ~, \end{eqnarray} where use of $\tilde{A}_{\phi}^B = -Q (\cos \theta \mp 1) / \sin \theta$ has been made and $\mu_1 = 1-\frac{3}{2} \ln 2$ and $\mu_2 = \frac{3}{4}(1-2\ln2)$.
We now take the following linear combination of $\Phi_1$ and $\Phi_2$ to form a complex Higgs doublet, \begin{eqnarray} \label{okikae1} \Phi(x) &=& \frac{1}{\sqrt{2}} (\Phi_1(x)+i\Phi_2(x)) ~, \\ \label{okikae2} \Phi(x)^{\dagger} &=& \frac{1}{\sqrt{2}} (\Phi_1(x)-i\Phi_2(x)) ~. \end{eqnarray} It is straightforward to see that \begin{eqnarray} [\Phi_1(x),\Phi_2(x)] = i [\Phi(x), \Phi^{\dagger}(x)] ~. \end{eqnarray} The kinetic term and the Higgs potential now become \begin{eqnarray} \label{kinetic-t} L_{K} &=& \frac{1}{g^2} Tr[D_{\mu} \Phi^{\dagger}(x) D^{\mu} \Phi(x) ] ~, \\ \label{potential-t} V &=& \frac{1}{2 g^2 R^2} Tr \biggl[ 2 \Phi_2^2(x) + 4 \pi Q^2 + \frac{3}{10 \pi} [\Phi(x),\Phi^{\dagger}(x)]^2 - \frac{5}{2} Q [\Phi(x), \Phi^{\dagger}(x)] \nonumber \\ && \qquad + \mu_1[Q, \Phi_1(x)]^2 + \mu_2[Q, \Phi_2(x)]^2 \biggr] ~. \end{eqnarray} To further simplify the above expressions, we need to work out the algebra of the gauge group generators. Note that the E$_6$ generators are chosen according to the decomposition of the adjoint representation given in Eq.~(\ref{d78}) \begin{eqnarray} &&\{ Q_i, Q_{\alpha}, Q_Y, Q_X, Q_Z, \nonumber \\ && \quad Q_{ax (-5,0,0)}, Q^{ax(5,0,0)}, Q_{ax(1,4,0)}, Q^{ax(-1,-4,0)}, \nonumber \\ && \quad Q_{a(4,-4,0)}, Q^{a(-4,4,0)}, Q_{(-6,-4,0)}, Q_{(6,4,0)}, \nonumber \\ && \quad Q_{ax(1,-1,-3)}, Q^{ax(-1,1,3)}, Q_{a(4,1,3)}, Q^{a(-4,-1,-3)}, \nonumber \\ && \quad Q_{a(-2,-3,3)}, Q^{a(2,3,-3)}, Q_{x(3,-3,3)}, Q^{x(-3,3,-3)}, \nonumber \\ && \quad Q_{(-6,1,3)}, Q_{(6,-1,-3)}, Q_{(0,-5,-3)},Q_{(0,5,3)} \} ~, \end{eqnarray} where the generators are listed in the corresponding order of the terms in Eq.~(\ref{d78}) and the indices \begin{eqnarray} \label{generators} && i=1,...,8: \textrm{SU(3) adjoint representation index} \Rightarrow Q_i: \textrm{SU(3) generators} ~, \\ && \alpha=1,2,3: \textrm{SU(2) adjoint representation index} \Rightarrow Q_{\alpha}: \textrm{SU(2) generators} ~, \\ && Q_{X,Y,Z}: \textrm{$U(1)_{X,Y,Z}$ generators} ~, \\ && x=1,2: \textrm{SU(2) doublet index} ~, \\ && a=1,2,3: \textrm{SU(3) color index} ~. \end{eqnarray} Here we take the standard normalization for generators, $Tr[Q Q^{\dagger}] = 2$. The Higgs fields are in the representations of $(1,2)(3,-3,3)$ and $(1,2)(-3,3,-3)$. We write \begin{equation} \label{Higgs} \Phi(x) = \phi^{x} Q_{x(3,-3,3)} \quad (\Phi^{\dagger}(x) = \phi_x Q^{x(-3,3,-3)}) ~. \end{equation} Likewise, the gauge field $A_{\mu}(x)$ in terms of the $Q$'s in Eq.~(\ref{generators}) is \begin{equation} \label{gauge} A_{\mu}(x) = A_{\mu}^i Q_i+A_{\mu}^{\alpha} Q_{\alpha}+B_{\mu} Q_Y+C_{\mu} Q_X+E_{\mu} Q_Z. \end{equation} The commutation relations between the generators $Q_{\alpha}$, $Q_{X,Y,Z}$, $Q_{x(3,-3,3)}$ and $Q^{x(-3,3,-3)}$ are summarized in Table~\ref{commutators}.
\begin{center} \begin{table} \begin{tabular}{lll} \hline\hline & \multicolumn{2}{c}{ $\left[ Q_{x(3,-3,3)},Q^{y(-3,3,-3)} \right] = \frac{1}{2} \delta_x^y Q_Z-\frac{1}{2} \sqrt{\frac{3}{5}} \delta_x^y Q_X + \frac{1}{\sqrt{10}} \delta_x^y Q_Y + \frac{1}{\sqrt{6}} (\sigma_{\alpha})^y_x Q_{\alpha}$} \\ & $\left[ Q_{\alpha},Q_{x(3,-3,3)} \right] = \frac{1}{\sqrt{6}} (\sigma_{\alpha} )^y_x Q_{y(3,-3,3)}$ \qquad \qquad & $\left[ Q_{\alpha},Q^{ x(-3,3,-3) } \right] = - \frac{1}{\sqrt{6}} (\sigma_{\alpha}^* )^y_x Q^{y(-3,3,-3) }$ \\ & $\left[ Q_{x(3,-3,3)},Q_{y(3,-3,3)} \right] = 0$ & $\left[ Q_Z, Q_{x(3,-3,3)} \right] = \frac{1}{2} Q_{x(3,-3,3)}$ \\ & $\left[ Q_X, Q_{x(3,-3,3)} \right] = -\frac{1}{2} \sqrt{\frac{3}{5}} Q_{x(3,-3,3)}$ & $\left[ Q_Y, Q_{x(3,-3,3)} \right] = \frac{1}{\sqrt{10}} Q_{x(3,-3,3)}$ \\ \hline\hline \end{tabular} \caption{Commutation relations of $Q_{\alpha}$, $Q_{X,Y,Z}$, $Q_{x(3,-3,3)}$ and $Q^{x(-3,3,-3)}$, where $\sigma_i$ are the Pauli matrices.} \label{commutators} \end{table} \end{center} Finally, we obtain the Lagrangian associated with the Higgs field by applying Eqs.~(\ref{Higgs}, \ref{gauge}) to Eqs.~(\ref{kinetic-t}, \ref{potential-t}) and carrying out the trace. Furthermore, to obtain the canonical form of kinetic terms, the Higgs field, the gauge field, and the gauge coupling need to be rescaled in the following way: \begin{eqnarray} \label{notation} && \phi \rightarrow \frac{g}{\sqrt{2}} \phi \\ && A_{\mu} \rightarrow \frac{g}{R}A_{\mu} \\ && \frac{g}{\sqrt{6 \pi R^2}} = g_2 ~, \end{eqnarray} where $g_2$ denotes the SU(2) gauge coupling. The Higgs sector is then given by \begin{equation} {\cal L}_{\rm Higgs} = |D_{\mu} \phi|^2 - V(\phi) \end{equation} where \begin{eqnarray} \label{cova} D_{\mu} \phi &=& \left[ \partial_{\mu} + i g_2 \frac{\sigma_{\alpha}}{2} A_{\alpha \mu} + ig \frac{1}{\sqrt{40 \pi R^2}} B_{\mu} - ig \frac{1}{2 } \sqrt{\frac{3}{20 \pi R^2}} C_{\mu} + i g \frac{1}{2 \sqrt{4 \pi R^2}} E_{\mu} \right] \phi ~, \\ \label{H-potential} V &=& -\frac{\chi}{8 R^2} \phi^{\dagger} \phi + \frac{3 g^2}{40 \pi R^2} \left(\phi^{\dagger} \phi \right)^2 ~, \end{eqnarray} where $\chi=7+9\mu_1+9\mu_2$. We have omitted the constant term in the Higgs potential. Comparing the potential derived above with the standard form $\mu^2\phi^\dagger\phi + \lambda (\phi^\dagger\phi)^2$ in the SM, we see that the model has a tree-level $\mu^2$ term that is negative and proportional to $R^{-2}$. Moreover, the quartic coupling $\lambda = 3 g^2 / (40 \pi R^2)$ is related to the six-dimensional gauge coupling $g$ and permits perturbative calculations since it is about $0.16$, using the value of $R$ to be extracted in the next section. Therefore, the order parameter in this model is controlled by a single parameter $R$, the compactification scale. In fact, the $(1,1)$ mode of the $\{(3,2)(1,-1,-3) + {\rm h.c.}\}$ representation also has a negative squared mass term because it has the same $Q_Z$ charge as the $\{(1,2)(3,-3,3) + {\rm h.c.}\}$ representation. Therefore, it would induce not only electroweak symmetry breaking but also color symmetry breaking. This undesirable feature can be cured by adding brane terms \begin{eqnarray} \frac{\alpha}{R^2\sin^2\theta} \left( F_{\theta\phi}^a F^{a\theta\phi} \right)^2 \delta\left( \theta-\frac{\pi}2 \right) \left[ \delta(\phi) + \delta(\phi-\pi) \right] ~, \end{eqnarray} where $a$ denotes the group index of the $\{(3,2)(1,-1,-3) + {\rm h.c.}\}$ representation.
These brane terms preserve the $Z'_2$ symmetry which corresponds to the symmetry under the transformation $(\phi \rightarrow \phi+\pi)$. With an appropriate choice of the dimensionless constant $\alpha$, the squared mass of the $(1,1)$ mode can be lifted to become positive and sufficiently large. \subsection{Spontaneous symmetry breaking and Higgs mass} Due to a negative mass term, the Higgs potential in Eq.~(\ref{H-potential}) can induce the spontaneous symmetry breakdown: SU(2) $\times$ U(1)$_Y$ $\rightarrow$ U(1)$_{\rm EM}$ in the SM. The Higgs field acquires a vacuum expectation value (VEV) \begin{eqnarray} \langle \phi \rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v \end{pmatrix} \mbox{ with } v = \sqrt{\frac{5 \pi \chi}{3}} \frac{1}{g} \simeq \frac{4.6}{g} ~. \end{eqnarray} One immediately finds the $W$ boson mass \begin{equation} m_W = \frac{g_2}{2} v = \frac{1}{6}\sqrt{ \frac{5 \chi}{2} } \frac{1}{R} \simeq \frac{0.53}{R}, \end{equation} from which the compactification scale $R^{-1} \simeq 152$ GeV is inferred. Moreover, the Higgs boson mass at the tree level is \begin{equation} m_H = \sqrt{\frac{3}{20 \pi}} \frac{g v}{R} = 3 \sqrt{\frac{2}{5}} m_W = \frac{\sqrt{\chi}}{2} \frac{1}{R} ~, \end{equation} which is about $152$ GeV, numerically very close to the compactification scale. Since the hypercharge of the Higgs field is $1/2$, the U(1)$_Y$ gauge coupling is derived from Eq.~(\ref{cova}) as \begin{equation} g_Y = \frac{g}{\sqrt{10 \pi R^2}} ~. \end{equation} The Weinberg angle is thus given by \begin{eqnarray} \sin^2 \theta_W = \frac{g_Y^2}{g_2^2+g_Y^2} = \frac{3}{8} ~, \end{eqnarray} and the $Z$ boson mass \begin{eqnarray} m_Z = \frac{m_W}{\cos \theta_W} = m_W \sqrt{\frac{8}{5}} ~, \end{eqnarray} both at the tree level. These relations are the same as in the SU(5) GUT at the unification scale. This is not surprising because this part only depends on the group structure. \section{KK mode spectrum of each field \label{KKmass}} In this section, we compute the KK mass spectra of both fermion and gauge fields in the presence of the background gauge field. The masses are basically controlled by the compactification radius $R$ of the two-sphere. They receive two kinds of contributions: one arising from the angular momentum in the $S^2$ space, and the other coming from the interactions with the background field. \subsection{KK masses of fermions} The KK masses for fermions have been given in Refs.~\cite{RandjbarDaemi:1982hi, Maru:2009wu, Dohi:2010vc}. We give them in terms of our notation here: \begin{eqnarray} \label{KKmass-fermion} M_{\ell m}^{KK}(\psi_L) = \frac{1}{R} \sqrt{\ell(\ell+1)-\frac{4q^2-1}{4} } ~, \end{eqnarray} where $q$ is proportional to the U(1)$_Z$ charge of the fermion and determined by the action of $Q=3Q_Z$ on fermions as $Q \Psi = q \Psi = 3q_Z \Psi$. Note that the mass does not depend on the quantum number $m$. The lightest KK mass, corresponding to $\ell = 1$ and $q_Z = 1/6$, is about 214 GeV at the tree level. The range of $\ell$ is \begin{equation} \frac{2q \pm 1}{2} \leq \ell \qquad (+: \ \textrm{for} \ \psi_{R(L)} \ \textrm{in} \ \Psi_{+(-)}, \quad - : \ \textrm{for} \ \psi_{L(R)} \ \textrm{in} \ \Psi_{-(+)} ) ~. \end{equation} We can thus have a zero mode for $Q \Psi = \pm \frac{1}{2} \Psi$, where this condition is given in Eq.~(\ref{massless-condition}).
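As a quick cross-check of the numbers quoted in this and the previous subsection, all tree-level relations follow from $\chi$ and the single experimental input $m_W$. A minimal numerical sketch in Python (taking $m_W = 80.4$ GeV):
\begin{verbatim}
import math

ln2 = math.log(2.0)
mu1 = 1 - 1.5*ln2                    # mu_1 = 1 - (3/2) ln 2
mu2 = 0.75*(1 - 2*ln2)               # mu_2 = (3/4)(1 - 2 ln 2)
chi = 7 + 9*mu1 + 9*mu2
print("chi        =", chi)           # ~ 4.03

print("g*v        =", math.sqrt(5*math.pi*chi/3))  # ~ 4.6
mW_R = math.sqrt(5*chi/2)/6          # m_W = sqrt(5 chi/2)/(6 R)
print("m_W * R    =", mW_R)          # ~ 0.53

mW = 80.4                            # GeV, the only experimental input
invR = mW/mW_R
print("1/R        =", invR, "GeV")   # ~ 152 GeV
print("m_H        =", 0.5*math.sqrt(chi)*invR, "GeV")  # ~ 152 GeV
print("sin^2 thW  =", (1/10)/(1/6 + 1/10))             # = 3/8 exactly

# Lightest fermion KK mode: l = 1 and q = 3 q_Z = 1/2, so 4q^2 - 1 = 0
# in Eq. (KKmass-fermion) and M = sqrt(2)/R.
print("KK fermion =", math.sqrt(2.0)*invR, "GeV")      # ~ 214 GeV
\end{verbatim}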
\subsection{KK masses of $A_{\mu}$} For the four-dimensional gauge field $A_{\mu}$, its kinetic term and KK mass term are obtained from the terms \begin{equation} \label{FF} L=\int d \Omega Tr \biggl[ -\frac{1}{4}F_{\mu \nu}F^{\mu \nu} + \frac{1}{2 R^2} F_{\mu \theta} F^{\mu}_{\ \theta}+\frac{1}{2 R^2 \sin^2 \theta} F_{\mu \phi} F^{\mu}_{\ \phi} \biggr] ~. \end{equation} Taking terms quadratic in $A_{\mu}$, we get \begin{eqnarray} L_{\rm quad} &=& \int d \Omega Tr \biggl[ -\frac{1}{4}(\partial_{\mu} A_{\nu}-\partial_{\nu}A_{\mu} )(\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu} ) +\frac{1}{2 R^2} \partial_{\theta} A_{\mu} \partial_{\theta} A^{\mu} \nonumber \\ && \qquad \qquad + \frac{1}{2 R^2 \sin^2 \theta} \partial_{\phi} A_{\mu} \partial_{\phi} A^{\mu} -\frac{1}{2 R^2} [A_{\mu}, \tilde{A}_{\phi}^B][A^{\mu},\tilde{A}_{\phi}^B] \biggr] ~, \end{eqnarray} where $\tilde{A}^B_{\phi}$ is the background gauge field given in Eq.~(\ref{background}). The KK expansion of $A_{\mu}$ is \begin{equation} A_{\mu} = \sum_{\ell m} A_{\mu}^{\ell m}(x) Y_{\ell m}^{\pm}(\theta,\phi) \end{equation} where $Y_{\ell m}^{\pm}(\theta,\phi)$ are the linear combinations of spherical harmonics satisfying the boundary condition $Y_{\ell m}^{\pm}(\pi-\theta,-\phi) = \pm Y_{\ell m}^{\pm}(\theta,\phi)$. Their explicit forms are \cite{Maru:2009wu} \begin{eqnarray} \label{modef1} Y_{\ell m}^+(\theta, \phi) &\equiv& \frac{(i)^{\ell+m}}{\sqrt{2}}[Y_{\ell m}(\theta, \phi) + (-1)^{\ell} Y_{\ell-m}(\theta, \phi)] \quad \textrm{for} \quad m \not=0 \\ \label{modef2} Y_{\ell m}^-(\theta, \phi) &\equiv& \frac{(i)^{\ell+m+1}}{\sqrt{2}}[Y_{\ell m}(\theta, \phi) - (-1)^{\ell} Y_{\ell-m}(\theta, \phi)] \quad \textrm{for} \quad m \not=0 \\ \label{modef3} Y_{\ell0}^{+(-)}(\theta) &\equiv& \left\{\begin{array}{l} Y_{\ell0}(\theta) \quad \textrm{for} \quad m=0 \ \textrm{and} \ \ell=\textrm{even (odd)} \\ 0 \qquad \quad \textrm{for} \quad m=0 \ \textrm{and} \ \ell=\textrm{odd (even)}. \end{array}\right. \end{eqnarray} Note that we do not have KK mode functions that are odd under $\phi \rightarrow \phi + 2 \pi$ since the KK modes are specified by the integer angular momentum quantum numbers $\ell$ and $m$ of gauge field $A_M$ on the two-sphere. Thus, the components of $A_{\mu}$ and $A_{\theta,\phi}$ with $(+,-)$ or $(-,+)$ parities do not have corresponding KK modes. Applying the KK expansion and integrating over $d \Omega$, we obtain the kinetic and KK mass terms for the KK modes of $A_{\mu}$ \begin{eqnarray} \label{masstermAm} L_M &=& -\frac{1}{2} \left[ \partial_{\mu} A^{\ell m}_{\nu}(x)-\partial_{\nu}A^{\ell m}_{\mu}(x) \right] \left[ \partial^{\mu}A^{\ell m \nu}(x)-\partial^{\nu}A^{\ell m \mu}(x) \right] + \frac{\ell(\ell+1)}{R^2} A_{\mu}^{\ell m}(x) A^{\ell m \mu}(x) \nonumber \\ && \qquad + \frac{9 q_Z^2}{R^2} \biggl[ \int d \Omega \frac{(\cos \theta \pm 1)^2}{\sin^2 \theta} (Y_{\ell m}^{\mp})^2 \biggr] A_{\mu}^{\ell m}(x) A^{\ell m \mu}(x) ~, \end{eqnarray} where we have used $Tr[Q_i Q^i]=2$ and $[A_{\mu}(x),Q_Z] = q_Z (A_{\mu}^i(x) Q_i - A_{i \mu}(x) Q^i )$. Therefore, the KK masses of $A_\mu$ are \begin{eqnarray} \label{KKmass-gauge} M_{\ell m}^{KK}(A_\mu) &=& \frac{1}{R} \sqrt{\ell(\ell+1)+(m^B_{\ell m})^2} ~, \\ (m^B_{\ell m})^2 &=& 9 q_Z^2 \int d \Omega \frac{(\cos \theta \pm 1)^2}{\sin^2 \theta} (Y_{\ell m}^{\mp})^2 ~, \end{eqnarray} where $m^B_{\ell m}$ corresponds to the contribution from the background gauge field. Note that Eq.~(\ref{KKmass-gauge}) agrees with Eq.~(\ref{eq:nonSMgaugeMass}) when $\ell = 0$.
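This agreement is easy to verify numerically. The sketch below evaluates the background integral and, using $(Y_{00})^2 = 1/(4\pi)$, recovers both the coefficient $0.39$ of Eq.~(\ref{eq:nonSMgaugeMass}) and its closed form $2\ln 2 - 1$:
\begin{verbatim}
import math
from scipy.integrate import quad

# int dOmega (cos th -+ 1)^2 / sin^2 th, with the (-) branch on [0, pi/2]
# and the (+) branch on [pi/2, pi]; dOmega = sin th dth dphi, so the phi
# integral gives 2 pi. Endpoints are shifted by eps to avoid 0/0.
eps = 1e-9
up = quad(lambda t: (math.cos(t) - 1)**2/math.sin(t), eps, math.pi/2)[0]
dn = quad(lambda t: (math.cos(t) + 1)**2/math.sin(t),
          math.pi/2, math.pi - eps)[0]
I = 2*math.pi*(up + dn)

print("I/(4 pi)  =", I/(4*math.pi))      # ~ 0.386, the 0.39 of the text
print("2 ln2 - 1 =", 2*math.log(2) - 1)  # its closed form

# l = 0 check of Eq. (KKmass-gauge): with (Y_00)^2 = 1/(4 pi),
# (m^B_00)^2 = 9 q_Z^2 * I/(4 pi) = |q|^2 (2 ln 2 - 1) for q = 3 q_Z,
# reproducing Eq. (eq:nonSMgaugeMass).
\end{verbatim}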
Also, since the SM gauge bosons have $q_Z = 0$, their KK masses are simply $\sqrt{\ell(\ell+1)}/R$ at the tree level. \subsection{KK masses of $A_{\theta,\phi}$} The kinetic and KK mass terms of $A_{\theta}$ and $A_{\phi}$ are obtained from the terms in the higher dimensional gauge sector \begin{eqnarray} \label{scalar} L &=& \frac{1}{2 g^2} \int d \Omega \Biggl\{ \Bigl( Tr[(\partial_{\mu} A_{\theta}-i[A_{\mu},A_{\theta}])^2] + Tr[(\partial_{\mu} \tilde{A}_{\phi}-i[A_{\mu},\tilde{A}_{\phi}])^2 ] \Bigr) \nonumber \\ && \qquad \qquad - \frac{1}{R^2} Tr \biggl[ \biggl( \frac{1}{\sin \theta} \partial_{\theta} (\sin \theta \tilde{A}_{\phi} + \sin \theta \tilde{A}^B_{\phi}) -\frac{1}{\sin \theta} \partial_{\phi} A_{\theta} - i[A_{\theta},\tilde{A}_{\phi}+\tilde{A}^B_{\phi}] \biggr)^2 \biggr] \Biggr\} ~. \nonumber \\ \end{eqnarray} The first line on the right-hand side of Eq.~(\ref{scalar}) corresponds to the kinetic terms, and the second line corresponds to the potential term. Applying the background gauge field Eq.~(\ref{background}), the potential becomes \begin{eqnarray} L_V = -\frac{1}{2 g^2 R^2} \int d \Omega Tr \biggl[ \biggl( \frac{1}{\sin \theta} \partial_{\theta} (\sin \theta \tilde{A}_{\phi}) + Q - \frac{1}{\sin \theta} \partial_{\phi} A_{\theta} -i [A_{\theta}, \tilde{A}_{\phi}+\tilde{A}_{\phi}^B] \biggr)^2 \biggr] ~. \end{eqnarray} For $A_{\theta}$ and $A_{\phi}$ we use the following KK expansions to obtain the KK mass terms, \begin{eqnarray} \label{expansion3} A_{\theta}(x,\theta,\phi) &=& \sum_{\ell m (\neq 0)} \frac{-1}{\sqrt{\ell(\ell+1)}} \bigl[ \Phi_1^{\ell m}(x) \partial_{\theta} Y_{\ell m}^{\pm}(\theta,\phi) + \Phi_2^{\ell m}(x) \frac{1}{\sin \theta} \partial_{\phi} Y_{\ell m}^{\pm}(\theta,\phi) \bigr] ~, \\ \label{expansion4} A_{\phi}(x,\theta,\phi) &=& \sum_{\ell m (\neq 0)}\frac{1}{\sqrt{\ell(\ell+1)}} \bigl[ \Phi_2^{\ell m}(x) \partial_{\theta} Y_{\ell m}^{\pm}(\theta,\phi) - \Phi_1^{\ell m}(x) \frac{1}{\sin \theta} \partial_{\phi} Y_{\ell m}^{\pm}(\theta,\phi) \bigr] ~, \end{eqnarray} where the factor of $1/\sqrt{\ell(\ell+1)}$ is needed for normalization. These particular forms are convenient in giving diagonalized KK mass terms \cite{Maru:2009wu}. Applying the KK expansions Eq.~(\ref{expansion3}) and Eq.~(\ref{expansion4}), we obtain the kinetic term \begin{eqnarray} L_{K} = \frac{1}{2 g^2} \sum_{\ell m (\neq 0)} Tr \biggl[ \partial_{\mu} \Phi_1^{\ell m}(x) \partial^{\mu} \Phi_1^{\ell m}(x) + \partial_{\mu} \Phi_2^{\ell m}(x) \partial^{\mu} \Phi_2^{\ell m}(x) \biggr] \end{eqnarray} where only terms quadratic in $\partial_{\mu} \Phi$ are retained. The potential term is \begin{eqnarray} L_V &=& -\frac{1}{2 g^2 R^2} \sum_{\ell m (\neq 0)} \int d \Omega Tr \biggl[ \biggl( \frac{\Phi_{2}^{\ell m}}{\sqrt{\ell(\ell+1)}} \frac{1}{\sin \theta} \partial_{\theta} (\sin \theta \partial_{\theta} Y_{\ell m}^{\pm} ) +Q + \frac{\Phi_{2}^{\ell m}}{\sqrt{\ell(\ell+1)}} \frac{1}{\sin^2 \theta} \partial_{\phi}^2 Y_{\ell m}^{\pm} \nonumber \\ && \qquad - \frac{i}{\ell(\ell+1)} \Bigl[ - \Phi_{1}^{\ell m} \partial_{\theta} Y_{\ell m}^{\pm} - \Phi_{2}^{\ell m} \frac{1}{\sin \theta} \partial_{\phi} Y_{\ell m}^{\pm}, \Phi_{2}^{\ell m} \partial_{\theta} Y_{\ell m}^{\pm} - \Phi_{1}^{\ell m} \frac{1}{\sin \theta} \partial_{\phi} Y_{\ell m}^{\pm} \nonumber \\ && \qquad + \sqrt{\ell(\ell+1)} A_{\phi}^B \Bigr] \biggr)^2 \biggr] ~, \nonumber \\ \end{eqnarray} where only terms diagonal in $(\ell,m)$ are considered.
Using the relation $\frac{1}{\sin \theta} \partial_{\theta} (\sin \theta \partial_{\theta} Y_{\ell m}) + \frac{1}{\sin^2 \theta} \partial_{\phi}^2 Y_{\ell m} = -\ell(\ell+1)Y_{\ell m}$, the potential term is simplified as \begin{eqnarray} L_V &=& -\frac{1}{2 g^2 R^2} \sum_{\ell m(\neq 0)} \int d \Omega Tr \biggl[ \biggl( -\sqrt{\ell(\ell+1)} \Phi_2^{\ell m}Y_{\ell m}^{\pm} +Q \nonumber \\ && \qquad \qquad \qquad +\frac{i}{\ell(\ell+1)} [\Phi_1^{\ell m}, \Phi_2^{\ell m}] \bigl( \partial_{\theta} Y_{\ell m}^{\pm} \partial_{\theta} Y_{\ell m}^{\pm} +\frac{1}{\sin^2 \theta} \partial_{\phi} Y_{\ell m}^{\pm} \partial_{\phi} Y_{\ell m}^{\pm} \bigr) \nonumber \\ && \qquad \qquad \qquad +\frac{i}{\sqrt{\ell(\ell+1)}} [\Phi_1^{\ell m}, \tilde{A}_{\phi}^B] \partial_{\theta} Y_{\ell m}^{\pm} + \frac{i}{\sqrt{\ell(\ell+1)}} [\Phi_2^{\ell m}, \tilde{A}_{\phi}^B] \frac{\partial_{\phi} Y_{\ell m}^{\pm}}{\sin \theta} \biggr)^2 \biggr] ~. \end{eqnarray} To obtain the mass term, we focus on terms quadratic in $\Phi_{1,2}$: \begin{eqnarray} \label{massterm} L_M &=& -\frac{1}{2 g^2 R^2} \int d \Omega Tr \biggl[ \ell(\ell+1) (\Phi_2^{\ell m})^2 (Y_{\ell m}^{\pm})^2 \nonumber \\ && \qquad + \frac{2i Q}{\ell(\ell+1)} [\Phi_1^{\ell m},\Phi_2^{\ell m}] \Bigl( \partial_{\theta} Y_{\ell m}^{\pm} \partial_{\theta} Y_{\ell m}^{\pm} + \frac{1}{\sin^2 \theta} \partial_{\phi} Y_{\ell m}^{\pm} \partial_{\phi} Y_{\ell m}^{\pm} \Bigr) \nonumber \\ && \qquad +2 i \tilde{A}_{\phi}^B [\Phi_1^{\ell m},\Phi_2^{\ell m}] Y_{\ell m}^{\pm} \partial_{\theta} Y_{\ell m}^{\pm} - \frac{1}{\ell(\ell+1)} [\Phi_1^{\ell m},\tilde{A}_{\phi}^B]^2 (\partial_{\theta} Y_{\ell m}^{\pm})^2 \nonumber \\ && \qquad - \frac{1}{\ell(\ell+1)} [\Phi_2^{\ell m},\tilde{A}_{\phi}^B]^2 \frac{(\partial_{\phi} Y_{\ell m}^{\pm})^2}{\sin^2 \theta} \biggr]. \nonumber \\ \end{eqnarray} Note that we have dropped the term proportional to $[\Phi_1,\tilde{A}_{\phi}^B] [\Phi_2,\tilde{A}_{\phi}^B]$ because this term vanishes after rewriting the fields in terms of the linear combinations of $\Phi$ and $\Phi^\dagger$, Eqs.~(\ref{okikae1}) and (\ref{okikae2}): \begin{eqnarray} Tr[[\Phi_1,\tilde{A}_{\phi}^B][\Phi_2,\tilde{A}_{\phi}^B]] &\rightarrow& Tr[[(\Phi+\Phi^{\dagger}),Q] [(\Phi-\Phi^{\dagger}),Q] ] \nonumber \\ &\propto& Tr[(\Phi-\Phi^\dagger)(\Phi+\Phi^{\dagger})] \nonumber \\ &\propto& Tr[\Phi \Phi^{\dagger}] - Tr[\Phi^{\dagger} \Phi] =0 ~. \end{eqnarray} Integrating the second term of Eq.~(\ref{massterm}) by parts, we obtain \begin{eqnarray} L_M &=& -\frac{1}{2g^2 R^2} \biggl( \ell(\ell+1) Tr[(\Phi_2^{\ell m})^2] +2i Tr[Q [\Phi_1^{\ell m},\Phi_2^{\ell m}] ] \nonumber \\ && \qquad \qquad -2 i Tr[Q [\Phi_1^{\ell m},\Phi_2^{\ell m}] ] \int d \Omega \frac{\cos \theta \mp 1}{\sin \theta} Y_{\ell m}^{\pm} \partial_{\theta} Y_{\ell m}^{\pm} \nonumber \\ && \qquad \qquad -\frac{1}{\ell(\ell+1)} Tr\bigl[ [\Phi_1^{\ell m},Q]^2 \bigr] \int d \Omega \frac{(\cos \theta \mp1)^2}{\sin^2 \theta} (\partial_{\theta} Y_{\ell m}^{\pm})^2 \nonumber \\ && \qquad \qquad - \frac{1}{\ell(\ell+1)} Tr\bigl[ [\Phi_2^{\ell m},Q]^2 \bigr] \int d \Omega \frac{(\cos \theta \mp 1)^2}{\sin^2 \theta} \frac{(\partial_{\phi} Y_{\ell m}^{\pm})^2}{\sin^2 \theta} \biggr) ~. \nonumber \\ \end{eqnarray} Therefore, the KK masses depend on the U(1)$_Z$ charges of the scalar fields. For components with zero U(1)$_Z$ charge, we write $\Phi_{1(2)}(x)$ as $\phi_{1(2)}(x) Q$ where $Q$ is the corresponding generator of E$_6$ in Eq.~(\ref{d78}) with zero U(1)$_Z$ charge.
Taking the trace, we have the following kinetic and KK mass terms: \begin{eqnarray} L = \sum_{\ell m(\neq 0)} \biggl( \partial_{\mu} \phi_1^{\ell m}(x) \partial^{\mu} \phi_1^{\ell m}(x) + \partial_{\mu} \phi_2^{\ell m}(x) \partial^{\mu} \phi_2^{\ell m}(x) - \frac{\ell(\ell+1)}{R^2} \phi_2^{\ell m}(x) \phi_2^{\ell m}(x) \biggr) ~, \end{eqnarray} where we have made the substitution $\phi_i \rightarrow g \phi_i$. Note that $\phi_1$ is a massless Nambu-Goldstone (NG) boson in this case. For components with nonzero U(1)$_Z$ charge, we use Eqs.~(\ref{okikae1}) and (\ref{okikae2}) and write $\Phi(x)$ as $\phi^i(x)Q_i$ where $Q_i$ is the corresponding generator of $E_6$ in Eq.~(\ref{d78}) with nonzero U(1)$_Z$ charge. The commutator between $Q$ and $\Phi$ is \begin{equation} [Q,\Phi] = 3[Q_Z,Q_i] \phi^i = 3 q_Z \phi^i Q_i ~, \end{equation} where we have used $Q=3Q_Z$; $q_Z$ is a constant determined by the U(1)$_Z$ charge of the corresponding component. Finally, the Lagrangian becomes \begin{eqnarray} L &=& \sum_{\ell m(\neq 0)} \biggl\{ \partial_{\mu} \phi^{\dagger}_{\ell m} \partial^{\mu} \phi_{\ell m} \nonumber \\ && \qquad \quad -\frac{1}{4 R^2} \biggl[ 2 \ell(\ell+1) \phi_{\ell m}^\dagger \phi_{\ell m} -12 q_Z \phi_{\ell m}^\dagger \phi_{\ell m} +12 q_Z \phi_{\ell m}^\dagger \phi_{\ell m } \int d \Omega \frac{\cos \theta \mp 1}{\sin \theta} Y_{\ell m}^{\pm} \partial_{\theta} Y_{\ell m}^{\pm} \nonumber \\ && \qquad \qquad \qquad +\frac{18 q_Z^2}{ \ell(\ell+1)} \phi_{\ell m}^\dagger \phi_{\ell m} \int d \Omega \frac{(\cos \theta \mp1)^2}{\sin^2 \theta} \left( (\partial_{\theta} Y_{\ell m}^{\pm})^2 + \frac{(\partial_{\phi} Y_{\ell m}^{\pm})^2}{\sin^2 \theta} \right) \biggr] \biggr\} ~, \nonumber \\ \end{eqnarray} where the subscript $i$ is omitted for simplicity. The KK masses of the complex scalar field $\phi$ are then \begin{eqnarray} \label{KKmass-scalar} M_{\ell m}^{KK}(\phi) &=& \frac{1}{R} \sqrt{\frac{\ell(\ell+1)}{2}+(m_{\ell m}^B)^2} ~, \nonumber \\ (m_{\ell m}^B)^2 &=& -3 q_Z +3 q_Z \int d \Omega \frac{\cos \theta \mp 1}{\sin \theta} Y_{\ell m}^{\pm} \partial_{\theta} Y_{\ell m}^{\pm} +\frac{9 q_Z^2}{ 2\ell(\ell+1)} \int d \Omega \frac{(\cos \theta \mp1)^2}{\sin^2 \theta} (\partial_{\theta} Y_{\ell m}^{\pm})^2 \nonumber \\ && \qquad + \frac{9 q_Z^2}{2\ell(\ell+1)} \int d \Omega \frac{(\cos \theta \mp 1)^2}{\sin^2 \theta} \frac{(\partial_{\phi} Y_{\ell m}^{\pm})^2}{\sin^2 \theta} ~. \end{eqnarray} The squared KK mass $\left( M_{\ell m}^{KK} \right)^2$ is always positive except for the lowest mode ($\ell=1,m=1$). In fact, the squared KK mass of the $(1,1)$ mode agrees with the coefficient of the quadratic term in the Higgs potential (\ref{H-potential}). \subsection{Dark matter candidate} In our model, each KK particle is associated with a KK parity derived from an additional $Z_2'$ discrete symmetry of $(\theta,\phi) \rightarrow (\theta,\phi + \pi)$, corresponding to the exchange of the two fixed points on the orbifold \cite{Maru:2009wu}. The KK parity is given by $(-1)^m$, and is conserved as a consequence of the $Z_2'$ symmetry of the Lagrangian in six-dimensional spacetime. Therefore, the lightest KK particle with an odd $m$ will be stable. A comparison among Eqs.~(\ref{KKmass-fermion}), (\ref{KKmass-gauge}) and (\ref{KKmass-scalar}) indicates that the lightest KK particles are the $(\ell=1,m=1)$ modes of the scalar components with non-zero U(1)$_Z$ charges since their masses receive a negative contribution from the background gauge field.
They include the components $\{(3,2)(1,-1,-3)^{(-,-)} + {\rm h.c.} \}$ and $\{(1,2)(-3,3,-3)^{(+,+)} + {\rm h.c.} \}$ in Eq.~(\ref{78scalar}) since the other components either have zero U(1)$_Z$ charge or are odd under $\phi \rightarrow \phi + 2 \pi$. At the tree level, both of them have the same negative KK squared mass since their U(1)$_Z$ charges are the same ($q_Z = 1/2$). As argued at the end of Section~\ref{sec:Higgssector}, the squared mass of the former representation can be lifted by brane terms to be sufficiently large to avoid color symmetry breaking. Its mass depends on the parameter $\alpha$ in the brane terms. The latter representation actually gives the Higgs field, which has a mass of about 152 GeV. We assume that the mass of the $(1,1)$ mode of the $\{(3,2)(1,-1,-3)^{(-,-)} + {\rm h.c.} \}$ components is larger than the Higgs boson mass since a colored particle is not a suitable dark matter candidate. Therefore, the model has the $(1,1)$ mode of the $\{(1,2)(-3,3,-3)^{(+,+)} + {\rm h.c.} \}$ representation as the lightest and stable KK particle, which is simply the Higgs boson. \section{Summary \label{sec:summary}} Gauge-Higgs unification is an attractive idea because it can unify the SM gauge bosons and Higgs boson under a higher dimensional spacetime symmetry. The gauge invariance prevents the Higgs boson in the bulk from receiving radiative corrections that diverge with the cutoff scale, thus easing the gauge hierarchy problem. However, one still encounters difficulties in obtaining an appropriate Higgs potential to break the electroweak symmetry in five-dimensional spacetime. Extra particles are generally needed in order to generate such a potential and a sufficiently large Higgs mass radiatively. When one goes to six spacetime dimensions and considers the $S^2/Z_2$ orbifold, it is possible to obtain a suitable Higgs potential by incorporating a background gauge field in the extra-dimensional components. Nevertheless, to fully achieve that, one has to assume a special symmetry that relates the SU(2) isometry transformation in $S^2$ to the gauge transformation. We consider in this paper a six-dimensional gauge-Higgs unification model, in which the gauge group is enlarged to E$_6$ and the extra space is the $S^2/Z_2$ orbifold. By specifying two sets of parity transformation properties for the fields and employing a Dirac monopole configuration for the background gauge field, we have a successful symmetry reduction to the SM gauge group plus two extra U(1)'s. In our model, the background gauge field $A_{\phi}^B$ plays important roles in several aspects. First, it renders massless chiral fermions by canceling the spin-connection term in the covariant derivative. Secondly, it elevates the masses of unwanted representations of $A_\mu$ to roughly the compactification scale in four-dimensional spacetime. Finally, from the gauge kinetic term, it gives rise to a negative mass squared term in the Higgs potential at tree level. At low energies, we obtain only the SM particles. The SM gauge bosons all originate from a single adjoint representation of the E$_6$ group. The chiral fermions, including a right-handed neutrino, are derived from four copies of the fundamental representation, each of which has distinct parities under the two parity transformations. We also obtain exactly one complex Higgs doublet from the extra dimensional components of the gauge field. We have computed the Higgs potential in this model.
The squared mass is related to the compactification radius, and the quartic coupling to the E$_6$ gauge coupling. The radius of the compactified two-sphere is extracted to be around (152 GeV)$^{-1}$. The Higgs boson mass is predicted to be about 150 GeV at tree level. Due to the gauge group structure, we obtain $\sin^2\theta_W = 3/8$, the same as in the SU(5) GUT at the unification scale. Through KK expansions, we have calculated the mass spectra of the gauge and fermion fields. In general, these masses involve two contributions: one related to the angular momentum eigenvalues in the extra dimensions $\ell(\ell+1)$, and the other due to the interactions between the KK modes and the background gauge field. Finally, the model can have a dark matter candidate due to the KK parity under the $Z'_2$ symmetry. It is the Higgs boson of the model. \section*{Acknowledgments} This research was supported in part by the National Science Council of Taiwan, R.O.C.\ under Grant No.~NSC~97-2112-M-008-002-MY3 and the NCTS.
\section{Introduction} When quantum particles interact, they can team up to form new particles with fractional charge or spin, as observed in two-dimensional electron gases or low-dimensional spin lattices. A particularly fruitful model system to study emerging new particles and their interactions has been the antiferromagnetic (AF) $S$=$\frac{1}{2}$ Heisenberg chain. It does not order even at $T=0\;\mathrm{K}$ due to strong quantum fluctuations, and its elementary excitations are spinons carrying fractional $S$=$\frac{1}{2}$, which interact only weakly and are unbound particles.\cite{Faddeev_Takhtajan} Spinons were observed in several materials, including ${\rm KCuF_3}$ \cite{Tennant95}, ${\rm BaCu_2Si_2O_7}$\cite{Kenzelmann_BaCu2Si2O7} and copper pyrazine dinitrate.\cite{Stone} Strong interactions between spinons can arise from the breaking of spin rotational symmetry, as for example in three-dimensional coupled-chain antiferromagnets, where the mean field associated with the long-range ordered ground state restricts spin fluctuations. This creates an attractive potential for the spinons and at low energies they condense into spin-wave excitations carrying $S$=$1$,\cite{Schulz96} as observed in ${\rm KCuF_3}$ \cite{Tennant95/2} and ${\rm BaCu_2Si_2O_7}$.\cite{Zheludev_BaCu2Si2O7}\par Recently, excitations were observed in the $S$=$\frac{1}{2}$ chain antiferromagnet ${\rm CuCl_2 \cdot 2((CD_3)_2SO)}$ (CDC) subject to uniform and staggered magnetic fields that were interpreted as bound spinon states.\cite{Kenzelmann_CDC_PRL} In the long-wave-length limit, the experiment demonstrated that the low-energy excitations of a $S$=$\frac{1}{2}$ chain in a staggered field correspond to the soliton and breather excitations of the quantum sine-Gordon model.\cite{Affleck_Oshikawa} However, the sine-Gordon model is only valid for a very restricted range of wave-vectors and does not apply to excitations at smaller length scales, where the discreteness of the lattice becomes important. More importantly, it does not fully describe the mechanism by which the staggered field produces an attractive potential that binds spinons into the long-lived dispersive $S$=$1$ excitations observed in the experiment.\par In this paper, we analyze the dispersion of the observed bound spinon states \cite{Kenzelmann_CDC_PRL} in detail, and we report the observation of a bound-spinon state at high energies in CDC. In this system, which has a nearest-neighbor exchange $J=1.5\;\mathrm{meV}$, a staggered field of the order of $1\;\mathrm{T}$, corresponding to a Zeeman energy $g\mu_B H_{\rm st} \sim 0.1\;\mathrm{meV}$, qualitatively affects the excitation spectrum up to energies of more than twice $J$, at $3.4\;\mathrm{meV}$. This is in stark contrast to spinon binding in coupled-chain magnets, where the effects are only apparent for $\hbar \omega \approx k_B T_N$.\cite{Zheludev_BaCu2Si2O7} A simple mean-field theory of fermions carrying $S$=$\frac{1}{2}$ in one dimension captures the wave-vector dependence of the bound spinon states, and explains the high-energy excitation through the opening of a gap at the Fermi surface.
This model represents a first step towards a comprehensive description of the incommensurate excitations in AF $S$=$\frac{1}{2}$ chains subject to staggered fields.\par \section{Experimental} CDC was identified as an AF $S$=$\frac{1}{2}$ chain system in which a staggered $g$-tensor and/or Dzyaloshinskii-Moriya (DM) interactions \cite{Landee,Chen_CDC} lead to a staggered field $H_{\rm st}$ upon application of a uniform field $H$. In CDC, a uniform magnetic field ${\bf H}=(0,0,H)$ along the c-axis generates a staggered field ${\bf H}_{\rm st}=(H_{\rm st},0,0)$ along the a-axis, and the Hamiltonian can be written as \begin{equation} \mathcal{H}=\sum_i J {\bf S}_i \cdot {\bf S}_{i+1} - g_c \mu_B H S^z_i - g_a\mu_B H_{\rm st} (-1)^i S^x_i\, , \end{equation}where $g_c$ and $g_a$ are the uniform part of the gyromagnetic tensor along the c and a-axis, respectively. The staggered field is given by \begin{equation} H_{\rm st}=\frac{1}{2J} \frac{g_c}{g_a}D H + \frac{g_s}{g_a} H\, , \end{equation}where $g_s$ is the staggered gyromagnetic form factor and $D=|{\bf D}|$ is the length of the DM vector, which points along the b-axis in CDC. The nearest-neighbor spin exchange along the chain is accurately known from susceptibility measurements \cite{Landee} and inelastic neutron scattering.\cite{Kenzelmann_CDC_PRL} The spin chains run along the ${\bf a}$-axis of the orthorhombic crystal structure ({\it Pnma}),\cite{Willett_Chang} with the ${\rm Cu^{2+}}$ ions separated by $0.5{\bf a} \pm 0.22{\bf c}$. Wave vector transfer is indexed in the corresponding reciprocal lattice ${\bf Q}(hkl)=h{\bf a}^*+k{\bf b}^*+l{\bf c}^*$, and we define the wave-vector transfer along the chain as $q={\bf Q}\cdot {\bf a}$. Due to weak inter-chain interactions, CDC has long-range AF order in zero field below $T_N = 0.93\;\mathrm{K}$ with an AF wave-vector ${\bf Q}_m={\bf a}^*$. An applied field along the c-axis suppresses the ordered phase in a second order phase transition at $H_c=3.9\;\mathrm{T}$,\cite{Chen_CDC} indicating that inter-chain interactions favor correlations that are incompatible with the field-induced staggered magnetization.\cite{Oshikawanew} At fields much greater than $H_c$, the staggered fields thus arise mostly from a staggered $g$-tensor and DM interactions and not from interchain interactions.\par The neutron scattering experiments were performed on $7.76\;\mathrm{g}$ of deuterated single-crystalline CDC. The measurements were carried out using the SPINS triple axis spectrometer and the DCS time of flight spectrometer at the NIST Center for Neutron Research. The SPINS measurements were performed with a focussing analyzer covering $7^{\rm o}$ in $2\Theta$ scattering angle set to reflect $E_f=5\;\mathrm{meV}$ to the center of the detector. A Be filter rejected neutrons with energies higher than $5\;\mathrm{meV}$ from the detection system. The measurements were performed with the strongly dispersive direction along the scattered neutron direction to integrate over wave-vectors along the weakly dispersive directions.
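Before turning to the spectra, it is convenient to express the applied fields in the energy units used below. Taking $g \approx 2$ as an illustrative value (the precise $g_c$ is not needed for this estimate) and $\mu_B \approx 0.0579\;\mathrm{meV/T}$, one finds \begin{eqnarray} g\mu_B H \approx 1.14\;\mathrm{meV} \approx 0.76\, J \quad (H = 9.86\;\mathrm{T})\, , \qquad g\mu_B H \approx 1.27\;\mathrm{meV} \approx 0.85\, J \quad (H = 11\;\mathrm{T})\, , \nonumber \end{eqnarray} consistent with the value $H = 9.86\;\mathrm{T} \simeq \frac{3}{4} J/(g\mu_B)$ quoted below, and with the estimate $g\mu_B H_{\rm st} \sim 0.1\;\mathrm{meV}$ for a staggered field of order $1\;\mathrm{T}$.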
The experimental configuration for the measurements made using the DCS spectrometer and the conversion of those data to absolute units are described in detail elsewhere.\cite{Kenzelmann_CDC_PRL} They were performed with an incident energy $E_i$=$3.03\;\mathrm{meV}$ and the incident beam parallel to the a-axis for configuration A, and with an incident energy $E_i$=$4.64\;\mathrm{meV}$ and an angle of $60^{\rm o}$ between the incident beam and the a-axis for configuration B.\par \begin{figure}[ht] \begin{center} \includegraphics[height=6cm,bbllx=70,bblly=250,bburx=491, bbury=590,angle=0,clip=]{Fig1newexcitation.eps} \caption{Neutron scattering intensity as a function of energy transfer for zero-field and $9.86\;\mathrm{T}$ measured using SPINS. The dashed line is a fit of two Gaussians convolved with the resolution function given by Cooper and Nathans.\protect\cite{Cooper_Nathans} The solid line shows the calculated intensity obtained from the exact diagonalization of finite chains for $H=11\;\mathrm{T}$ and scaled to the data, with the non-magnetic background given as the straight dashed line. The dashed-dotted line is the exact two-spinon cross-section of the antiferromagnetic $S$=$\frac{1}{2}$ chain \protect \cite{Bougourzi_Karbach} convolved with the experimental resolution function. Inset: Excitation energy of the higher-energy mode for $H$=$9.86\;\mathrm{T}$ as a function of wave-vector transfer along the chain direction.} \label{Fignewexcitation} \end{center} \end{figure} \section{Experimental Results} Figure~\ref{Fignewexcitation} demonstrates the dramatic changes that CDC undergoes upon application of a magnetic field. In zero field, the neutron scattering spectrum for $q=0.4\pi$ consists of a strong peak which corresponds to the two-spinon continuum, whose bandwidth at this wave vector is narrow and barely distinguishable from the experimental resolution. In a field $H$=$9.86\;\mathrm{T} \simeq \frac{3}{4} J/(g\mu_B)$ applied along the c-axis, which also induces a staggered field along the a-axis, the scattering includes two resolution-limited excitations. According to the quantum sine-Gordon theory, the lower-energy excitation corresponds to a bound-spinon state which develops into solitons and breathers at long wave-lengths. The high-energy excitation at $3.4\;\mathrm{meV} \simeq 2.2 J$, however, does not have a simple interpretation in terms of the sine-Gordon model. Its magnetic nature is apparent from its field dependence. The inset to Fig.~\ref{Fignewexcitation} shows the dispersion of this excitation, which has a maximum at $q$=$0.4\pi$.\par \begin{figure}[ht] \begin{center} \includegraphics[height=10cm,bbllx=70,bblly=136,bburx=500, bbury=690,angle=0,clip=]{Fig2twoscans.eps} \caption{Neutron scattering intensity as a function of energy transfer at $11\;\mathrm{T}$ for two different chain wave vectors, measured using the DCS instrument with an incident energy $E_i$=$4.64\;\mathrm{meV}$. The solid line is the spectrum calculated from exact diagonalization of finite chains in absolute units, taking into account the polarization dependence of the experiment.} \label{Figotherscans} \end{center} \end{figure} The dispersion of the high-field excitations at lower energies, $\hbar\omega < 2\;\mathrm{meV}$, is illustrated in Fig.~\ref{Figotherscans} for two different chain wave-vectors $q$. For $q$=$0.7\pi$, there are two maxima as a function of energy, corresponding to the well-defined modes developing into the sine-Gordon soliton and breather excitations at long wave-lengths.
For $q$=$0.4\pi$, only one peak is clearly observed because of the weak intensity of one of the modes in this wave-vector region. The dispersion of these field-induced resonant modes was determined as a function of the chain wave-vector, $q$, by fitting resolution-corrected Gaussian line-shapes to the observed scattering. The fitted excitation energies are shown in Fig.~\ref{Figdispersion}a, illustrating that magnetic spectral weight generally shifts to lower energies in an applied field. However, due to the $H_{\rm st}$-induced gap, this effect is much less pronounced than in a uniform field as the ground state energy increases.\cite{Muller}\par \begin{figure}[ht] \begin{center} \includegraphics[height=11cm,bbllx=75,bblly=68,bburx=500, bbury=570,angle=0,clip=]{Fig3dispersion.eps} \caption{(a) Dispersion of the excitations at $H$=$11\;\mathrm{T}$ and the lower bound of the zero-field two-spinon cross-section,\protect\cite{Karbach_Bougourzi} obtained through fits to the zero-field data. The broken lines are predicted thresholds for continua for a $S$=$\frac{1}{2}$ chain in a uniform field.\protect\cite{Muller} The dashed-dotted line is the des-Cloizeaux-Pearson lower bound for excitations in $S$=$\frac{1}{2}$ chains in zero field \protect\cite{Cloizeaux_Pearson} for $J$=$1.5\;\mathrm{meV}$. The solid lines correspond to the dispersion obtained from Gaussian fits to the spectra obtained by the exact diagonalization of finite chains. The dotted line corresponds to the mean-field dispersion. The dashed double-dot line is a guide to the eye for the high-energy mode above $3\;\mathrm{meV}$, which was measured at $9.86\;\mathrm{T}$. (b) Integrated intensity of the two resonant modes at $H$=$11\;\mathrm{T}$ as a function of wave-vector transfer.} \label{Figdispersion} \end{center} \end{figure} \section{Mean-field theory} We now present a simple mean-field theory of interacting $S$=$\frac{1}{2}$ fermions which captures both the emergence of a new excitation at high energies and the dispersion of the bound spinon states. As is well known, a mean-field approach is usually not adequate for solving a one-dimensional system. This is because, for finite range interactions, the fluctuations are strong enough to preclude a non-zero value of an order parameter at any finite temperature. Moreover, if the order parameter is associated with a continuous symmetry, its mean value is zero even at $T=0$. Consequently, a mean-field theory that assumes a non-zero value of the order parameter, with excitations originating from small fluctuations of this quantity, cannot be a good description of a general one-dimensional system. In the present case, the system is invariant under rotations around the $z$ axis for $H_{\rm st}=0$ and the candidate order parameter is the planar $xy$ component of the staggered magnetization $M^{\rm st}_{\perp}$. As expected for a continuous symmetry and a gapless spectrum, the system is critical at $T=0$, i.e., the order parameter has divergent fluctuations. However, the staggered field $H_{\rm st}$ couples linearly with the order parameter $M^{\rm st}_{x}$ along the $x$ direction. Consequently, for $H_{\rm st}\neq0$, the U(1) rotational symmetry is explicitly broken and the mean value of $M^{\rm st}_{x}$ becomes non-zero at any temperature. This also changes the nature of the excitations. The spinons (kinks and antikinks) are no longer the low energy quasiparticles of the system since a pair of them is now confined by a linear potential.
A similar effect occurs when we increase the dimension of the system due to the interchain interaction. Therefore, we expect a mean-field treatment to be a good approach for high enough values of $H_{\rm st}$. For $H_{\rm st}=0$, we know that the theory must be critical with the associated linear soft modes shown in Fig.\ref{Figdispersion}a. We also know that a one-dimensional spin system can be mapped onto a fermionic system. In addition, a non-interacting fermionic Hamiltonian has a ground state that is also critical and has linear soft modes like the ones shown in Fig.\ref{Figdispersion}a. Therefore, it is convenient to use a fermionic representation for the mean-field approach. For this purpose, the spin degrees of freedom of the ${\rm Cu^{2+}}$ ions are described in terms of fermionic creation and annihilation operators: \begin{equation} S^{\nu}_j=\frac{1}{2}\sum_{\alpha,\alpha'} c^{\dagger}_{j\alpha} \sigma^{\nu}_{\alpha \alpha'} c^{\;}_{j\alpha'}\, , \end{equation} where $\nu={x,y,z}$ and $\sigma^{\nu}$ are the Pauli matrices. Using this fermionic representation, Baskaran, Zou, and Anderson \cite{Baskaran} proposed a mean-field theory (MFT) to treat low-dimensional Heisenberg spin $S=1/2$ Hamiltonians. The MFT was generalized to SU(N) spin models (for large $N$) by Affleck and Marston.\cite{Affleck_Marston} Arovas and Auerbach \cite{Arovas} studied this theory in comparison with the Bethe Ansatz solution and showed that the fluctuation corrections are important in enforcing the Gutzwiller projection. Using an extended version of this MFT, we will study here the ground state properties and the spin dynamics of the $S$=$\frac{1}{2}$ chain in a staggered field given by $\mathcal{H}$.\par Apart from an irrelevant constant, the expression for $\mathcal{H}$ in the fermionic representation is: \begin{equation} \mathcal{H}=- \frac{J}{2} \sum_{i,\sigma,\sigma'} c^{\dagger}_{i\sigma} c^{\;}_{i+1 \sigma} c^{\dagger}_{i+1 \sigma'}c^{\;}_{i\sigma'} -\frac{g_c}{2} \mu_B H \sum_{i,\sigma} \sigma n_{i\sigma} - \frac{g_a}{2} \mu_B H_{\rm st} \sum_{i,\sigma} (-1)^i c^{\dagger}_{i\sigma} c^{\;}_{i {\bar \sigma}}\, , \end{equation}with ${\bar \sigma}= -\sigma$. Since there is one spin per site, there is a constraint on the fermion occupation number: $n_i=\sum_{\sigma}n_{i\sigma}=1$ with $n_{i\sigma}=c^{\dagger}_{i\sigma}c^{\;}_{i\sigma}$. For the Heisenberg term, we will use a linear combination of the mean-field decoupling introduced in Ref.~\onlinecite{Baskaran} and the other natural decoupling in the presence of a staggered field along the $x$ direction: \begin{equation} \mathcal{H}_{MF}=-\frac{J \gamma}{2}\sum_{i\sigma} (c^{\dagger}_{i\sigma} c^{\;}_{i+1 \sigma}+ {\rm H.c.}) - \frac{g_c}{2} \mu_B H \sum_{i,\sigma} \sigma n_{i\sigma} - \frac{1}{2} (g_a \mu_B H_{\rm st} + J \delta) \sum_{i,\sigma} (-1)^i c^{\dagger}_{i\sigma} c^{\;}_{i {\bar \sigma}}+ \lambda \sum_{i} n_i\ , \end{equation} where the Lagrange multiplier (chemical potential) $\lambda$ enforces the constraint of one spin per site at mean-field level. We are assuming translational invariance for $\gamma_{i}$, $\gamma=\sum_{\sigma} \langle c^{\dagger}_{i\sigma}c_{i+1\sigma}\rangle=\gamma_i$, and a staggered dependence for the effective field $\delta_i=\sum_{\sigma} \langle c^{\dagger}_{i\sigma}c_{i{\bar \sigma}}\rangle=\delta (-1)^i$. This staggered dependence is induced by the field $H_{\rm st}$.
In momentum space, this leads to \begin{equation} \mathcal{H}_{MF}=\sum_{-\pi< k \leq \pi, \sigma}\left[ \left(-J \gamma \cos(k) - \frac{\sigma}{2} g_c \mu_B H\right) c^{\dagger}_{k\sigma}c^{\;}_{k\sigma} - \frac{1}{2} (g_a \mu_B H_{\rm st} + J \delta) (c^{\dagger}_{k+\pi \sigma}c^{\;}_{k{\bar \sigma}} +c^{\dagger}_{k{\bar \sigma}}c^{\;}_{k+\pi \sigma})\right]\, , \end{equation}which can be written in matrix form as \begin{equation} H_{MF}(k)_\sigma = \left[ \begin{array}{cc}-J\gamma \cos(k) - \sigma \frac{1}{2} g_c \mu_B H &- \frac{1}{2} (g_a \mu_B H_{\rm st} + J \delta)\\ - \frac{1}{2} (g_a \mu_B H_{\rm st} + J \delta)&J\gamma\cos(k)+\sigma \frac{1}{2} g_c \mu_B H\end{array} \right]\, , \end{equation} for $-\pi/2 < k \leq \pi/2$. The eigenvalues of this matrix, \begin{equation} \epsilon^{\pm}_{k \sigma} = \pm \sqrt{\left(J\gamma\cos(k)+\sigma \frac{g_c}{2} \mu_B H \right)^2 + \frac{1}{4}(g_a \mu_B H_{\rm st}+J \delta)^2 }\, , \label{Eqfermiondispersion} \end{equation} are the energies of the quasi-particle operators, \begin{eqnarray} \alpha^{\dagger}_{k\sigma} = u_{k \sigma} c^{\dagger}_{k\sigma}+ v_{k \sigma} c^{\dagger}_{k+\pi\bar \sigma} \nonumber \\ \beta^{\dagger}_{k \sigma} = - v_{k \sigma} c^{\dagger}_{k\sigma}+ u_{k \sigma} c^{\dagger}_{k+\pi \bar \sigma}\, , \label{qpo1} \end{eqnarray} with \begin{eqnarray} u_{k\sigma} = \frac{\epsilon^{+}_{k \sigma}+J\gamma\cos(k)+\sigma \frac{1}{2} g_c \mu_B H}{\sqrt{\left(\epsilon^{+}_{k \sigma}+J\gamma\cos(k)+\sigma \frac{1}{2}g_c \mu_B H \right)^2+ \frac{1}{4} (g_a \mu_B H_{\rm st}+J \delta)^2}}\, , \nonumber \\ v_{k\sigma} = \frac{\frac{1}{2} (g_a \mu_B H_{\rm st}+ J \delta)}{\sqrt{\left(\epsilon^{+}_{k \sigma}+J\gamma\cos(k)+\sigma \frac{1}{2} g_c \mu_B H \right)^2+ \frac{1}{4} (g_a \mu_B H_{\rm st}+J \delta)^2 }}\, . \label{qpo2} \end{eqnarray} $H_{MF}$ is diagonal in the new basis: \begin{equation} \mathcal{H}_{MF}=\sum_{-\pi/2 < k \leq \pi/2, \sigma} (\epsilon^+_{k\sigma} \beta^{\dagger}_{k\sigma}\beta^{\;}_{k\sigma} +\epsilon^-_{k\sigma} \alpha^{\dagger}_{k\sigma}\alpha^{\;}_{k\sigma})\, . \end{equation} The mean-field parameters $\gamma$ and $\delta$ are given by the self-consistent equations: \begin{eqnarray} \gamma=\frac {1}{2\pi} \sum_{\sigma} \int_{-\pi/2}^{\pi/2} \cos(k) (u_{k\sigma}^2-v_{k \sigma}^2) [\langle \alpha^{\dagger}_{k\sigma} \alpha^{\;}_{k\sigma} \rangle - \langle \beta^{\dagger}_{k\sigma} \beta^{\;}_{k\sigma} \rangle] dk\, , \nonumber \\ \delta=\frac {1}{2\pi} \sum_{\sigma} \int_{-\pi/2}^{\pi/2} u_{k\sigma} v_{k \sigma} [\langle \alpha^{\dagger}_{k\sigma} \alpha^{\;}_{k\sigma} \rangle - \langle \beta^{\dagger}_{k\sigma} \beta^{\;}_{k\sigma} \rangle] dk \, . \label{self} \end{eqnarray} The value of $\lambda$ is determined by imposing the average occupation per site to be equal to 1: \begin{equation} \frac {1}{2\pi} \sum_{\sigma} \int_{-\pi/2}^{\pi/2} [\langle \alpha^{\dagger}_{k\sigma} \alpha^{\;}_{k\sigma} \rangle + \langle \beta^{\dagger}_{k\sigma} \beta^{\;}_{k\sigma} \rangle] dk =1 \,. \end{equation} When $H=H_{\rm st}=0$, the integration over the phase fluctuations of the local field $\gamma_i$ \cite{Arovas} renormalizes the value of $\gamma$ given by Eq.(\ref{self}): ${\tilde \gamma} = \pi \gamma/\sqrt{2} $. This renormalization considerably improves the comparison with the exact des-Cloizeaux-Pearson \cite{Cloizeaux_Pearson} two-spinon threshold ($\gamma_{ex}=\pi/2$).
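At $T=0\;\mathrm{K}$ the lower ($\alpha$) band is filled and the upper ($\beta$) band is empty, so Eqs.~(\ref{self}) reduce to simple integrals over the reduced Brillouin zone that can be iterated to a fixed point. A minimal numerical sketch of this iteration is given below; the parameter values and the choice $g_c = g_a = 2$ are illustrative assumptions, not fits. \begin{verbatim}
import numpy as np

J, muB = 1.5, 0.0579                 # exchange [meV], Bohr magneton [meV/T]
g, H, Hst = 2.0, 11.0, 0.075*11.0    # g_c = g_a = 2 assumed; fields in T
h, hs = 0.5*g*muB*H, 0.5*g*muB*Hst   # (1/2) g mu_B H and (1/2) g mu_B Hst

k = np.linspace(-np.pi/2, np.pi/2, 4001)   # reduced Brillouin zone
dk = k[1] - k[0]
gamma, delta = 0.5, 0.1                    # initial guesses

for _ in range(200):                       # fixed-point iteration
    gn = dn = 0.0
    for s in (+1.0, -1.0):                 # spin sigma = +/- 1
        xi = J*gamma*np.cos(k) + s*h       # diagonal part of H_MF(k)
        D = hs + 0.5*J*delta               # off-diagonal part
        ep = np.sqrt(xi**2 + D**2)         # + branch of the dispersion
        n = np.sqrt((ep + xi)**2 + D**2)
        u, v = (ep + xi)/n, D/n            # Bogoliubov factors
        # T = 0: alpha band filled, beta band empty
        gn += np.sum(np.cos(k)*(u**2 - v**2))*dk/(2*np.pi)
        dn += np.sum(u*v)*dk/(2*np.pi)
    gamma, delta = gn, dn

print(gamma, np.pi*gamma/np.sqrt(2), delta)  # gamma, renormalized gamma, delta
\end{verbatim}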
To improve the quantitative comparison of the MFT with the experiment and the exact diagonalization results, we will assume here that the same renormalization factor, $\pi / \sqrt{2}$, must be applied when $H$ and $H_{\rm st}$ are finite. The value of $\gamma$ is a measure of the effective strength of the spin fluctuations introduced by the Heisenberg term. More specifically, $\gamma J$ is the Fermi velocity that in the original spin language corresponds to the velocity of the spinon excitations. Fig.~\ref{FigFermi} shows the evolution of the fermionic bands when the uniform and the staggered fields are applied. In the absence of these magnetic fields (Fig.~\ref{FigFermi}a), there is only one band and the two-spinon cross section is associated with the continuum of particle-hole excitations. The dispersion relation for the lower branch is ${\tilde \gamma} J |\sin (q)|$. When a uniform magnetic field $H\neq0$ is applied (Fig.~\ref{FigFermi}b), the spin up and down bands are split by the Zeeman term. As a consequence, there is a shift $\delta q$ of the Fermi wave vectors, $|q_F|=\pi/2\pm \delta q$, and a corresponding change in the wave vectors of the zero energy modes: the energy of the transverse modes goes to zero at $q=\pi$ and $q=2\delta q$, while the longitudinal excitations have gapless modes at $q=0$ and $q=\pi - 2\delta q$. From Eq.~(\ref{Eqfermiondispersion}), the main effect of a non-zero staggered field $H_{\rm st}$ is to open a gap at the Fermi level, i.e., the fermionic system becomes an insulator and the spectrum is gapped for any excitation (Fig.~\ref{FigFermi}c). The gap results from the inter-band scattering which is introduced by the staggered field $H_{\rm st}$. According to Eqs.~(\ref{qpo1}), the degree of mixing is maximum at $q=q_F$. The emergence of this gap is consistent with the experimental data shown in Fig.~\ref{Figdispersion}. \begin{figure}[ht] \begin{center} \includegraphics[height=9cm,bbllx=220,bblly=280,bburx=575, bbury=710,angle=0,clip=]{Fig4Fermi.eps} \vspace{0.2cm} \caption{Schematic representation of the Fermi particle dispersion in (a) zero field, (b) uniform field $H$ and (c) uniform field $H$ and staggered field $H_{\rm st}$ with $H > H_{\rm st}$. In zero field the spin up and down particles with wave-vector $k$ have the same dispersion.} \label{FigFermi} \end{center} \end{figure} To study the excitations of the new ground state induced by $H_{\rm st}$ it is necessary to calculate the neutron scattering cross section within our MFT. The neutron scattering cross section for the transverse excitations ($\nu=x,y$) at $T=0\;\mathrm{K}$ is given by: \begin{eqnarray} S^{\nu \nu}(q,\omega)=\frac{1}{8\pi} \sum_{\sigma} \Bigl[ \int^{\frac{\pi}{2}}_{\frac{\pi}{2}-q} (u_{k+q \sigma}u_{k \sigma}+ \eta v_{k+q\sigma}v_{k\sigma})^2 \delta(\omega-\epsilon^{+}_{k+q\sigma}+\epsilon^{-}_{k \sigma})\, dk \nonumber \\ + \int^{\frac{\pi}{2}-q}_{-\frac{\pi}{2}} (u_{k+q \sigma}v_{k {\bar \sigma}}+ \eta v_{k+q \sigma}u_{k{\bar \sigma}})^2 \delta(\omega-\epsilon^{+}_{k+q\sigma}+\epsilon^{-}_{k {\bar \sigma}})\, dk \Bigr]\, , \label{cross} \end{eqnarray} where $\eta=-1$ for $\nu=x$, $\eta=1$ for $\nu=y$ and $-\pi < q \leq \pi$. Note that in Eq.(\ref{cross}), $k+q$ must be folded back into the reduced Brillouin zone ($k+q \equiv k+q+n\pi$).
The cross section for longitudinal excitations is: \begin{eqnarray} S^{zz}(q,\omega)=\frac{1}{8\pi} \sum_{\sigma} \Bigl[ \int^{\frac{\pi}{2}}_{\frac{\pi}{2}-q} (u_{k+q {\bar \sigma}}u_{k\sigma} + v_{k+q{\bar \sigma}}v_{k\sigma})^2 \delta(\omega-\epsilon^{+}_{k+q {\bar \sigma}}+\epsilon^{-}_{k \sigma})\, dk \nonumber \\ + \int^{\frac{\pi}{2}-q}_{-\frac{\pi}{2}} (u_{k+q \sigma} v_{k \sigma} + v_{k+q \sigma}u_{k\sigma})^2 \delta(\omega-\epsilon^{+}_{k+q \sigma}+\epsilon^{-}_{k \sigma})\, dk \Bigr]\, . \label{long} \end{eqnarray} Equations (\ref{cross}) and (\ref{long}) reveal well-defined excitations that are determined by the vanishing of $d\omega/dk$, i.e., the divergence of the Jacobian. In Fig.~\ref{mfbr}a, we show the different branches of the transverse excitations. The lower branch corresponds to the dispersion relation of the low energy excitations. In agreement with the experiment (see Fig.~\ref{Figdispersion}), there are two minima, one is located at the incommensurate wave vector $q_I=2\arcsin[g_c \mu_B H/(2\gamma J)]$ and the other one occurs at $q=\pi$. It is interesting to note that for $\nu=x$ the intensity of this branch goes to zero at $q_I=2 \delta q$ due to a cancellation of the matrix element that multiplies the delta function in the integrand of Eq.~(\ref{cross}). The black and the blue curves of Fig.~\ref{mfbr}a are the upper boundaries of inter-band particle-hole excitations associated with the transverse modes (Fig.~\ref{FigFermi}b). More specifically, the black curve results from excitations in which an electron is annihilated in the lower band and created in the upper band, while for the blue curve the process is the opposite. The green curve of Fig.~\ref{mfbr}b is the upper boundary for the intra-band particle-hole excitations that describe the longitudinal modes. Note that these boundaries appear with dashed lines in the spectrum of excitations with the other polarization (Fig.~\ref{mfbr}). This is a consequence of the inter-band $q=\pi$ scattering which is introduced by the staggered field $H_{\rm st}$. The dashed lines just indicate that these ``shadow'' branches have a very small intensity.\par \begin{figure}[ht] \begin{center} \includegraphics[angle=-90,width=7.0cm,scale=1.2]{Fig5mfbranches.ps} \vspace{-0.2cm} \caption{ a) Transverse and b) longitudinal excitations obtained from the mean-field theory, as described in the text in more detail.} \label{mfbr} \end{center} \end{figure} The gap in the low energy spectrum is not the only qualitative change introduced by the staggered field $H_{\rm st}$ within the mean-field approach. Since the Fermi wave vectors $q_F=\pm \pi/2\pm \delta q$ are now extremal points of the new bands (see Fig.~\ref{FigFermi}c), we expect the emergence of a new branch of transverse excitations associated with transitions between points that are close to $q_F=\mp \pi/2\pm \delta q$ ($q_F=\pm \pi/2\pm \delta q$) and points in the proximity of $q=0$ ($q=\pi$). This new branch of excitations (see the red curve in Fig.~\ref{mfbr}a) coincides with the new excitation which is experimentally observed at high energies (see Fig.~\ref{Fignewexcitation}), and thus explains our experimental results. For the longitudinal polarization, the new branch of excitations is shifted by $\pi$ relative to the transverse polarization (see the red curve in Fig.~\ref{mfbr}b). Since the maximum of this longitudinal branch is located at $q=\pi/2+\delta q$, it is difficult to distinguish this branch from the breathers in the experimental data.
The energy of this maximum is close to $3\;\mathrm{meV}$ according to the MFT while the experimental value is $3.4\;\mathrm{meV}$. This is reasonable given that the MFT is not expected to give a quantitative description of the excitations.\par \section{Exact diagonalization of finite length chains} The intensities of the different branches are not properly described by the mean-field equations (\ref{cross}) and (\ref{long}). For instance, the MFT predicts a high intensity for the upper boundary of the two-spinon excitations even at zero field, for which very accurate calculations are available.\cite{Muller} This is clearly an artifact of the MFT. To obtain an accurate description of the intensities and the energies of the different branches we complemented our analytical approach with the exact diagonalization of finite-size chains. Using the Lanczos method, we obtained the exact ground state of $\mathcal{H}$ for finite chains of length $L=12,14,16,18,20,22,24$. Having the ground state, we computed the dynamical magnetic susceptibility, $\chi(\omega,q)$, for all the possible wave vectors $q=0,\, 2\pi/L,\,\ldots,\,2\pi (L-1)/L$ of a chain of length $L$ using the method introduced in Ref.~\onlinecite{Gagliano_Balseiro}. The wave vector $q=\pi$ is present in all of the considered chains. The small changes in the calculated $\chi(\omega,\pi)$ as a function of $L$ indicate that the finite size effects are small for the considered problem. In general, smaller finite size effects are expected for systems that have an excitation gap because the spin-spin correlation length is finite.\par The $T=0\;\mathrm{K}$ structure factors were calculated using \begin{equation} S^{\alpha\alpha}(q,\omega)=\frac{1}{\pi} \chi''^{\alpha\alpha}(q,\omega)\, . \end{equation}The energy spectra were obtained by convoluting the discrete spectra for finite chains with Lorentzian functions with a full width at half maximum, $2\Gamma=0.1\;\mathrm{meV}$, in order to model the experimental energy resolution. The intensity of the calculated structure factors is given for a chain of $L$ spins and normalized so that $\sum_{q,\alpha} \int d\omega S^{\alpha\alpha}(q,\omega)= S(S+1)$ as required by the total scattering sum rule.\par \begin{figure}[ht] \begin{center} \includegraphics[height=7cm,bbllx=27,bblly=251,bburx=575, bbury=564,angle=0,clip=]{Fig6exactdiagonalization.eps} \caption{The dynamic structure factors $S^{\alpha \alpha}(q,\omega)$ obtained through exact diagonalization of finite chains for $H_{\rm st}$=$0.075H$, where $\alpha$=$x$, $y$ and $z$ are the spin polarizations. The uniform field is along the c-axis (z polarization), the staggered field along the a-axis (x polarization).} \label{Figexactdiag} \end{center} \end{figure} The calculated neutron scattering for all chain lengths was averaged and the dynamic structure factor for the three different polarizations is shown in Fig.~\ref{Figexactdiag} as a function of wave-vector transfer and energy transfer for $H_{\rm st}$=$0.075H$ on an absolute scale. The structure factor $S^{zz}$ polarized along the uniform field contains a well-defined excitation with a minimum gap energy at an incommensurate wave-vector. The structure factors $S^{xx}$ and $S^{yy}$ polarized perpendicular to the uniform field contain well-defined excitations whose dispersion has a minimum at the antiferromagnetic point, with $S^{yy}$ having an excitation at lower energy than $S^{xx}$.
These excitations correspond to the first and second breathers of the quantum sine-Gordon model.\par Fig.~\ref{Figexactdiag} also provides evidence that the excitation spectrum contains a substantial amount of continuum states, as observed in the experiment.\cite{Kenzelmann_CDC_PRL} These states lie higher in energy than the well-defined low-energy excitations and extend to high energies. At $H_{\rm st}$=$0.075H$ our numerical calculations yield an expectation value of the Hamiltonian per spin of $\langle\mathcal{H}\rangle=-0.34\, J$, increased from the zero-field ground state energy per spin, $(\frac{1}{4} - \ln 2 ) J= -0.44\, J$, by $0.1\, J$.\par The numerical data were binned and Gaussian fits were used to obtain excitation energies as a function of wave-vector. These results are shown in Fig.~\ref{Figdispersion}a as a solid line, showing that the numerical calculations reproduce the dispersion relation of the low energy modes. We calculated the dynamic structure factor for the DCS experiment taking into account the wave-vector dependent mixing of the polarized dynamic structure factors $S^{\alpha\alpha}$. Fig.~\ref{Figcomparison} directly compares the calculated and measured intensities on an absolute scale, showing that there is excellent agreement between the numerical calculations and the experiment.\par \begin{figure}[ht] \begin{center} \includegraphics[height=7cm,bbllx=24,bblly=254,bburx=550, bbury=548,angle=0,clip=]{Fig7comparison.eps} \caption{(a) Calculated dynamic structure factor $S(q,\omega)$ for wave-vectors and polarization of the DCS experiment. (b) Structure factor measured using DCS (Ref.~\protect\onlinecite{Kenzelmann_CDC_PRL}).} \label{Figcomparison} \end{center} \end{figure} In addition, Figure~\ref{Figexactdiag} shows that a new branch of transverse magnetic excitations with a maximum around $q=0.4\pi$ and $\hbar \omega=3.5\;\mathrm{meV}$ emerges when the staggered field is present. This provides a quantitative explanation for the high energy peak that appears in the neutron scattering data (see Fig.~\ref{Fignewexcitation}). Muller {\it et al}.~\cite{Muller} showed that this branch is not present for $H_{\rm st}=0$ by using macroscopic selection rules. For non-zero $H_{\rm st}$, the total spin $S$ and its projection along the $z$-direction, $S^{z}$, are not good quantum numbers anymore, allowing the emergence of this new excitation. Our mean-field approach is in good agreement with this result.\par \section{Conclusions} In summary, we have performed neutron scattering experiments, numerical calculations and an analytical study investigating the AF $S$=$\frac{1}{2}$ chain in uniform and staggered fields. We found that the incommensurate bound-spinon states are well described by a mapping to an interacting fermionic model after a renormalization of the energy. The model also explains the emergence of a new excitation upon application of a staggered field. Our results indicate that the proposed mapping is more powerful than initial results suggested, and that it may also be useful for other quantum spin systems with a relatively short correlation length.\par \begin{acknowledgments} Work at JHU was supported by the NSF through DMR-0306940. DCS and the high-field magnet at NIST were supported in part by the NSF through DMR-0086210 and DMR-9704257. C. B. and D.~H. R. gratefully acknowledge discussions with A.~J. Millis concerning the mean-field theory of $S$=$\frac{1}{2}$ chains. \end{acknowledgments}
\section{Introduction: statement of the problem} Dual superconductivity of the vacuum was advocated long ago as the mechanism for confinement of colour\cite{1,2,3}. Dual means that the roles of electric and magnetic fields and charges are interchanged with respect to ordinary superconductors. The basic idea is that the chromoelectric field acting between a quark-antiquark pair is channeled into an Abrikosov flux tube\cite{4}, by the dual Meissner effect. The resulting static potential is proportional to the distance $R$, \begin{equation} V(R) = \sigma R \label{eq1.1}\end{equation} where $\sigma$ is the string tension. Flux tubes are expected to behave as strings\cite{5,6}. Numerical simulations of QCD on the lattice support this picture: \begin{itemize} \item[1)] The interquark force at large distances obeys Eq.(\ref{eq1.1})\cite{7}. \item[2)] Flux tubes exist in field configurations produced by static $q\bar q$ pairs\cite{8,9,10}. \item[3)] Higher modes of the string are visible\cite{11}. \end{itemize} Until recently, however, a convincing demonstration that the ground state of QCD behaves as a superconductor was still lacking. In the following I will analyse recent progress on this point. In particular I will present direct evidence of dual superconductivity of the QCD vacuum, obtained by measuring a disorder parameter on the lattice\cite{12}. Ordinary superconductivity is nothing but the spontaneous breaking (S.B.), \`a la Higgs, of the $U(1)$ symmetry related to charge conservation\cite{13}. A charged field \begin{equation} \Phi = \psi\,{\rm e}^{{\rm i} \theta q}\qquad \psi = |\Phi|\label{eq1.2}\end{equation} acquires a non vanishing vacuum expectation value (v.e.v.) $\langle\Phi\rangle$. As a consequence \begin{itemize} \item[(i)] the photon acquires a mass $\mu$ \begin{equation} \mu^2 = e^2\,\langle\Phi\rangle^2\label{eq1.3}\end{equation} \item[(ii)] the vacuum is not $U(1)$ invariant, and has no definite charge: indeed if it were invariant the v.e.v. of any charged operator would vanish. \end{itemize} A well known consequence of the Higgs phenomenon is that the derivative of the angular variable $\theta$ of Eq.(\ref{eq1.2}) becomes the longitudinal component of the photon. Instead of $A_\mu$ it proves convenient to use as a field variable $\tilde A_\mu = A_\mu - \frac{1}{e}\partial_\mu\theta$, which is gauge invariant. In terms of $\tilde A_\mu$ $F_{\mu\nu} = \partial_\mu\tilde A_\nu - \partial_\nu\tilde A_\mu$: in particular ${\bf H} = {\bf \nabla}\wedge{\bf \tilde A}$. The equations of motion for a static configuration become \begin{equation} \partial_i F_{ij} - \mu^2 \tilde A_j = 0 \label{eq1.4}\end{equation} or \begin{equation} {\bf \nabla}\wedge{\bf H} = -\mu^2 {\bf \tilde A} \label{eq1.5}\end{equation} Taking the curl of both sides of Eq.(\ref{eq1.5}) gives \begin{equation} {\bf \nabla}^2 {\bf H} - \mu^2 {\bf H} = 0 \label{eq1.6}\end{equation} Eq.(\ref{eq1.5}) means that a permanent current (London current) \begin{equation} {\bf j} = -\mu^2 {\bf \tilde A}\label{eq1.7}\end{equation} is present in the superconductor, with ${\bf E} = 0$, or, since $\rho {\bf j} = {\bf E}$, that $\rho = 0$. Eq.(\ref{eq1.6}) means that the magnetic field ${\bf H}$ has a finite penetration depth, and this is nothing but the Meissner effect. On a line around a flux tube at a distance larger than the penetration depth, ${\bf \tilde A} = 0$, so $\oint {\bf \tilde A}\, d{\bf x} = 0$ or, by the definition of ${\tilde A}_\mu$, $\oint {\bf A}\, d{\bf x} = n\pi/q$, which is flux quantization.
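Explicitly, for a superconductor filling the half-space $x>0$ with the field applied parallel to the boundary, Eq.(\ref{eq1.6}) admits only the decaying solution \[ {\bf H}(x) = {\bf H}(0)\, {\rm e}^{-\mu x}\, , \] so the penetration depth is $\lambda = 1/\mu$: by Eq.(\ref{eq1.3}), the larger the condensate $\langle\Phi\rangle$, the more efficiently the field is expelled.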
The key parameter in the game is $\langle \Phi\rangle$. To detect superconductivity one can either look for permanent currents Eq.(\ref{eq1.7}), i.e. demonstrate that $\mu^2\neq 0$, or directly for the spontaneous breaking of $U(1)$, i.e. look for a non vanishing v.e.v. of a charged operator. In QCD the dual situation is expected to occur. The disorder parameter is the v.e.v. of an operator with non zero magnetic charge, and the London current is a magnetic current. The strategy of detecting dual superconductivity by looking for persistent currents will be reviewed by D. Haymaker in his talk to this conference. I will instead present a direct determination of the disorder parameter $\langle \Phi\rangle$. \section{Monopoles in gauge theories} Monopoles as solitons in gauge theories are related to the elements of the first homotopy group of the gauge group\cite{14}. Since $\Pi_1(SU(N)) = \{1\}$, in order to have monopoles the symmetry has to be reduced to some non-simply-connected group. In a theory with $SU(2)$ gauge group coupled to a scalar field $\vec \Phi$ in the adjoint representation\cite{15}, when the Higgs phenomenon reduces the symmetry from $SU(2)$ to $U(1)$, monopoles do exist as stable static solutions\cite{16,17}. The relevant degrees of freedom are described by the gauge invariant field strength\cite{16} \begin{equation} f_{\mu\nu} = {\vec G}_{\mu\nu}\cdot\vec \Phi - \frac{1}{g} \hat\Phi\cdot\left( D_\mu\hat \Phi\wedge D_\nu\hat\Phi\right) \label{eq2.1}\end{equation} $\hat\Phi = \vec \Phi/|\vec \Phi|$ is the colour direction of the Higgs field. At large distances the field $f_{\mu\nu}$ of a monopole configuration is the field of a Dirac monopole of magnetic charge 2. One can define a gauge field $a_\mu$ \begin{equation} a_\mu = {\vec A}_\mu\cdot\hat \Phi \label{eq2.2}\end{equation} Contrary to $f_{\mu\nu}$, $a_\mu$ is not gauge invariant, since ${\vec A}_\mu$ is not gauge covariant. In general\cite{18} \begin{equation} f_{\mu\nu} = \partial_\mu a_\nu - \partial_\nu a_\mu - \frac{1}{g}\hat \Phi\cdot\left(\partial_\mu\hat\Phi\wedge \partial_\nu\hat \Phi\right) \label{eq2.3}\end{equation} After a gauge rotation which brings $\hat \Phi$ to a given colour direction $(\hat\Phi)^a = \delta^a_3$, the last term in Eq.(\ref{eq2.3}) vanishes and \begin{equation} f_{\mu\nu} = \partial_\mu a_\nu - \partial_\nu a_\mu \label{eq2.4}\end{equation} Such a gauge rotation is called an abelian projection: in a gauge defined by this procedure the $U(1)$ degrees of freedom relevant to the definition of monopoles coincide with a subgroup of the gauge group. $a_\nu$ and $f_{\mu\nu}$ are formally identical to the fields of a $U(1)$ gauge theory. We notice, for later reference, that the commutation relations between $f_{0i}$ and $a_i$ are also identical to those of a $U(1)$ theory. To define the monopoles which produce, by condensation in the vacuum, dual superconductivity and confinement, the relevant degrees of freedom have to be selected by an abelian projection\cite{19}. A few different abelian projections have been proposed in the literature as candidates for this purpose\cite{19,20} and will be discussed in detail in what follows. We conclude this section by noticing that, whatever the relevant abelian projection, the problem is always reduced to detecting dual superconductivity of a $U(1)$ system. \section{Detecting dual superconductivity in $U(1)$ gauge theory} I will sketch the construction of the creation operator for a monopole\cite{12}, whose v.e.v.
will be used as a disorder parameter for dual superconductivity. Let $\Pi_i({\bf x},t) = F_{0i}({\bf x},t)$ be the usual conjugate momenta to the field variables $A_i({\bf x},t)$. The operator \begin{equation} \mu({\bf y},t) = {\rm exp}\left({\rm i}\int d^3{\bf x} {\bf \Pi}({\bf x},t) \frac{1}{e}{\bf b}({\bf x}-{\bf y})\right) \label{eq3.1}\end{equation} creates a monopole of magnetic charge $m$ at the site ${\bf y}$ at time $t$, if $\frac{1}{e}{\bf b}({\bf x}-{\bf y})$ is the classical vector potential produced by such a monopole, with the Dirac string subtracted. Putting the string along the direction ${\bf n}$ \begin{equation} {\bf b}({\bf r}) = \frac{m}{2}\frac{\displaystyle {\bf n}\wedge{\bf r}} {\displaystyle r(r - {\bf n}{\bf r})} \label{eq3.2}\end{equation} Indeed $\mu$ as defined by Eq.(\ref{eq3.1}) is the operator which adds to any field configuration the field of the monopole, in the same way as the translation operator adds $a$ to the position $q$: \[ {\rm e}^{i p a} | q\rangle = |q + a\rangle\] $\mu({\bf y},t)$ carries magnetic charge $m$. By use of the canonical commutation relation $[ \Pi_i({\bf x},t), A_j({\bf y},t)] = -{\rm i}\,\delta_{ij}\,\delta^3( {\bf x} - {\bf y})$, and of the definition of the magnetic charge operator \begin{equation} Q = \int d^3{\bf x} {\bf \nabla}\cdot{\bf H} = \int d^3{\bf x} {\bf \nabla}\cdot ({\bf \nabla}\wedge {\bf A})\label{eq3.3}\end{equation} \begin{equation} \left[ Q,\mu({\bf y},t)\right] = \int d^3{\bf x} \frac{1}{e}{\bf \nabla}\cdot ({\bf \nabla}\wedge {\bf b}) \mu({\bf y},t) = \frac{2\pi m}{e} \mu({\bf y},t) \label{eq3.4}\end{equation} We will use the v.e.v. $\langle\mu\rangle$ as disorder parameter for dual superconductivity\cite{12}. Our construction is inspired by the classical work of ref.\cite{21} and by its application to monopole condensation of ref.\cite{22}. In ref.\cite{22} condensation of monopoles is proved, in the infinite volume limit, for a specific form of the action, the Villain action. Our construction coincides with ref.\cite{22} for that case, but can be used for any form of the action, and for finite volumes. The infinite volume limit can be reached by a finite size analysis. I refer to ref.\cite{12} for the details of the construction which I will summarize as follows. \begin{itemize} \item[i)] $\langle\mu\rangle$ can be determined either by the cluster property from the correlation of a monopole and an antimonopole at large distance $d$ \begin{equation} \langle \mu({\bf d},0)\,\bar\mu({\bf 0},0)\rangle \mathop\simeq\limits_{ |{\bf d}|\to \infty} \langle \mu\rangle ^2 \label{eq3.5}\end{equation} or directly. It is known that, for the Wilson action on the lattice, electric charge is confined for $\beta < \beta_c$ ( $\beta= 1/e^2$, $\beta_c \simeq 1.01$); for $\beta > \beta_c$ the system is made of free photons. We expect $\langle \mu\rangle_{V\to \infty}\neq 0$ for $\beta < \beta_c$ and $\langle \mu\rangle_{V\to \infty} = 0$ for $\beta > \beta_c$. Of course, $\langle \mu\rangle$ being an analytic function of $\beta$ at finite volume, it can be identically zero for $\beta > \beta_c$ only in the thermodynamic limit $V\to\infty$. \item[ii)] Instead of $\langle \mu\,\bar\mu\rangle$ itself it proves convenient to use the quantity \begin{equation} \rho \mathop=\limits_{|d|\to \infty} \frac{d}{d\beta}\, \frac{1}{2}\ln \langle \mu({\bf d},0)\,\bar\mu({\bf 0},0)\rangle \label{eq3.6}\end{equation} $\rho$ has smaller fluctuations than $\langle\mu\rangle$ itself, and is independent of the boundary conditions.
\item[iii)] If $\langle\mu\rangle$ tends to zero as a power as $\beta\to\beta_c$ \begin{equation} \langle\mu\rangle\mathop\simeq\limits_{\beta\to\beta_c} (\beta-\beta_c)^\delta \label{eq3.7}\end{equation} then, from the definition Eq.(\ref{eq3.6}) \begin{equation} \rho \simeq \frac{\displaystyle \delta}{\displaystyle \beta-\beta_c} \label{eq3.8}\end{equation} \end{itemize} Eq.(\ref{eq3.8}) can be translated in terms of the correlation length $\xi$ and the critical index $\nu$ by use of the relation \begin{equation} \xi^{-1} \simeq (\beta-\beta_c)^\nu \label{eq3.9}\end{equation} If $\xi \gg a$ ($a$ = lattice spacing) and $L \gg a$, then $\langle\mu\rangle$ is approximately independent of $a$ (finite size scaling): \begin{equation} \langle\mu\rangle \simeq L^{-\delta/\nu}\Phi(\frac{L}{\xi}) \label{eq3.10}\end{equation} $\Phi$ is an analytic function at finite volume, and Eq.(\ref{eq3.10}) tends to Eq.(\ref{eq3.7}) as $V\to\infty$. Eq.(\ref{eq3.10}) implies \begin{equation} \rho L^{-1/\nu} = f((\beta-\beta_c) L^{1/\nu}) \label{eq3.11}\end{equation} For lattices of different size $L$ the quantity $\rho L^{-1/\nu}$ must be a universal function of the scaled variable $(\beta-\beta_c) L^{1/\nu}$. The limit $L\to\infty$ is thus extracted and the exponents $\delta$ and $\nu$ can be determined. Typical data for $\rho(d)$ are shown in fig.1. \par\noindent \begin{minipage}{0.5\linewidth} \epsfxsize0.85\linewidth {\centerline{ \epsfbox{fig1.ps}}} \end{minipage} \begin{minipage}{0.5\linewidth} \epsfxsize0.85\linewidth {\centerline{ \epsfbox{fig2.ps}}} \end{minipage} \par\noindent \begin{minipage}{0.5\linewidth} {\centerline{Fig.1}} \end{minipage} \begin{minipage}{0.5\linewidth} {\centerline{ Fig.2\quad $L^{\frac{1}{\nu}}/\rho$ vs $(\beta-\beta_c) L^{\frac{1}{\nu}}$}} \end{minipage} The scaling (Eq.(\ref{eq3.11})) is demonstrated in fig.2, where $L^{1/\nu}\rho^{-1}$ is plotted versus $L^{1/\nu}(\beta_c - \beta)$ for different lattice sizes. Data for periodic b.c. are well described by \begin{equation} \langle\mu\rangle \simeq L^{-\delta/\nu} \left[\left( (\beta_c - \beta) L^{1/\nu} + v_0\right)^2 + v_1^2\right]^{\delta/2} \label{eq3.12}\end{equation} A best fit gives $\delta = 2.0\pm0.2$, $\beta_c = 1.0111(1)$, $1/\nu = 3.97\pm0.40$, $v_0\sim v_1 \sim 1$. For $\beta < \beta_c$ the vacuum is a dual superconductor. \section{Dual superconductivity in $SU(2)$ gauge theory\cite{23}} We have applied the construction described in sect.3 to detect dual superconductivity in $SU(2)$ gauge theory. We have probed condensation of the monopoles defined by two different abelian projections\cite{19}: \begin{itemize} \item[(a)] The abelian projection defined by diagonalizing the Polyakov line, taken as the effective Higgs field $\hat\Phi$. \item[(b)] The abelian projection defined by diagonalizing a component (say $F_{12}$) of the field strength. \end{itemize} For the projection (a) the relevant abelian field strength $F_{0i}$ (Eq.(\ref{eq2.1})) is simply $F_{0i} = \hat\Phi^a G_{0i}^a$, since $D_0\hat\Phi = 0$. The operator $\mu$ [Eq.(\ref{eq3.1})] is constructed in terms of $\Pi_{i} = F_{0i}$ and the analysis of the $U(1)$ model is repeated. A typical behaviour is shown in Fig.3, where $\rho$ is plotted vs $\beta$.
\par\noindent \begin{minipage}{0.5\linewidth} \epsfxsize0.85\linewidth {\centerline{ \epsfbox{fig3.ps}}} \end{minipage} \begin{minipage}{0.5\linewidth} \epsfxsize0.85\linewidth {\centerline{ \epsfbox{fig4.ps}}} \end{minipage} \par\noindent \begin{minipage}{0.5\linewidth} {\centerline{Fig.3}} \end{minipage} \begin{minipage}{0.5\linewidth} {\centerline{ Fig.4\quad $L^{\frac{1}{\nu}}/\rho$ vs $(\beta/\beta_c-1) L^{\frac{1}{\nu}}$}} \end{minipage} The simulations are done on asymmetric lattices with $N_t = 4,6$, $N_s = 16,20$. A clear signal is visible at the deconfining temperature. A finite size analysis confirms that condensation survives the limit $V\to\infty$ (fig.4). The best fit gives: \[ \nu \simeq 0.65\, ,\qquad \delta = 1.3\pm0.1\, ,\qquad \Delta\beta_c\equiv\beta_c(N_t = 6) - \beta_c (N_t = 4) = 0.048\pm0.002\] to be compared to $\Delta\beta_c = 0.07$ predicted by two-loop asymptotic scaling. For the abelian projection (b) no signal is observed. There is no correlation between the condensation of monopoles defined by this projection and deconfinement. \section{Concluding remarks} \begin{itemize} \item[(i)] We have demonstrated that the abelian projection which diagonalizes the Polyakov line defines monopoles condensing in the QCD vacuum. The dual $U(1)$ corresponding to their charge is spontaneously broken and the QCD vacuum is a dual superconductor. Recent observations that the abelian string tension in this projection is almost equal to the usual string tension support our conclusion\cite{24a}. \item[(ii)] Most of the work done in the literature on the role of monopoles in confinement consists in correlating confinement to the density of monopoles or of monopole world lines, as suggested by the pioneering work of ref.\cite{31} on $U(1)$: a good review is contained in ref.\cite{20}. Of course the density of monopoles is not a disorder parameter for dual superconductivity, in the sense described in sect. 1, in the same way as the density of electrons or of Cooper pairs is not for ordinary superconductors. In fact the density of monopoles, contrary to $\mu$ (Eq.(\ref{eq3.4})), commutes with the magnetic charge $Q$, and cannot signal condensation. \item[(iii)] Much of the work in the literature has been done with the so-called ``maximal abelian'' projection\cite{27}. The monopoles defined by this projection seem to be relevant to confinement, as evidenced also by the detection of persistent currents\cite{28,29}. The maximal abelian gauge presents fewer lattice artifacts than others\cite{30}. We plan to investigate also this projection by our method: a problem with computing power comes from the fact that the gauge is defined by a maximization which has to be repeated at each updating step in the computation of $\rho$. \end{itemize} In conclusion, we have produced direct evidence that \begin{itemize} \item[(i)] the QCD vacuum is a dual superconductor. \item[(ii)] not all the abelian projections are equally good for defining the monopoles relevant to confinement\cite{19}. \end{itemize}
\section{Introduction} One of the most dynamic subjects in differential equations has been the stability theory of Ulam-Hyers. The subject originated in 1940 with a question posed by Ulam in a lecture on unsolved problems at the University of Wisconsin \cite{16,17}. The question raised by Ulam was partially answered the following year by Hyers in the setting of Banach spaces; for this reason, the theory came to be called Ulam-Hyers stability. In 1978 \cite{21}, Rassias introduced a generalization of the version established by Hyers. Owing to this breakthrough in mathematical analysis, numerous specialists have since investigated the stability of solutions of functional differential equations. The idea of Ulam-Hyers stability for functional equations is to replace the functional equation by a given inequality that acts as a perturbation of the equation. We recommend a few monographs and papers that allow a more thorough study of these subjects \cite{akkouchi,18,19,20}. With the advent of fractional calculus and the consolidation of its theory over the years, many researchers have turned to this area, especially researchers working with differential equations \cite{ZE1,ZE23,ZE3,zhou,yang,kilbas,samko}. Indeed, it is by now well established that investigating certain physical problems through fractional derivatives yields results that are more accurate and more consistent with reality. On the other hand, moving to the more theoretical side, investigating the existence, uniqueness and Ulam-Hyers stability of solutions of fractional differential equations has gained increasing prominence in the scientific community; although a range of works already exists, the theory is still under construction, with good results \cite{wang222,wangnew,wangclass,wanglinear}. In 2012, Wang and Zhou \cite{wang222} investigated several kinds of stability for mild solutions of fractional evolution equations in Banach spaces, namely: Mittag-Leffler-Ulam-Hyers stability, generalized Mittag-Leffler-Ulam-Hyers stability, Mittag-Leffler-Ulam-Hyers-Rassias stability and generalized Mittag-Leffler-Ulam-Hyers-Rassias stability. In 2014, Abbas \cite{abbas1} investigated the existence, uniqueness and stability of the mild solution of an integrodifferential equation with nonlocal conditions through H\"{o}lder's inequality, Schauder's fixed point theorem and Gronwall's inequality in a Banach space. Other works can be found in the references of these two papers. On the other hand, Zhou and Jiao \cite{zhou}, using fractional operators and some fixed point theorems, investigated the existence and uniqueness of mild solutions of fractional neutral evolution equations and presented some applications elucidating the obtained results. In this sense, Saadati et al. \cite{saadati} presented results on the existence of mild solutions for fractional abstract equations with non-instantaneous impulses; to obtain such results, the authors used the measure of noncompactness and the Darbo-Sadovskii and Tikhonov fixed point theorems. For a more in-depth reading, we suggest some papers \cite{yang,sousa2,dabas1,jawahdou,balac,olszowy,chen}. Although a significant amount of work deals with properties of solutions of fractional differential equations, there is still much to be done.
In order to propose new results and provide new material on Ulam-Hyers stability, and to contribute positively to the area, the present paper has as its main objective to investigate the Ulam-Hyers stabilities on the intervals $[0,T]$ and $[0,\infty)$. To this end, let us consider the fractional nonlinear abstract Cauchy problem given by \begin{equation}\label{CP} \left\{ \begin{array}{rll} \displaystyle {}^{H}{\mathbb{D}}_{0^{+}}^{\alpha,\beta} \xi(t) & = & \mathcal{A} \xi(t) + u(t) \mathcal{H}(t,\xi(t)), ~t \in I\\ I_{0^{+}}^{1-\gamma} \xi(0) & = & \xi_0 \end{array} \right. \end{equation} where ${}^{H}{\mathbb{D}}_{0^{+}}^{\alpha,\beta} (\cdot)$ is the Hilfer fractional derivative of order $0 < \alpha \leq 1$ and type $0 \leq \beta \leq 1$, $\gamma=\alpha+\beta-\alpha \beta$, $I=[0,T]$ or $[0,\infty)$, $\xi \in C(I,\Omega)$, $\Omega:=(\Omega,\|\cdot\|)$ is a Banach space, $t \in I$, $ \mathcal{A}:\Omega \rightarrow \Omega$ is the infinitesimal generator of a $C_0$-semigroup $(\mathbb{S}(t))_{t \geq 0}$ and $\mathcal{H}: I \times \Omega \rightarrow \Omega$ is a given continuous function. We highlight below the main points that motivated us to investigate the stability of the mild solution of the fractional abstract Cauchy problem: \begin{enumerate} \item A new class of Ulam-Hyers type stabilities for the fractional abstract Cauchy problem; \item In the limit $\beta\rightarrow 0$ in the mild solution of the abstract Cauchy problem with $0 <\alpha <1$, we obtain a sub-class of Ulam-Hyers stabilities for the Riemann-Liouville fractional derivative; \item In the limit $\beta\rightarrow 1$ in the mild solution of the abstract Cauchy problem with $0 <\alpha <1$, we obtain a sub-class of Ulam-Hyers stabilities for the Caputo fractional derivative; \item When $\alpha=1$, we obtain, as a particular case, the integer-order version; \item An important consequence of the obtained results is the possibility of future applications of the Ulam-Hyers stabilities in engineering, biology and especially in mathematics. \end{enumerate} The paper is organized as follows. In section 2, we introduce the $\psi$-Riemann-Liouville fractional integral, the $\psi$-Hilfer fractional derivative and the fundamental concept of the $(\alpha,\beta)$-resolvent operator. We also present the mild solution of the fractional Cauchy problem as well as the notion of Ulam-Hyers stability. Section 3 is devoted to the first result of this paper: we investigate the Ulam-Hyers and Ulam-Hyers-Rassias stabilities on the interval $[0,T]$ and discuss some particular cases. In section 4, we discuss the Ulam-Hyers and Ulam-Hyers-Rassias stabilities on the interval $[0,\infty)$. Concluding remarks close the paper. \section{Preliminaries} In this section, we introduce some important definitions and results in order to assist the development of this paper. Let $T > 0$ be a given positive real number.
The weighted space of continuous functions on $I'=(0,T]$ is given by \cite{sousa21} \begin{equation*} C_{1-\gamma}(I,\Omega)= \left\{ \xi \in C(I',\Omega), \, t^{1-\gamma} \xi(t) \in C(I,\Omega) \right\} \end{equation*} where $0 < \gamma \leq 1$, with norm \begin{equation*} \begin{array}{rll} ||\xi||_{C_{1-\gamma}} & = & \displaystyle \sup_{t \in I} ||t^{1-\gamma}\xi(t)|| \end{array} \end{equation*} and \begin{equation*} \begin{array}{rll} ||\xi- \phi||_{C_{1-\gamma}} & = & {\rm{d}}_{1-\gamma} (\xi,\phi) :=\displaystyle \sup_{t \in I} ||t^{1-\gamma}(\xi(t)-\phi(t))|| \cdot \end{array} \end{equation*} Let $\left( a,b\right)$ $\left( -\infty \leq a<b\leq \infty \right)$ be a finite (or infinite) interval of the real line $\mathbb{R}$ and let $\alpha >0$. Also let $\psi \left( x\right)$ be an increasing and positive monotone function on $\left( a,b\right]$, having a continuous derivative $\psi ^{\prime }\left( x\right)$ {\rm{(we denote the first derivative by $\dfrac{d}{dx}\psi(x)=\psi'(x)$)}} on $\left( a,b\right)$. The left-sided fractional integral of a function $f$ with respect to a function $\psi$ on $\left[ a,b\right]$ is defined by \cite{ZE1,sousa21} \begin{equation}\label{eq7} \mathcal{I}_{a+}^{\alpha ;\psi }f\left( x\right) =\frac{1}{\Gamma \left( \alpha \right) }\int_{a}^{x}\psi ^{\prime }\left( s\right) \left( \psi \left( x\right) -\psi \left( s\right) \right) ^{\alpha -1}f\left( s\right) ds. \end{equation} On the other hand, let $n-1<\alpha <n$ with $n\in \mathbb{N}$, let $J=\left[ a,b\right]$ be an interval such that $-\infty \leq a<b\leq \infty$ and let $f,\psi \in C^{n}\left[ a,b\right]$ be two functions such that $\psi$ is increasing and $\psi ^{\prime }\left( x\right) \neq 0$, for all $x\in J$. The left-sided $\psi$-Hilfer fractional derivative $^{H}\mathbb{D}_{a+}^{\alpha ,\beta ;\psi }\left( \cdot \right)$ of a function $f$ of order $\alpha$ and type $0\leq \beta \leq 1$, is defined by \cite{ZE1,ZE23} \begin{equation}\label{eq8} ^{H}\mathbb{D}_{a+}^{\alpha ,\beta ;\psi }f\left( x\right) =\mathcal{I}_{a+}^{\beta \left( n-\alpha \right) ;\psi }\left( \frac{1}{\psi ^{\prime }\left( x\right) }\frac{d}{dx}\right) ^{n}\mathcal{I}_{a+}^{\left( 1-\beta \right) \left( n-\alpha \right) ;\psi }f\left( x\right) . \end{equation} Let $(\Omega,||\cdot||)$ be a given Banach space, let $I=[0,+\infty)$ or $I=[0,T]$ with $T>0$, and let $\mathscr{L}(\Omega)$ denote the set of bounded linear maps from $\Omega$ to $\Omega$. Next, we present the definition of the $(\alpha,\beta)$-resolvent operator, which is fundamental in the presentation of the mild solution of the fractional abstract Cauchy problem, Eq.(\ref{CP}). \begin{definition} {\rm \cite{chen}} Let $\alpha > 0$ and $\beta \geq 0$.
A function $\mathbb{S}_{\alpha,\beta} : \mathbb{R}_{+} \to \mathscr{L}(\Omega)$ is called a $\beta$-times integrated $\alpha$-resolvent operator function or an $(\alpha,\beta)$-resolvent operator function {\rm{(ROF)}} if the following conditions are satisfied: \begin{tabular}{cl} {\rm{(A)}} & $\mathbb{S}_{\alpha,\beta}(\cdot)$ is strongly continuous on $\mathbb{R}_{+}$ and $\mathbb{S}_{\alpha,\beta}(0)=g_{\beta+1}(0)I$;\\ {\rm{(B)}} & $\mathbb{S}_{\alpha,\beta}(s) \mathbb{S}_{\alpha,\beta}(t)= \mathbb{S}_{\alpha,\beta}(t) \mathbb{S}_{\alpha,\beta}(s)$ for all $t,s \geq 0$;\\ {\rm{(C)}} & the functional equation \\ & $ \mathbb{S}_{\alpha,\beta}(s) I_t^{\alpha} \mathbb{S}_{\alpha,\beta}(t) - I_s^{\alpha} \mathbb{S}_{\alpha,\beta}(s)\mathbb{S}_{\alpha,\beta}(t) $ $=g_{\beta+1}(s) I_t^{\alpha} \mathbb{S}_{\alpha,\beta}(t) - g_{\beta+1}(t) I_s^{\alpha} \mathbb{S}_{\alpha,\beta}(s)$ \\ & holds for all $t,s \geq 0$. \end{tabular} \medskip \end{definition} The generator $\mathcal{A}$ of $\mathbb{S}_{\alpha,\beta}$ is defined by \begin{equation} D(\mathcal{A}):= \left\{x \in \Omega: \lim_{t \to 0^{+}} \frac{\mathbb{S}_{\alpha,\beta}(t)\, x - g_{\beta+1}(t)\, x}{g_{\alpha+\beta+1}(t)} \,\, {\rm{exists}} \right\} \end{equation} and \begin{equation} \mathcal{A}\,x := \lim_{t \to 0^{+}} \frac{\mathbb{S}_{\alpha,\beta}(t)\, x - g_{\beta+1}(t)\, x}{g_{\alpha+\beta+1}(t)}\, , \quad x \in D(\mathcal{A}), \end{equation} where $g_{\alpha+\beta+1}(t):= \dfrac{t^{\alpha+\beta}}{\Gamma(\alpha+\beta+1)}$ ($\alpha+\beta>0$). An $(\alpha,\beta)$-ROF $\mathbb{S}_{\alpha,\beta}$ is said to be exponentially bounded if there exist constants $\delta \geq 1$, $w \geq 0$ such that $||\mathbb{T}_{\alpha}(t)|| \leq \delta\, e^{wt}$ and $||\mathbb{S}_{\alpha,\beta}(t)|| \leq \delta\, e^{wt}, ~t \geq 0$. Now, we consider a given continuous function $\mathcal{H}: I \times \Omega \rightarrow \Omega$ such that, for almost all $t \in I$, we have \begin{equation}\label{eq2} ||\mathcal{H}(t,x) - \mathcal{H}(t,y)|| \leq \ell (t) ||x-y||_{C_{1-\gamma}} \, , ~ x,y \in \Omega \end{equation} where $\ell:[0,T] \to \mathbb{R}^{+}$ and $u:[0,T] \to \mathbb{R}$ are two given measurable functions such that $\ell,u$ and $\ell u$ are locally integrable on $I$. The following is the definition of the Mainardi function (a function of Wright type), which is fundamental for the mild solution of Eq.(\ref{CP}).
The Mainardi function, denoted by $M_{\alpha}(\theta)$, is defined by \cite{sousa20,gu} \begin{equation*} M_{\alpha}(\theta) = \sum_{n=1}^{\infty} \frac{(-\theta)^{n-1}}{(n-1)!\Gamma(1-\alpha n)}\, , \quad 0 < \alpha < 1, \quad \theta \in \mathbb{C} \end{equation*} satisfying the relation \begin{equation*} \int_0^{\infty} \theta^{\overline{\delta}} M_{\alpha} (\theta) \, {\rm{d}}\theta = \frac{\Gamma(1+\overline{\delta})}{\Gamma(1+\alpha \overline{\delta})}\, , \quad {\rm{for}} \,\, \overline{\delta} \geq 0 \cdot \end{equation*} \begin{lemma} {\rm \cite{sousa20,gu}} The fractional nonlinear differential equation, {\rm{Eq.(\ref{CP})}}, is equivalent to the integral equation \begin{equation}\label{EI} \xi(t) = \frac{t^{\gamma-1}}{\Gamma(\gamma)}\xi(0) + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha -1} \left[\mathcal{A} \xi(s) + u(s) \, \mathcal{H}(s,\xi(s)) \right]\, {\rm{d}}s\, , \,\, t \in [0,T] \cdot \end{equation} A function $\xi \in C_{1-\gamma}(I,\Omega)$ satisfying the integral equation, {\rm{Eq.(\ref{EI})}}, is called a mild solution of {\rm{Eq.(\ref{CP})}}, and it is given by \begin{equation}\label{eq4} \xi(t) = \mathbb{S}_{\alpha,\beta}(t) \xi(0) + \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \mathcal{H}(s,\xi(s))\, {\rm{d}}s\, , \quad t \in I \end{equation} where $\displaystyle \mathbb{T}_{\alpha}(t) = t^{\alpha -1} G_{\alpha}(t)$, $\displaystyle G_{\alpha}(t) = \int_0^{\infty} \alpha \theta M_{\alpha}(\theta) \mathbb{S}(t^{\alpha} \theta)\, {\rm{d}}\theta$ and $\mathbb{S}_{\alpha,\beta}(t) = \mathcal{I}_{0^{+}}^{\beta(1 -\alpha)} \mathbb{T}_{\alpha}(t)$. \end{lemma} For a given $\xi_0 \in \Omega$ and any $\xi \in C_{1-\gamma}(I,\Omega)$, we set \begin{equation}\label{eq5} \Lambda (\xi)(t):= \mathbb{S}_{\alpha,\beta}(t) \xi_0 + \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \mathcal{H}(s,\xi(s))\, {\rm{d}}s \end{equation} for all $t \in I$. Here, as above, $\ell,u$ are measurable functions such that $\ell,u$ and the product $\ell u$ are locally integrable. Moreover, it is easy to see that the map $\xi \to \Lambda(\xi)$ is a self-mapping of the space $C_{1-\gamma}(I,\Omega)$. For $\xi_{0}\in\Omega$ and $\varepsilon>0$, we consider \begin{equation}\label{2.1} \xi(t) = \Lambda (\xi(t)), \quad t \in I \end{equation} and the following inequalities \begin{equation}\label{2.2} ||t^{1-\gamma}\left( \xi(t) - \Lambda(\xi(t))\right) || \leq \varepsilon \, , \quad t \in I \end{equation} and \begin{equation}\label{2.3} ||t^{1-\gamma}\left( \xi(t) - \Lambda(\xi(t))\right) || \leq G(t) \, , \quad t \in I, \end{equation} where $\xi\in C_{1-\gamma}(I,\Omega)$ and $G \in C(I,(0,+\infty))$. The following are the definitions of the main notions to be investigated in this paper. Following the methodology of \cite{akkouchi}, the definitions are adapted to the setting of fractional differential equations. Then we have: \begin{definition}{\rm \cite{akkouchi,sousaulam}} The {\rm{Eq.(\ref{2.1})}} is Ulam-Hyers stable if there exists a real number $ c > 0$ such that for each $\varepsilon > 0$ and for each solution $\xi \in C_{1-\gamma}(I,\Omega)$ of the inequality {\rm{(\ref{2.2})}} there exists a solution $ v \in C_{1-\gamma}(I,\Omega)$ of {\rm{Eq.(\ref{2.1})}} such that \begin{equation} ||t^{1-\gamma}\left(\xi(t) - v(t)\right) || \leq c\, \varepsilon \, , \quad t \in I .
\end{equation} \end{definition} \begin{definition}{\rm \cite{akkouchi,sousaulam}} The {\rm{Eq.(\ref{2.1})}} is generalized Ulam-Hyers stable if there exists $\theta \in C_{1-\gamma}([0,+\infty), [0,+\infty))$, $\theta(0)=0$, such that for each $\varepsilon > 0$ and for each solution $ \xi \in C_{1-\gamma}(I,\Omega)$ of the inequality {\rm{(\ref{2.2})}} there exists a solution $ v \in C_{1-\gamma}(I,\Omega)$ of {\rm{Eq.(\ref{2.1})}} such that \begin{equation} ||t^{1-\gamma}\left(\xi(t) - v(t)\right) || \leq \theta(\varepsilon) \, , \quad t \in I . \end{equation} \end{definition} \begin{definition}{\rm \cite{akkouchi,sousaulam}} The {\rm{Eq.(\ref{2.1})}} is generalized Ulam-Hyers-Rassias stable with respect to $G \in C_{1-\gamma}([0,+\infty),[0,+\infty))$, if there exists $c_{G} > 0$ such that for each solution $\xi \in C_{1-\gamma}(I,\Omega)$ of the inequality {\rm{(\ref{2.3})}} there exists a solution $ v \in C_{1-\gamma}(I,\Omega)$ of {\rm{Eq.(\ref{2.1})}} such that \begin{equation} ||t^{1-\gamma}\left(\xi(t) - v(t)\right) ||\leq c_G G(t) \, , \quad t \in I \cdot \end{equation} \end{definition} \section{Ulam-Hyers and Ulam-Hyers-Rassias stabilities of the mild solution on $[0,T]$} In this section, we investigate the first of the main results of this paper, i.e., the Ulam-Hyers and Ulam-Hyers-Rassias stabilities of Eq.(\ref{2.1}) on the interval $[0,T]$, using Banach's fixed point theorem. Let $\left(\mathbb{S}_{\alpha,\beta}(t) \right)_{t \geq 0}$ be the $(\alpha,\beta)$-resolvent operator function on a Banach space $(\Omega,||\cdot||_{C_{1-\gamma}})$ and, for fixed $\xi_0 \in \Omega$ and any continuous function $\xi:[0,T] \to \Omega$, consider the map $\Lambda(\xi)$ given by \begin{equation}\label{eq12} \Lambda(\xi)(t):= \mathbb{S}_{\alpha,\beta}(t) \xi_0 + \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \mathcal{H}(s,\xi(s))\, {\rm{d}}s \, , \quad t \in [0,T] \end{equation} Then, we have the following theorem which, under certain conditions, guarantees the Ulam-Hyers stability of Eq.(\ref{2.1}) on the finite interval $[0,T]$. \begin{theorem} Let $\left(\mathbb{S}_{\alpha,\beta}(t) \right)_{t \geq 0}$ be the $(\alpha,\beta)$-resolvent operator function on a Banach space $(\Omega,||\cdot||_{C_{1-\gamma}})$, with $0 \leq \gamma \leq 1$, and let $T > 0$ be a positive real number. We set \begin{equation} \widetilde{\lambda}:= \delta \, T ^{1-\gamma} \int_0^T e^{w(T-s)} |u(s)| \ell(s) \, {\rm{d}}s \cdot \end{equation} If $\widetilde{\lambda} < 1$, then the {\rm{Eq.(\ref{2.1})}} is stable in the Ulam-Hyers sense. \end{theorem} \begin{proof} Suppose that $\widetilde{\lambda} < 1$ and let $\varepsilon > 0$ be given. For $\phi,\xi \in C_{1-\gamma}(I,\Omega)$, we have \begin{equation*} \begin{array}{rll} ||(\Lambda \phi)(t) - (\Lambda \xi)(t)|| & = & \displaystyle \left|\left|\mathbb{S}_{\alpha,\beta}(t) \xi_0 + \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \mathcal{H}(s,\phi(s))\, {\rm{d}}s \right.\right.\\ & - & \displaystyle \left.\left.
\mathbb{S}_{\alpha,\beta}(t) \xi_0 - \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \mathcal{H}(s,\xi(s))\, {\rm{d}}s \right|\right|\\ & = & \displaystyle \left|\left| \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \left( \mathcal{H}(s,\phi(s)) - \mathcal{H}(s,\xi(s)) \right) \, {\rm{d}}s \right|\right|\\ & \leq & \displaystyle \int_0^t ||\mathbb{T}_{\alpha}(t-s)|| |u(s)| ||\mathcal{H}(s,\phi(s)) - \mathcal{H}(s,\xi(s)) || \, {\rm{d}}s\\ & \leq & \displaystyle \int_0^t ||\mathbb{T}_{\alpha}(t-s)|| |u(s)|\, \ell(s)\, ||\phi - \xi ||_{C_{1-\gamma}} \, {\rm{d}}s\\ & \leq & \displaystyle \delta \int_0^T \,e^{w(T-s)} |u(s)|\, \ell(s)\, {\rm{d}}s \, ||\phi - \xi ||_{C_{1-\gamma}} \, , \quad t \in [0,T] \cdot \end{array} \end{equation*} Therefore \begin{align*} ||(\Lambda \phi) - (\Lambda \xi)||_{C_{1-\gamma}} &= \sup_{t \in I}||t^{1-\gamma}\left((\Lambda \phi)(t) - (\Lambda \xi)(t)\right) ||\\ & \leq \left( \displaystyle \delta \, T ^ {1-\gamma}\int_0^T \,e^{w(T-s)} |u(s)|\, \ell(s)\, {\rm{d}}s\right) \, ||\phi - \xi ||_{C_{1-\gamma}}. \end{align*} So, we get \begin{equation*} {\rm{d}}_{1-\gamma}(\Lambda \phi, \Lambda \xi) \leq \widetilde{\lambda} \, {\rm{d}}_{1-\gamma}(\phi,\xi). \end{equation*} Since $\widetilde{\lambda}<1$, $\Lambda$ is a contraction. On the other hand, consider $\theta,\phi \in C_{1-\gamma}(I,\Omega)$ such that \begin{equation*} {\rm{d}}_{1-\gamma}(\Lambda \theta, \theta) \leq \varepsilon \, \end{equation*} and \begin{equation*} {\rm{d}}_{1-\gamma}(\theta,\phi) \leq \frac{\varepsilon}{1-\widetilde{\lambda}}. \end{equation*} Then, we obtain \begin{equation*} \begin{array}{rll} {\rm{d}}_{1-\gamma} (\theta,\Lambda \phi) & \leq & {\rm{d}}_{1-\gamma}(\theta, \Lambda \theta) + {\rm{d}}_{1-\gamma}(\Lambda \theta, \Lambda \phi)\\ & \leq & \displaystyle \varepsilon + \frac{\widetilde{\lambda}\varepsilon}{1-\widetilde{\lambda}} \leq \frac{\varepsilon}{1-\widetilde{\lambda}} \cdot \end{array} \end{equation*} In this sense, we have that the closed ball $\displaystyle \overline{B}_{C_{1-\gamma}} \left(\theta, \frac{\varepsilon}{1-\widetilde{\lambda}} \right)$ of the Banach space $C_{1-\gamma}(I,\Omega)$ is invariant under the map $\Lambda$, i.e. \begin{equation*} \Lambda \left( \overline{B}_{C_{1-\gamma}} \left(\theta, \frac{\varepsilon}{1-\widetilde{\lambda}} \right) \right) \subset \overline{B}_{C_{1-\gamma}} \left(\theta, \frac{\varepsilon}{1-\widetilde{\lambda}} \right) \cdot \end{equation*} Then, applying the Banach fixed-point theorem to $\Lambda$ acting on $\overline{B}_{C_{1-\gamma}} \left(\theta, \dfrac{\varepsilon}{1-\widetilde{\lambda}} \right)$, we conclude that there is exactly one element $\xi \in \overline{B}_{C_{1-\gamma}} \left(\theta, \dfrac{\varepsilon}{1-\widetilde{\lambda}} \right)$ such that $\xi=\Lambda (\xi)$. Thus $\xi$ is a solution of {\rm Eq.(\ref{2.1})}, which satisfies \begin{equation*} d_{1-\gamma}(\theta,\xi) \leq \frac{\varepsilon}{1-\widetilde{\lambda}}, \end{equation*} which gives \begin{equation*} ||t^{1-\gamma}\left(\theta(t) - \xi(t)\right) || \leq c \, \varepsilon \, , \quad t \in [0,T] \end{equation*} where $c:=1/(1-\widetilde{\lambda})$. Thus, we conclude that the integral equation {\rm{Eq.(\ref{2.1})}} is stable in the Ulam-Hyers sense. \end{proof} Next, we investigate the Ulam-Hyers-Rassias stability, completing the first purpose of this paper. \begin{theorem} Let $(\Omega,||\cdot||)$ be a Banach space and let $(\mathbb{S}_{\alpha,\beta}(t))_{t \geq 0}$ be an $(\alpha,\beta)$-resolvent operator function on $\Omega$.
Let $\delta \geq 1$, $w \geq 0$ be constants such that \begin{equation}\label{eq14} ||\mathbb{S}_{\alpha,\beta}(t)|| \leq \delta\, e^{wt} \quad \text{and} \quad ||\mathbb{T}_{\alpha}(t)||\leq \delta\, e^{wt} \end{equation} for all $t \geq 0$. Let $\xi_0 \in\Omega$, $T > 0$ and $G:[0,T] \to (0,\infty)$ be a continuous function. Suppose that a continuous function $f:[0,T] \to \Omega$ satisfies \begin{equation}\label{4.1} \left|\left| t^{1-\gamma}(f(t) - \mathbb{S}_{\alpha,\beta}(t) \xi_0 - \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \mathcal{H}(s,f(s)) \, {\rm{d}}s )\right|\right| \leq G(t) \end{equation} for all $t \in [0,T]$. Suppose that there exists a positive constant ${\sf \rho}$ such that \begin{equation}\label{4.2} \ell(s) |u(s)| e^{w(T-s)} \leq {\sf \rho} \end{equation} for almost all $s \in [0,T]$. Then, there exist a constant $C_{G} > 0$ and a unique continuous function $v:[0,T] \rightarrow \Omega$ such that \begin{equation}\label{4.3} v(t) = \mathbb{S}_{\alpha,\beta}(t) \xi_0 + \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \mathcal{H}(s,v(s))\, {\rm{d}}s \, , \quad t \in [0,T] \end{equation} and \begin{equation}\label{4.4} ||t^{1-\gamma}\left(f(t) - v(t)\right)|| \leq C_{G} G(t) \, , \quad t \in [0,T] \, \cdot \end{equation} \end{theorem} \begin{proof} Consider a constant $K > 0$ such that $\delta \, \rho\, K\, T^{1-\gamma}<1$ and choose a continuous function $\phi:[0,T] \to (0,\infty)$ such that \begin{equation}\label{4.6} \int_0^t \phi(s) \, {\rm{d}}s \leq K\, \phi(t)\, , \quad t \in [0,T]. \end{equation} Now let $f$, $G$ satisfy the inequality {\rm{(\ref{4.1})}} and let $\widetilde{\alpha}_G,~\widetilde{\beta}_G>0$ be such that \begin{equation}\label{4.7} \widetilde{\alpha}_G \phi(t) \leq G(t) \leq \widetilde{\beta}_G \phi(t) \, , \quad t \in [0,T] \, \cdot \end{equation} On the other hand, for all $h,g \in C_{1-\gamma}(I,\Omega)$, we define \begin{equation*} {\rm{d}}_{\phi,1-\gamma}(h,g) : = {\rm{inf}}\left\{C \in [0,\infty): ||t^{1-\gamma}\left(h(t) - g(t)\right) || \leq C \phi(t) \, , \quad t \in [0,T] \right\}\, \cdot \end{equation*} It is easy to see that ${\rm{d}}_{\phi,1-\gamma}$ is a metric and that $(C_{1-\gamma}(I,\Omega),{\rm{d}}_{\phi,1-\gamma})$ is a complete metric space. Now, consider the operator $\Lambda : C_{1-\gamma}(I,\Omega) \to C_{1-\gamma}(I,\Omega)$ defined by \begin{equation*} (\Lambda h)(t):= \mathbb{S}_{\alpha,\beta}(t) \xi_0 + \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \mathcal{H}(s,h(s))\, {\rm{d}}s\, , \quad t \in [0,T] \cdot \end{equation*} The next step is to show that $\Lambda$ is a contraction on the metric space $C_{1-\gamma}(I,\Omega)$ endowed with the metric ${\rm{d}}_{\phi,1-\gamma}$.
Then, let $h,g \in C_{1-\gamma}(I,\Omega)$ and let $C(h,g) \in [0,\infty)$ be a constant such that \begin{equation*} ||t^{1-\gamma}\left(h(t) - g(t)\right)|| \leq C(h,g) \phi(t) \, , \quad t \in [0,T]\, \cdot \end{equation*} Then, using {\rm{Eq.(\ref{eq14})}}, {\rm{Eq.(\ref{4.2})}} and {\rm{Eq.(\ref{4.6})}}, we obtain \begin{equation*} \begin{array}{rll} ||(\Lambda h)(t) - (\Lambda g)(t)|| & = & \displaystyle \left|\left| \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \left( \mathcal{H}(s,h(s)) - \mathcal{H}(s,g(s)) \right)\, {\rm{d}}s \right|\right|\\ & \leq & \displaystyle \int_0^t ||\mathbb{T}_{\alpha}(t-s)|| |u(s)| ||\mathcal{H}(s,h(s)) - \mathcal{H}(s,g(s))|| \, {\rm{d}}s \\ &\leq & \displaystyle \int_0^t \delta\, e^{w(t-s)} |u(s)| \ell(s) ||h-g||_{C_{1-\gamma}} \, {\rm{d}}s \\ & \leq & \displaystyle \delta C(h,g) \int_0^t e^{w(t-s)} \phi(s) |u(s)| \ell(s) \, {\rm{d}}s \\ & \leq & \displaystyle \delta C(h,g) \rho \int_0^t \phi(s) \, {\rm{d}}s \\ & \leq & T^{1-\gamma}\delta C(h,g)\rho K \phi(t) \, , \quad t \in [0,T] \cdot \end{array} \end{equation*} Therefore, we have $d_{\phi,1-\gamma}(\Lambda(h),\Lambda(g)) \leq \delta \, \rho \, K \,T ^ {1-\gamma}\, C(h,g)$, from which we deduce that \begin{equation*} d_{\phi,1-\gamma}(\Lambda(h),\Lambda(g)) \leq \delta \, \rho \, K \,T ^ {1-\gamma}\, d_{\phi,1-\gamma}(h,g) \cdot \end{equation*} Using the fact that $\delta \, \rho \, K \,T ^ {1-\gamma}< 1$, we conclude that $\Lambda$ is a contraction on $(C_{1-\gamma}(I,\Omega),d_{\phi,1-\gamma})$. Hence, by Banach's fixed point theorem, there is a unique function $v \in C_{1-\gamma}(I,\Omega)$ such that $v=\Lambda(v)$. Now, by the triangle inequality, we get \begin{equation*} \begin{array}{rll} d_{\phi,1-\gamma}(f,v) & \leq & d_{\phi,1-\gamma}(f,\Lambda(f)) + d_{\phi,1-\gamma}(\Lambda(f),\Lambda(v))\\ & \leq & \widetilde{\beta}_{G} + \delta \, \rho \, K \,T ^ {1-\gamma}\, d_{\phi,1-\gamma}(f,v) \end{array} \end{equation*} which implies that \begin{equation} d_{\phi,1-\gamma}(f,v) \leq \frac{\widetilde{\beta}_G}{1-\delta \, \rho \, K \,T ^ {1-\gamma}}. \end{equation} This, in turn, gives \begin{equation}\label{eq21} \begin{array}{rll} ||t^{1-\gamma}\left(f(t) - v(t)\right) ||& \leq & \displaystyle \frac{\widetilde{\beta}_G}{1-\delta \, \rho \, K \,T ^ {1-\gamma}} \phi(t)\\ &&\\ & \leq & \displaystyle \frac{\widetilde{\beta}_G}{1-\delta \, \rho \, K \,T ^ {1-\gamma}} \frac{G(t)}{\widetilde{\alpha}_G} = C_G G(t) \, , \quad t \in [0,T] \end{array} \end{equation} where $C_G:= \displaystyle \frac{\widetilde{\beta}_G}{(1-\delta \, \rho \, K \,T ^ {1-\gamma}) \, \widetilde{\alpha}_G},$ which is the desired inequality {\rm{(\ref{4.4})}}. \end{proof} \begin{rem} From {\rm Theorem 1} and {\rm Theorem 2}, we obtain some particular cases by taking the limits $\beta \rightarrow 1$ and $\beta \rightarrow 0$; we also obtain the integer-order case when $\alpha = 1$. So we have the following versions: {\rm (1)} Taking $\beta \rightarrow 0$ in {\rm Eq.(6)}, we have as a particular case the version of {\rm Theorem 1} for the Riemann-Liouville fractional derivative, given by: \begin{theorem} Let $\left( \mathbb{S}_{\alpha ,0}\left( t\right) \right) _{t\geq 0}$ be the $\left( \alpha ,0\right)$-resolvent operator function on a Banach space $\left( \Omega ,\left\Vert {\cdot}\right\Vert \right)$ and let $T>0$ be a positive real number. We set \begin{equation*} \widetilde{\lambda }:=\delta \, T^{1-\gamma} \int_{0}^{T}e^{\omega \left( T-s\right) }\left\vert u\left( s\right) \right\vert \ell \left( s\right) ds.
\end{equation*} If $\widetilde{\lambda }<1$, then {\rm Eq.(6)} is stable in the Ulam-Hyers sense. \end{theorem} {\rm (2)} Taking the limit $\beta \rightarrow 1$ in {\rm Eq.(6)}, we have the version of {\rm Theorem 1} for the Caputo fractional derivative, ensuring that {\rm Eq.(6)} is Ulam-Hyers stable. {\rm (3)} Taking the limit $\beta \rightarrow 1$ in {\rm Eq.(6)}, we have as a particular case the version of {\rm Theorem 2} for the Caputo fractional derivative, given by the following theorem (Ulam-Hyers-Rassias): \begin{theorem} Let $\left( \Omega ,\left\Vert {\cdot}\right\Vert \right)$ be a Banach space and $\left( \mathbb{S}_{\alpha ,1}\left( t\right) \right) _{t\geq 0}$ be the $\left( \alpha ,1\right)$-resolvent operator function on $\Omega$. Let $\delta \geq 1$, $\omega \geq 0$ be constants such that $\left\Vert \mathbb{S}_{\alpha ,1}\left( t\right) \right\Vert \leq \delta e^{\omega t}$ and $\left\Vert \mathbb{T}_{\alpha }\left( t\right) \right\Vert \leq \delta e^{\omega t}$ for all $ t\geq 0$. Let $\xi _{0}\in \Omega$ be fixed, $T>0$ and $G:\left[ 0,T\right] \rightarrow \left( 0,\infty \right)$ be a continuous function. Suppose that a continuous function $f:\left[ 0,T\right] \rightarrow \Omega$ satisfies \begin{equation*} \left\Vert t^{1-\gamma}(f\left( t\right) -\mathbb{S}_{\alpha ,1}\left( t\right) \xi _{0}-\int_{0}^{t}\mathbb{T}_{\alpha }\left( t-s\right) u\left( s\right) H\left( s,f\left( s\right) \right) ds)\right\Vert \leq G\left( t\right) \end{equation*} for all $t\in \left[ 0,T\right]$. Suppose that there exists a positive constant $\rho$ such that \begin{equation*} \ell \left( s\right) \left\vert u\left( s\right) \right\vert e^{\omega \left( T-s\right) }\leq \rho \end{equation*} for almost all $s\in \left[ 0,T\right]$. Then, there exist a constant $C_{G}>0$ and a unique continuous function $v:\left[ 0,T\right] \rightarrow \Omega$ such that \begin{equation*} v\left( t\right) =\mathbb{S}_{\alpha ,1}\left( t\right) \xi _{0}+\int_{0}^{t}\mathbb{T}_{\alpha }\left( t-s\right) u\left( s\right) H\left( s,v\left( s\right) \right) ds,\text{ }t\in \left[ 0,T\right] \end{equation*} and \begin{equation*} \left\Vert t^{1-\gamma}(f\left( t\right) -v\left( t\right)) \right\Vert \leq C_{G}G\left(t\right) ,\text{ }\forall t\in \left[ 0,T\right] . \end{equation*} \end{theorem} {\rm (4)} Taking the limit $\beta \rightarrow 1$ or $\beta \rightarrow 0$ and choosing $\alpha =1$, we obtain the versions of {\rm Theorem 1} and {\rm Theorem 2} for the integer case. \end{rem} \section{Ulam-Hyers and Ulam-Hyers-Rassias stabilities of the mild solution on $[0,+\infty)$} As in section 3, we investigate the Ulam-Hyers and Ulam-Hyers-Rassias stabilities, now on the interval $[0,+\infty)$, with the same assumption on the function $\mathcal{H}$. We start with the following theorem: \begin{theorem}Let $\xi_0 \in \Omega$ be fixed and let $\varepsilon > 0$ be a given positive number. Suppose that a continuous function $f:[0,+\infty) \to \Omega$ satisfies \begin{equation}\label{5.1} \left|\left|t^{1-\gamma}( f(t) - \mathbb{S}_{\alpha,\beta}(t) \xi_0 - \int_0^t \mathbb{T}_{\alpha}(t-s) u(s) \mathcal{H}(s,f(s)) \, {\rm{d}}s) \right|\right| \leq \varepsilon \end{equation} for all $t \in [0,+\infty)$. Suppose that \begin{equation}\label{5.2} \widetilde{\lambda}_{\alpha,1-\gamma} = \displaystyle \sup_{t \geq 0}\, t^{1-\gamma} \,\int_0^t \ell(s) |u(s)| ||\mathbb{T}_{\alpha}(t-s)|| \, {\rm{d}}s < 1 \end{equation} with $0 < \alpha \leq 1$ and $0 \leq \gamma \leq 1$.
Then, there exists a unique continuous function $v:[0,+\infty) \to \Omega$ such that \begin{equation}\label{5.3} v(t) = \mathbb{S}_{\alpha,\beta}(t) \xi_0 + \int_0^t u(s) \mathbb{T}_{\alpha}(t-s) \mathcal{H}(s,v(s))\, {\rm{d}}s \, , \quad t \in [0,+\infty) \end{equation} and \begin{equation}\label{5.4} ||t^{1-\gamma}\left(f(t) - v(t)\right) || \leq \frac{\varepsilon}{1-\widetilde{\lambda}_{\alpha,1-\gamma}} \, , \quad t \in [0,+\infty) \, \cdot \end{equation} \end{theorem} \begin{proof} Assume that $\widetilde{\lambda}_{\alpha,1-\gamma} < 1$, let $\varepsilon > 0$ be given and let $f \in C_{1-\gamma}([0,+\infty),\Omega)$ satisfy the inequality {\rm{(\ref{5.1})}}. On the other hand, we consider the set $\widetilde{\cal E}_{f,1-\gamma}$ given by \begin{equation} \widetilde{\cal E}_{f,1-\gamma}:= \left\{ g \in C_{1-\gamma}([0,+\infty),\Omega); \,\, \displaystyle \sup_{t \geq 0} ||t^{1-\gamma}\left(g(t) - f(t)\right) || < + \infty \right\}. \end{equation} The set $\widetilde{\cal E}_{f,1-\gamma}$ is not empty, because it contains $f$ and $\Lambda(f)$. Now, for functions $h,g \in \widetilde{\cal E}_{f,1-\gamma}$, we define \begin{equation*} d_{1-\gamma}(h,g):= \displaystyle \sup_{t \geq 0} ||t^{1-\gamma}\left(h(t) - g(t)\right) ||. \end{equation*} Then, $d_{1-\gamma}$ is a distance and the metric space $(\widetilde{\cal E}_{f,1-\gamma},d_{1-\gamma})$ is complete. For any functions $h,g \in \widetilde{\cal E}_{f,1-\gamma}$, we get \begin{equation*} \begin{array}{rll} ||(\Lambda h)(t) - (\Lambda g)(t)|| & = & \displaystyle \left|\left| \int_0^t u(s) \mathbb{T}_{\alpha}(t-s) \left[ \mathcal{H}(s,h(s)) - \mathcal{H}(s,g(s)) \right]\, {\rm{d}}s \right|\right|\\ & \leq & \displaystyle \int_0^t ||\mathbb{T}_{\alpha}(t-s)|||u(s)| ||\mathcal{H}(s,h(s)) - \mathcal{H}(s,g(s))|| \, {\rm{d}}s \\ & \leq & \left( \displaystyle \int_0^t ||\mathbb{T}_{\alpha}(t-s)|| |u(s)| \ell(s) \, {\rm{d}}s \right) \, d_{1-\gamma}(h,g)\, , \quad t \in [0,+\infty). \end{array} \end{equation*} This gives \begin{equation*} ||t^{1-\gamma} \left( (\Lambda h)(t) - (\Lambda g)(t)\right) || \leq \widetilde{\lambda}_{\alpha,1-\gamma} d_{1-\gamma}(h,g)\, , \quad t \in [0,+\infty). \end{equation*} Therefore, we have \begin{equation*} d_{1-\gamma}(\Lambda h, \Lambda g) \leq \widetilde{\lambda}_{\alpha,1-\gamma} d_{1-\gamma}(h,g). \end{equation*} Moreover, it is easy to show that $\Lambda(h) \in \widetilde{\cal E}_{f,1-\gamma}$ for any function $h \in \widetilde{\cal E}_{f,1-\gamma}$. Thus, $\Lambda$ is a contraction on $(\widetilde{\cal E}_{f,1-\gamma},d_{1-\gamma})$. In this sense, by Banach's fixed point theorem, there is exactly one element $v \in \widetilde{\cal E}_{f,1-\gamma}$ such that $v=\Lambda(v)$. By the triangle inequality, we get \begin{equation*} \begin{array}{rll} d_{1-\gamma}(f,v) & \leq & d_{1-\gamma}(f,\Lambda(f)) + d_{1-\gamma}(\Lambda(f),\Lambda(v))\\ & \leq & \varepsilon + \widetilde{\lambda}_{\alpha,1-\gamma} d_{1-\gamma}(f,v) \end{array} \end{equation*} which implies \begin{equation*} d_{1-\gamma}(f,v) \leq \frac{\varepsilon}{1-\widetilde{\lambda}_{\alpha,1-\gamma}}, \end{equation*} that is, \begin{equation}\label{5.5} ||t^{1-\gamma}\left( f(t) - v(t)\right) ||\leq c\, \varepsilon\, , \quad t \in [0,+\infty) \end{equation} where $c:=\dfrac{1}{1-\widetilde{\lambda}_{\alpha,1-\gamma}}$. The inequality {\rm{(\ref{5.5})}} shows that the {\rm{Eq.(\ref{2.1})}} is Ulam-Hyers stable.
\end{proof} With the following result, aimed at investigating the Ulam-Hyers-Rassias stability, we complete the second main result of this paper. \begin{theorem} Let $\Omega$ be a Banach space, $(\mathbb{S}_{\alpha,\beta}(t))_{t \geq 0}$ be an $(\alpha,\beta)$-resolvent operator function on $\Omega$ and $\xi_0 \in \Omega$ be fixed. Let $K >0$ be given and $\phi:[0,+\infty) \to (0,+\infty)$ be a continuous function such that \begin{equation}\label{6.1} \int_0^t \phi(s) \, {\rm{d}}s \leq K \, \phi(t)\, , \quad t \in [0,+\infty) \cdot \end{equation} Suppose that a continuous function $f:[0,+\infty) \to \Omega$ satisfies \begin{equation}\label{6.2} \left|\left| t^{1-\gamma}(f(t) - \mathbb{S}_{\alpha,\beta}(t) \xi_0 - \int_0^t u(s) \mathbb{T}_{\alpha}(t-s) \mathcal{H}(s,f(s)) \, {\rm{d}}s) \right|\right| \leq \phi(t) \end{equation} for all $t \in [0,+\infty)$. Suppose that there exists a positive constant $\rho > 0$ such that \begin{equation}\label{6.3} \ell(s) |u(s)| ||\mathbb{T}_{\alpha}(t-s)||\leq \rho \end{equation} for almost all $(s,t)$ with $0 \leq s \leq t < +\infty$, and suppose that \begin{equation}\label{6.4} \rho \,K \, T^{1-\gamma} < 1 \cdot \end{equation} Then, there exists a unique continuous function $v:[0,+\infty) \to \Omega$ such that \begin{equation}\label{6.5} v(t) = \mathbb{S}_{\alpha,\beta}(t) \xi_0 + \int_0^t u(s) \mathbb{T}_{\alpha}(t-s) \mathcal{H}(s,v(s))\, {\rm{d}}s \, , \quad t \in [0,+\infty) \end{equation} and \begin{equation}\label{6.6} ||t^{1-\gamma}\left( f(t) - v(t)\right) || \leq \frac{1}{1-\rho\, K\, T^{1-\gamma}}\, \phi (t) \, , \quad t \in [0,+\infty) \cdot \end{equation} \end{theorem} \begin{proof} Let $f \in C_{1-\gamma}([0,+\infty),\Omega)$ satisfy the inequality {\rm{(\ref{6.2})}} and consider the following set, \begin{equation*} \widetilde{\cal E}_{f,1-\gamma}:= \left\{ g \in C_{1-\gamma}([0,+\infty),\Omega): \exists C \geq 0 : ||t^{1-\gamma}\left( g(t)-f(t)\right) || \leq C \phi(t) \, , \quad t \in [0,+\infty) \right\}. \end{equation*} The set $\widetilde{\cal E}_{f,1-\gamma}$ is not empty, because it contains $f$ and $\Lambda(f)$. Now, for $h,g \in \widetilde{\cal E}_{f,1-\gamma}$, we define \begin{equation*} d_{\phi,1-\gamma}(h,g):= \inf \left\{ C \in [0,+\infty): ||t^{1-\gamma}\left( h(t)-g(t)\right) || \leq C \phi(t)\, , \quad t \in [0,+\infty) \right\}. \end{equation*} Note that it is easy to see that $(\widetilde{\cal E}_{f,1-\gamma},d_{\phi,1-\gamma})$ is a complete metric space satisfying $ \Lambda(\widetilde{\cal E}_{f,1-\gamma}) \subset \widetilde{\cal E}_{f,1-\gamma}$, where $\Lambda : \widetilde{\cal E}_{f,1-\gamma} \rightarrow \widetilde{\cal E}_{f,1-\gamma}$ is defined by \begin{equation*} (\Lambda h)(t):= \mathbb{S}_{\alpha,\beta}(t) \xi_0 + \int_0^t u(s) \mathbb{T}_{\alpha}(t-s) \mathcal{H}(s,h(s))\, {\rm{d}}s \, , \quad t \in [0,+\infty). \end{equation*} The idea is to prove that the map $\Lambda$ is in fact a contraction on the metric space $(\widetilde{\cal E}_{f,1-\gamma},{\rm{d}}_{\phi,1-\gamma})$. Then, let $h,g \in \widetilde{\cal E}_{f,1-\gamma}$ and $C(h,g) \in [0,+\infty)$ be an arbitrary constant such that \begin{equation*} ||t^{1-\gamma}\left( h(t) - g(t)\right) || \leq C(h,g) \phi(t) \, , \quad t \in [0,+\infty).
\end{equation*} In this sense, we have the following inequality \begin{equation*} \begin{array}{rll} ||(\Lambda h)(t) - (\Lambda g)(t)||& = & \displaystyle \left|\left| \int_0^t u(s) \mathbb{T}_{\alpha}(t-s) \left( \mathcal{H}(s,h(s)) - \mathcal{H}(s,g(s)) \right)\, {\rm{d}}s \right|\right|\\ & \leq & \displaystyle \int_0^t |u(s)| ||\mathbb{T}_{\alpha}(t-s)|| ||\mathcal{H}(s,h(s)) - \mathcal{H}(s,g(s))|| \, {\rm{d}}s \\ &\leq & \displaystyle \int_0^t |u(s)| ||\mathbb{T}_{\alpha}(t-s)|| \ell(s) ||h - g||_{C_{1-\gamma}} \, {\rm{d}}s \\ & \leq & \displaystyle C(h,g) \int_0^t |u(s)| \ell(s) ||\mathbb{T}_{\alpha}(t-s)|| \phi(s) \, {\rm{d}}s \\ & \leq & \displaystyle \rho \, C(h,g) \int_0^t \phi(s) \, {\rm{d}}s \\ & \leq & \rho \, C(h,g)\, K \phi(t) \, , \quad t \in [0,+\infty) \cdot \end{array} \end{equation*} Therefore, we have ${\rm{d}}_{\phi,1-\gamma}(\Lambda(h),\Lambda(g)) \leq C(h,g) \, T^{1-\gamma} \,\rho\, K $, which implies \begin{equation*} {\rm{d}}_{\phi,1-\gamma}(\Lambda(h),\Lambda(g)) \leq T^{1-\gamma} \,\rho\, K\, {\rm{d}}_{\phi,1-\gamma}(h,g). \end{equation*} Using the fact that $T^{1-\gamma} \,\rho\, K < 1$, we conclude that $\Lambda$ is strictly contractive on $(\widetilde{\cal E}_{f,1-\gamma},{\rm{d}}_{\phi,1-\gamma})$. Thus, by the Banach fixed-point theorem, there is a unique function $v\in\widetilde{\cal E}_{f,1-\gamma}$ such that $v=\Lambda(v)$. Using the triangle inequality, we obtain \begin{equation*} \begin{array}{rll} d_{\phi,1-\gamma}(f,v) & \leq & d_{\phi,1-\gamma}(f,\Lambda(f)) + d_{\phi,1-\gamma}(\Lambda(f),\Lambda(v))\\ & \leq & 1 + T^{1-\gamma} \,\rho\, K\, d_{\phi,1-\gamma}(f,v) \end{array} \end{equation*} which implies that \begin{equation*} d_{\phi,1-\gamma}(f,v) \leq \frac{1}{1-T^{1-\gamma} \,\rho\, K}. \end{equation*} Therefore, we conclude that \begin{equation*} ||t^{1-\gamma} \left( f(t) - v(t)\right) ||\leq C_{\phi} \phi(t)\, , \quad t \in [0,+\infty) \end{equation*} where $C_{\phi}:= \dfrac{1}{1-T^{1-\gamma} \,\rho\, K}$. \end{proof} \begin{rem} In the same way that we highlighted the particular cases of {\rm Theorem 1} and {\rm Theorem 2}, the observations made in {\rm Remark 1} remain valid here. \end{rem} \section{Concluding remarks} We conclude this paper with the objectives achieved, that is, we investigated the Ulam-Hyers and Ulam-Hyers-Rassias stabilities for the mild solution of the fractional nonlinear abstract Cauchy problem: the first part was devoted to the finite interval $[0,T]$ and the second part to the infinite interval $[0,\infty)$. It is important to emphasize the fundamental role of Banach's fixed point theorem in the results obtained. Although the results presented here contribute to the growth of the theory, some questions still need to be answered. The first question concerns the possibility of investigating the existence and uniqueness of mild solutions for fractional differential equations formulated via the $\psi$-Hilfer fractional derivative. Consequently, the second concerns the corresponding Ulam-Hyers stabilities. For such an undertaking, a necessary ingredient is a Laplace transform, and an inverse Laplace transform, with respect to another function \cite{jarad2}. Another important consequence of the mild-solution approach to a fractional problem is the possibility of investigating properties of the Navier-Stokes equations \cite{neto,soj}. This path is the next step in the research being developed.
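\begin{rem} For the reader's convenience, the series defining the Mainardi function and the moment relation of Section 2 can be checked numerically. The following minimal sketch (ours, not part of the theory developed above; Python with NumPy/SciPy assumed) compares the partial sums of $M_{\alpha}(\theta)$ for $\alpha=1/2$ with the known closed form $M_{1/2}(\theta)=e^{-\theta^{2}/4}/\sqrt{\pi}$, and verifies $\int_{0}^{\infty}\theta^{\overline{\delta}} M_{\alpha}(\theta)\,{\rm{d}}\theta=\Gamma(1+\overline{\delta})/\Gamma(1+\alpha\overline{\delta})$ for a sample value of $\overline{\delta}$.
\begin{verbatim}
import numpy as np
from scipy.special import rgamma     # reciprocal gamma; equals 0 at the poles
from scipy.integrate import quad

def mainardi(theta, alpha, nmax=120):
    # Partial sum of M_alpha(theta) = sum_{n>=1} (-theta)^(n-1)/((n-1)! Gamma(1-alpha*n))
    s, c = 0.0, 1.0                  # c carries (-theta)^(n-1)/(n-1)!
    for n in range(1, nmax + 1):
        s += c * rgamma(1.0 - alpha * n)
        c *= -theta / n
    return s

alpha = 0.5
for theta in (0.5, 1.0, 2.0):        # series vs closed form for alpha = 1/2
    print(theta, mainardi(theta, alpha), np.exp(-theta**2 / 4) / np.sqrt(np.pi))

# Moment identity with delta-bar = 1.5; the tail is integrated using the
# closed form, since the alternating series loses precision for large theta.
d = 1.5
lhs, _ = quad(lambda t: t**d * np.exp(-t**2 / 4) / np.sqrt(np.pi), 0, np.inf)
print(lhs, rgamma(1 + alpha * d) / rgamma(1 + d))  # Gamma(1+d)/Gamma(1+alpha*d)
\end{verbatim}
\end{rem}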
\section*{Acknowledgment} JVCS acknowledges the financial support of a PNPD-CAPES scholarship (process nº 88882.305834/2018-01) of the Postgraduate Program in Applied Mathematics of IMECC-Unicamp. The second author acknowledges the Science and Engineering Research Board (SERB), New Delhi, India for the Research Grant (Ref: File no. EEQ/2018/000407).
\section{} \textbf{Summary}. - The paper is structured as follows. Sect.\textbf{1}. contains some passages of a recent report by Weisberg and Taylor (see $[1g)]$) regarding thirty years of observations of the binary radiopulsar B PSR1913+16. Sect.\textbf{2}.: a straightforward criticism of the relativistic approach which is employed in the papers cited in \cite{1}. Sect.\textbf{2bis}.: a possible alternative explanation of the shrinkage of the orbit of the above radiopulsar. Sect.\textbf{3}.: the linear approximation of GR is \emph{inadequate} to give an existence theorem of physical GW's. Sect.\textbf{4}.: \emph{system B PSR1913+16 cannot emit GW's}. Sect.\textbf{5}.: the analogy between Maxwell-Lorentz e.m. theory and the linearized version of GR is a \emph{false} analogy. Sect.\textbf{6}.: erroneousness of a surmise concerning the behaviour of B PSR1913+16. \vskip1.20cm \textbf{1}. - Nobody has ever found a \emph{\textbf{direct}} experimental proof of the \emph{real} existence of the gravitational waves (GW's). According to some authors \cite{1}, an \emph{\textbf{indirect}} experimental evidence could be given by the time decrease of the orbital period $P_{b}$ of the binary pulsar B PSR1913+16 \cite{1bis}. \par The abstract of paper $[1g)]$ runs as follows: ``We describe results derived from thirty years of observations of PSR B1913+16. Together with the Keplerian orbital parameters, measurements of the relativistic periastron advance and a combination of gravitational redshift and time dilation yield the stellar masses with high accuracy. The measured rate of change of orbital period agrees with that expected from the emission of gravitational radiation, according to general relativity, to within about $0.2$ percent. Systematic effects depending on the pulsar distance and on poorly known galactic constants now dominate the error budget, so tighter bounds will be difficult to obtain. $[\ldots]$.''. And in sect.\textbf{3}.\textbf{1} of the same paper $[1g)]$ the authors claim that: ``According to general relativity, a binary star system should emit energy in the form of gravitational waves. The loss of orbital energy results in shrinkage of the orbit, which is most easily observed as a decrease in orbital period. Peters and Mathews (1963) (see \cite{2}) showed that in general relativity the rate of period decrease is given by \begin{eqnarray} \label{eq:one} \dot{P}_{b,GR} & = & - \frac{192 \pi G^{5/3}}{5c^{5}} \left(\frac{P_{b}}{2 \pi} \right)^{-5/3} \left(1-e^{2}\right)^{-7/2} \times {} \nonumber\\ & & {} \left(1+\frac{73e^{2}}{24}+\frac{37e^{4}}{96}\right)m_{p}m_{c} \left(m_{p}+m_{c}\right)^{-1/3}.'' \end{eqnarray} Here: $G$ is the gravitational constant; $c$ the speed of light \emph{in vacuo}; $e$ the orbital eccentricity $(e=0.6171338(4))$; $m_{p}$ the mass of the pulsar ($m_{p}=1.4414 \pm0.0002$ solar masses), $m_{c}$ the mass of the companion ($m_{c}=1.3867 \pm0.0002$ solar masses). \par Then, Weisberg and Taylor $[1g)]$ write: ``Comparison of the measured $\dot{P}_{b}$ with the theoretical value requires a small correction, $\dot{P}_{b,Gal}$, for relative acceleration between the solar system and binary pulsar system, projected onto the line of sight (Damour and Taylor 1991) [see $[1e)]$]. This correction is applied to the measured $\dot{P}_{b}$ to form a ``corrected value'' $\dot{P}_{b, corrected}=\dot{P}_{b}-\dot{P}_{b,Gal}$.
The correction term depends on several rather poorly known quantities, including the distance $[\approx16,000$ light-years$]$ and proper motion of the pulsar and the radius of the Sun's galactic orbit. The best currently available values yield $\dot{P}_{b,Gal}=-(0.0128\pm0.0050)\times 10^{-12}$ s/s, so that $\dot{P}_{b, corrected}=-(2.4056\pm0.0051)\times 10^{-12}$ s/s. Hence \begin{equation} \label{eq:two} \frac{\dot{P}_{b, corrected}}{\dot{P}_{b,GR}} = 1.0013 \pm 0.0021 , \end{equation} and we conclude that the measured orbital decay is consistent at the $(0.13 \pm 0.21)$ \% level with the general relativistic prediction for the emission of gravitational radiation. $[\ldots]$.'' \vskip0.50cm \textbf{2}. - The good agreement between the measured $\dot{P}_{b}$ and the computed $\dot{P}_{b}$ is suspect -- as I have already emphasized \cite{3} --, because the relativistic perturbative approximation, of which eq.(\ref{eq:one}) is a consequence, is quite unreliable from the point of view of the \emph{\textbf{exact}} (non-linear) formulation of GR, as was pointed out by several relativists \cite{3bis}. \par Further, I remark that in GR the hypothetic GW's do not have a \emph{true} energy. Therefore the \emph{true} mechanical energy which is lost during the orbital motion should transform itself into the \emph{pseudo} (i.e. false) energy of the hypothetic GW's: the energy balance would be violated. \emph{\textbf{Objection}}: if we \emph{suppose} that the \emph{linearized} version of GR has an unconditioned, approximate validity -- as the experimentalists, and some (simple) theoreticians \cite{4}, do -- the physical existence of GW's seems a theoretical possibility, and it seems -- by exploiting the analogy with Maxwell-Lorentz e.m. theory -- that the \emph{acceleration} of a body can generate GW's. \emph{\textbf{Answer}}: the energy-momentum of such GW's has a tensor character only under Lorentz transformations of co-ordinates, but \emph{not} under general transformations. Now, this is contrary to the basic tenet of GR. \vskip0.50cm \textbf{2bis}. - The authors of papers \cite{1} have \emph{assumed} that \emph{both} stars of the considered binary system are neutron stars, and thus act dynamically as \emph{point} masses. But if the companion star were a helium star or a white dwarf, tides and viscous actions might mimic the relativistic (such as the periastron advance) and pseudorelativistic effects. In particular, the viscous losses of the companion could give a time decrease of the pulsar revolution period of the same order of magnitude as that given by the hypothesized emission of gravitational radiation -- as is well known to many observational astrophysicists. \par Finally, the empirical success of a theory -- or of a given computation -- is not an absolute guarantee of its conceptual adequacy. Consider for instance the Ptolemaic theory of cycles and epicycles, which explained rather well the planetary orbits (with the only exception of Mercury's). \vskip0.50cm \textbf{3}. - It can be proved that the linear approximation of GR is quite \emph{inadequate} for a proper study of the hypothetic GW's, see \cite{5}, \cite{6}. And if we continue the approximation beyond the linear stage (see \cite{7}, \cite{8}), we find that the radiation terms of the gravitational field can be \emph{destroyed} by convenient co-ordinate transformations: this proves that the GW's are \emph{only a product of a special choice of the reference system}, i.e.
that they do not possess a \emph{physical} reality (see further \cite{9}, \cite{10}, \cite{11}): the undulatory solutions of Einstein field equations have a mere \emph{formal} (\emph{non}-physical) character. \vskip0.50cm \textbf{4}. - If the two stars of B PSR1913+16 are dynamically treated as two (gravitationally interacting) \emph{point} masses \cite{1}, the \emph{\textbf{exact}} formulation of GR tells us that their orbits are \emph{\textbf{geodesic}} lines \cite{9}, i.e. their motions are ``natural'', ``free'' motions, quite analogous to a rectilinear and uniform motion of a point charge in the customary Maxwell-Lorentz theory. Accordingly, no GW is sent forth by our stars! \cite{10}, \cite{11}. \par In my paper $[3 \beta)]$ I have given another elegant proof of this fact, resting on a fundamental proposition by Hermann Weyl \cite{12}, according to which for any relative motion of two bodies it is always possible (in GR) to choose a co-ordinate system in which \emph{both} bodies are \emph{at rest}. (Remark that in GR the expression \emph{at rest} must be defined precisely by specifying the \emph{spacetime manifold involved}.) \par Let us apply the above proposition to system B PSR1913+16, i.e. let us choose a co-ordinate frame for which both stars are at rest. Evidently, an observer $\Omega$ who ``resides'' in this frame does not record any emission of GW's. Now, any observer $\Omega'$ -- very far, in particular, from $\Omega$ --, for whom B PSR1913+16 is in motion, does not possess (in GR!) any physical privilege with respect to $\Omega$. Accordingly, both observers, $\Omega$ and $\Omega'$, do not register any GW sent forth by our binary system. (See Weyl \cite{13} for the Riemann-Einstein manifold of two point masses at rest.) \vskip0.50cm \textbf{5}. - The \emph{\textbf{false}} formal analogy between the e.m. Maxwell-Lorentz theory and the linearized version of GR is responsible for the publication of countless and senseless papers. In particular, it has generated the conviction that, in GR, the \emph{acceleration} of a body must give rise to GW's; many people have forgotten that, in the \emph{exact} (non-linear) formulation of GR, the concept ``acceleration'' does not possess an absolute character. (The above conviction was also extended to perturbative approximations of higher order.) \par We observe finally that the exact theory does not admit any class of physically privileged reference frames for which, in particular, the undulatory character of a given gravitational field is an invariant property. \vskip0.50cm \textbf{6}. - A last remark. Some authors have conjectured that a coexistence of effects due to emission of GW's by B PSR1913+16 and to tides and viscous actions of the companion star could be possible. \par Now, this is pure nonsense, because -- as can be proved -- even motions that are \emph{not} purely gravitational cannot generate GW's \cite{14}. \small \vskip0.5cm\par\hfill {$\Pi$$\tilde{\upsilon}$$\rho$ $\sigma$$o$$\iota$ $\pi$$\rho$$o$$\sigma$$o$$\acute{\iota}$$\sigma$$\omega$.} \vskip0.10cm\par\hfill {\emph{(I will bring fire to thee.)}} \vskip0.10cm\par\hfill {EURIPIDES, \emph{Andromache}.} \normalsize
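\vskip0.50cm \par A purely arithmetical footnote (with no bearing on the criticisms above): Eq.(\ref{eq:one}) can be evaluated numerically with the eccentricity and masses quoted in sect.\textbf{1}. The orbital period $P_{b}$ is not quoted above and is here \emph{assumed} from the literature, $P_{b}\simeq 27906.98$ s. A minimal sketch in Python:
\begin{verbatim}
# Numerical evaluation of Eq. (1) (Peters-Mathews rate), SI units.
# e, m_p, m_c as quoted in sect. 1; P_b is an assumed literature value.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
P_b = 27906.98                       # s (assumed, not stated in the text)
e = 0.6171338
m_p, m_c = 1.4414 * M_sun, 1.3867 * M_sun

enh = 1 + 73*e**2/24 + 37*e**4/96    # eccentricity enhancement factor
Pdot = (-192*math.pi*G**(5/3)/(5*c**5)
        * (P_b/(2*math.pi))**(-5/3) * (1 - e**2)**(-7/2)
        * enh * m_p*m_c*(m_p + m_c)**(-1/3))
print(Pdot)                          # ~ -2.40e-12 (dimensionless, s/s)
\end{verbatim}
\par\noindent The output reproduces the magnitude $2.4\times10^{-12}$ s/s quoted in sect.\textbf{1}; the dispute in the present paper concerns the \emph{interpretation} of this number, not its arithmetic.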
\section{Introduction} A major theme in quantum computation is the idea of {\it analog quantum simulation}. This is the task of simulating one Hamiltonian $H$ by another Hamiltonian $\tilde{H}$, which might be more readily or easily implemented. In fact, this goal was identified as a main motivation for realizing quantum computers as early as 1981 by Feynman\cite{Feynman1982}, with the idea that such analog quantum simulations can shed light on properties of physical quantum systems that are hard to simulate efficiently on classical computers. Cirac and Zoller~\cite{CiracZollerNatPhys2012} further developed this idea, and explained that such simulators are likely to be achievable well before fully fault-tolerant quantum computation \cite{AharonovFaultTolerance, KnillThreshold, KitaevFaultTolerantAnyon} becomes practical, which might take a long time. While fault-tolerant quantum computers, when realized, can be used to apply {\it digital} quantum simulations~\cite{LloydDigitalSimulation1996} (where a quantum {\it circuit} simulates the time-evolution $e^{-iHt}$ under a local Hamiltonian $H$), {\it analog} quantum simulations are more accessible for near-term experiments because they do not require a full-fledged quantum computer. Many groups are designing implementations in a variety of experimental platforms\cite{SimonOpticalLatticeSim2011,Bloch2012,Blatt2012,Aspuru-Guzik2012,Houck2012,GeorgescuQuantumSimulationReview2014}, and we have recently seen some experiments in intermediate-sized quantum systems in regimes where classical simulations are difficult~\cite{Bernien2017,Zhang2017}. It has been argued that analog quantum simulation constitutes one of the more interesting challenges in the {\it noisy intermediate-scale quantum computing} (NISQ) era \cite{Preskill2018}. Beyond their natural physical applications, analog simulations of Hamiltonians are also very important for quantum complexity theory. For example, in the theory of quantum NP, one is often interested in {\it reducing} problems defined by one class of Hamiltonians to another (e.g.~\cite{KSV02, quantumNPsurvey, BravyiHastingsSim,UniversalHamiltonian,KKR06}). These reductions are often derived using {\it perturbative gadgets} (e.g.~\cite{KKR06, OliveiraTerhal, BDLT08, JordanGadgets, CaoImprovedOTGadget, CaoNagajGadget}). Moreover, analog Hamiltonian simulators might also be useful for the design of Hamiltonian-based quantum algorithms, such as the adiabatic algorithm~\cite{FarhiAdiabatic2000} and QAOA~\cite{QAOA}. In those settings, it is often desirable to tailor the Hamiltonians being used, while maintaining the properties essential for the algorithm. In this paper, we initiate the rigorous study of the minimal resources required to simulate a given target Hamiltonian, and ask: When can we simulate a Hamiltonian $H$ by another $\tilde{H}$ that is {\it simpler}, easier, or more economic to implement? Of course, this vaguely stated question can take several forms if made more rigorous; here we focus on a natural goal which we loosely call {\it Hamiltonian sparsification}, which aims to simplify the {\it interaction graph} of the Hamiltonian. For a $2$-local $n$-qubit Hamiltonian $H$, the interaction graph has $n$ vertices, with edges connecting any pair of qubits that participates in a local term in $H$. For a $k$-local Hamiltonian, we consider an interaction {\it hyper}graph, where each term acting on $k$ qubits is represented by a hyperedge.
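To make these notions concrete, the following minimal sketch (ours, for illustration only; plain Python with hypothetical helper names) builds the interaction (hyper)graph of a $k$-local Hamiltonian from the list of qubit subsets on which its terms act, and reports the two quantities that sparsification targets: the number of (hyper)edges and the maximum degree.
\begin{verbatim}
# Interaction (hyper)graph of a k-local Hamiltonian, given one qubit
# subset per local term; illustrative sketch, not code from this paper.
from itertools import combinations

def interaction_graph(n, terms):
    edges = {frozenset(t) for t in terms if len(t) > 1}
    degree = {v: 0 for v in range(n)}
    for e in edges:
        for v in e:
            degree[v] += 1
    return edges, degree

# A generic 2-local Hamiltonian on n qubits has a term on every pair:
n = 6
edges, degree = interaction_graph(n, list(combinations(range(n), 2)))
print(len(edges), max(degree.values()))   # 15 edges, maximum degree 5
\end{verbatim}
For the complete $2$-local example above one gets $\binom{6}{2}=15$ edges and maximum degree $5$, consistent with the generic counts discussed next.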
A generic $k$-local Hamiltonian has $\Theta(n^k)$ edges, and degree $\Theta(n^{k-1})$ per vertex. Roughly speaking, Hamiltonian sparsification aims to simulate a Hamiltonian using another whose interaction graph is more ``economic'', e.g., it has fewer edges (we refer to this as {\it dilution}) or its degree is bounded (we refer to this as {\it degree-reduction}). Hamiltonian sparsification has several important motivations. First, it can help physicists tackle the immense hurdles they face when trying to realize Hamiltonians in the lab. In addition, in many settings in quantum complexity, such as in the study of quantum PCP \cite{qPCPsurvey} and recent approaches to the area law question \cite{LocalTestOfEntanglement}, simulating a Hamiltonian by one with constant degree or fewer edges is a potentially important primitive. Indeed, sparsification is used ubiquitously in classical computer science, in a variety of applications; we mention two important ones. The first, graph sparsification (and more generally, matrix sparsification) is a central tool in matrix algorithms \cite{STNearlyLinearTimeAlg, SpielmanSpectralSparsify, SpielmanEffectiveResistance, BSST13}. Famously, Ref.~\cite{BSS12} proved that any graph can be replaced by another which is sparse (namely, has small degree on average), such that their Laplacian matrices are spectrally similar. Another common use of sparsification in classical computer science is {\it degree-reduction} (DR), used in the study of local Constraint Satisfaction Problems (CSPs) and PCPs~\cite{dinur}. We believe that this natural and basic notion deserves to be studied in the quantum context as well, and might have interesting applications beyond those we can foresee today. \subsection{Gap-Simulations: Simulating only the low-lying part of the spectrum} Before embarking on the study of Hamiltonian sparsification, we first need an appropriate definition of analog simulation. The study of analog Hamiltonian simulation was set on rigorous footing in a recent work by Cubitt, Montanaro, and Piddock~\cite{UniversalHamiltonian}; their definition refines that of Bravyi and Hastings~\cite{BravyiHastingsSim}, and it roughly goes as follows: A given Hamiltonian $H$ is simulated by ``encoding'' its full spectrum into the low-lying part of the spectrum of $\tilde{H}$ acting on a larger Hilbert space. When $\tilde{H}$ is implemented, the low-lying part of its spectrum can be used to derive properties and information about the original Hamiltonian $H$. For obvious reasons, we will refer to this definition as {\it full-spectrum simulation}. In Ref.~\cite{UniversalHamiltonian}, the notion of {\it universal} Hamiltonians was defined and studied: these are families of Hamiltonians which are capable of performing full-spectrum simulations of {\it any} given Hamiltonian, albeit generally with exponential overhead in energy. While this strong notion of {\it full-spectrum simulation} is necessary for simulating all dynamical properties of a system, it is common in physics that one is only interested in the properties of the low-energy states and, particularly, the groundstates. In addition, the spectral gap separating the groundstates from the rest of the spectrum is an intimately related quantity that is usually physically important. For example, the groundstates encode exotic quantum behaviors such as topological order, and the spectral gap protects them~\cite{Wen1990,LevinWenTopo}.
Also, they are used together to define quantum phases of matter and characterize phase transitions \cite{SachdevQPT, UndecidabilityOfGap}. Moreover, both are the main objects of interest in quantum computational complexity: In quantum adiabatic algorithms~\cite{FarhiAdiabatic2000}, the goal is to prepare a groundstate of a problem Hamiltonian, and the spectral gap governs the efficiency of the process. In quantum NP theory \cite{quantumNPsurvey}, only the groundstate(s) of the Hamiltonian matter, as they constitute the witness for the problem. The spectral gap also determines the temperature of a thermal equilibrium (Gibbs) state that can be used to approximate the groundstate. Hence, we believe that a natural and minimal notion of analog Hamiltonian simulation, which is still meaningful for many physical contexts, should require that both the space of groundstates and the spectral gap above it be preserved. Therefore, we suggest considering sparsification, or more generally Hamiltonian simulation, using this minimal notion, which we formally define as {\it gap-simulation}. To the best of our knowledge, despite its naturalness, this relaxed notion of Hamiltonian simulation has not previously been formally defined and rigorously studied in the quantum complexity literature. A Hamiltonian $\tilde{H}$ is said to \emph{gap-simulate} $H$ if it mimics the groundstate(s) and the spectral gap of $H$; no constraints are imposed on the excited part of the spectrum. Providing a sensible definition requires some care, since in the quantum world we must allow for inaccuracies and for entanglement with an ancilla. We provide two versions of the definition: In the weaker one (Def.~\ref{defn:hamsimul-incoherent}), the groundspace is mimicked {\it faithfully}, i.e. the {\it support} of any groundstate of $\tilde{H}$, when reduced to the original Hilbert space, is close to the groundspace of $H$. However, this definition does not require quantum {\it coherence} within the groundspace to be maintained. Such coherence is guaranteed by our stronger definition (Def.~\ref{defn:hamsimul}), in which all superpositions within the groundspace are simulated. The extent to which the gap-simulation is incoherent (or unfaithful) is quantified via a small constant $\epsilon$ (or $\delta$). It seems that the coherent notion is the ``correct'' one for most quantum applications, though the weaker one might also be useful in certain contexts (see Sec.~\ref{sec:discussion}). We mention that here, as in Ref.~\cite{BravyiHastingsSim, UniversalHamiltonian}, we allow encoding of the qubits. Typically, we consider ``localized'' encodings, though this is not explicitly required in the definition. To set the stage, some basic results about the framework are provided: We show in Lemma \ref{lem:equiv} that for Hamiltonians with unique groundstates, our two definitions of gap-simulation coincide. Moreover, both the coherent and incoherent gap-simulation definitions are shown to be stable under composition. How does the gap-simulation framework compare with the stricter definitions of full-spectrum simulation developed in Ref.~\cite{BravyiHastingsSim,UniversalHamiltonian}? In Appendix~\ref{sec:comp-defns}, this connection is discussed formally; roughly, our definition is indeed a relaxed version of full-spectrum simulation with spectral error smaller than the spectral gap, up to varying restrictions on the encoding.
We choose to work here with the more relaxed definition of gap-simulation, since impossibility results for a weaker definition are of course stronger. More generally, it seems that this framework is an important and relevant one to consider in physics and quantum complexity contexts. Being less demanding, gap-simulation is likely achievable in certain cases where full-spectrum simulation is difficult or even impossible. \vspace{-10pt} \subsection{Main Results} Equipped with this framework of Hamiltonian sparsification via gap-simulations, we ask: When are sparsifications possible in the quantum world? It is conceivable that, as in the analogous classical settings mentioned above \cite{dinur,BSST13}, they ought to be always possible. The main result of this paper (Theorem \ref{thm:main}) shows that, in stark contrast to the classical setting, both coherent and incoherent degree-reduction are not generally possible in the quantum world, even under the relaxed notion of gap-simulation. This impossibility phenomenon is due to the existence of many-body entanglement in some quantum groundstates; we show, using a strengthened version of the Hastings-Koma decay of correlations theorem \cite{HastingsKoma}, that there exist local Hamiltonians whose groundstates cannot be coherently mapped into the groundspace of a gapped Hamiltonian with constant degree. Though one might suspect this is a consequence of degeneracy in the groundspace, we show that it holds even in the case of a unique groundstate. We believe this is a surprising and curious phenomenon, which demonstrates the richness of the subject, and highlights the difference in the resources required for classical versus quantum Hamiltonian simulation. This impossibility result on degree-reduction is essentially tight, as we provide a complementary result (Theorem~\ref{thm:degree-reduction-poly}), based on a somewhat sophisticated application of the circuit-to-Hamiltonian construction, stating that degree-reduction becomes possible for any local Hamiltonian with non-negligible spectral gap, when polynomially large overhead in interaction strength is allowed. We also study a related important sparsification task: dilution. While our main result, Theorem~\ref{thm:main}, is an information-theoretic result that rules out the {\it existence} of degree-reducers regardless of computational power, we are unable to provide such a strong result in the case of dilution. Information-theoretically, we can only rule out dilution with perfect (or inverse-polynomially close to perfect) coherence (Theorem~\ref{thm:imposs1-dilute}). Nevertheless, we are able to prove the impossibility of any efficient classical algorithm to find diluters with constant unfaithfulness, for generic (even classical) Hamiltonians (Theorem~\ref{thm:imposs-dilute}). The proof of this theorem (relying on Ref.~\cite{DellvanMelkebeek}) works under the assumption that $\textnormal{\texttt{coNP}} \not\subseteq \textnormal{\texttt{NP/poly}}$ (alternatively, that the polynomial hierarchy does not collapse to its third level). Although generic constructive dilution is ruled out by our Theorem~\ref{thm:imposs-dilute}, the question of the existence of diluters for general Hamiltonians, with bounded or large interaction strengths, remains open. The paper provides quite a few further results complementing the above-mentioned main contributions. These build on ideas in classical PCP reductions and perturbative gadgets.
In addition, the ideas studied here are strongly reminiscent of questions arising in the context of the major open problem of quantum PCP~\cite{qPCPsurvey}. We clarify this connection and provide some preliminary results along these lines. We believe that the study of the resources required for Hamiltonian simulations in various contexts, as well as the framework of gap-simulation, are of potential deep interest for physics as well as quantum complexity. The questions raised touch upon a variety of important challenges, from quantum simulations, to algorithm design, to quantum PCP and NP reductions, to physical implementations on near-term quantum processors, and more. Their study might also shed light on questions in many-body physics, by developing tools to construct ``equivalent'' Hamiltonians, from the point of view of the study of groundstate physics. The discussion in Sec.~\ref{sec:discussion} includes a more detailed list of some of the open questions and implications. \subsection{Overview} In Sec.~\ref{sec:set-the-stage}, we set the stage by providing definitions of gap-simulation and sparsification, and proving basic facts about this new framework. In Sec.~\ref{sec:results}, we state our results formally. Subsequently, Sec.~\ref{sec:proofs-overview} provides elaborated and intuitive proof sketches, and Sec.~\ref{sec:discussion} provides further discussion. All technical proofs are deferred to the appendices. \section{Definition of the Framework: Setting the Stage\label{sec:set-the-stage}} \vspace{-5pt} \subsection{Gap-Simulations of Hamiltonians\label{sec:gap-simulation}} We restrict our attention to $k$-local Hamiltonians $H=\sum_{i=1}^M H_i$ acting on $n$ qu$d$its (with internal states $\{\ket{0},\ldots,\ket{d-1}\}$), where each term $H_i$ acts nontrivially on a (distinct) subset of at most $k$ qudits. We denote by $\lambda_j(X)$ the $j$-th lowest eigenvalue of $X$, and by $\|X\|$ the spectral norm of $X$. In addition, for any Hermitian projector $P$, we denote $P^\perp \equiv \mathds{1}-P$, and write $\ket{\psi}\in P \Longleftrightarrow P\ket{\psi} = \ket{\psi}$. \begin{defn}[groundspace, energy spread and gap] \label{defn:gap} Consider a family of $n$-qudit Hamiltonians $\{H_{(n)}\}_{n=1}^\infty$. Let $E_n^g = \lambda_1(H_{(n)})$, and suppose $P_{(n)}$ is a Hermitian projector onto the subspace of eigenstates of $H_{(n)}$ with energy $\le E_n^g + w_n\gamma_n$, for some $\gamma_n > 0$, $0\le w_n < 1$, such that \begin{gather} [H_{(n)}, P_{(n)}] = 0, \quad \|P_{(n)} (H_{(n)}-E_n^g) P_{(n)} \|\le w_n \gamma_n, \nonumber \\ \textnormal{and} \quad \lambda_j(P^\perp_{(n)} (H_{(n)} -E_n^g ) P^\perp_{(n)} + \gamma_n P_{(n)}) \ge \gamma_n \quad \forall j. \end{gather} We call the subspace onto which $P_{(n)}$ projects a \emph{quasi-groundspace}, $w_n$ its \emph{energy spread}, and $\gamma_n$ its \emph{quasi-spectral gap}. When we choose $w_n = 0$ and $\gamma_n = \min_j \{\lambda_j(H_{(n)})-E_n^g: \lambda_j(H_{(n)})\neq E_n^g\}$, we call the quasi-groundspace that $P_{(n)}$ projects onto simply \emph{the groundspace} of $H_{(n)}$, and $\gamma_n$ \emph{the spectral gap} of $H_{(n)}$. Let $w_\infty = \sup_n w_n$ and $\gamma_\infty = \inf_n \gamma_n$. If $\gamma_\infty > 0$ and $w_\infty<1$, we say $\{H_{(n)}\}_{n=1}^\infty$ is \emph{spectrally gapped}. \end{defn} Below, we omit the subscript $n$ in $H_{(n)}$, referring to a single $H$, with the usual implicit understanding that we consider families of Hamiltonians, where $n\to\infty$.
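As a toy illustration of Definition~\ref{defn:gap} (the numbers here are ours, chosen purely for concreteness), suppose $H$ has eigenvalues $\{0,\, 0.1,\, 1,\, 2\}$, so that $E^g=0$. Taking $P$ to project onto the two lowest eigenstates gives $\|P(H-E^g)P\|=0.1$ and $\lambda_j(P^\perp(H-E^g)P^\perp+\gamma P)\ge 1$ for $\gamma=1$; hence this $P$ is a quasi-groundspace projector with quasi-spectral gap $\gamma=1$ and energy spread $w=0.1$. Insisting instead on $w=0$ forces $P$ onto the lowest eigenstate alone, and the spectral gap is then only $0.1$. This is exactly the flexibility that a nonzero energy spread buys: slightly perturbed groundstates can be grouped into a single quasi-groundspace with a much larger gap above it.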
All explicit Hamiltonians we gap-simulate here have $w_n=0$, but Definition \ref{defn:gap} is more general and allows $w_n>0$, so that it can capture situations with slightly perturbed groundstates (or when simulating a larger low-energy part of the spectrum). We now define Hamiltonian gap-simulation, visualized in Fig.~\ref{fig:gapsimul}: \begin{defn}[gap-simulation of Hamiltonian] \label{defn:hamsimul} Let $H$ and $\tilde{H}$ be two Hamiltonians, defined on Hilbert spaces $\H$ and $\tilde{\H}$, respectively. Let $V: \H\otimes \H_\textnormal{anc} \to \tilde{\H}$ be an isometry ($V^\dag V=\mathds{1}$), where $\H_\textnormal{anc}$ is some ancilla Hilbert space. Denote $\tilde{E}^g \equiv \lambda_1(\tilde{H})$. Per Definition~\ref{defn:gap}, let $P$ be a quasi-groundspace projector of $H$, and $\gamma$ its quasi-spectral gap. We say that $\tilde{H}$ \emph{gap-simulates} $(H,P)$ with \emph{encoding} $V$, \emph{incoherence} $\epsilon\ge 0$ and \emph{energy spread} $0\le\tilde{w}<1$ if the following conditions are both satisfied: \begin{enumerate} \item There exists a Hermitian projector $\tilde{P}$ projecting onto a subspace of eigenstates of $\tilde{H}$ such that \begin{gather} [\tilde{H}, \tilde{P}]= 0, \quad \|\tilde{P}(\tilde{H} -\tilde{E}^g)\tilde{P}\|\le \tilde{w}\gamma, \quad \textnormal{and} \quad \lambda_j(\tilde{P}^\perp (\tilde{H} - \tilde{E}^g )\tilde{P}^\perp + \gamma \tilde{P}) \ge \gamma \quad \forall j. \label{eq:strongsimul} \end{gather} I.e., $\tilde{P}$ projects onto a quasi-groundspace of $\tilde{H}$ with quasi-spectral gap not smaller than that of $P$ in $H$, and energy spread $\tilde{w}$. \item There exists a Hermitian projector $P_\textnormal{anc}$ acting on $\H_\textnormal{anc}$, so that \end{enumerate} \begin{flalign} \label{eq:incoherence} \textnormal{[bounded incoherence]} && \|\tilde{P} - V(P\otimes P_\textnormal{anc})V^\dag \| \le \epsilon && \phantom{\textnormal{(incoherence)}} \end{flalign} When $P$ projects onto the groundspace of $H$, rather than a quasi-groundspace, we usually do not mention $P$ explicitly, and simply say that $\tilde{H}$ \emph{gap-simulates} $H$. \end{defn} \begin{figure}[h] \centering \includegraphics[height=3.5cm]{gapsimuldef.pdf} \caption{\label{fig:gapsimul}Visualizing gap-simulation of Hamiltonian $(H, P)$ by $\tilde{H}$. If $\|\tilde{P}-V(P\otimes P_\textnormal{anc})V^\dag\|\le \epsilon$, for some isometry $V$, then this is a coherent gap-simulation with $\epsilon$-incoherence. If $\|\tilde{P} - V(P\otimes \mathds{1}_\textnormal{anc})V^\dag \tilde{P}\|\le \delta$, then this is an incoherent but faithful gap-simulation with $\delta$-unfaithfulness.} \vspace{-5pt} \end{figure} Requiring $\epsilon$ from Eq.~\eqref{eq:incoherence} to be small ensures that \emph{coherence} in the groundspace is maintained by the gap-simulation. This is illustrated by considering a Hamiltonian $H$ with two orthogonal groundstates $\ket{g_1}$ and $\ket{g_2}$. The condition of Eq.~\eqref{eq:incoherence} essentially says that for any coherent superposition $\ket{g}=c_1\ket{g_1}+c_2\ket{g_2}$, and a state $\ket{a}\in P_\textnormal{anc}$ on the ancilla, there exists a groundstate of $\tilde{H}$ that looks like $\ket{\tilde{g}}= V \ket{g}\otimes\ket{a} + \O(\epsilon)$. Moreover, any groundstate of $\tilde{H}$ can be written in this form. This preserves the expectation value of any observable in the groundspace, i.e. $\braket{g|\hat{\sigma}|g} = \braket{\tilde{g}|V\hat{\sigma}V^\dag|\tilde{g}} + \O(\epsilon)$.
In contrast, one can consider an alternative situation where the groundspace of a simulator $\tilde{H}$ is spanned by states of the form $\ket{\tilde{g}_i'} \approx V \ket{g_i}\otimes\ket{a_i}$, where $\braket{a_i|a_j}\ll 1$. This situation remains interesting, as finding a groundstate $\ket{\tilde{g}'_i}$ of $\tilde{H}$ reveals information about a groundstate of $H$ by decoding: $\ketbra{g_i} \approx \Tr_\textnormal{anc}(V^\dag\ketbra{\tilde{g}'_i} V)$. However, the coherence among groundstates is destroyed, since $\ket{g}=\ket{g_1}+\ket{g_2}$ is mapped to $\ket{\tilde{g}'}\approx V(\ket{g_1}\otimes\ket{a_1}+\ket{g_2}\otimes\ket{a_2})$, and observables such as $\hat{\sigma}=\ketbrat{g_1}{g_2}$ are not preserved: $\braket{g|\hat{\sigma}|g}\not\approx\braket{\tilde{g}'|V\hat{\sigma}V^\dag|\tilde{g}'}$. Although coherence seems important to maintain in most quantum settings, we also define {\it incoherent} gap-simulation, which may be relevant in some situations (see discussion in Sec.~\ref{sec:discussion}). \begin{defn}[incoherent gap-simulation] \label{defn:hamsimul-incoherent} Consider two Hamiltonians $H$ and $\tilde{H}$, $P$ a quasi-groundspace projector of $H$, and $V$ some isometry in the same setting as in Definition~\ref{defn:hamsimul}. We say that $\tilde{H}$ \emph{incoherently gap-simulates} $(H,P)$ with \emph{encoding} $V$, \emph{unfaithfulness} $\delta \ge 0$ and energy spread $0\le \tilde{w}<1$ if it satisfies the first condition of Definition~\ref{defn:hamsimul} and, instead of the second condition of Eq.~\eqref{eq:incoherence}, \begin{flalign}\label{eq:unfaithfulness} \textnormal{[bounded unfaithfulness]}&& \|\tilde{P} - V(P\otimes \mathds{1}_\textnormal{anc})V^\dag \tilde{P}\|\le \delta && \phantom{\textnormal{(unfaithfulness)}} \end{flalign} Again, when $P$ projects onto the groundspace of $H$, we simply say $\tilde{H}$ \emph{incoherently gap-simulates} $H$. \end{defn} Small unfaithfulness essentially means that the {\it support} of the vectors in the groundspace of $\tilde{H}$ is roughly contained in a subspace spanned by encoding the groundspace of $H$ with some ancilla. It is easy to see that small incoherence implies small unfaithfulness, namely $\delta\le 2\epsilon$ (see Appendix \ref{sec:uniqueGS}). However, small unfaithfulness is a strictly weaker condition than small incoherence; we will see an example in Prop.~\ref{prop:incoherent-tree}. Importantly, when $H$ has a {\it unique} groundstate, the two notions are equivalent up to a constant (the proof of this fact is perhaps surprisingly not entirely trivial; see Appendix \ref{sec:uniqueGS}): \begin{lemma}[incoherent gap-simulation is coherent when groundstate is unique] \label{lem:equiv} Suppose $H$ has a unique groundstate, with groundspace projector $P=\ketbra{g}$. If $\tilde{H}$ incoherently gap-simulates $H$ with unfaithfulness $\delta <1$, then it also gap-simulates $H$ with incoherence $\epsilon \le \sqrt{2}\delta/\sqrt{1-\delta^2}$. \end{lemma} While we do not explicitly restrict the form of the encoding $V$ in the above definitions, we need to specify it for the impossibility proofs, where we will consider localized encoding: \begin{defn}[localized encoding] \label{defn:localized-encoding} Consider a (possibly incoherent) gap-simulation of $H$ by $\tilde{H}$ encoded by an isometry $V:\H\otimes \H_\textnormal{anc} \to \tilde{\H}$.
Let $\H\otimes \H_\textnormal{anc} =\bigotimes_{i=1}^n(\H_i\otimes \mathcal{A}_i)$, where $\mathcal{A}_i$ is the $i$-th ancilla subsystem; also let $\tilde{\H}=\bigotimes_{i=1}^m \tilde{\H}_i$, $m\ge n$. We say $V$ is a \emph{localized encoding} if either of the following is true: \begin{enumerate} \item $V=\bigotimes_{i=1}^n V_i$, where $V_i:\H_i\otimes \mathcal{A}_i\to \tilde{\H}_i$, and $\tilde{\H}_i$ consists of $O(1)$ qudits in $\tilde{H}$ for $i=1,\ldots,n$. \item $V$ is a constant-depth quantum circuit: $V=\prod_{a=1}^D U_a$, where $D=O(1)$, $U_a = \bigotimes_{\mu} U_{a,\mu}$, and $U_{a,\mu}$ is a unitary operator acting on $O(1)$ qudits. \end{enumerate} We say $V$ is an \emph{$\eta$-localized encoding} if there exists a localized encoding $V_L$ such that $\|V-V_L\|\le \eta$. \end{defn} In addition to constant-depth quantum circuits, any quantum error-correcting code where each logical qudit is encoded as $O(1)$ qudits is also a localized encoding. Note that it is easy to see that if a gap-simulation has $\eta$-localized encoding $V$ and incoherence $\epsilon$ (or unfaithfulness $\delta$), it is also a gap-simulation with localized encoding $V_L$ and incoherence $\epsilon'\le\epsilon+2\eta$ (or unfaithfulness $\delta'\le \delta+2\eta$). Hence, we usually restrict our attention to fully localized encoding in the remainder of the paper. It is also fairly straightforward to show that compositions of gap-simulations behave intuitively: \begin{lemma}[Composition] \label{lem:composition} Suppose $H_1$ (incoherently) gap-simulates $(H_0,P_0)$ with encoding $V_1$, incoherence $\epsilon_1$ (or unfaithfulness $\delta_1$), energy spread $\tilde{w}_1$, and a corresponding quasi-groundspace projector $P_1$. Also suppose $H_2$ (incoherently) gap-simulates $(H_1, P_1)$ with encoding $V_2$, incoherence $\epsilon_2$ (or unfaithfulness $\delta_2$), and energy spread $\tilde{w}_2$. Then $H_2$ (incoherently) gap-simulates $(H_0,P_0)$ with encoding $V_2 (V_1\otimes \mathds{1}_{\textnormal{anc},1})$, incoherence $\le\epsilon_2+\epsilon_1$ (or unfaithfulness $\le 2\delta_2+\delta_1$), and energy spread $\tilde{w}_2$. \end{lemma} \subsection{Hamiltonian Sparsification: Degree-Reduction and Dilution} \vspace{-5pt} We define here the set of parameters of interest when considering the minimization of resources in gap-simulations: \begin{enumerate} \item $k$ -- locality of individual Hamiltonian terms; typically $O(1)$ in physical systems, but we parametrize it to allow minimization, as well as to allow $O(\log^a n)$-local Hamiltonians, for some constant $a$. \item $r$ -- maximum degree of the Hamiltonian, the main objective in degree-reduction. \item $M$ -- number of terms in the Hamiltonian, the main objective in dilution. \item $J$ -- the interaction strength of individual Hamiltonian terms. This is typically restricted to $O(1)$ in physical systems, but allowing it to grow with $n$ leads to more possibilities of gap-simulators. Equivalently, a gap-simulator with $J$ growing with $n$ can be converted to one that simulates the original Hamiltonian but has a vanishing gap if we restrict to bounded-strength Hamiltonian terms. \item $\epsilon$ and $\delta$ -- incoherence $\epsilon$ and unfaithfulness $\delta$, which capture how well the new Hamiltonian gap-simulates the original one, in terms of groundspace projectors. \item $\tilde{w}$ -- energy spread in the gap-simulator Hamiltonian; allowing it to be different from the original Hamiltonian gives more freedom in gap-simulations.
\end{enumerate} We will use the notation $[r,M,J]$-gap-simulator to indicate that the maximum degree is $r$, the number of local terms is $M$, and for each term $\tilde{H}_i$ we have $\|\tilde{H}_i\|\le J$. We define: \vspace{-5pt} \begin{defn}[Degree-reduction (DR) and dilution] Let $\tilde{H}$ be a $k$-local $[r,M,J]$-gap-simulator of $H$ with $\epsilon$-incoherence (or $\delta$-unfaithfulness) and energy spread $\tilde{w}$. Additionally suppose $H=\sum_{i=1}^{M_0} H_i$ is a sum of $M_0=M_0(n)$ terms, each of which is $O(1)$-local. Then \begin{itemize} \item We call $\tilde{H}$ an $[r,M,J]$-\emph{degree-reducer} of $H$ if $r=O(1)$. \item We call $\tilde{H}$ an $[r,M,J]$-\emph{diluter} of $H$ if $M=o(M_0(n))$. \end{itemize} We also call any degree-reducer or diluter of $H$ a \emph{sparsifier} of $H$. \end{defn} \vspace{-10pt} \section{Results\label{sec:results}} \vspace{-5pt} Our impossibility results are based on two families of $2$-local $n$-qubit Hamiltonians, which can both be expressed in terms of the collective angular momentum operator $\mathcal{J}_\alpha = \sum_{i=1}^n \sigma_\alpha^{(i)}/2$ for $\alpha\in\{x,y,z\}$, where $\sigma_\alpha^{(i)}$ are the standard Pauli matrices; throughout, we use the convention $\sigma_z\ket{1}=+\ket{1}$, so that $\ketbra{1}^{(i)}=(1+\sigma_z^{(i)})/2$. \paragraph{Example A (degenerate groundstates)}--- \vspace{-5pt} \begin{equation}\label{eq:Honeone} H_A = \frac12\left(\mathcal{J}_z+\frac{n}{2} \right)\left(\mathcal{J}_z+\frac{n}{2}-1\right) = \frac14\sum_{i< j}^n (1+\sigma_z^{(i)})\otimes(1+\sigma_z^{(j)}) = \sum_{i< j}^n \ketbra{1}^{(i)}\otimes\ketbra{1}^{(j)}. \vspace{-5pt} \end{equation} There are $M_0(n)=n(n-1)/2=\Omega(n^2)$ terms in $H_A$, and each qubit has degree $n-1$. The terms in $H_A$ mutually commute, and its groundspace is spanned by the following $n+1$ zero-energy orthonormal states that have $\mathcal{J}_z=-n/2$ or $\mathcal{J}_z=-n/2+1$: \vspace{-5pt} \begin{equation} GS(H_A) = \text{span}\{\ket{00\cdots00}, \ket{00\cdots01}, \ket{00\cdots10}, \ldots, \ket{10\cdots00}\}. \vspace{-5pt} \end{equation} If we consider a qubit in $\ket{1}$ to be an ``excitation'', the groundstates are states with one or zero ``excitations''. Observe that $w_n=0$ and $\gamma_n=1$, independent of $n$; the system is thus spectrally gapped. \vspace{-5pt} \paragraph{Example B (unique groundstate)}--- In this example we require that $n$ is even, $n=2s$: \vspace{-5pt} \begin{equation} \label{eq:Hdicke} H_B = \mathcal{J}_z^2 - \frac12 \vec{\mathcal{J}}^2 + b_n = \frac12(\mathcal{J}_z^2-\mathcal{J}_x^2-\mathcal{J}_y^2) +b_n = \frac14 \sum_{i<j}^{n} (\sigma_z^{(i)}\sigma_z^{(j)} - \sigma_x^{(i)}\sigma_x^{(j)} - \sigma_y^{(i)}\sigma_y^{(j)}) -\frac{n}{8} + b_n \end{equation} where $b_n\equiv \frac18 n(n + 2)$ is a constant chosen so that $\lambda_1(H_B)=0$. Similarly to $H_A$, this Hamiltonian has $M_0(n)=\frac12 n(n-1)$ $2$-local terms, and each qubit has degree $n-1$. Since $[\vec{\mathcal{J}}^2,\mathcal{J}_z]=0$, the eigenstates of $H_B$ can be chosen in the common eigenbasis of $\vec{\mathcal{J}}^2$ and $\mathcal{J}_z$; it is an easy exercise (see Appendix~\ref{sec:HamProperties}) to show that the following well-known Dicke state from atomic physics~\cite{Dicke} is the unique groundstate of $H_B$ with eigenvalue $0$: \begin{equation} \ket{g_B} = \ket{\mathcal{J}=\frac{n}{2}; \mathcal{J}_z = 0} = \binom{n}{n/2}^{-1/2} \sum_{|\{i\,:\,x_i=1\}| = n/2} \ket{x_1\cdots x_n}. \end{equation} Other eigenstates have energy at least $1$, so the system is spectrally gapped with $w_n=0$ and $\gamma_n = 1$.
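As a quick numerical sanity check of the claimed spectral data (the snippet below is ours and purely illustrative), one can build $H_A$ directly from its projector form for a small $n$ and verify the ground energy, the groundspace dimension $n+1$, and the spectral gap $\gamma_n=1$:

\begin{verbatim}
import numpy as np
from functools import reduce

def op_on(n, ops):
    # Tensor product placing the given single-qubit operators
    # (dict: qubit index -> 2x2 matrix) among identities.
    return reduce(np.kron, [ops.get(i, np.eye(2)) for i in range(n)])

n = 6
P1 = np.diag([0., 1.])                    # |1><1|
H_A = sum(op_on(n, {i: P1, j: P1})
          for i in range(n) for j in range(i + 1, n))

evals = np.linalg.eigvalsh(H_A)
print(np.isclose(evals[0], 0))            # ground energy 0
print(int(np.sum(np.isclose(evals, 0))))  # n+1 = 7 groundstates
print(min(e for e in evals if e > 0.5))   # spectral gap 1.0
\end{verbatim}

The analogous check for $H_B$ (including the constant $b_n$) confirms a unique zero-energy groundstate with a gap of $1$ above it.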
It turns out that these deceptively simple examples pose a challenge for Hamiltonian sparsification. \subsection{Limitations on Degree-Reduction} For didactic reasons, we start by ruling out generic {\it perfectly coherent} DR. This is done by showing that such DR is impossible for $H_A$. \begin{lemma}[Impossibility of generic $0$-incoherence DR] \label{lem:imposs1-DR} There does not exist any $k$-local Hamiltonian $\tilde{H}_A$ that is an $[o(n/k),M,J]$-degree-reducer of the $n$-qubit Hamiltonian $H_A$ with localized encoding, $0$-incoherence, and energy spread $\tilde{w} <1/2$, regardless of the number of terms $M$ or interaction strength $J$. \end{lemma} A closer inspection of the proof implies a trade-off between $\epsilon$ and $J$, from which it follows that if $J=O(1)$ then generic DR is impossible even if we allow $\epsilon$ which is inverse polynomially small (see the exact statement in Lemma~\ref{lem:imposs1}, Appendix~\ref{sec:imposs1}). We note that this result in fact rules out any reduction of the degree of $H_A$ to sub-linear. However, perfect (or even inverse-polynomially close to perfect) coherence is a rather strong requirement. Indeed, by improving our proof techniques, we manage to strengthen our results for $H_A$ to show impossibility even for constant incoherence. Moreover, by devising another Hamiltonian with a unique groundstate, $H_B$, and proving such an impossibility result also for this Hamiltonian, we arrive at the following theorem, our main result: a strong impossibility result ruling out generic DR with {\it constant unfaithfulness} (and consequently, also constant incoherence). \begin{thm}[Main: Impossibility of constant coherence (faithfulness) DR for $H_A$ ($H_B$)] \label{thm:main} For sufficiently small constants $\epsilon\ge0$ $(\delta\ge0)$ and $\tilde{w}\ge0$, there exists a system size $n_0$ such that for any $n\ge n_0$, there is no $O(1)$-local $[O(1),M,O(1)]$-degree-reducer of the $n$-qubit Hamiltonian $H_A$ $(H_B)$ with localized encoding, $\epsilon$-incoherence ($\delta$-unfaithfulness), and energy spread $\tilde{w}$, for any number of Hamiltonian terms $M$. \end{thm} We deduce that generic quantum DR, even with constant unfaithfulness, is impossible. This stands in striking contrast to the classical setting. It is well known that classical DR is possible for all CSPs in the context of PCP reductions\cite{dinur}. This construction easily translates to a $0$-unfaithfulness degree-reducer for any {\it classical} local Hamiltonian: \begin{prop}[Incoherent DR of classical Hamiltonians] \label{prop:classical-deg-reduct} Consider an $n$-qudit $k$-local \emph{classical} Hamiltonian $H = \sum_{S\subset \{1,\ldots, n\}} C_S$, where each $C_S:\{z_i:i\in S\} \to [0, 1]$ is a function of $d$-ary strings of length $|S|\le k$ representing states of qudits in $S$. Let the number of terms in $H$ be $M_0=|\{S\}|=O(n^k)$. Then there is a $k$-local $[3,O(kM_0),O(1)]$-degree-reducer of $H$ with $0$-unfaithfulness, no energy spread, and trivial encoding $V=\mathds{1}$. \end{prop} This demonstrates a large difference between the quantum and classical settings in the context of Hamiltonian sparsification. Characterizing which quantum Hamiltonians can be degree-reduced (with bounded interaction strength), either coherently or just faithfully, remains open. The impossibility of DR by Theorem \ref{thm:main}, which heavily relies on the interaction strength $J$ being a constant, is essentially tight.
We prove this in a complementary result showing that degree-reduction is possible, when $J$ is allowed to grow polynomially, for any local Hamiltonian whose spectral gap closes no faster than some inverse polynomial (which is the case of interest for gap-simulation): \begin{thm}[Coherent DR with polynomial interaction strength] \label{thm:degree-reduction-poly} Suppose $H$ is an $O(1)$-local Hamiltonian with a quasi-groundspace projector $P$, which has quasi-spectral gap $\gamma=\Omega(1/\poly(n))$ and energy spread $w$. Also assume $\|H\|=O(\poly(n))$. Then for every $\epsilon>0$, one can construct an $O(1)$-local $[O(1), O(\poly(n)/\epsilon^2), O(\poly(n,\epsilon^{-1}))]$-degree-reducer of $H$ with incoherence $\epsilon$, energy spread $w+O(1/\poly(n))$, and trivial encoding. \end{thm} The proof is constructive: we map any given Hamiltonian to the quantum phase-estimation circuit, make the circuit sparse, and transform it back to a Hamiltonian using Kitaev's circuit-to-Hamiltonian construction~\cite{KSV02}. Some innovations are required to ensure that coherence within the groundspace is not destroyed. For the most general local Hamiltonians, whose spectral gap may close exponentially, we can show that coherent DR is possible with exponential interaction strength: \begin{thm}[Coherent DR with exponential interaction strength] \label{thm:degree-reduction-exp} Let $H$ be an $n$-qubit $O(1)$-local Hamiltonian with $M_0$ terms, each with bounded norm. Suppose $H$ has quasi-spectral gap $\gamma$ and energy spread $w$ according to Def.~\ref{defn:gap}. For any $\epsilon>0$, one can construct a $2$-local $[O(1), O(M_0), O ((\gamma\epsilon)^{-\poly (n)} )]$-degree-reducer of $H$ with incoherence $\epsilon$, energy spread $w+\O(\epsilon)$, and trivial encoding. \end{thm} The proof uses a construction from perturbative gadgets, and is similar to other results in the Hamiltonian simulation literature~\cite{OliveiraTerhal,UniversalHamiltonian}. Due to the significantly larger resources required compared to Theorem~\ref{thm:degree-reduction-poly}, this construction is only useful in situations where we want to preserve some exponentially small spectral gap. \vspace{-10pt} \subsection{Limitations on Dilution} For perfect or near-perfect dilution, we can prove an impossibility result similar to Lemma~\ref{lem:imposs1-DR}: \begin{thm}[Impossibility of generic $0$-incoherence dilution] \label{thm:imposs1-dilute} There does not exist any $k$-local Hamiltonian $\tilde{H}_A$ that is an $[r,o(n^2/k^2),J]$-diluter of the $n$-qubit Hamiltonian $H_A$ with localized encoding, 0-incoherence, and energy spread $\tilde{w}<1/2$, regardless of degree $r$ or interaction strength $J$. \end{thm} Similar to Lemma~\ref{lem:imposs1-DR}, this in fact holds even if we allow inverse polynomial incoherence (see Lemma~\ref{lem:imposs1}); and, as above, this seems to be a rather weak impossibility result, since requiring inverse polynomial incoherence may be too strong in many situations. Can we strengthen this to rule out dilution with constant incoherence? The proof technique in Theorem \ref{thm:main} does not apply to dilution, since it relies on the decay of correlation between {\it distant} nodes in the interaction graph of $\tilde{H}$ (see Sec.~\ref{sec:proof-sketch-main}). On the other hand, a diluter $\tilde{H}$ can have unbounded degree, and hence constant diameter, e.g. the star graph.
Nevertheless, under a computational hardness assumption, no efficient classical {\it algorithm} for generic constant-unfaithfulness dilution exists, even for all $k$-local {\it classical} Hamiltonians: \begin{thm}[Impossibility of dilution algorithm for classical Hamiltonians] \label{thm:imposs-dilute} If $\textnormal{\texttt{coNP}} \not\subseteq \textnormal{\texttt{NP/poly}}$, then for any $\xi>0$, $\delta < 1/\sqrt{2}$, $\tilde{w} \le 1/2$, there is no classical algorithm that, given a $k$-local $n$-qubit classical Hamiltonian $H$, runs in $O(\poly(n))$ time and outputs an $[r,O(n^{k-\xi}),J]$-diluter of $H$ with $\delta$-unfaithfulness, energy spread $\tilde{w}$, and any encoding $V$ that has an $O(n^{k-\xi})$-bit description. This holds for any $r$ and $J$. \end{thm} The above result rules out general (constructive) dilution even when the Hamiltonians are classical. For specific cases, however, dilution is possible. Our $H_A$ (which is also a classical Hamiltonian) provides such an example, for which we can achieve dilution even with $0$-unfaithfulness, in the incoherent setting: \begin{prop}[$0$-unfaithfulness incoherent dilution and DR for $H_A$] \label{prop:incoherent-tree} There is a 3-local incoherent $[2,n-1,1]$-diluter of $H_A$ with 0-unfaithfulness, energy spread $\tilde{w}=0$, and trivial encoding. This is also an incoherent $[2,n-1,1]$-degree-reducer of $H_A$. \end{prop} Furthermore, combining ideas from the construction in Proposition~\ref{prop:incoherent-tree} and Theorem~\ref{thm:degree-reduction-poly}, we can show that coherent dilution of $H_A$ with polynomial interaction strength is also possible: \begin{prop}[Constant-coherence dilution and DR for $H_A$ with polynomial interaction strength] \label{prop:circuit} There is a 6-local $[6, O(n/\epsilon^2),O(\poly(n,\epsilon^{-1}))]$-degree-reducer of $H_A$ with $\epsilon$-incoherence, energy spread $\tilde{w}=0$, and trivial encoding. This is also a $[6, O(n/\epsilon^2),O(\poly(n,\epsilon^{-1}))]$-diluter of $H_A$. \end{prop} Note that since Theorem~\ref{thm:imposs-dilute} rules out constructive dilution regardless of interaction strength $J$, we cannot hope to prove an analogue of Theorem~\ref{thm:degree-reduction-poly} or \ref{thm:degree-reduction-exp} to build coherent diluters for generic Hamiltonians, even allowing arbitrarily large interaction strength. Nevertheless, it remains an interesting open question to characterize Hamiltonians for which diluters exist, whether coherent or incoherent, with constant or large interaction strengths. \subsection{Connection to Quantum PCP\label{sec:other-results}} It might appear that our results rule out quantum degree-reduction (DR) in the context of quantum PCP (which would add to existing results \cite{BravyiVyalyi, AharonovEldar2011, qPCPArad, BrandaoHarrow, qPCPHastings, AharonovEldar2013} ruling out quantum generalizations of other parts of Dinur's PCP proof \cite{dinur}). However, our results in this context (detailed in Appendix~\ref{sec:qPCP}) currently have rather weak implications towards such a statement. The catch is that despite the apparent similarity, our gap-simulating DR is a very different notion from the DR transformations used in the context of quantum and classical PCP. Gap-simulation seeks the {\it existence} of a Hamiltonian $\tilde{H}$ that reproduces the properties of the groundstate(s) and {\it spectral gap} of an input Hamiltonian $H$.
On the other hand, a qPCP reduction is an {\it algorithm} that, given $H$, is merely required to output some $\tilde{H}$ such that if the groundstate energy of $H$ is small (or large), then so is the groundstate energy of $\tilde{H}$; in other words, qPCP preserves the {\it promise gap}. Notice that such an $\tilde{H}$ always {\it exists}, and the difficulty in qPCP reductions is to generate $\tilde{H}$ efficiently, without knowing the groundstate energy of $H$. Thus, we cannot hope for an information-theoretic impossibility result (as in Theorems \ref{thm:main} and \ref{thm:imposs1-dilute}) in the qPCP setting without further restriction on the output. To circumvent this, we generalize to the quantum world a natural requirement, which seems to hold in the classical world for all known PCP reductions, that the reduction is {\it constructive}: roughly, it implies a mapping not only on the CSPs (Hamiltonians) but also on individual assignments (states)~\cite{BenSassonPCP,dinurgoldreich} (see definition of qPCP-DR in Appendix~\ref{sec:qPCP-implication}). Under this restriction, we prove the impossibility of qPCP-DR reductions with near-perfect coherence (see Theorem \ref{thm:imposs-qPCP} in Appendix \ref{sec:qPCP} for the exact statement). The proof of Theorem \ref{thm:imposs-qPCP} closely follows that of the impossibility results of Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute} for sparsification with close-to-perfect coherence. Unfortunately, as we explain in Sec.~\ref{sec:proof-sketch-main}, strengthening these results to prove impossibility for constant error (the regime of interest for qPCP), as is done in Theorem \ref{thm:main}, seems to require another new idea. \section{Proofs Overview\label{sec:proofs-overview}} \subsection{Proof Sketch for Main Theorem \ref{thm:main} (and related results: Theorem~\ref{thm:imposs1-dilute}, \ref{thm:imposs-qPCP} and Lemma~\ref{lem:imposs1-DR})\label{sec:proof-sketch-main}} We start with the idea underlying the impossibility of degree-reduction and dilution with (close to) perfect coherence (Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute}), which we refer to as ``contradiction-by-energy''. For simplicity, let us first examine the case of gap-simulation without encoding. Consider all pairs of original qubits $(i,j)$. The groundstates of $H_A$ include basis states with zero or one excitation (namely, $1$s), but not 2-excitation states. Importantly, the groundstates can be obtained from the 2-excitation state by {\it local} operations $\sigma_x^{(i)}$ and $\sigma_x^{(j)}$. Assuming the gap-simulator $\tilde{H}_A$ of $H_A$ does not directly couple the qubits $i$ and $j$, we can express the energy of the 2-excitation state as a linear combination of the energies of the 0- and 1-excitation states, up to errors of $\O(\tilde{w})$ and $\O(\epsilon\|\tilde{H}_A\|)$, using the fact that we can commute $\sigma_x^{(i)}$ and $\sigma_x^{(j)}$ through independent parts of $\tilde{H}_A$. If we assume $\tilde{w}$ is small and $\epsilon=0$, the energy of the 2-excitation state cannot be distinguished from that of these groundstates. Thus any gap-simulator $\tilde{H}_A$ must directly couple all pairs of qubits, which easily proves the impossibility without encoding. We can also see that if $\epsilon>0$, then DR and dilution remain impossible if $\|\tilde{H}_A\| \le O(\epsilon^{-1})$, e.g. when $\epsilon$ is polynomially small.
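For concreteness, here is the key operator identity behind this contradiction-by-energy argument, spelled out in the simplest setting of no encoding and $\epsilon=0$ (this expanded derivation is ours). Write $\tilde{H}_A = \tilde{H}_1 + \tilde{H}_2$, where qubit $i$ appears only in $\tilde{H}_1$ and qubit $j$ appears only in $\tilde{H}_2$. Since $[\sigma_x^{(i)},\tilde{H}_2] = [\sigma_x^{(j)},\tilde{H}_1] = 0$,
\begin{equation*}
\sigma_x^{(i)}\sigma_x^{(j)}\,\tilde{H}_A\,\sigma_x^{(j)}\sigma_x^{(i)} \;=\; \sigma_x^{(i)}\tilde{H}_A\sigma_x^{(i)} + \sigma_x^{(j)}\tilde{H}_A\sigma_x^{(j)} - \tilde{H}_A.
\end{equation*}
Taking expectation values in the groundstate corresponding to $\ket{0^n}$, the energy of the $2$-excitation state at $(i,j)$ equals the sum of the two $1$-excitation energies minus the $0$-excitation energy. All three quantities on the right lie in the quasi-groundspace window $[\tilde{E}^g, \tilde{E}^g + \tilde{w}\gamma]$, so the $2$-excitation state has energy at most $\tilde{E}^g + 2\tilde{w}\gamma < \tilde{E}^g + \gamma$ whenever $\tilde{w}<1/2$; but for $\epsilon=0$ this state is orthogonal to the quasi-groundspace of $\tilde{H}_A$, and must therefore have energy at least $\tilde{E}^g+\gamma$. This is the contradiction, and also the origin of the condition $\tilde{w}<1/2$ in Lemma~\ref{lem:imposs1-DR}.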
This impossibility easily extends to localized encoding, where each original qubit is encoded into $O(1)$ qudits in the gap-simulator Hamiltonian either independently or via some constant-depth circuit. In both cases, the required $\Omega(n)$ degree and $\Omega(n^2)$ interaction terms implied for the non-encoded version translate to the same requirements for the encoded version up to a constant factor, proving Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute}. We now explain the proof of Theorem \ref{thm:main}, which rules out degree-reduction even with constant incoherence. Let us first consider the statement for $H_A$ with constant incoherence $\epsilon$. The challenge is that the contradiction-by-energy trick used in the proof of Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute} does not work for $\epsilon=\Theta(1)$ incoherence. The problem is that the error in energy is of the order of $\O(\epsilon\|\tilde{H}_A\|)$; this is too large for constant $\epsilon$, and does not allow one to distinguish the energy of ground and excited states. Instead of contradiction-by-energy, we derive a contradiction using the groundspace correlations between qubits $(i,j)$, where $\epsilon$-incoherence only induces an error of $\O(\epsilon)$. Since $H_A$ is gapped, any degree-reducer Hamiltonian $\tilde{H}_A$ of $H_A$ must be gapped (while allowing some small energy spread $\tilde{w}$) by Def.~\ref{defn:hamsimul}. We can therefore apply a result (modified to accommodate non-vanishing energy spread, see Lemma~\ref{lem:HastingsKoma} in Appendix~\ref{sec:MHK}) of Hastings-Koma~\cite{HastingsKoma} stating that groundspace correlations decay exponentially with distance on the graph in which $\tilde{H}_A$ is embedded. Since we assume bounded degree, we can find a pair $(i,j)$ among the original $n$ qubits such that their supports $(S_i, S_j)$ after a localized encoding are at distance $\Omega(\log n)$ apart with respect to the graph metric. Hence, their correlation $\braket{V\sigma_x^{(i)}\sigma_x^{(j)}V^\dag}$ in the groundspace of $\tilde{H}_A$ must decay as $e^{-\Omega(\log n)} = O(1/\poly(n))$. Contradiction is achieved by the fact that for any pair of original qubits $(i,j)$, the groundspace of $\tilde{H}_A$ contains a state of the form $\frac{1}{\sqrt{2}}(\ket{0_i1_j}+\ket{1_i0_j})\ket{0^{n-2},\text{rest}}+\O(\epsilon)$, which has correlation $\braket{V\sigma_x^{(i)}\sigma_x^{(j)}V^\dag }=1 - \O(\epsilon)$. For sufficiently small $\epsilon$ and $\tilde{w}$, this constant correlation from the latter lower bound contradicts the $O(1/\poly(n))$ upper bound from the Hastings-Koma result. The second part of Theorem~\ref{thm:main} proves impossibility of incoherent DR for $H_B$ with $\delta$-unfaithfulness. Since $H_B$ has a unique groundstate that can be shown to have constant correlation between any pair of original qubits $(i,j)$, we can apply the same argument as above for $H_A$ and show a contradiction with Hastings-Koma's vanishing upper bound of $O(1/\poly(n))$ for small $\delta$ and $\tilde{w}$. We now remark how these impossibility proofs can be extended to the context of quantum PCP. The contradiction-by-energy idea in Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute} can indeed be generalized in this context.
In Appendix \ref{sec:qPCP}, we show that under a reasonable restriction on the reduction -- namely that the energy of non-satisfying assignments (frustrated or excited states) after the mapping is lower bounded by the promise gap -- degree-reduction or dilution for quantum PCP is not generally possible with close-to-perfect (namely inverse polynomial) coherence (Theorem \ref{thm:imposs-qPCP}). However, this impossibility proof would not work when constant incoherence is allowed. To move to contradiction-by-correlation as in Theorem~\ref{thm:main}, we need to use some form of the Hastings-Koma theorem, which requires a spectral gap in $\tilde{H}$. Thus, more innovation is needed, as it may be an unnecessarily strong requirement for quantum PCP to preserve the spectral gap. \subsection{Overview of Remaining Proofs} \paragraph{Proof sketch: Equivalence between coherent and incoherent gap-simulations for unique groundstates (Lemma~\ref{lem:equiv})}--- We want to show that incoherent gap-simulation implies coherent gap-simulation, in the case of a unique groundstate of the original Hamiltonian $H$. A naive approach using the small error per groundstate of the gap-simulator will not work due to possible degeneracy in the groundspace of the simulator $\tilde{H}$; this (possibly exponential) degeneracy could add an unwanted exponential factor. Hence, we explicitly construct the subspace on which the ancilla qubits should be projected by $P_\textnormal{anc}$. The main observation is that since faithful gap-simulation implies that any state in the groundspace of $\tilde{H}$ must be close to the space spanned by $P_\textnormal{anc}$, the dimensions of $P_\textnormal{anc}$ and the groundspace of $\tilde{H}$ must be the same. A sequence of simple arguments then allows us to derive a bound on the incoherence of any state (i.e., its norm after the incoherence operator in Eq.~\eqref{eq:incoherence} is applied). \vspace{-7pt} \paragraph{Proof sketch: DR of any classical Hamiltonian (Proposition~\ref{prop:classical-deg-reduct})}--- Here we follow the standard classical DR (as in~\cite{dinur}), in which each variable (of degree $d$) is replaced by $d$ variables, and a ring of equality constraints on these variables is added to ensure that they agree. The proof that this satisfies our gap-simulator definition is straightforward. \vspace{-7pt} \paragraph{Proof sketch: Coherent DR of any Hamiltonian with $\Omega(1/\poly(n))$ spectral gap using polynomial interaction strength (Theorem~\ref{thm:degree-reduction-poly})}--- The construction is based on mapping the quantum phase estimation (PE) circuit\cite{NielsenChuang} to a Hamiltonian, using a modified version of Kitaev's circuit-to-Hamiltonian construction\cite{KSV02}. The PE circuit can write the energy of any eigenstate of a given $H$ in an ancilla register, to inverse-polynomial precision using polynomial overhead. The degree of the Hamiltonian is reduced by ``sparsifying'' the circuit before converting to the Hamiltonian. To repair the incoherence due to different histories, we run the circuit backwards, removing entanglement between the ancilla and the original register. To achieve $\epsilon$-incoherence, we add $O(\poly(n)/\epsilon^2)$ identity gates to the end of the circuit. The eigenvalue structure of the original Hamiltonian $H$ is restored by imposing energy penalties on the energy bit-string written on the ancilla by the PE circuit. This yields a full-spectrum simulation of $H$, which also implies a gap-simulation of $H$.
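As a toy illustration of the circuit-to-Hamiltonian idea underlying this construction (the code below is our own minimal sketch; the actual construction in the proof additionally involves phase estimation, uncomputation, and energy penalties), the following numpy snippet builds Kitaev's propagation Hamiltonian for a two-gate circuit with a unary clock, and verifies that its zero-energy groundspace is spanned by history states:

\begin{verbatim}
import numpy as np
from functools import reduce

def kron(*ms):
    return reduce(np.kron, ms)

# Toy circuit on 2 qubits: U_1 = Hadamard on qubit 0, U_2 = CNOT(0->1).
Hd = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
U = [kron(Hd, np.eye(2)),
     np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])]
T, d = len(U), 4                       # T gates, 4-dim work register

def clock(t, s):                       # |t><s| on the (T+1)-dim clock
    m = np.zeros((T + 1, T + 1))
    m[t, s] = 1.0
    return m

# Kitaev's propagation Hamiltonian: a sum of PSD "check" terms
# penalizing any history that does not apply U_t at step t.
H_prop = sum(0.5 * kron(clock(t, t) + clock(t - 1, t - 1), np.eye(d))
             - 0.5 * (kron(clock(t, t - 1), U[t - 1])
                      + kron(clock(t - 1, t), U[t - 1].conj().T))
             for t in range(1, T + 1))

evals = np.linalg.eigvalsh(H_prop)
print(int(np.sum(np.isclose(evals, 0))))   # groundspace dim = d = 4

# A history state sum_t |t> (x) U_t...U_1 |x> has exactly zero energy:
states = [np.array([1., 0, 0, 0])]          # input |00>
for u in U:
    states.append(u @ states[-1])
psi = sum(kron(np.eye(T + 1)[t], s) for t, s in enumerate(states))
psi = psi / np.linalg.norm(psi)
print(np.isclose(psi @ H_prop @ psi, 0))    # True
\end{verbatim}

The groundspace contains one history state per input; the modified construction in the proof then imposes penalty terms on the phase-estimation output bits so as to single out, and gap-protect, the histories corresponding to low-energy eigenstates of $H$.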
\vspace{-7pt} \paragraph{Proof sketch: Impossibility of generic dilution algorithm (Theorem~\ref{thm:imposs-dilute})}--- Ref.~\cite{DellvanMelkebeek} shows that under the assumption $\textnormal{\texttt{coNP}} \not\subseteq \textnormal{\texttt{NP/poly}}$, there is no poly-time algorithm to ``compress'' vertex-cover problems on $n$-vertex $k$-uniform hypergraphs and decide the problem by communicating only $O(n^{k-\xi})$ bits, for any $\xi>0$, to a computationally unbounded oracle. Suppose towards a contradiction that $\mathcal{A}$ is a poly-time algorithm to dilute any $k$-local classical Hamiltonian; we use it to derive a compression algorithm for vertex cover. To this end, $\mathcal{A}$ is given a classical $k$-local Hamiltonian $H$ encoding a vertex cover problem; $\mathcal{A}$ produces the diluter $\tilde{H}$ with $O(n^{k-2\xi})$ terms and some encoding $V$ described by $O(n^{k-2\xi})$ bits. Using Green's function perturbation theory (Lemma \ref{lem:PPgroundspace}), we show that $\tilde{H}$ can be written using only $\log(n)$-bit precision as $\tilde{H}'$ with $O(1)$ error in the quasi-groundspace (even accounting for degeneracy). We then communicate $(\tilde{H}',V)$ to the oracle by sending $O(n^{k-2\xi}\log n)=O(n^{k-\xi})$ bits. The oracle then uses any groundstate of $\tilde{H}'$, which has large overlap with groundstates of $H$ for small $\delta$ and high precision, to decide the vertex cover problem and transmit back the answer. \vspace{-7pt} \paragraph{Proof sketch: Incoherent dilution and DR of $H_A$ (Proposition~\ref{prop:incoherent-tree})}--- We use here the usual translation of a classical circuit to a CSP: $n-1$ qubits in a tree structure (see Figure \ref{fig:tree}) are used to count the number of $1$s among the original qubits, and the CSP checks the correctness of this (local) computation. The ``history'' of the computation is written on the ancilla qubits, and since different strings have different such histories, the construction is incoherent (see Figure \ref{fig:treeincoherent}). \vspace{-7pt} \paragraph{Proof sketch: Coherent dilution and DR of $H_A$ with polynomial interaction strength (Proposition~\ref{prop:circuit})}--- We improve upon the construction in Prop.~\ref{prop:incoherent-tree} and Theorem~\ref{thm:degree-reduction-poly} to obtain a coherent diluter of $H_A$ with polynomial interaction strength. The key is an $O(n)$-length circuit, similar to the construction of Prop.~\ref{prop:incoherent-tree}, that counts the number of $1$s in the same tree geometry. Using the same tricks as in Theorem~\ref{thm:degree-reduction-poly}, uncomputing computational histories and idling at the end, we show that this leads to a coherent gap-simulator of $H_A$ with $\epsilon$-incoherence and $O(n/\epsilon^2)$ terms. \vspace{-7pt} \paragraph{Proof sketch: Coherent DR for any Hamiltonian using exponential interaction strength (Theorem~\ref{thm:degree-reduction-exp})}--- In order to provide generic coherent degree-reduction for any local Hamiltonian, using exponential interaction strength, we first show that perturbative gadgets\cite{KKR06,OliveiraTerhal,CaoImprovedOTGadget} can be used for gap-simulation. The proofs make use of Green's function machinery to bound incoherence. This allows us to construct a degree-reducer for any $k$-local Hamiltonian by a sequence of perturbative gadget applications.
In the first part of the sequence, we reduce the locality of individual Hamiltonian terms to 3-local via $O(\log k)$ serial applications of subdivision gadgets \cite{OliveiraTerhal}, and each 3-local term is further reduced to 2-local via ``3-to-2-local'' gadgets \cite{OliveiraTerhal}. Then, the original qubits are isolated from one another by subdivision gadgets, so that each interacts only with $O(n^{k-1})$ ancilla qubits that mediate interactions. Finally, applying fork gadgets \cite{OliveiraTerhal} in $O(\log n)$ iterations allows us to reduce the maximum degree of these original qubits to 6, generating our desired degree-reducer. It is this last part that causes the exponential blow-up in the interaction strength required to maintain the gap-simulation. \vspace{-7pt} \paragraph{Proof sketch: Generalized Hastings-Koma (Lemma~\ref{lem:HastingsKoma})} --- In Ref.~\cite{HastingsKoma}, Hastings and Koma proved the exponential decay of correlations in the quasi-groundspace of a Hamiltonian $H$ consisting of finite-range (or exponentially decaying) interactions between particles embedded on a lattice (or more generally on some graph). They assume that the system is spectrally gapped, and has vanishing energy spread as the system size $n\to\infty$. Their proof is based on the relationship between the correlation $\braket{\sigma^{(i)} \sigma^{(j)}}$ they want to upper bound, and the commutator $\braket{[e^{-iHt}\sigma^{(i)} e^{iHt}, \sigma^{(j)}]}$. By applying the Lieb-Robinson bound\cite{LiebRobinson} to the latter, and integrating out the time $t$, they show that under the above conditions, the correlations between operators acting on particles $i$ and $j$ decay exponentially with the graph-theoretic distance between the particles. For application to the gap-simulation framework, we need to generalize their result to cases where the energy spread is not assumed to vanish with the system size. This is done by a careful modification of their proofs, in which we optimize the bounds and integration parameters so that errors due to the non-zero energy spread are suppressed. \vspace{-5pt} \section{Discussion and outlook\label{sec:discussion}} We have initiated the rigorous study of the resources required for analog simulation of Hamiltonians, and proved unexpected impossibility results for Hamiltonian sparsification. Instead of working with full-spectrum simulations~\cite{BravyiHastingsSim,UniversalHamiltonian}, we use a new, relaxed definition of gap-simulation that is motivated by minimal requirements in physics. We note that impossibility results proven in a relaxed framework are of course stronger. It would be very interesting to improve our understanding of the new framework of gap-simulations presented here, and clarify its applicability. For a start, it will be illuminating to find example applications of gap-simulations in cases where full-spectrum simulations as in Ref.~\cite{BravyiHastingsSim,UniversalHamiltonian} are unknown or difficult to achieve. Such simulations can enable experimental studies of these physical systems, by reducing the resources required for analog simulations. Moreover, in many-body quantum physics, tools to construct ``equivalent'' Hamiltonians that preserve groundstate properties are of great utility. In this context, the study of gap-simulations can potentially lead to a better understanding of universal behaviors in quantum phases of matter, which are characterized only by groundstate physics~\cite{SachdevQPT}.
Another possible application of gap-simulators may be in the design of Hamiltonian-based quantum algorithms. In adiabatic algorithms~\cite{FarhiAdiabatic2000}, it is well known that the higher parts of the spectrum of the final and initial Hamiltonians can significantly affect the adiabatic gap~\cite{FarhiQAAFail,DicksonAmin,AharonovAtia}; gap-simulating these final and initial Hamiltonians by others will not affect the final groundstate, and can sometimes dramatically improve the gap along the adiabatic path. Gap-simulations may also be a useful tool for tailoring the Hamiltonians used in other Hamiltonian-based algorithms such as QAOA~\cite{QAOA}. We note that incoherent but faithful gap-simulations can be very interesting despite the apparent violation of the quantum requirement for coherence. For example, in adiabatic algorithms~\cite{FarhiAdiabatic2000}, we only want to arrive at one of the solutions (groundstates) to a quantum constraint satisfaction problem. In addition, in quantum NP~\cite{quantumNPsurvey}, one is interested only in whether a certain eigenvalue {\it exists}, and not in the preservation of the entire groundspace. However, in the context of quantum simulation and many-body physics, maintaining coherence seems to be crucial for preserving all the physical properties of the groundspace. One would also expect maintaining coherence to be important when gap-simulating a subsystem (perhaps in an unknown state) of a larger system. We remark that our framework deliberately avoids requiring that the eigenvalue structure of the spectrum be maintained even in its low-lying part, so as to provide a minimal but still interesting definition. Indeed, when simulating the groundspace, or a quasi-groundspace with small energy spread, this structure is not important. Nevertheless, one can imagine an intermediate definition, in which full-spectrum simulation is too strong, but the structure of a significant portion of the lower part of the spectrum matters. It might be interesting to extend the framework of gap-simulations to allow for such intermediate cases in which, for example, Gibbs states at low (but not extremely low) temperatures are faithfully simulated. A plethora of open questions arise in the context of sparsification. First, it will be very interesting to find more examples where degree-reduction and/or dilution are possible, or are helpful from the perspective of physical implementations. Assuming bounded interaction strength, which is generally a limitation of physical systems, can we rigorously characterize which Hamiltonians can be coherently (or incoherently) degree-reduced? Of course, similar questions can be asked about dilution. It will also be interesting to consider saving other resources such as the dimensionality of the particles, which would be a generalization of alphabet-reductions from the context of PCP to Hamiltonian sparsification. Our results on the impossibility of dilution are weaker than those for DR. Can we strengthen these to stronger information-theoretic results, by finding a quantum Hamiltonian for which a diluter does not {\it exist} with constant incoherence, or even constant unfaithfulness? We mention here that the classical graph sparsification results of Ref.~\cite{BSS12,BSST13} can be viewed as dilution of a graph while approximately maintaining its spectrum.
These results have been generalized to the matrix setting in Ref.~\cite{SilvaSparsification}; however, this generalization does not seem to be useful in the context of diluting the interaction graph of a local Hamiltonian. The result of Ref.~\cite{SilvaSparsification} shows that for sums of $d\times d$ positive Hermitian matrices, $O(d)$ matrices are sufficient to reproduce the spectral properties to good approximation, improving over Chernoff-like bounds~\cite{AhlswedeWinter}. While this in principle allows one to approximate a sum of terms by a sum of fewer terms, the required number of terms grows as $d=2^{\Omega(n)}$ for quantum Hamiltonians on $n$ qubits, and is thus irrelevant in our context. Improving the {\it geometry} of the simulators is another important task that is relevant for applications of Hamiltonian sparsification to physical implementations. Ref.~\cite{LechnerHaukeZoller} has devised a method of converting the NP-complete Ising model Hamiltonian ($H=\sum_{ij} J_{ij} \sigma_z^{(i)}\sigma_z^{(j)} + \sum_i h_i \sigma_z^{(i)}$) on $n$ qubits to a new Hamiltonian on $O(n^2)$ qubits with interactions embedded on a 2D lattice, and sharing the same low-energy spectrum. Their construction encodes each edge $\sigma_z^{(i)}\sigma_z^{(j)}$ as a new qubit, and corresponds to an incoherent degree-reducer, where the new groundstates are non-locally encoded versions of the original states. Our Proposition~\ref{prop:classical-deg-reduct} also provides incoherent DR of these Hamiltonians, and without encoding, but the geometry is not 2D; it will be interesting to improve our Proposition~\ref{prop:classical-deg-reduct} as well as our other positive Theorems \ref{thm:degree-reduction-poly} and \ref{thm:degree-reduction-exp} to hold using a spatially local $\tilde{H}$. We note that if we allow the overhead of polynomial interaction strength, then it should be straightforward to extend the circuit-to-Hamiltonian construction in Theorem~\ref{thm:degree-reduction-poly} for analog simulation of local Hamiltonians on a 2D lattice, by ordering the gates in a snake-like fashion on the lattice, similarly to Ref.~\cite{OliveiraTerhal, AharonovAQCUniversal}. Identifying situations where DR in 2D with bounded interaction strength is possible remains an open question. A different take on the geometry question is to seek gap-simulators which use a single ancilla qubit (or a few) that strongly interacts with the rest. This may be relevant for physical systems such as neutral atoms with Rydberg blockade~\cite{RydbergBlockade}, where an atom in a highly excited level may have a much larger interaction radius, while no two atoms can be excited in each other's vicinity. Can we improve our results about quantum PCP, and show impossibility of qPCP-DR with constant incoherence? This would make our impossibility results interesting also in the qPCP context, as they would imply impossibility of DR in the qPCP regime of constant error, under a rather natural restriction on the qPCP reduction (see discussion in Appendix~\ref{sec:qPCP}). This would complement existing impossibility results on various avenues towards qPCP~\cite{BravyiVyalyi, AharonovEldar2011, qPCPArad, BrandaoHarrow, qPCPHastings, AharonovEldar2013,qPCPsurvey}. Nevertheless, it seems that proving such a result might require a significantly further extension of Hastings-Koma beyond our Lemma~\ref{lem:HastingsKoma}, which may be of interest on its own.
Finally, we mention a possibly interesting variant of gap-simulation, which we call {\it weak} gap-simulation (see Appendix~\ref{sec:weak-sparsifier} and Fig.~\ref{fig:weakgapsimul}). Here, the groundspace is simulated in an {\it excited} eigenspace of the simulating Hamiltonian, spectrally gapped from above and below, rather than in its groundspace. This can be useful in the context of Floquet Hamiltonian engineering, where eigenvalues are meaningful only up to a period, and thus a spectral gap in the middle of the spectrum is analogous to a spectral gap above the groundspace~\cite{floquet}. Proposition~\ref{prop:star} in Appendix~\ref{sec:weak-sparsifier} shows how to weakly gap-simulate $H_A$ to provide dilution with {\it constant} incoherence and {\it bounded} interaction strength -- a task which we currently do not know how to do using ``standard'' gap-simulation. It remains open whether one can show stronger possibility results under weak gap-simulation. If not, can the impossibility results presented here be extended to the weak-gap-simulation setting? This might require an even stronger extension of Hastings-Koma's theorem. Overall, we hope that the framework, tools, and results presented here will lead to progress in understanding the possibilities and limitations in simulating Hamiltonians by other Hamiltonians -- an idea that brings the notion of {\it reduction} from classical computer science into the quantum realm, and constitutes one of the most important contributions of the field of quantum computational complexity to physics. \section{Acknowledgements} We are grateful to Oded Kenneth for suggesting the construction of $H_B$, and for fruitful discussions; to Itai Arad for insightful remarks about the connection to quantum PCP; to Eli Ben-Sasson for discussions about PCP; to Ashley Montanaro and Toby Cubitt for clarifications about Hamiltonian simulation. D.A. is grateful for the generous support of ERC grant 280157 during the completion of most of this project. L.Z. thanks the same ERC grant for financing his visits to the research group of D.A. D.A. also thanks the ISF grant No.~039-9494 for supporting this work at its final stages. \newpage \begin{appendices} \section{Properties of Gap-Simulation\label{sec:gap-simulation-properties}} \subsection{Relationship between Coherent and Incoherent Gap-Simulation\label{sec:uniqueGS}} Here we show the relationship between coherent and incoherent gap-simulations. We first prove the easy direction: incoherence provides an upper bound on unfaithfulness. Let $\delta$ be the exact value of the unfaithfulness (and not just an upper bound on it); then \begin{eqnarray} \delta &=& \|\tilde{P}-V(P\otimes \mathds{1}_\textnormal{anc} )V^\dag \tilde{P}\| \nonumber\\ &=& \|\tilde{P}-V(P\otimes \mathds{1}_\textnormal{anc} )V^\dag V(P\otimes P_\textnormal{anc} )V^\dag - V(P\otimes \mathds{1}_\textnormal{anc} )V^\dag[\tilde{P}-V(P\otimes P_\textnormal{anc} )V^\dag]\| \nonumber \\ &=& \|\tilde{P}-V(P\otimes P_\textnormal{anc} )V^\dag - V(P\otimes \mathds{1}_\textnormal{anc} )V^\dag[\tilde{P}-V(P\otimes P_\textnormal{anc} )V^\dag]\| \nonumber \\ &\le& \|\tilde{P}-V(P\otimes P_\textnormal{anc} )V^\dag\| + \|V(P\otimes \mathds{1}_\textnormal{anc} )V^\dag[\tilde{P}-V(P\otimes P_\textnormal{anc} )V^\dag]\| \nonumber \\ &\le& 2\epsilon. \end{eqnarray} The above uses the fact that $V^\dag V=\mathds{1}$ for isometries.
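As a side remark, this bound is easy to test numerically. The following minimal Python sketch (not part of the formal argument; it assumes only \texttt{numpy} and \texttt{scipy}, and all dimensions and names in it are illustrative) draws a random isometry $V$, forms $Q=V(P\otimes P_\textnormal{anc})V^\dag$, rotates it slightly to obtain a nearby simulator groundspace projector $\tilde{P}$, and checks $\delta\le 2\epsilon$:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
ds, da, dt = 4, 3, 20                    # system, ancilla, simulator dims

# random isometry V : C^{ds*da} -> C^{dt}, via QR of a complex Gaussian matrix
A = rng.normal(size=(dt, ds*da)) + 1j*rng.normal(size=(dt, ds*da))
V, _ = np.linalg.qr(A)                   # V.conj().T @ V = identity

P    = np.diag([1, 1, 0, 0]).astype(complex)   # rank-2 system projector
Panc = np.diag([1, 0, 0]).astype(complex)      # rank-1 ancilla projector
Q  = V @ np.kron(P, Panc)       @ V.conj().T   # V (P x P_anc) V^dag
PI = V @ np.kron(P, np.eye(da)) @ V.conj().T   # V (P x 1_anc) V^dag

# a nearby projector P~: conjugate Q by a small random unitary
K = rng.normal(size=(dt, dt)) + 1j*rng.normal(size=(dt, dt))
U = expm(1j * 0.05 * (K + K.conj().T))
Pt = U @ Q @ U.conj().T

eps   = np.linalg.norm(Pt - Q, 2)        # incoherence (spectral norm)
delta = np.linalg.norm(Pt - PI @ Pt, 2)  # unfaithfulness
print(delta, 2*eps, delta <= 2*eps)
\end{verbatim}

The check succeeds for any projector $\tilde{P}$, as guaranteed by the derivation above.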
When $V$ is unitary ($VV^\dag=V^\dag V=\mathds{1}$), we can obtain an even better bound of $\delta\le\epsilon$: \begin{eqnarray} \delta &=& \|\tilde{P}-V(P\otimes \mathds{1}_\textnormal{anc} )V^\dag \tilde{P}\| = \|VV^\dag\tilde{P}-V(P\otimes \mathds{1}_\textnormal{anc} )V^\dag \tilde{P}\| = \|V(P^\perp \otimes \mathds{1}_\textnormal{anc}) V^\dag \tilde{P} \|\nonumber \\ &=& \| V(P^\perp \otimes \mathds{1}_\textnormal{anc})V^\dag [\tilde{P} - V(P\otimes P_\textnormal{anc}) V^\dag ]\| \le \|\tilde{P} - V(P\otimes P_\textnormal{anc}) V^\dag \|\nonumber \\ &\le& \epsilon. \end{eqnarray} We now prove Lemma~\ref{lem:equiv}, which shows that in the case of a unique groundstate, the opposite direction holds as well. We reproduce this lemma below: { \renewcommand{\thelemma}{\ref{lem:equiv}} \begin{lemma}[Equivalence of coherent and incoherent gap-simulation when groundstate is unique] Suppose the Hamiltonian $H$ has a unique groundstate, i.e. its groundspace projector $P=\ketbra{g}$. If $\tilde{H}$ gap-simulates $H$ with unfaithfulness $\delta <1$, then it also gap-simulates $H$ with incoherence $\epsilon \le \sqrt{2}\delta/\sqrt{1-\delta^2}$. \end{lemma} \addtocounter{lemma}{-1} } It turns out that it will be helpful to first prove the following technical lemma (which will also be used in Appendix~\ref{sec:degree-reduction-poly} to prove Theorem~\ref{thm:degree-reduction-poly}). \begin{lemma}[Projector Difference Lemma] \label{lem:proj-diff} Consider two Hermitian projectors $\Pi_A$ and $\Pi_B$, such that $\rank(\Pi_A)\le \rank(\Pi_B)$. Suppose that for all normalized $\ket{\phi}\in \Pi_B$, $\|(\Pi_B-\Pi_A)\ket{\phi}\| \le \delta$. Then $\|\Pi_B-\Pi_A\| \le \sqrt{2}\delta/\sqrt{1-\delta^2}$. \end{lemma} \begin{proof} Let us denote $\rank(\Pi_A) =k$ and $\rank(\Pi_B)=\ell$. First, we observe that for all normalized $\ket{\phi}\in \Pi_B$, $\ket{\phi} = \Pi_A\ket{\phi} + \Pi_A^\perp\ket{\phi}$, and thus \begin{equation} \|\Pi_A\ket{\phi}\|^2 = 1-\|\Pi_A^\perp\ket{\phi}\|^2 = 1-\|(\Pi_B-\Pi_A)\ket{\phi}\|^2 \ge 1-\delta^2. \label{eq:bound-BinA} \end{equation} Now consider any normalized $\ket{\phi^\perp} \perp \Pi_B$. We want to bound $\|(\Pi_B-\Pi_A)\ket{\phi^\perp}\|$. To this end, consider the space $\mathcal{V}=\spn\{\Pi_B, \ket{\phi^\perp}\}$, which has dimension $\dim(\mathcal{V})=\ell+1$. We first argue that there exists $\ket{v}\in \mathcal{V}$ such that $\ket{v}\perp \Pi_A$. To see this, pick an orthonormal basis for $\Pi_A$: $\{\ket{\beta_1},\ldots,\ket{\beta_k}\}$. We then write $\ket{\beta_i} = \ket{\beta_i^\mathcal{V}} + \ket{\beta_i^\perp}$, with $\ket{\beta^\mathcal{V}_i}\in \mathcal{V}$ and $\ket{\beta^\perp_i}\perp \mathcal{V}$. Let $W\subseteq \mathcal{V}$ be the subspace of $\mathcal{V}$ spanned by the $\ket{\beta^\mathcal{V}_i}$. Note $\dim(W)\le k < \ell+ 1= \dim (\mathcal{V})$. Hence, there exists a unit vector $\ket{v}\in \mathcal{V}$, $\ket{v}\perp W$, which means that $\ket{v} \perp \ket{\beta^\mathcal{V}_i}$ for all $1\le i\le k$. But since $\ket{v}\in \mathcal{V}$, $\ket{v}$ is also orthogonal to all $\ket{\beta^\perp_i}\perp \mathcal{V}$. Therefore, $\ket{v}$ is orthogonal to all $\{\ket{\beta_i}\}_{i=1}^k$, and so $\ket{v}\perp \Pi_A$. Write $\ket{v}=a\ket{\phi}+b\ket{\phi^\perp}$ for some complex numbers $a,b$ and unit vector $\ket{\phi}\in \Pi_B$. Recall that $\ket{\phi^\perp}\perp\Pi_B$. Since $\ket{v}\perp \Pi_A$, we have $a\Pi_A\ket{\phi}=-b\Pi_A \ket{\phi^\perp}$.
Hence there exists a unit vector $\ket{\gamma}\in \Pi_A$ and complex numbers $x,y$ such that \begin{align} \Pi_A\ket{\phi} &= y\ket{\gamma}, \\ \Pi_A \ket{\phi^\perp}&= x\ket{\gamma},\\ \text{and} \quad ay&=-bx. \end{align} We can thus write: \begin{align} \ket{\phi} &= y\ket{\gamma}+\ket{\gamma'}, \quad \ket{\gamma'}\perp \Pi_A \\ \ket{\phi^\perp}&= x\ket{\gamma}+\ket{\gamma''}, \quad \ket{\gamma''}\perp \Pi_A \end{align} From $\ket{\phi}\in \Pi_B$ we have $\ket{\phi} \perp \ket{\phi^\perp}$ which implies \begin{equation} \label{eq:bound-xy} y^*x+\langle\gamma'|\gamma''\rangle=0. \end{equation} By Eq.~\eqref{eq:bound-BinA}, we have $|y|=\|\Pi_A\ket{\phi}\|\ge \sqrt{1-\delta^2}$ and therefore \begin{equation} \|\ket{\gamma'}\|=\sqrt{1-|y|^2}\le \delta. \end{equation} Due to Eq.~\eqref{eq:bound-xy}, and since $\|\ket{\gamma'}\|\le\delta$ and $\|\ket{\gamma''}\|\le 1$, we have $|x||y|\le \delta$. Since $|y|\ge \sqrt{1-\delta^2}$, we have for any $\ket{\phi^\perp}\perp \Pi_B$, \begin{equation} \|(\Pi_B-\Pi_A)\ket{\phi^\perp}\| = \|\Pi_A\ket{\phi^\perp}\| = |x|\le \delta/\sqrt{1-\delta^2}. \label{eq:bound-Bout} \end{equation} Given any unit vector $\ket{\psi}$ in the Hilbert space, write $\ket{\psi}=\sqrt{1-z}\ket{\phi}+\sqrt{z}\ket{\phi^\perp}$ for some unit vectors $\ket{\phi}\in \Pi_B$ and $\ket{\phi^\perp} \perp \Pi_B$, along with some number $z$ where $0\le z\le 1$. Using the triangle inequality, it follows from Eq.~\eqref{eq:bound-Bout} and the given fact of $\|(\Pi_B-\Pi_A)\ket{\phi}\|\le \delta$ that \begin{eqnarray} \|(\Pi_B-\Pi_A)\ket{\psi}\| &\le& \sqrt{1-z} \delta + \sqrt{z}\frac{\delta}{\sqrt{1-\delta^2}} \le \sqrt{\delta^2+\frac{\delta^2}{1-\delta^2}} \nonumber \\ &\le& \sqrt{2}\delta/\sqrt{1-\delta^2} \end{eqnarray} where in the last step of the first line we used the fact that $\sqrt{1-z}a+\sqrt{z}b\le\sqrt{a^2+b^2}$ for any real numbers $a,b$ and $0\le z\le 1$. \end{proof} \begin{proof}[\textbf{Proof of Lemma~\ref{lem:equiv}}] We know that $\ket{g}$ is the unique groundstate of $H$. We can always represent in a unique way any state in the Hilbert space of $\tilde{H}$ by what we call the {\it $g$-representation}: \begin{equation} \ket{\alpha}=V(\ket{g}\ket{\alpha_g})+\ket{\alpha^\perp}, \end{equation} such that the reduced density matrix of $V^\dag \ket{\alpha^\perp}$ on the left register has zero support on $\ket{g}$. We call $\ket{\alpha_g}$ the $g$-vector of $\ket{\alpha}$. Keep in mind that the two components are orthogonal since $\big(\bra{g}\bra{\alpha_g}\big)V^\dag \ket{\alpha^\perp} = 0$. We now construct the projector $P_\textnormal{anc}$ and show that it satisfies the requirement of the Lemma. We define: \begin{defn} $P_\textnormal{anc}$ is defined to be the projector onto the span of all $g$-vectors $\ket{\alpha_g}$ of all vectors $\ket{\alpha}\in \tilde{P}$. \end{defn} Consider the space onto which $\Pi_A = V(P\otimes P_\textnormal{anc})V^\dag = V(\ketbra{g}\otimes P_\textnormal{anc})V^\dag $ projects. We first show that $\rank(\Pi_A) \le \rank(\tilde{P})$. Let $\ell=\rank(\tilde{P})$ and $k=\rank(\Pi_A)$. Let $\ket{\alpha_1},...,\ket{\alpha_\ell}$ be an orthonormal basis for $\tilde{P}$, and let $\ket{\alpha_g^i}$ be their $g$-vectors, respectively. Note that $\{\ket{\alpha_g^i}\}_{i=1}^\ell$ span $P_\textnormal{anc}$. This follows from the definition of $P_\textnormal{anc}$ as the span of the $g$-vectors for all $\ket{\alpha}\in \tilde{P}$, since $\ket{\alpha}$ can be written as a linear combination of $\ket{\alpha_i}$, which implies that its $g$-vector is also a linear combination of their $g$-vectors.
Hence, $k=\rank(\Pi_A)=\rank(P_\textnormal{anc})\le \ell$. Consider any unit vector $\ket{\alpha}\in \tilde{P}$. We have $\tilde{P}\ket{\alpha}=\ket{\alpha}$, $V (P\otimes \mathds{1}_\textnormal{anc}) V^\dag \tilde{P}\ket{\alpha}= V \ket{g}\ket{\alpha_g}$. By the unfaithfulness condition we have $\|\ket{\alpha}- V \ket{g}\ket{\alpha_g}\| = \|\ket{\alpha^\perp}\| \le \delta$. Since $\Pi_A\ket{\alpha}= V \ket{g}\ket{\alpha_g}$, we also have $\|(\tilde{P} - \Pi_A)\ket{\alpha}\| = \|\ket{\alpha^\perp}\| \le \delta$. Hence, we can apply Lemma~\ref{lem:proj-diff} by identifying $\Pi_B \equiv \tilde{P}$, and obtain the desired bound: \begin{equation} \|\tilde{P}-V(P\otimes P_\textnormal{anc})V^\dag\| = \|\tilde{P} - \Pi_A\| \le \sqrt{2}\delta/\sqrt{1-\delta^2}. \end{equation} \end{proof} \subsection{Composition of Gap-Simulations} We now prove Lemma~\ref{lem:composition}, which demonstrates that composition of gap-simulations behaves as expected: { \renewcommand{\thelemma}{\ref{lem:composition}} \begin{lemma}[Composition] Suppose $H_1$ (incoherently) gap-simulates $(H_0,P_0)$ with encoding $V_1$, incoherence $\epsilon_1$ (or unfaithfulness $\delta_1$), energy spread $\tilde{w}_1$, and a corresponding quasi-groundspace projector $P_1$. Also suppose $H_2$ (incoherently) gap-simulates $(H_1, P_1)$ with encoding $V_2$, incoherence $\epsilon_2$ (or unfaithfulness $\delta_2$), and energy spread $\tilde{w}_2$. Then $H_2$ (incoherently) gap-simulates $(H_0,P_0)$ with encoding $V_2 (V_1\otimes \mathds{1}_{\textnormal{anc},1})$, incoherence $\le\epsilon_2+\epsilon_1$ (or unfaithfulness $\le 2\delta_2+\delta_1$), and energy spread $\tilde{w}_2$. \end{lemma} \addtocounter{lemma}{-1} } \begin{proof} Below, we denote $P_2$ as the corresponding quasi-groundspace projector of $H_2$. Let us first prove the case of coherent gap-simulation. Note that, by definition, the quasi-spectral gap of $H_2$ is the same as that of $H_1$, which in turn is the same as that of $H_0$. As the energy spread of $H_2$ is already given as $\tilde{w}_2$, condition 1 of Def.~\ref{defn:hamsimul} is satisfied. Let us denote $P_{\textnormal{anc},i}$ as the ancilla projector for gap-simulating $(H_i,P_i)$, $i\in\{0,1\}$. It remains to verify condition 2, bounded incoherence, i.e. Eq.~\eqref{eq:incoherence}. Let us denote $\mathcal{E}_1 = P_1-V_1(P_0\otimes P_{\textnormal{anc},0})V_1^\dag$, which satisfies $ \|\mathcal{E}_1\| \le \epsilon_1$. Then \begin{eqnarray} &&P_2-V_2(P_1\otimes P_{\textnormal{anc},1})V_2^\dag \nonumber \\ &=& P_2 - V_2 \left[V_1(P_0\otimes P_{\textnormal{anc},0})V_1^\dag \otimes P_{\textnormal{anc},1}\right]V_2^\dag- V_2\left[\big(P_1-V_1(P_0\otimes P_{\textnormal{anc},0})V_1^\dag\big) \otimes P_{\textnormal{anc},1}\right]V_2^\dag \nonumber \\ &=& P_2 - V_2(V_1\otimes \mathds{1}_{\textnormal{anc},1})(P_0\otimes P_{\textnormal{anc},0}\otimes P_{\textnormal{anc},1}) (V_1\otimes \mathds{1}_{\textnormal{anc},1})^\dag V_2^\dag - V_2(\mathcal{E}_1\otimes P_{\textnormal{anc},1})V_2^\dag \end{eqnarray} By defining an isometry $V_{21}\equiv V_2 (V_1\otimes \mathds{1}_{\textnormal{anc},1})$ as in the statement of the Lemma, we see that \begin{align} P_2 - V_{21}(P_0\otimes P_{\textnormal{anc},0}\otimes P_{\textnormal{anc},1})V_{21}^\dag &= P_2-V_2(P_1\otimes P_{\textnormal{anc},1})V_2^\dag + V_2(\mathcal{E}_1\otimes P_{\textnormal{anc},1}) V_2^\dag \nonumber \\ \|P_2 - V_{21}(P_0\otimes P_{\textnormal{anc},0}\otimes P_{\textnormal{anc},1})V_{21}^\dag \| & \le \|P_2-V_2(P_1\otimes P_{\textnormal{anc},1})V_2^\dag\| + \|\mathcal{E}_1\| \nonumber \\ &\le \epsilon_2 + \epsilon_1.
\end{align} Now let us consider the case of incoherent gap-simulation with bounded unfaithfulness. Again, $H_2$ satisfies condition 1 of Def.~\ref{defn:hamsimul-incoherent} for incoherently gap-simulating $(H_0,P_0)$ as given, with energy spread $\tilde{w}_2$. It remains to verify condition 2, bounded unfaithfulness, i.e. Eq.~\eqref{eq:unfaithfulness}. Let us denote $P_0' = V_1(P_0\otimes \mathds{1}_{\textnormal{anc},0} )V_1^\dag$, and $P_1' = V_2(P_1\otimes\mathds{1}_{\textnormal{anc},1})V_2^\dag$. By assumption, $\|P_1-P_0' P_1\|\le \delta_1$ and $\|P_2-P_1' P_2\|\le \delta_2$. In the following, we will omit the $\mathds{1}_{\textnormal{anc},i}$ for readability. Observe \begin{eqnarray} && P_2 - V_2V_1 P_0 V_1^\dag V_2^\dag P_2 = P_2 - V_2 P_0' V_2^\dag P_2 \nonumber \\ &=& P_2-P_1'P_2 + P_1'P_2 - V_2 P_0'V_2^\dag P_1' P_2 + V_2 P_0'V_2^\dag P_1' P_2 - V_2P_0'V_2^\dag P_2 \nonumber \\ &=& (P_2-P_1'P_2) + (P_1'- V_2 P_0'V_2^\dag P_1')P_2 + V_2 P_0'V_2^\dag ( P_1' P_2 -P_2) \nonumber \\ &=& (P_2-P_1'P_2) + (V_2 P_1V_2^\dag - V_2 P_0'V_2^\dag V_2 P_1 V_2^\dag)P_2 + V_2 P_0'V_2^\dag ( P_1' P_2 -P_2) \nonumber \\ &=& (P_2-P_1'P_2) + V_2 (P_1 - P_0' P_1) V_2^\dag P_2 + V_2 P_0'V_2^\dag ( P_1' P_2 -P_2) . \end{eqnarray} Hence, by identifying $V_{21} = V_2V_1$ and taking norms with the triangle inequality, we obtain \begin{equation} \|P_2 - V_{21} P_0 V_{21}^\dag P_2\| \le \|P_2-P_1'P_2\| + \|(P_1 - P_0' P_1)\| + \|P_1' P_2 -P_2\| \le 2\delta_2+\delta_1, \end{equation} as stated. \end{proof} \subsection{Comparison of Gap-Simulation to Full-Spectrum Simulation \label{sec:comp-defns}} Generally, analog Hamiltonian simulators are designed to reproduce the spectral properties (both eigenvalues and eigenvectors) of a given Hamiltonian. In Ref.~\cite{BravyiHastingsSim}, Bravyi and Hastings introduced a definition that quantifies how well Hamiltonian $\tilde{H}$ simulates a given Hamiltonian $H$, while allowing some encoding by a ``sufficiently simple'' isometry $V$, which can be summarized roughly as $\|H - V^\dag \tilde{H} V\|\le \xi$. Ref.~\cite{UniversalHamiltonian} refines this definition by allowing for the more general case of simulating complex Hamiltonians by real ones, but imposes a more explicit constraint that the isometries be local, i.e. $V=\bigotimes_i V_i$. We reproduce that definition below: \begin{defn}[Full-spectrum simulation, adapted from Def.~1 of \cite{UniversalHamiltonian}] \label{defn:CMPsimul} A many-body Hamiltonian $\tilde{H}$ \emph{full-spectrum-simulates} a Hamiltonian $H$ to precision $(\eta,\xi)$ below an energy cut-off $\Delta$ if there exists a local encoding $\mathcal{E}(H)=V(H\otimes P + \bar{H}\otimes Q)V^\dag$, where $V=\bigotimes_i V_i$ for some isometries $V_i$ acting on 0 or 1 qubits of the original system each, and $P$ and $Q$ are locally orthogonal projectors, such that \begin{enumerate} \item There exists an encoding $\tilde{\mathcal{E}}(H) = \tilde{V}(H\otimes P + \bar{H}\otimes Q)\tilde{V}^\dag$ such that $\|\tilde{V}-V\|\le \eta$ and $\tilde{\mathcal{E}}(\mathds{1}) = P_{\le\Delta(\tilde{H})}$, where $P_{\le\Delta(\tilde{H})}$ is the projector onto eigenstates of $\tilde{H}$ with eigenvalue $\le \Delta$. \item $\|\tilde{H}P_{\le\Delta(\tilde{H})} - \tilde{\mathcal{E}}(H)\| \le \xi$. \end{enumerate} \end{defn} The condition of local orthogonality of $P$ and $Q$ means that there exist orthogonal projectors $P_{i}$ and $Q_i$ acting on the same qubits as $V_i$, such that $P_i P=P$ and $Q_i Q = Q$.
The appearance of $\bar{H}$, which is the complex-conjugate of $H$, is necessary to allow for encoding of complex Hamiltonians into real ones. Note that for any real-valued Hamiltonian $H$, we can simply write $\mathcal{E}(H) = V(H\otimes P_\textnormal{anc}) V^\dag$, where $P_\textnormal{anc} = P+Q$ is a projector since $P$ and $Q$ are orthogonal. Note the definition of Ref.~\cite{BravyiHastingsSim} can be considered as a special case of the one above by setting $P=\mathds{1}$ and $Q=0$, while allowing a more general isometry $V$ for encoding. Hence, we focus our comparison on the above Definition~\ref{defn:CMPsimul} from Ref.~\cite{UniversalHamiltonian}. We also note that our restriction to localized encodings per Definition~\ref{defn:localized-encoding} is somewhat different from the notion of ``local encoding'' $V=\bigotimes_i V_i$ in Ref.~\cite{UniversalHamiltonian}. For example, a constant-depth circuit qualifies as a localized encoding but not a ``local encoding'', due to the possibility of overlaps between supports of encoded qubits (and hence cannot be written in tensor-product form). On the other hand, Ref.~\cite{UniversalHamiltonian} does not appear to place any explicit restriction on the size of the support of each encoded qubit, other than the fact that each qubit is encoded independently. Now, we show that full-spectrum simulation by Def.~\ref{defn:CMPsimul} with an encoding of the form $\mathcal{E}(H)=V(H\otimes P_\textnormal{anc})V^\dag$ and sufficiently small precision ($\xi\ll(1-w)\gamma$) implies a coherent gap-simulation by our Def.~\ref{defn:hamsimul}. The restriction of the encoding format simplifies the comparison, and incurs no loss of generality when considering real-valued Hamiltonians. \begin{lemma}[Full-spectrum simulation implies coherent gap-simulation] \label{lem:relate-CMP} Let $H$ be a Hamiltonian that has a quasi-groundspace projector $P$ with quasi-spectral gap $\gamma$ and energy spread $w \le 1/2$. Suppose $\tilde{H}$ full-spectrum-simulates $H$ to precision $(\eta,\xi)$ according to Def.~\ref{defn:CMPsimul} with encoding $\mathcal{E}(H)=V(H\otimes P_\textnormal{anc})V^\dag$, such that $\xi \le (1-w)\gamma/8$. Then $\tilde{H}'= \frac{4}{3} \tilde{H}$ gap-simulates $(H,P)$ with encoding $V$, incoherence $\epsilon\le 32\xi/\gamma + 2\eta$, and energy spread $\tilde{w}\le (w + 2\xi/\gamma)/(1-2\xi/\gamma)$, per our Def.~\ref{defn:hamsimul}. \end{lemma} To show this, we first need to state a Lemma that bounds the error in the groundspace due to perturbations: \begin{lemma}[Error bound on perturbed groundspace] \label{lem:PPgroundspace} Let $\tilde{H}$ and $\tilde{H}'$ be two Hamiltonians. Per Def.~\ref{defn:gap}, let $\tilde{P}$ project onto a quasi-groundspace of $\tilde{H}$ with energy spread $\tilde{w}$ and quasi-spectral gap $\gamma$. Assume $\tilde{w}\le 1/2$ and $\|\tilde{H}' - \tilde{H}\| \le \kappa$, where $\kappa \le (1-\tilde{w})\gamma/8$. Then there is a quasi-groundspace projector $\tilde{P}'$ of $\tilde{H}'$ with quasi-spectral gap at least $\gamma'$, comprised of eigenstates of $\tilde{H}'$ up to energy at most $\lambda_1(\tilde{H}') + \tilde{w}'\gamma'$, where \begin{equation} \gamma' > \gamma-2\kappa, \quad \tilde{w}'\gamma' \le \tilde{w}\gamma + 2\kappa, \quad \text{and} \quad \|\tilde{P}'-\tilde{P}\| < \frac{32\kappa}{\gamma}. \end{equation} \end{lemma} While this may be simple to understand in the case of unique groundstates (see e.g. Lemma 2 of Ref.~\cite{BravyiHastingsSim}), it is not obvious when there are degenerate groundstates.
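Before proceeding, we note that the bound of Lemma~\ref{lem:PPgroundspace} is straightforward to test numerically, including in the degenerate case. The following minimal Python sketch (assuming \texttt{numpy}; purely illustrative and not used in any proof) builds a random Hamiltonian with a $3$-fold degenerate groundspace and gap $\gamma=1$, perturbs it by $\kappa\le\gamma/8$ in spectral norm, and compares $\|\tilde{P}'-\tilde{P}\|$ against $32\kappa/\gamma$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, k, gamma = 12, 3, 1.0        # dimension, groundspace degeneracy, gap

# gapped Hamiltonian: k-fold degenerate groundspace at energy 0, gap gamma
evals = np.concatenate([np.zeros(k), gamma + rng.uniform(0, 2, d - k)])
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
Ht = U @ np.diag(evals) @ U.T   # \tilde{H}, real symmetric

# perturbation of spectral norm kappa <= (1 - w)*gamma/8  (here w = 0)
W = rng.normal(size=(d, d)); W = (W + W.T) / 2
kappa = 0.1
Htp = Ht + kappa * W / np.linalg.norm(W, 2)   # \tilde{H}'

def proj_lowest(H, k):
    # projector onto the k lowest-energy eigenstates
    _, v = np.linalg.eigh(H)
    return v[:, :k] @ v[:, :k].T

diff = np.linalg.norm(proj_lowest(Htp, k) - proj_lowest(Ht, k), 2)
print(diff, 32 * kappa / gamma, diff < 32 * kappa / gamma)
\end{verbatim}

In such experiments one typically observes a projector difference of order $\kappa/\gamma$, comfortably within the stated bound; the constant $32$ arises from the Green's function argument used in the proof.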
The proof of the above Lemma~\ref{lem:PPgroundspace} makes use of the Green's function machinery seen in Ref.~\cite{KKR06, OliveiraTerhal}, which we describe in a self-contained manner in Appendix~\ref{sec:PPgroundspace-proof}. \begin{proof}[\textbf{Proof of Lemma~\ref{lem:relate-CMP}}] Given an encoding of the form $\mathcal{E}(H) = V(H\otimes P_\textnormal{anc}) V^\dag$, we write the corresponding encoding $\tilde{\mathcal{E}}(H)=\tilde{V}(H\otimes P_\textnormal{anc}) \tilde{V}^\dag$ such that $\|\tilde{V}-V\|\le\eta$. Let $H_P \equiv \tilde{\mathcal{E}}(H)$ and $\tilde{H}_P \equiv \tilde{H} P_{\le{\Delta(\tilde{H})}}$; note both are Hermitian and hence (non-local) Hamiltonians. Note $V (P\otimes P_\textnormal{anc}) V^\dag$ is a quasi-groundspace projector of $H_P$ with quasi-spectral gap $\gamma$ and energy spread $w$. Since $\|\tilde{H}_P - H_P\| \le \xi$ by Def.~\ref{defn:CMPsimul}, then due to Lemma~\ref{lem:PPgroundspace}, there is a quasi-groundspace projector $\tilde{P}$ of $\tilde{H}_P$ (and thus also $\tilde{H}$) with quasi-spectral gap $\gamma_P\ge \gamma - 2\xi$ and energy spread $\tilde{w}_P$, where $\tilde{w}_P\gamma_P\le w\gamma+ 2\xi$, and \begin{equation} \|\tilde{P} - \tilde{V} (P\otimes P_\textnormal{anc}) \tilde{V}^\dag \| \le 32\xi/\gamma. \end{equation} Note for any constant $\alpha >0 $, $\tilde{P}$ is also a quasi-groundspace projector of $\tilde{H}'=\alpha \tilde{H}$ with quasi-spectral gap $\ge\alpha \gamma_P $ and energy spread \begin{equation} \tilde{w} = \tilde{w}_P \le \frac{w \gamma+2\xi}{\gamma-2\xi}. \end{equation} To satisfy condition 1 of Def.~\ref{defn:hamsimul}, i.e. $\alpha\gamma_P \ge \gamma$, it suffices to choose $\alpha=\gamma/(\gamma-2\xi) \le 4/3$. For simplicity, we choose $\alpha=4/3$, as stated in the Lemma. We note that since $\|V-\tilde{V}\|\le \eta$, we have \begin{equation} \|\tilde{P} - V (P\otimes P_\textnormal{anc}) V^\dag \| \le \|\tilde{P} - \tilde{V} (P\otimes P_\textnormal{anc}) \tilde{V}^\dag \| + 2\|\tilde{V}-V\| \le 32\xi/\gamma + 2\eta, \end{equation} satisfying condition 2 of Def.~\ref{defn:hamsimul} with $\epsilon \le 32\xi/\gamma + 2\eta$. Hence, $\tilde{H}'=\frac{4}{3}\tilde{H}$ gap-simulates $(H,P)$. \end{proof} We remark that the constraints on $w$ and $\xi$ in Lemma~\ref{lem:relate-CMP} can be relaxed, since Lemma~\ref{lem:PPgroundspace} used here is a more restricted (but simpler) version of the more general Lemma~\ref{lem:PPgsv2} that we prove in Appendix~\ref{sec:PPgroundspace-proof}. The above Lemma~\ref{lem:relate-CMP} implies that our Definition~\ref{defn:hamsimul} is indeed a more relaxed version of the simulation definitions from Ref.~\cite{BravyiHastingsSim,UniversalHamiltonian}, at least for real-valued Hamiltonians and sufficiently small simulation error $\xi\ll \gamma$. In fact, our Definition~\ref{defn:hamsimul-incoherent} provides an even more relaxed notion of simulation, where it is not required to preserve groundspace coherence or even all the groundstates. \section{Properties of our Example Hamiltonian $H_B$\label{sec:HamProperties}} Here we prove the properties of $H_B$ required for the impossibility proofs in this paper. We start by reintroducing this Hamiltonian (first given in Eq.~\eqref{eq:Hdicke}). Let us denote the collective angular momentum operators on $n$ qubits as \begin{equation}\label{eq:J} \mathcal{J}_\alpha = \sum_{i=1}^n \sigma_\alpha^{(i)}/2, \end{equation} for $\alpha\in\{x,y,z\}$.
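For small even $n$, these collective operators, together with the Hamiltonian $H_B$ restated in the next equation, can be built and diagonalized directly. The following minimal Python sketch (assuming \texttt{numpy}; purely illustrative) verifies $[\vec{\mathcal{J}}^2,\mathcal{J}_z]=0$ and the claims proved below, namely that $H_B$ has a unique zero-energy groundstate with spectral gap $1$:

\begin{verbatim}
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def J(sigma, n):
    # collective operator sum_i sigma^(i) / 2 on n qubits
    def embed(i):
        ops = [id2] * n
        ops[i] = sigma
        return reduce(np.kron, ops)
    return sum(embed(i) for i in range(n)) / 2

n = 6                                     # even system size n = 2s
Jx, Jy, Jz = J(sx, n), J(sy, n), J(sz, n)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
assert np.allclose(J2 @ Jz - Jz @ J2, 0)  # [J^2, Jz] = 0

bn = n * (n + 2) / 8                      # constant b_n = s(s+1)/2
HB = Jz @ Jz - J2 / 2 + bn * np.eye(2**n)
w = np.linalg.eigvalsh(HB)
print(w[0], w[1])                         # expect 0 and 1: unique groundstate, gap 1
\end{verbatim}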
Our example family of 2-local $n$-qubit Hamiltonians $H_B$ is the following, restricted to even system sizes $n=2s$: \begin{equation} H_B = \mathcal{J}_z^2 - \frac12 \vec{\mathcal{J}}^2 + b_n = \frac12(\mathcal{J}_z^2-\mathcal{J}_x^2-\mathcal{J}_y^2) +b_n = \frac14 \sum_{i<j}^{n} (\sigma_z^{(i)}\sigma_z^{(j)} - \sigma_x^{(i)}\sigma_x^{(j)} - \sigma_y^{(i)}\sigma_y^{(j)}) -\frac{n}{8} + b_n \end{equation} where $b_n\equiv \frac12 s(s+1) = \frac18 n (n + 2)$ is a constant chosen so that the groundstate energy is zero. After expansion into a sum of 2-local operators, this Hamiltonian has $M_0(n)=n(n-1)/2=\Omega(n^2)$ terms, and each qubit has degree $n-1$. Since $[\vec{\mathcal{J}}^2,\mathcal{J}_z]=0$, the eigenstates of $H_B$ can be written in the common eigenbasis of $\vec{\mathcal{J}}^2$ and $\mathcal{J}_z$. Observe that $\mathcal{J}_z^2$ has eigenvalues $\{0,1,2^2,\ldots,s^2\}$ and $\vec{\mathcal{J}}^2$ has eigenvalues $\{s(s+1), (s-1)s,\ldots,6,2,0\}$. The groundstate is thus the state that has minimal $\mathcal{J}_z^2=0$ and maximal total angular momentum $\mathcal{J}=s=\frac{n}{2}$. Such a state is well-known in atomic physics as a Dicke state~\cite{Dicke}, and it is uniquely defined as \begin{equation} \ket{g_B} = \ket{\mathcal{J}=\frac{n}{2}; \mathcal{J}_z = 0} = \binom{n}{n/2}^{-1/2} \sum_{|\{i\,:\,x_i=1\}| = n/2} \ket{x_1\cdots x_n}, \end{equation} i.e. the symmetric superposition of all strings $x$ with Hamming weight $h(x)=|\{i:x_i=1\}|=n/2$. This groundstate $\ket{g_B}$ has energy $0$. Meanwhile, all other eigenstates must have energy at least 1: any eigenstate with $\vec{\mathcal{J}}^2=s(s+1)$ but $\mathcal{J}_z\neq 0$ has energy $\mathcal{J}_z^2\ge 1$, while any eigenstate with $\vec{\mathcal{J}}^2<s(s+1)$ must have energy $\ge -\frac12 (s-1)s+b_n = s = \frac{n}{2} \ge 1$. Thus, the system is spectrally gapped with energy spread $w_n=0$ and $\gamma_n = 1$. \section{Information-Theoretical Impossibility Results} In what follows, we will denote $X_i\equiv \sigma_x^{(i)}$ for simplicity and clarity. \subsection{Impossibility of DR and Dilution with Close-to-Perfect Coherence (Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute})\label{sec:imposs1}} In this section, we prove Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute} together, essentially showing impossibility of DR and dilution for perfect coherence. The proof of these results contains the idea of {\it contradiction-by-energy}, which is the seed of the idea for the proof of our main Theorem~\ref{thm:main} in the next section; in that proof, contradiction-by-energy is too weak, and instead we use the related idea of {\it contradiction-by-correlation}. Towards proving Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute}, we prove a more general result in the following Lemma~\ref{lem:imposs1}, of which Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute} are special cases obtained by setting $\epsilon=0$. To this end, let us recall the definition of $H_A$: \begin{equation} H_A = \left(\mathcal{J}_z+\frac{n}{2} \right)\left(\mathcal{J}_z+\frac{n}{2}-1\right), \end{equation} with $\mathcal{J}_z$ defined in Eq.~\eqref{eq:J}. The $n+1$ groundstates of $H_A$ are \begin{equation} \ket{00\cdots00}, \ket{00\cdots01}, \ket{00\cdots10}, \ldots, \ket{10\cdots00}.
\end{equation} \begin{lemma}[Limitation on $\epsilon$-incoherent degree-reduction and dilution of $H_A$] \label{lem:imposs1} Suppose we require $\epsilon$-incoherence and energy spread $\tilde{w}$; then any $k$-local $[r,M,J]$-gap-simulator $\tilde{H}_A$ of the $n$-qubit Hamiltonian $H_A$ with localized encoding must satisfy at least one of the following conditions: \begin{enumerate} \item $\epsilon = 0$ and $\tilde{w} \ge 1/2$, or \item $\epsilon > 0$ and $\|\tilde{H}_A\| \ge [1-2\tilde{w}(1+2\epsilon)]/(4\epsilon+6\epsilon^2)$, or \item $\tilde{H}_A$ contains qubits with degree $r=\Omega(n/k)$ and has a total number of terms $M=\Omega(n^2/k^2)$. \end{enumerate} \end{lemma} In other words, the above Lemma shows that if we require inverse-polynomially small incoherence and some corresponding polynomial bound on the resources of gap-simulation, then it is impossible to degree-reduce or dilute $H_A$. In particular, if $\tilde{w}<1/2$, then for any $\xi>0$ and $p,q\ge 0$, there does not exist any $[O(1),O(n^p),O(n^q)]$-degree-reducer of $H_A$ with $O(1/n^{p+q+\xi})$-incoherence, nor any $[r,o(n^2),O(n^q)]$-diluter of $H_A$ with $O(1/n^{2+q})$-incoherence regardless of degree $r$. To prove the above results, we first prove the following Lemma: \begin{lemma}\label{lem:TotalCoherentImpossible} Suppose $\tilde{H}_A$ gap-simulates $H_A$ with any encoding $V$ and $\epsilon$-incoherence, such that either (a) $\epsilon=0$ and $\tilde{w} < 1/2$, or (b) $\|\tilde{H}_A\| < [1-2\tilde{w}(1+2\epsilon)]/(4\epsilon+6\epsilon^2)$. For every original qubit $i$, let $S_i$ be the support of $V \sigma_x^{(i)} V^\dag$ on the interaction graph of $\tilde{H}_A$. Then for every pair of original qubits $(i,j)$, $\tilde{H}_A$ must contain a term that acts nontrivially on both a qudit in $S_i$ and a qudit in $S_j$. \end{lemma} \begin{proof} For the sake of contradiction, suppose $\tilde{H}_A$ contains no term that couples $S_i$ and $S_j$. This means we can decompose $\tilde{H}_A$ into two parts: $\tilde{H}_A=\tilde{H}_{A,i}+\tilde{H}_{A,j}$, where $\tilde{H}_{A,i}$ acts trivially on $S_i$ and $\tilde{H}_{A,j}$ acts trivially on $S_j$. In other words, $[\tilde{H}_{A,i},\tilde{O}_i]=0$ for any operator $\tilde{O}_i$ whose support is contained in $S_i$, and similarly for $\tilde{H}_{A,j}$ and $S_j$. Let us denote $\tilde{P}$ as the projector onto the groundspace of $\tilde{H}_A$, and $P$ the projector onto the groundspace of $H_A$. Since we assume that $\tilde{H}_A$ gap-simulates $H_A$ with $\epsilon$-incoherence according to Def.~\ref{defn:hamsimul}, for some projector $P_\textnormal{anc}$ we must have $\|\tilde{P}-Q\| \le \epsilon$, where $Q=V(P\otimes P_\textnormal{anc})V^\dag$. We write $P=\sum_{i=0}^n \ketbra{g_i}$, where states $\ket{g_0}=\ket{0\cdots0}$ and $\ket{g_i}=X_i\ket{g_0}=\ket{0\cdots01_i0\cdots0}$ are groundstates of $H_A$. Let $\ket{\alpha}\in P_\textnormal{anc}$, and denote $\ket{\bar{g}_i} \equiv V\ket{g_i}\ket{\alpha}$ for $0\le i \le n$. Observe that $Q\ket{\bar{g}_i}=V(P\otimes P_\textnormal{anc})V^\dag V\ket{g_i}\ket{\alpha} = V\ket{g_i}\ket{\alpha} = \ket{\bar{g}_i}$, and so \begin{equation} \tilde{P} \ket{\bar{g}_i}= (\tilde{P} - Q + Q)\ket{\bar{g}_i} = (\tilde{P} - Q)\ket{\bar{g}_i} + \ket{\bar{g}_i} \equiv \ket{\epsilon_i} + \ket{\bar{g}_i} \end{equation} where we denoted $\ket{\epsilon_i} \equiv (\tilde{P}-Q)\ket{\bar{g}_i}$ satisfying $\|\ket{\epsilon_i}\|\le \epsilon$. Now consider the state $\ket{e_{ij}}=X_iX_j\ket{g_0}$, which is an excited state of $H_A$ outside the groundspace $P$, and thus satisfies $P\ket{e_{ij}}=0$.
Consider correspondingly the state $\ket{\bar{e}_{ij}} = V\ket{e_{ij}}\ket{\alpha}$. Observe that $Q\ket{\bar{e}_{ij}} = V(P\otimes P_\textnormal{anc}) V^\dag V\ket{e_{ij}}\ket{\alpha} = V(P\ket{e_{ij}}\ket{\alpha}) = 0$, and so \begin{equation} \tilde{P}^\perp \ket{\bar{e}_{ij}} = \ket{\bar{e}_{ij}} - \tilde{P} \ket{\bar{e}_{ij}} =\ket{\bar{e}_{ij}} - (\tilde{P}-Q+Q) \ket{\bar{e}_{ij}} = \ket{\bar{e}_{ij}} - (\tilde{P}-Q ) \ket{\bar{e}_{ij}} \equiv \ket{\bar{e}_{ij}} - \ket{\epsilon_{ij}}, \end{equation} where we denoted $\ket{\epsilon_{ij}} \equiv (\tilde{P}- Q)\ket{\bar{e}_{ij}}$ satisfying $\|\ket{\epsilon_{ij}}\|\le \epsilon$. Now, let $\tilde{X}_i = V X_i V^\dag$ be the encoded Pauli spin-flip operator, which satisfies $[\tilde{H}_{A,i},\tilde{X}_i]=0$. Observe that $\tilde{X}_i\tilde{X}_j = VX_i X_j V^\dag = VX_j X_i V^\dag = \tilde{X}_j \tilde{X}_i$. Additionally, $\tilde{X}_i^2=VV^\dag$, which acts like identity since $\tilde{X}_j^2\ket{\bar{g}_i} = V V^\dag V \ket{g_i}\ket{\alpha} = V\ket{g_i}\ket{\alpha} = \ket{\bar{g}_i}$, and similarly $\tilde{X}_k^2\ket{\bar{e}_{ij}}=\ket{\bar{e}_{ij}}$. Note that $\ket{\bar{g}_i} = \tilde{X}_i\ket{\bar{g}_0}$ and $\ket{\bar{e}_{ij}} = \tilde{X}_i\tilde{X}_j\ket{\bar{g}_0}$, for any $1\le i < j \le n$. Then, from the assumption that no term in $\tilde{H}_A$ couples the supports $S_i$ and $S_j$, we can derive the following identity: \begin{eqnarray} \braket{\bar{e}_{ij}|\tilde{H}_A|\bar{e}_{ij}} &=& \braket{\bar{g}_0|\tilde{X}_i\tilde{X}_j (\tilde{H}_{A,i}+\tilde{H}_{A,j})\tilde{X}_i\tilde{X}_j|\bar{g}_0} = \braket{\bar{g}_0|\tilde{X}_i\tilde{H}_{A,j}\tilde{X}_i\tilde{X}_j^2|\bar{g}_0} + \braket{\bar{g}_0|\tilde{X}_j\tilde{H}_{A,i}\tilde{X}_j\tilde{X}_i^2|\bar{g}_0} \nonumber \\ &=& \braket{\bar{g}_0|\tilde{X}_i\tilde{H}_{A,j}\tilde{X}_i|\bar{g}_0} + \braket{\bar{g}_0|\tilde{X}_j\tilde{H}_{A,i}\tilde{X}_j|\bar{g}_0} \nonumber \\ &=& \braket{\bar{g}_i|\tilde{H}_{A,j}|\bar{g}_i} + \braket{\bar{g}_j|\tilde{H}_{A,i}|\bar{g}_j} \nonumber\\ &=& \braket{\bar{g}_i|\tilde{H}_{A}|\bar{g}_i} + \braket{\bar{g}_j|\tilde{H}_{A}|\bar{g}_j} - \braket{\bar{g}_i|\tilde{H}_{A,i}|\bar{g}_i} - \braket{\bar{g}_j|\tilde{H}_{A,j}|\bar{g}_j} \nonumber \\ &=& \braket{\bar{g}_i|\tilde{H}_{A}|\bar{g}_i} + \braket{\bar{g}_j|\tilde{H}_{A}|\bar{g}_j} - \braket{\bar{g}_0|\tilde{X}_i\tilde{H}_{A,i}\tilde{X}_i|\bar{g}_0} - \braket{\bar{g}_0|\tilde{X}_j\tilde{H}_{A,j}\tilde{X}_j|\bar{g}_0} \nonumber \\ \braket{\bar{e}_{ij}|\tilde{H}_A|\bar{e}_{ij}} &=& \braket{\bar{g}_i|\tilde{H}_{A}|\bar{g}_i} + \braket{\bar{g}_j|\tilde{H}_{A}|\bar{g}_j} - \braket{\bar{g}_0|\tilde{H}_{A}|\bar{g}_0}. \label{eq:lemma-2-energy-trick} \end{eqnarray} To simplify expressions, let us denote $\tilde{H}_A' \equiv \tilde{H}_A - \tilde{E}^g \mathds{1}$, where $\tilde{E}^g \equiv \lambda_1(\tilde{H}_A)$ is the groundstate energy of $\tilde{H}_A$. We note that the above identity of Eq.~\eqref{eq:lemma-2-energy-trick} remains true if we replace $\tilde{H}_A$ with $\tilde{H}_A'$, since the constant offsets cancel. Now let us consider the energies of the states $\ket{\bar{g}_i}$ and $\ket{\bar{e}_{ij}}$ with respect to $\tilde{H}_A'$. Since we allow energy spread $\|\tilde{P}\tilde{H}_A'\tilde{P}\|\le \tilde{w}$ for the gap-simulation, and the spectral gap of $H_A$ is $\gamma=1$, we must have \begin{equation} \label{eq:lemma-2-energy-bound} 0 \le \braket{\bar{g}_i|\tilde{P}\tilde{H}_A'\tilde{P}|\bar{g}_i}\le\tilde{w}.
\end{equation} Keeping in mind that $\tilde{H}_A' \tilde{P} = \tilde{P}\tilde{H}_A' \tilde{P}$, and that $\braket{\psi|\tilde{H}_A'|\psi}\ge 0$ for any state $\ket{\psi}$ because $\tilde{H}_A'$ is positive semi-definite, we have \begin{eqnarray} 0 \le \braket{\bar{g}_i|\tilde{H}_{A}'|\bar{g}_i} &=& (\bra{\bar{g}_i}\tilde{P} - \bra{\epsilon_i})\tilde{H}_A'(\tilde{P}\ket{\bar{g}_i} - \ket{\epsilon_i}) = \braket{\bar{g}_i|\tilde{P}\tilde{H}_A'\tilde{P}|\bar{g}_i} - 2\Re \braket{\epsilon_i|\tilde{H}_A'\tilde{P}|\bar{g}_i} + \braket{\epsilon_i | \tilde{H}_A' | \epsilon_i} \nonumber \\ &\le & \tilde{w} + 2\epsilon\tilde{w} + \epsilon^2 \|\tilde{H}_A'\|= \tilde{w}(1+2\epsilon) + \epsilon^2 \|\tilde{H}_A'\|. \end{eqnarray} Furthermore, we observe that \begin{eqnarray} \braket{\bar{e}_{ij} | \tilde{P}^\perp \tilde{H}_A' \tilde{P}^\perp | \bar{e}_{ij}} &=& (\bra{\bar{e}_{ij}}-\bra{\epsilon_{ij}}) \tilde{H}_A' (\ket{\bar{e}_{ij}}-\ket{\epsilon_{ij}}) = \braket{\bar{e}_{ij}|\tilde{H}_A' |\bar{e}_{ij}} - 2\Re\braket{\epsilon_{ij}|\tilde{H}_A'|\bar{e}_{ij}} + \braket{\epsilon_{ij}|\tilde{H}_A'|\epsilon_{ij}}\nonumber \\ &\le& \braket{\bar{e}_{ij}|\tilde{H}_A' |\bar{e}_{ij}} + 2\epsilon\|\tilde{H}_A' \| + \epsilon^2 \|\tilde{H}_A'\| \\ &=& \braket{\bar{g}_i|\tilde{H}_{A}' |\bar{g}_i} + \braket{\bar{g}_j|\tilde{H}_A' |\bar{g}_j} - \braket{\bar{g}_0|\tilde{H}_{A}' |\bar{g}_0} + (2\epsilon + \epsilon^2)\|\tilde{H}_A' \| \nonumber \\ &\le& 2\tilde{w}(1+2\epsilon) + (2\epsilon+3\epsilon^2)\|\tilde{H}_A' \| \le 2\tilde{w}(1+2\epsilon) + (4\epsilon+6\epsilon^2)\|\tilde{H}_A\|. \label{eq:lemma2-excited-energy-bound} \end{eqnarray} where we used the identity \eqref{eq:lemma-2-energy-trick} and the fact that $\|\tilde{H}_A'\| = \|\tilde{H}_A-\tilde{E}^g\mathds{1}\| \le 2\|\tilde{H}_A\|$. This implies $\tilde{P}^\perp\tilde{H}_A'\tilde{P}^\perp$ has an eigenvalue $\le 2\tilde{w}(1+2\epsilon) + (4\epsilon+6\epsilon^2)\|\tilde{H}_A\| $. This contradicts the gap-simulation assumption $\lambda_j(\tilde{P}^\perp \tilde{H}_A'\tilde{P}^\perp+\gamma \tilde{P})\ge \gamma=1$ if \begin{equation} 2\tilde{w}(1+2\epsilon) + (4\epsilon+6\epsilon^2)\|\tilde{H}_A\| < 1 \quad \Longleftrightarrow \quad \begin{dcases} \tilde{w} < 1/2, &\text{ if } \epsilon = 0\\ \|\tilde{H}_A\| < \frac{1-2\tilde{w}(1+2\epsilon)}{4\epsilon+6\epsilon^2}, &\text{ if } \epsilon > 0 \end{dcases}. \end{equation} Hence, if either (a) $\epsilon=0$ and $\tilde{w}<1/2$, or (b) $\|\tilde{H}_A\| < [1-2\tilde{w}(1+2\epsilon)]/(4\epsilon+6\epsilon^2)$, then $\tilde{H}_A$ must contain a term that acts nontrivially on both a qudit in $S_i$ and a qudit in $S_j$. \end{proof} \begin{proof}[\textbf{Proof of Lemma~\ref{lem:imposs1}}] Suppose a gap-simulator $\tilde{H}_A$ of $H_A$ does not satisfy either of the first two conditions enumerated in Lemma~\ref{lem:imposs1}; then it must either (a) have $0$-incoherence and energy spread $\tilde{w}<1/2$, or (b) satisfy $\|\tilde{H}_A\| < [1-2\tilde{w}(1+2\epsilon)]/(4\epsilon+6\epsilon^2)$. Thus, by Lemma~\ref{lem:TotalCoherentImpossible} above, for each $j=2,3,\ldots,n$, $\tilde{H}_A$ must contain a term that couples a qudit in $S_1$ with a qudit in $S_j$. Let us consider the first variant of localized encoding $V=\bigotimes_i V_i$, where the range of $V_i$ is supported on $O(1)$ qudits of $\tilde{H}_A$. Here, the supports $S_i$ are mutually disjoint, with bounded maximum size $\max_i |S_i|\le a = O(1)$. Since each $k$-local term can couple a qudit to at most $k-1$ other qudits, covering all $n-1$ such pairs requires at least $(n-1)/(k-1)$ distinct terms acting on $S_1$; as these terms are distributed among $|S_1|\le a$ qudits, the average degree of the qudits in $S_1$ is at least $(n-1)/[a(k-1)]$, and hence the maximum degree satisfies $r=\Omega(n/k)$.
Furthermore, note that there are $\binom{n}{2}$ required pairwise interactions between supports $(S_i, S_j)$. Since each $k$-local term can act on up to $k$ qudits, it can cover up to $\binom{k}{2}$ such pairwise interactions. Thus, the minimum number of terms in $\tilde{H}_A$ to account for all the pairwise interactions of $H_A$ is \begin{equation} M \ge \frac{\binom{n}{2}}{\binom{k}{2}} = \frac{n(n-1)}{k(k-1)} = \Omega(n^2/k^2). \end{equation} To prove the Lemma for the second variant of localized encoding where $V$ is a constant-depth quantum circuit, we modify the above argument by considering $\tilde{H}_A' = V^\dag \tilde{H}_A V$. Note that $\tilde{H}_A'$ gap-simulates $H_A$ with trivial encoding. Since $V^\dag$ is also a constant-depth quantum circuit, one can see that each term in $\tilde{H}_A$ is mapped into a term in $\tilde{H}_A'$ whose locality blows up by a constant factor. Hence, if $\tilde{H}_A$ has maximum degree $r$ and $M$ Hamiltonian terms that are $k$-local, then $\tilde{H}_A'$ has maximum degree $r'=\Theta(r)$, and $M'=M$ terms that are $k'=\Theta(k)$-local. Since the encoding is trivial, $\tilde{H}_A'$ must couple every pair of qubits $(i,j)$. This would imply $r'=\Omega(n/k')$, and $M'=\Omega(n^2/(k')^2)$. Consequently, $r=\Omega(n/k)$, and $M=\Omega(n^2/k^2)$, proving our Lemma. \end{proof} \paragraph{Remark}--- Note that there is a difficulty in extending the proof of Lemma~\ref{lem:imposs1} (and thus Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute}) to the case where we allow $\epsilon=\Theta(1)$-incoherence, even if we require bounded interaction strength $J=O(1)$. This difficulty is apparent in Eq.~\eqref{eq:lemma2-excited-energy-bound}, where the bound on the excited state's energy has an energy uncertainty on the order of $\O(\epsilon\|\tilde{H}_A\|)$, which would grow with system size due to the dependence on $\|\tilde{H}_A\|$. Hence, in order to extend this impossibility result to $\epsilon=\Theta(1)$-incoherence, more innovation is required -- this is done in the next section. \subsection{Impossibility of DR with Constant Coherence or Faithfulness (Theorem~\ref{thm:main})\label{sec:imposs2}} From Lemma~\ref{lem:imposs1-DR}, it appears that if perfect coherence is required, any meaningful reduction in the maximum degree or the total number of terms is in general impossible due to our first counterexample $H_A$. Here, we strengthen this to $\epsilon$-incoherence for constant $\epsilon$: we show that reduction in the maximum degree remains impossible for $H_A$, by arriving at a contradiction via a correlation-based argument, rather than one relying on the energy. Furthermore, impossibility of incoherent degree-reduction can also be shown by applying the same idea, now to our second counterexample $H_B$ (see Appendix~\ref{sec:HamProperties} for its properties), which has a unique groundstate (so incoherent and coherent degree-reduction are equivalent due to Lemma~\ref{lem:equiv}). This is our main impossibility result: { \renewcommand{\thethm}{\ref{thm:main}} \begin{thm}[Main: Impossibility of constant coherence (faithfulness) DR for $H_A$ ($H_B$)] For sufficiently small constants $\epsilon\ge0$ $(\delta\ge0)$ and $\tilde{w}\ge0$, there exists a system size $n_0$ such that for any $n\ge n_0$, there is no $O(1)$-local $[O(1),M,O(1)]$-degree-reducer of the $n$-qubit Hamiltonian $H_A$ $(H_B)$ with localized encoding, $\epsilon$-incoherence ($\delta$-unfaithfulness), and energy spread $\tilde{w}$, for any number of Hamiltonian terms $M$.
\end{thm} \addtocounter{thm}{-1} } To prove Theorem~\ref{thm:main}, we rely on the Hastings-Koma result~\cite{HastingsKoma} demonstrating exponential decay of correlations in a spectrally gapped groundspace of Hamiltonians with {\it exponentially decaying interactions}, which we define below: \begin{defn}[Exponentially decaying interaction, adapted from~\cite{HastingsKoma}] \label{defn:expdecayint} Consider a graph given by $G=(\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is a set of vertices and $\mathcal{E}=\{(i,j):i,j\in \mathcal{V}\}$ is a set of edges. A Hamiltonian $H=\sum_{X\subset \mathcal{V}} h_X$ defined on such a graph $G$ has \emph{exponentially decaying interaction} if $h_X$ satisfies \begin{equation} \sup_x \sum_{X\ni x} \|h_X\| |X| \exp[\mu \diam(X)] \le s_1 < \infty \end{equation} for positive constants $\mu$ and $s_1$. Here $\diam(X)=\max_{x,y\in X} \dist(x,y)$, and $\dist(x,y)$ is the graph-theoretic distance. \end{defn} It can be seen that any local Hamiltonian with constant degree and bounded interaction strength satisfies the criterion in the above definition: \begin{lemma} \label{lem:HamExpDecayInt} Any $n$-qudit $[k=O(1)]$-local Hamiltonian with maximum degree $r=O(1)$ and bounded interaction strength $J=O(1)$ has exponentially decaying interaction per Definition~\ref{defn:expdecayint}. \end{lemma} \begin{proof} Let us construct the graph on which we embed the Hamiltonian. The set of vertices $\mathcal{V}$ corresponds to the set of qudits. We can then write the Hamiltonian as $H = \sum_{X\subset \mathcal{V}} h_X$, where $|X|\le k$ since the Hamiltonian is $k$-local. We then choose the set of edges as $\mathcal{E}=\{(x,y): x,y\in X \text{ for some } h_X\}$. In other words, for any set of qudits that directly interact through a term in the Hamiltonian, we assign a clique to their vertices on the graph. Then, $H$ has exponentially decaying interaction on this graph $G=(\mathcal{V},\mathcal{E})$ per Definition~\ref{defn:expdecayint} since \begin{equation} \sup_x \sum_{X\ni x} \|h_X\| |X| \exp[\mu\diam(X)] \le \sum_{i=1}^r J k e^\mu = r J k e^{\mu} = O(1) < \infty, \end{equation} where we used the fact that each qudit is contained in at most $r$ terms by definition of Hamiltonian degree, and that each term $h_X$ has norm $\|h_X\|\le J$ and acts on at most $|X|\le k$ qudits with diameter $\diam(X)=1$. \end{proof} We now give a strengthened version of the Hastings-Koma result, which we will use in the proof of Theorem~\ref{thm:main}. \begin{lemma}[Hastings-Koma theorem for non-zero energy spread, generalized from Ref.~\cite{HastingsKoma}] \label{lem:HastingsKoma} Suppose we have an $n$-qudit Hamiltonian defined on a graph $G=(\mathcal{V},\mathcal{E})$ with exponentially decaying interactions (Def.~\ref{defn:expdecayint}). Also suppose that for some constants $0\le w_\infty < 1$ and $\gamma_\infty >0$ independent of system size $n$, the Hamiltonian is quasi-spectrally gapped (Def.~\ref{defn:gap}) with energy spread $w_n \le w_\infty$ and quasi-spectral gap $\gamma_n \ge \gamma_\infty$. Let $P_0$ be the projector onto the corresponding quasi-groundspace. Let $A_X$ and $B_Y$ be observables with bounded norm $\|A_X\|,\|B_Y\|=O(1)$ and compact support $X,Y\subset \mathcal{V}$, where $[A_X,B_Y]=0$ and $X\cap Y=\emptyset$.
Then there exist constants $C,\tilde{\mu}>0$, independent of $n$, such that for any normalized quasi-groundstate $\ket{\psi}\in P_0$, we have \begin{equation} Ce^{-\tilde{\mu}\dist(X,Y)} \ge \left| \braket{\psi|A_X B_Y|\psi} - \frac12 (\braket{\psi|A_XP_0B_Y|\psi} + \braket{\psi|B_YP_0A_X|\psi}) + \O(w_\infty\log^{\frac12}\frac{1}{w_\infty})\right| \label{eq:MHK-inequality} \end{equation} In the case when $w_\infty=0$, we can ignore the $\O( w_\infty \log^{\frac12} (1/w_\infty))$ term. \end{lemma} Our proof of this Lemma, which is modified from the proof of Theorem 2.8 in Ref.~\cite{HastingsKoma}, can be found in Appendix~\ref{sec:MHK}. Note that the apparent singularity of $1/w_\infty$ in the last term of Eq.~\eqref{eq:MHK-inequality} is somewhat artificial, since $w\log^{1/2}(1/w) = \O(w^{1-\epsilon})\to 0$ as $w\to 0$. Its appearance is due to our decision to consider the case where the energy spread is non-zero, even when the system size $n\to\infty$. Lastly, we prove the following property for constant-degree gap-simulation with localized encoding: \begin{lemma}\label{lem:encoded-support} Let $\tilde{H}$ gap-simulate $H$ with some encoding $V$. Let $S_i$ be the support of $V \sigma_i V^\dag$ on the interaction graph of $\tilde{H}$, where $\sigma_i$ is any operator acting on the $i$-th original qudit. Suppose $\tilde{H}$ has maximum degree $r$, and $\max_i |S_i| = a$. Then there exist two qudits $L$ and $R$ where the distance between the sets $S_L$ and $S_R$ (in the graph metric) satisfies $\dist(S_L, S_R)\ge \log(n/a)/\log(1+r+ra)$. Specifically, for constant degree $r=O(1)$ and localized encoding $a=O(1)$, $\dist(S_L,S_R)=\Omega(\log n)$. \end{lemma} \begin{proof} We define a sequence of subsets of qudits, $R_\ell$, and let $R_0=S_1$. For $\ell=1,2,\ldots$, we form $R_\ell$ by joining to $R_{\ell-1}$ both (1) all qudits at distance $\le 1$ from any qudit in $R_{\ell-1}$, and (2) any $S_{i}$ containing a qudit at distance $\le 1$ from any qudit in $R_{\ell-1}$. In other words, \begin{equation} R_\ell = R_{\ell-1} \cup \{v: \exists w\in R_{\ell-1}\text{ s.t. } \dist(v,w)\le 1\} \cup \bigcup \{ S_i: \exists v \in S_i, w \in R_{\ell-1} \text{ s.t. } \dist(v,w)\le 1\}. \end{equation} By construction, if there exists $S_i\not\subset R_{\ell-1}$, then the set difference $R_{\ell} \backslash R_{\ell-1}$ must contain a support $S_j$ where $\dist(S_1,S_j)\ge \ell$. Note that $|R_\ell| \le |R_{\ell-1}|(1+r+ra)$. Since $|R_0|=|S_1|\le a$, we have \begin{equation} |R_\ell| \le a (1+r+ra)^\ell. \end{equation} Since $\left|\bigcup_{i=1}^n S_i\right| \ge n$, in order to cover all the supports we must have \begin{equation} |R_\ell|\ge n \quad \Longrightarrow \quad \ell \ge \frac{\log(n/a)}{\log(1+r+ra)}. \end{equation} For $r,a=O(1)$, this shows that there exists a support $S_j$ such that $\dist(S_1, S_j) = \Omega(\log n)$. We note that more generally, for $r=O(\log^c n)$ and $a=O(\log^c n)$, the above also shows that there exists a support $S_j$ such that $\dist(S_1, S_j) = \Omega(\log n/\log\log n)$. \end{proof} We are now ready to prove our main theorem. \begin{proof}[\textbf{Proof of Theorem~\ref{thm:main}}] \textbf{Part I}--- We first show impossibility of coherent degree-reduction for $H_A$. For the sake of contradiction, suppose there exists an $O(1)$-local $[O(1),M,O(1)]$-degree-reducer $\tilde{H}_A$ of $H_A$ with localized encoding $V$, $\epsilon$-incoherence and energy spread $\tilde{w}$, but without restriction on the number of terms $M$.
Then it has exponentially decaying interaction due to Lemma~\ref{lem:HamExpDecayInt}. Additionally, since the original Hamiltonian $H_A$ is spectrally gapped with gap $\gamma_n=\gamma_\infty =1$, the gap-simulator should also be quasi-spectrally gapped with gap $\gamma_n=\gamma_\infty =1$ in order to gap-simulate its groundspace. Nevertheless, we may allow some small and possibly non-zero energy spread $\tilde{w} = \tilde{w}_n\le \tilde{w}_\infty$ for the gap-simulator. Since we assumed in the premise of the Theorem that $\tilde{w}$ is sufficiently small, it follows that $\tilde{w}_\infty = \sup_n \tilde{w}_n$ should also be a small constant $<1$. Hence, $\tilde{H}_A$ satisfies the requirements for applying Lemma~\ref{lem:HastingsKoma}. Let us denote $Q = V(P\otimes P_\textnormal{anc})V^\dag$ as the encoded groundspace projector. Since we also require $\epsilon$-incoherence, the groundspace projector $\tilde{P}$ of $\tilde{H}_A$ satisfies $\|\tilde{P}-Q\| = \|\tilde{P} - V(P\otimes P_\textnormal{anc})V^\dag \|\le \epsilon$, where $P$ is the groundspace projector of $H_A$, and $P_\textnormal{anc}$ is some projector on the ancilla. Consider the unencoded $X_i$ operator on the original qubit $i$, which corresponds to $\tilde{X}_i = V X_i V^\dag$ in the encoded Hamiltonian. Let the support of the observable $\tilde{X}_i$ be $S_i$. Because of the assumption of constant degree and the fact that the encoding is localized, there exist two qubits $L$ and $R$ where $\dist(S_L,S_R)\ge K \log n$ for some constant $K>0$ by Lemma~\ref{lem:encoded-support}. Consider the following \emph{approximate} groundstates of $\tilde{H}_A$: \begin{equation} \ket{g_{00}} = V \ket{0_L0_R}\ket{0\cdots}\ket{a}_\textnormal{anc}, \quad \ket{g_{01}} = V \ket{0_L1_R}\ket{0\cdots}\ket{a}_\textnormal{anc}, \quad \ket{g_{10}} = V \ket{1_L0_R}\ket{0\cdots}\ket{a}_\textnormal{anc} \end{equation} where $P_\textnormal{anc}\ket{a}=\ket{a}$, so $Q\ket{g_{ij}} = V(P\otimes P_\textnormal{anc})V^\dag \ket{g_{ij}}=\ket{g_{ij}}$. Also let us denote \begin{equation} \ket{e_{11}} = V \ket{1_L1_R}\ket{0\cdots}\ket{a} \end{equation} which satisfies $Q\ket{e_{11}}=0$. Now consider an approximate groundstate of $\tilde{H}_A$ \begin{equation} \ket{\phi} = \frac{1}{\sqrt{2}}(\ket{g_{01}}+\ket{g_{10}}). \end{equation} Let $\tilde{X}_L = V X_L V^\dag$ and $\tilde{X}_R = V X_R V^\dag$. It is easy to see that $\tilde{X}_R\ket{g_{01}} = \ket{g_{00}}$, etc. Observe \begin{eqnarray} \braket{\phi|\tilde{X}_L \tilde{X}_R|\phi} &=& 1 \\ \braket{\phi|\tilde{X}_R\tilde{P}\tilde{X}_L|\phi} = \braket{\phi|\tilde{X}_L\tilde{P}\tilde{X}_R|\phi} &=& \frac{1}{2} (\bra{e_{11}}+\bra{g_{00}})\tilde{P}(\ket{g_{00}}+\ket{e_{11}}) \end{eqnarray} Due to the assumption of $\epsilon$-incoherence, we have \begin{eqnarray*} \braket{g_{00}|\tilde{P}|g_{00}} &=& \|\tilde{P}\ket{g_{00}}\|^2 \le \left(\|Q\ket{g_{00}}\| + \|(\tilde{P}-Q)\ket{g_{00}}\| \right)^2 \le (1+\epsilon)^2, \\ \braket{e_{11}|\tilde{P}|e_{11}} &=& \|\tilde{P}\ket{e_{11}}\|^2 = \|(\tilde{P}- Q)\ket{e_{11}}\|^2 \le \epsilon^2, \\ \left|\braket{e_{11}|\tilde{P}|g_{00}}\right| &=& \left|\braket{e_{11}|\tilde{P} - Q|g_{00}}\right| \le \|\tilde{P}-Q\| \le \epsilon.
\end{eqnarray*} Hence \begin{equation} \braket{\phi|\tilde{X}_L\tilde{P}\tilde{X}_R|\phi} = \frac{1}{2} \left(\braket{g_{00}|\tilde{P}|g_{00}} + \braket{e_{11}|\tilde{P}|e_{11}} + 2\Re[\braket{e_{11}|\tilde{P}|g_{00}}]\right) \le \frac{1}{2}+\O(\epsilon) \end{equation} Note that in order to apply the Hastings-Koma theorem, we need to convert $\ket{\phi}$ into an actual groundstate of $\tilde{H}_A$, i.e. a state fixed by $\tilde{P}$. Again using the $\epsilon$-incoherence condition, we have \begin{eqnarray*} \epsilon \ge \|(\tilde{P}-Q)\ket{\phi}\| = \|\tilde{P}\ket{\phi}-\ket{\phi}\| \quad \Longrightarrow \quad \tilde{P}\ket{\phi} = \ket{\phi} + \ket{\epsilon} \end{eqnarray*} where $\|\ket{\epsilon}\| \le \epsilon$. Now let \begin{equation} \ket{\psi} = \frac{\tilde{P}\ket{\phi} }{\|\tilde{P}\ket{\phi} \|} = \mathcal{N}(\ket{\phi}+\ket{\epsilon}) \end{equation} be a normalized state in the groundspace, where $\mathcal{N} \equiv \|\ket{\phi}+\ket{\epsilon}\|^{-1}$ is a normalization constant satisfying $(1+\epsilon)^{-1} \le \mathcal{N} \le (1-\epsilon)^{-1}$. Thus \begin{eqnarray} \braket{\psi|\tilde{X}_L \tilde{X}_R|\psi} &=& \mathcal{N}^2\braket{\phi|\tilde{X}_L\tilde{X}_R|\phi} + \O(\epsilon) \ge 1 + \O(\epsilon)\\ \braket{\psi|\tilde{X}_L \tilde{P} \tilde{X}_R|\psi} &=& \mathcal{N}^2\braket{\phi|\tilde{X}_L\tilde{P}\tilde{X}_R|\phi} + \O(\epsilon) \le \frac{1}{2} + \O(\epsilon) \end{eqnarray} For any given small constant $\epsilon\ge0$ and $\tilde{w}_\infty\ge 0$, the existence of $\tilde{H}_A$ contradicts Lemma~\ref{lem:HastingsKoma} since \begin{eqnarray} && \left|\braket{\psi|\tilde{X}_L\tilde{X}_R|\psi} - \frac{1}{2}(\braket{\psi|\tilde{X}_L\tilde{P}\tilde{X}_R|\psi} + c.c.) + \O(\tilde{w}_\infty\log^{\frac12}\frac{1}{\tilde{w}_\infty}) \right| \ge \left|1-\frac{1}{2} + \O(\epsilon) +\O(\tilde{w}_\infty\log^{\frac12}\frac{1}{\tilde{w}_\infty}) \right| \nonumber \\ &\ge& \left| \frac{1}{2} + \O(\epsilon) + \O(\tilde{w}_\infty\log^{\frac12}\frac{1}{\tilde{w}_\infty}) \right| \not\le C\exp(-\tilde\mu K \log n) = Cn^{-\tilde{\mu} K}. \label{eq:contradict-correlation-A} \end{eqnarray} This contradiction arises because for sufficiently small $\epsilon$ and $\tilde{w}_\infty$, the LHS of the final inequality is a constant of roughly $1/2$, which is not less than the RHS $Cn^{-\tilde{\mu} K}$ when $n> n_0$, for some cutoff system size $n_0 \approx (2C)^{1/(\tilde{\mu} K)}$. \textbf{Part II}--- It remains to show the impossibility of incoherent degree-reduction for $H_B$. Suppose, for the sake of contradiction, that there exists an $O(1)$-local $[O(1),M,O(1)]$-degree-reducer $\tilde{H}_B$ of $H_B$ with localized encoding $V$, $\delta$-unfaithfulness and energy spread $\tilde{w}$. Since $H_B$ has a unique groundstate $\ket{g_B}$, $\tilde{H}_B$ must also have $\epsilon$-incoherence for some $\epsilon\le\sqrt{2}\delta/\sqrt{1-\delta^2} = \O(\delta)$ due to Lemma~\ref{lem:equiv}. In other words, there must exist a projector $P_\textnormal{anc}^B$ on the ancilla Hilbert space such that $\|\tilde{P}_B - Q_B\|\le\epsilon$, where $Q_B = V(P_B \otimes P_\textnormal{anc}^B)V^\dag$, $P_B = \ketbra{g_B}$, and $\tilde{P}_B$ is groundspace projector of $\tilde{H}_B$. Then consider the approximate groundstate $\ket{\phi_B}= V \ket{g_B}\ket{b}$ for some ancilla state $\ket{b}=P_\textnormal{anc}^B \ket{b}$. By Lemma~\ref{lem:encoded-support}, there exist two original qubits $L,R$ such that the support $S_i$ of the encoded observable $\tilde{X}_i = V X_i V^\dag$ satisfies $\dist(S_L,S_R)\ge K\log n$.
Let us denote $h(x)$ as the Hamming weight (number of 1s) of the binary string $x$. Observe that \begin{eqnarray} && \braket{\phi_B|\tilde{X}_L \tilde{X}_R|\phi_B} = \braket{g_B|X_L X_R|g_B} = \binom{n}{n/2}^{-1} \sum_{x, y: h(x)=h(y) = \frac{n}{2}} \bra{x} X_L X_R\ket{y} \nonumber \\ &=& \text{\scalebox{0.9}{ $\binom{n}{n/2}^{-1} \sum_{x: h(x) = n/2} \bra{x} X_L X_R \left[\sum_{h(\bar{y})=\frac{n}{2}-1} (\ket{0_L1_R} + \ket{1_L 0_R}) \ket{\bar{y}} + \sum_{h(\bar{y}')=\frac{n}{2}}\ket{0_L 0_R}\ket{\bar{y}'} + \sum_{h(\bar{y}'')=\frac{n}{2}-2}\ket{1_L 1_R} \ket{\bar{y}''} \right]$ }} \nonumber \\ &=& \binom{n}{n/2}^{-1} \sum_{x: h(x) = n/2} \bra{x} \sum_{\bar{y} : h(\bar{y})=\frac{n}{2}-1} (\ket{1_L0_R} + \ket{0_L 1_R}) \ket{\bar{y}} = \binom{n}{n/2}^{-1} \times 2\binom{n-2}{n/2-1} \nonumber \\ &=& \frac{n}{2(n-1)} \ge \frac12 \end{eqnarray} Additionally, note $P_B X_i\ket{g_B}=0$ since $X_i$ changes the Hamming weight of all strings in $\ket{g_B}$. Consequently, $Q_B \tilde{X}_i\ket{\phi_B} = V [(P_B X_i\ket{g_B})\otimes\ket{b}]=0$, and \begin{equation} \left|\braket{\phi_B |\tilde{X}_L \tilde{P}_B \tilde{X}_R | \phi_B}\right| = \left|\braket{\phi_B |\tilde{X}_L (\tilde{P}_B - Q_B ) \tilde{X}_R | \phi_B}\right| \le \|\tilde{P}_B - Q_B \| \le \epsilon = \O(\delta) \end{equation} The rest of the proof for impossibility of $\tilde{H}_B$ follows the same argument as in the previous part for $\tilde{H}_A$, where we convert $\ket{\phi_B}$ to a true groundstate $\ket{\psi_B} \propto \tilde{P}_B \ket{\phi_B}$ with up to $\O(\epsilon)=\O(\delta)$ error. Then the existence of $\tilde{H}_B$ contradicts Lemma~\ref{lem:HastingsKoma} since \begin{eqnarray} &&\left| \braket{\psi_B|\tilde{X}_L \tilde{X}_R|\psi_B} - \frac{1}{2}(\braket{\psi_B|\tilde{X}_L\tilde{P}_B \tilde{X}_R|\psi_B} + c.c.) + \O(\tilde{w}_\infty \log^{\frac12}\frac{1}{\tilde{w}_\infty}) \right| \nonumber \\ &\ge& \frac12+ \O(\delta) + \O(\tilde{w}_\infty \log^{\frac12}\frac{1}{\tilde{w}_\infty}) \not\le C\exp(-\tilde\mu K \log n) = Cn^{-\tilde{\mu} K} \label{eq:contradict-correlation-B} \end{eqnarray} for sufficiently large $n>n_0$, where $n_0\approx (2C)^{1/(\tilde{\mu} K)}$. \end{proof} We remark that Theorem~\ref{thm:main} in fact holds for encodings which are \emph{quasi}-localized, i.e. when the supports of the encoded local operators satisfy $|S_i|=O(\poly\log n)$. This is because by Lemma~\ref{lem:encoded-support}, there exist two original qubits $L,R$ whose encoded supports are distance $\Omega(\log n/\log\log n)$ apart for quasi-localized encodings, which would lead to a contradiction by correlation decay since $e^{-K\log n/\log\log n}\to0$ as $n\to\infty$. \subsection{Impossibility of Full-Spectrum Degree-Reduction\label{sec:imposs-full-spec-DR}} The above result of Theorem~\ref{thm:main} can also be used to rule out degree-reduction of $H_A$ or $H_B$ with up to constant error $(\eta,\xi)$ in the framework of full-spectrum simulation of Hamiltonians~\cite{BravyiHastingsSim, UniversalHamiltonian}, as per Definition~\ref{defn:CMPsimul} (assuming the encoding is localized). This is true since $H_A$ and $H_B$ are real-valued and spectrally gapped, and thus by Lemma~\ref{lem:relate-CMP} such a simulation would imply a gap-simulation by our Definition~\ref{defn:hamsimul}, which was already found to be impossible by Theorem~\ref{thm:main} with localized encoding.
Hence, we arrive at the following simple deduction: \begin{coro}[Impossibility of full-spectrum degree-reduction, corollary to Theorem~\ref{thm:main}] \label{coro:imposs-CMP} It is impossible for an $O(1)$-degree $\tilde{H}_A$ (or $\tilde{H}_B$) with bounded interaction strength to full-spectrum-simulate $H_A$ (or $H_B$) to precision $(\eta,\xi)$ per Def.~\ref{defn:CMPsimul}, where the isometry in the encoding $V$ is a localized encoding per Def.~\ref{defn:localized-encoding}, for sufficiently small constants $\eta$ and $\xi$ and large enough system size. \end{coro} \section{Generalized Hastings-Koma Theorem for Decay of Correlation with Non-Zero Energy Spread (Lemma~\ref{lem:HastingsKoma})\label{sec:MHK}} In this Appendix, we prove a stronger version of Hastings and Koma's result~\cite{HastingsKoma}, which we need in order to prove our main result of Theorem~\ref{thm:main}. To this end, we relax the assumption in Hastings and Koma's theorem so that the energy spread of the groundspace may be non-zero even in the limit of system size $n\to\infty$; this enables us to handle cases where $w_n>0$ for all $n$, namely small but non-vanishing perturbations to the groundspace. We show that this still allows us to obtain decay of correlation. The proof requires a modification of Hastings and Koma's proof, where we calculate better bounds and choose optimal integration parameters, allowing us to mitigate the errors due to the non-zero energy spread. { \renewcommand{\thelemma}{\ref{lem:HastingsKoma}} \begin{lemma}[Hastings-Koma theorem for non-zero energy spread, generalized from Ref.~\cite{HastingsKoma}] Suppose we have an $n$-qudit Hamiltonian defined on a graph $G=(V,E)$ with exponentially decaying interactions (Def.~\ref{defn:expdecayint}). Also suppose for some constants $0\le w_\infty < 1$ and $\gamma_\infty >0$ independent of system size $n$, the Hamiltonian is quasi-spectrally gapped (Def.~\ref{defn:gap}) with energy spread $w_n \le w_\infty$ and quasi-spectral gap $\gamma_n \ge \gamma_\infty$. Let $P_0$ be the projector onto the corresponding quasi-groundspace. Let $A_X$ and $B_Y$ be observables with bounded norm $\|A_X\|,\|B_Y\|=O(1)$ and compact support $X,Y\subset V$, where $[A_X,B_Y]=0$ and $X\cap Y=\emptyset$. Then there exist constants $C,\tilde{\mu}>0$, independent of $n$, such that for any normalized quasi-groundstate $\ket{\psi}\in P_0$, we have \begin{equation} Ce^{-\tilde{\mu}\dist(X,Y)} \ge \left| \braket{\psi|A_X B_Y|\psi} - \frac12 (\braket{\psi|A_XP_0B_Y|\psi} + \braket{\psi|B_YP_0A_X|\psi}) + \O(w_\infty\log^{\frac12}\frac{1}{w_\infty})\right| \label{eq:MHKineq} \end{equation} In the case when $w_\infty=0$, we can ignore the $\O( w_\infty \log^{\frac12} (1/w_\infty))$ term. \end{lemma} \addtocounter{lemma}{-1} } \begin{proof} Let us call the given $n$-qudit Hamiltonian $H_{(n)}$. To avoid dealing with variations in the quasi-spectral gap $\gamma_n$, we rescale each Hamiltonian to $H_{(n)}' = [H_{(n)}-\lambda_1(H_{(n)})]/\gamma_n$, where $H_{(n)}'$ has the same quasi-groundspace projector $P_0$ with energy spread $w_n'=w_n$ and $\gamma_n' = 1$. Since $\gamma_n \ge \gamma_\infty > 0$, $H_{(n)}'$ still has exponentially decaying interactions, as each interaction strength increases by a factor of at most $1/\gamma_\infty$, a constant. Note that the new Hamiltonian $H_{(n)}'$ has identical eigenstates as $H_{(n)}$, whose eigenvalues are simply rescaled.
For $H_{(n)}'$, let us denote \begin{itemize} \item $\{\ket{\phi_{0,\nu}}\}$ as its quasi-groundstates with energy $E_{0,\nu} \le w_n'\gamma_n'$, corresponding to Hermitian projector $P_0$; \item $\{\ket{\phi_j}\}_{j>0}$ as the rest of its eigenstates with energy $\gamma_n'$ or higher. \end{itemize} Due to the condition that $H_{(n)}'$ be quasi-spectrally gapped with energy spread $w_n'=w_n$ and quasi-spectral gap $\gamma_n'=1$, we have that the eigenvalues $\{E_{0,\nu}\} \cup \{E_j\}_{j>0}$ of $H_{(n)}'$ must satisfy: \begin{equation} \label{eq:QDcond} \max_{\nu,\nu'} |E_{0,\nu}-E_{0,\nu'}| \le w_n' \gamma_n' = w_n \le w_\infty, \end{equation} \begin{equation} \label{eq:gapcond} \min_{j>0,\,\nu} \left(E_j-E_{0,\nu}\right) \ge (1-w_n')\gamma_n' = (1-w_n) \ge (1-w_\infty) \equiv \Delta. \end{equation} Now consider any quasi-groundstate $\ket{\psi}\in P_0$, which can be written as \begin{equation} \ket{\psi} = \sum_\nu c_\nu \ket{\phi_{0,\nu}}. \end{equation} Let us denote $\braket{\cdots}=\braket{\psi|\cdots|\psi}$. To obtain the inequality Eq.~\eqref{eq:MHKineq}, we begin with the equation \begin{eqnarray} \braket{[A_X(t),B_Y]} &=& \braket{A_X(t)(\mathds{1}-P_0)B_Y} - \braket{B_Y(\mathds{1}-P_0)A_X(t)} \nonumber \\ &&+ \braket{A_X(t)P_0B_Y} - \braket{B_Y P_0 A_X(t)}, \label{eq:commexpand} \end{eqnarray} where $A_X(t)=e^{iH_{(n)}'t}A_X e^{-iH_{(n)}'t}$ is the Heisenberg-picture evolution of the observable $A_X$. The proof of the inequality proceeds by (1) applying a filter function to the RHS of Eq.~\eqref{eq:commexpand} to remove the time dependence, showing that the result equals the RHS of Eq.~\eqref{eq:MHKineq}, and then (2) bounding the LHS via the Lieb-Robinson bound. Note each term on the RHS of Eq.~\eqref{eq:commexpand} can be expanded as \begin{eqnarray} \braket{A_X(t)(\mathds{1}-P_0)B_Y} &=& \sum_{\nu,\nu'} \sum_{j\neq 0} c_\nu^* c_{\nu'} \braket{\phi_{0,\nu}|A_X|\phi_j}\braket{\phi_j|B_Y|\phi_{0,\nu'}} e^{-it(E_j-E_{0,\nu})}, \\ \braket{B_Y(\mathds{1}-P_0)A_X(t)} &=& \sum_{\nu,\nu'} \sum_{j\neq 0} c_\nu^* c_{\nu'} \braket{\phi_{0,\nu}|B_Y|\phi_j}\braket{\phi_j|A_X|\phi_{0,\nu'}} e^{it(E_j-E_{0,\nu'})}, \\ \braket{A_X(t)P_0B_Y} &=& \sum_{\nu,\nu'} \sum_{\mu} c_\nu^* c_{\nu'} \braket{\phi_{0,\nu}|A_X|\phi_{0,\mu}}\braket{\phi_{0,\mu}|B_Y|\phi_{0,\nu'}} e^{-it(E_{0,\mu}-E_{0,\nu})}, \\ \braket{B_YP_0A_X(t)} &=& \sum_{\nu,\nu'} \sum_{\mu} c_\nu^* c_{\nu'} \braket{\phi_{0,\nu}|B_Y|\phi_{0,\mu}}\braket{\phi_{0,\mu}|A_X|\phi_{0,\nu'}} e^{it(E_{0,\mu}-E_{0,\nu'})} . \end{eqnarray} Using the gap condition Eq.~\eqref{eq:gapcond}, we can extract the positive-frequency part of the first two terms, while the quasi-degeneracy condition Eq.~\eqref{eq:QDcond} allows us to eliminate the time dependence of the last two terms. We rely on the following Lemma, which is an extension of Lemma 3.1 in Hastings-Koma~\cite{HastingsKoma}: \begin{lemma}[Filtering] \label{lem:filter} For any $\alpha > 0$, consider the linear operator $\mathcal{I}_\alpha$ on the space of functions $\{f(t):\mathds{R}\to\mathds{C}\}$: \begin{equation} \mathcal{I}_\alpha[f(t)] \equiv \lim_{T\to\infty} \lim_{\epsilon\to0^+} \frac{i}{2\pi} \int_{-T}^T \frac{f(t)e^{-\alpha t^2}}{t+i\epsilon} dt.
\end{equation} Then \begin{eqnarray} \mathcal{I}_\alpha[e^{-iEt}] &=& \frac{1}{2\pi}\sqrt{\frac{\pi}{\alpha}} \int_{-\infty}^0 d\omega e^{-\frac{(\omega+E)^2}{4\alpha}} = \frac12 \left(1+\erf(\frac{E}{\sqrt{4\alpha}}) \right) \nonumber \\ &=& \begin{dcases} 1 - \O((\sqrt{\alpha}/\Delta)e^{-\Delta^2/(4\alpha)}) & \text{ if } E\ge \Delta \\ \O((\sqrt{\alpha}/\Delta)e^{-\Delta^2/(4\alpha)}) & \text{ if } E \le -\Delta \\ \frac{1}{2} + \O(w_\infty/\sqrt{4\alpha}) & \text{ if } |E|\le w_\infty \\ \end{dcases}. \label{eq:filterfunction} \end{eqnarray} \end{lemma} { \renewcommand{\qedsymbol}{$\blacklozenge$} \begin{proof}[Proof of Lemma~\ref{lem:filter}] The first equality of $\mathcal{I}_\alpha[e^{-iEt}] =\int d\omega(\cdots)$ is found in the proof of Lemma 3.1 in Ref.~\cite{HastingsKoma}. The rest follows from evaluating the integral. \end{proof} } It is then clear that by applying the linear operator $\mathcal{I}_\alpha$ (for some $\alpha$ that we will choose later) to the first term on the RHS of Eq.~\eqref{eq:commexpand}, we obtain \begin{eqnarray} \mathcal{I}_\alpha[\braket{A_X(t)(\mathds{1}-P_0)B_Y}] &=& \sum_{\nu,\nu'} \sum_{j> 0} c_\nu^* c_{\nu'} \braket{\phi_{0,\nu}|A_X|\phi_j}\braket{\phi_j|B_Y|\phi_{0,\nu'}} + \O((\sqrt{\alpha}/\Delta)e^{-\Delta^2/(4\alpha)}) \nonumber \\ &=& \braket{A_X (\mathds{1}-P_0) B_Y} + \O((\sqrt{\alpha}/\Delta) e^{-\Delta^2/(4\alpha)}). \end{eqnarray} Similarly, applying it to the second term, we obtain \begin{equation} \mathcal{I}_\alpha[\braket{B_Y(\mathds{1}-P_0)A_X(t)}] = \O((\sqrt{\alpha}/\Delta)e^{-\Delta^2/(4\alpha)}). \end{equation} Lastly, applying it to the third and fourth terms, we obtain \begin{eqnarray} \mathcal{I}_\alpha[\braket{A_X(t)P_0B_Y}] &=& \frac{1}{2} \braket{A_XP_0B_Y} + \O(w_\infty/\sqrt{4\alpha}), \\ \mathcal{I}_\alpha[\braket{B_YP_0A_X(t)}] &=& \frac{1}{2} \braket{B_YP_0A_X} + \O(w_\infty/\sqrt{4\alpha}). \end{eqnarray} Now, we make use of the Lieb-Robinson bound for Hamiltonians with exponentially decaying interactions (see Theorem A.2 in Ref.~\cite{HastingsKoma}), which states that: \begin{equation} \left\|[A_X(t),B_Y]\right\| \le K\|A_X\|\|B_Y\||X||Y| (e^{v|t|}-1) e^{-\mu D}, \quad \text{where} \quad D \equiv \dist(X,Y), \end{equation} and $K$, $v$ are positive constants that depend only on the structure of $H$ and the graph. This allows us to bound the LHS of Eq.~\eqref{eq:commexpand} by applying the linear operator $\mathcal{I}_\alpha$. Since $(e^{v|t|}-1)/|t| \le v e^{v|t|}$, we have \begin{eqnarray} \left| \int_{-\infty}^{\infty} \frac{e^{v|t|}-1}{|t|} e^{-\alpha t^2} \,dt \right| \le \int_{-\infty}^\infty v e^{v|t|-\alpha t^2} \,dt = \frac{v\sqrt{\pi}}{\sqrt{\alpha}} e^{v^2/4\alpha}(1+\erf(\frac{v}{\sqrt{4\alpha}})) \le 2\sqrt{\pi} \frac{v}{\sqrt{\alpha}} e^{v^2/4\alpha} \end{eqnarray} where we used the fact that $|\erf(x)|\le 1$. Thus, for some $C_1(\alpha) = K'\frac{v}{\sqrt{\alpha}} e^{v^2/4\alpha}$, where $K'$ is a constant, we have \begin{equation} \left| \int_{-\infty}^\infty \frac{\braket{[A_X(t),B_Y]} e^{-\alpha t^2}}{t+i\epsilon} dt\right| \le C_1(\alpha) e^{-\mu D}. \end{equation} Let us choose $\alpha = \Delta/ (4\tau)$.
After applying $\mathcal{I}_\alpha$ to both sides of Eq.~\eqref{eq:commexpand}, we obtain \begin{eqnarray} C e^{- \mu D} &\ge& \left|\braket{A_X (\mathds{1}-P_0) B_Y} + \frac12\braket{A_XP_0B_Y} - \frac12\braket{B_YP_0A_X} + \O\left(\frac{w_\infty}{\sqrt{\alpha}}\right) + \O\left(\frac{\sqrt{\alpha}}{\Delta}e^{-\Delta^2/(4\alpha)}\right)\right| \nonumber \\ &=&\left|\braket{A_X B_Y} - \frac12 (\braket{A_XP_0B_Y} + \braket{B_YP_0A_X} ) + \O\left(\frac{w_\infty\sqrt{\tau}}{\sqrt{\Delta}}\right) + \O\left(\frac{1}{\sqrt{\Delta\tau}}e^{-\Delta \tau}\right) \right|. \label{eq:MHK-non-zero-spread} \end{eqnarray} Note $w_\infty/\sqrt{\Delta} = w_\infty/\sqrt{1-w_\infty}=\O(w_\infty)$. In the case where $w_\infty > 0$, let us choose constants $\tau=(1/\Delta) \log(1/w_\infty)$, $\tilde{\mu}=\mu$, and $C=C_1(\alpha=\Delta/(4\tau))$; then \begin{equation} C e^{-\tilde{\mu} D} \ge \left|\braket{A_X B_Y} - \frac12(\braket{A_XP_0B_Y} + \braket{B_YP_0A_X}) + \O(w_\infty\log^{\frac12}\frac{1}{w_\infty}) \right|, \end{equation} which is the desired inequality. In the case where $w_\infty=0$, let $\xi = \mu \Delta^2/(\Delta^2+v^2)$ be a constant, and we choose $\tau = (\xi/\Delta) D$. Then \begin{eqnarray} && \left|\braket{A_X B_Y} - \frac12(\braket{A_XP_0B_Y} + \braket{B_YP_0A_X})\right| \le C_1(\alpha) e^{-\mu D} + \O \left(\frac{1}{\sqrt{\Delta \tau}} e^{-\Delta \tau}\right) \nonumber \\ &= & K'\frac{2v\sqrt{\tau}}{\sqrt{\Delta}} e^{-\mu D +v^2\tau/\Delta } + \O\left( \frac{1}{\sqrt{\xi D}}e^{-\xi D}\right) = K'\frac{2v\sqrt{\xi D}}{\Delta} e^{-\xi D} + \O\left( \frac{1}{\sqrt{\xi D}}e^{-\xi D}\right) \end{eqnarray} Now observe that for any $\kappa>0$, if we set $\tilde{\mu}=\xi/(1+\kappa)$, then $e^{(\xi-\tilde{\mu})D} = e^{\kappa\xi D/(1+\kappa)} \ge \kappa\xi \sqrt{D}/(1+\kappa)$ for any $D\ge 0$. Hence, there exists some positive constant $C$ and $\tilde{\mu}=\xi/(1+\kappa)=\mu \Delta^2/[(\Delta^2+v^2)(1+\kappa)]$ such that \begin{equation} C e^{-\tilde{\mu} D} \ge \left|\braket{A_X B_Y} - \frac12(\braket{A_XP_0B_Y} + \braket{B_YP_0A_X})\right| \end{equation} which is the desired inequality in this special case of $w_\infty=0$. \end{proof} \paragraph{Remark}--- We note that our proof diverges from Hastings and Koma's \cite{HastingsKoma} most nontrivially at Eq.~\eqref{eq:MHK-non-zero-spread}, which is the reason we have an extra term $\O(w_\infty \log^{\frac12}(1/w_\infty))$. Note that unlike our approach, Hastings and Koma worked under the assumption that $w_n\to 0$ as $n\to\infty$; hence, in the $n\to\infty$ limit, they can neglect the error term $\O(w_\infty \sqrt{\tau/\Delta})$ in Eq.~\eqref{eq:MHK-non-zero-spread} since they can choose $w_\infty$ arbitrarily close to 0, and make the second error term $\O((1/\sqrt{\Delta \tau})e^{-\Delta \tau})$ small by choosing $\tau \sim \dist(X,Y)$. With this assumption, Hastings and Koma arrived at an alternative bound on the decay of correlation with $\tilde{\mu}=\mu/(1+2v/\Delta)$. In contrast to their approach, we are interested also in situations where $w_n$ is bounded from below by a non-zero constant, even as $n\to\infty$. We thus have to optimize the parameter $\tau$ so that the terms $\O(w_\infty\sqrt{\tau})$ and $\O((1/\sqrt{\tau})e^{-\Delta \tau})$ are balanced and do not grow unboundedly.
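\paragraph{Numerical check}--- As a sanity check on the two main ingredients above, the following minimal Python sketch (NumPy/SciPy; all parameter values are illustrative assumptions and are not taken from the proof) evaluates the filter $\mathcal{I}_\alpha[e^{-iEt}]$ in its $\epsilon\to0^+$ form $\frac12+\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\alpha t^2}\,\frac{\sin(Et)}{t}\,dt$ and compares it against the closed form $\frac12(1+\erf(E/\sqrt{4\alpha}))$ of Lemma~\ref{lem:filter}; it also illustrates that the choice $\tau=(1/\Delta)\log(1/w_\infty)$ balances the two error terms of Eq.~\eqref{eq:MHK-non-zero-spread} at $\O(w_\infty\log^{\frac12}(1/w_\infty))$.
\begin{verbatim}
# Check of the Filtering Lemma: I_alpha[e^{-iEt}] = (1/2)(1 + erf(E/sqrt(4 alpha))),
# evaluated via the epsilon -> 0+ form  1/2 + (1/2pi) Int e^{-alpha t^2} sin(E t)/t dt.
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def filter_value(E, alpha):
    f = lambda t: np.exp(-alpha * t * t) * (np.sin(E * t) / t if t != 0.0 else E)
    val, _ = quad(f, -np.inf, np.inf, limit=500)
    return 0.5 + val / (2 * np.pi)

alpha, Delta, w_inf = 0.05, 1.0, 0.02
for E in [Delta, -Delta, w_inf]:      # the three cases of Eq. (filterfunction)
    print(E, filter_value(E, alpha), 0.5 * (1 + erf(E / np.sqrt(4 * alpha))))

# Balancing the error terms: with tau = log(1/w)/Delta, both w*sqrt(tau/Delta)
# and exp(-Delta*tau)/sqrt(Delta*tau) are O(w log^{1/2}(1/w)).
w = 1e-3
tau = np.log(1 / w) / Delta
print(w * np.sqrt(tau / Delta), np.exp(-Delta * tau) / np.sqrt(Delta * tau))
\end{verbatim}
For these values the numerical integral and the closed form agree to several digits, with the $|E|\le w_\infty$ case sitting near $1/2$, exactly as used in the proof above.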
\section{Impossibility of Dilution Algorithm for Classical Hamiltonians (Theorem~\ref{thm:imposs-dilute}) \label{sec:imposs-dilute}} In this Appendix, we prove Theorem~\ref{thm:imposs-dilute}, namely the impossibility of diluting classical Hamiltonians with an efficient classical algorithm. { \renewcommand{\thethm}{\ref{thm:imposs-dilute}} \begin{thm}[Impossibility of dilution algorithm for classical Hamiltonians] If $\textnormal{\texttt{coNP}} \not\subseteq \textnormal{\texttt{NP/poly}}$, then for any $\xi>0$, $\delta < 1/\sqrt{2}$, $\tilde{w} \le 1/2$, there is no classical algorithm that, given a $k$-local $n$-qubit classical Hamiltonian $H$, runs in $O(\poly(n))$ time to find an $[r,O(n^{k-\xi}),J]$-diluter of $H$ with $\delta$-unfaithfulness, energy spread $\tilde{w}$, and any encoding $V$ that has an $O(n^{k-\xi})$-bit description. This holds for any $r$ and $J$. \end{thm} \addtocounter{thm}{-1} } For this, we use a result from Ref.~\cite{DellvanMelkebeek}: \begin{defn}[Vertex Cover] Consider any $n$-vertex $k$-uniform hypergraph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ is the set of vertices and $\mathcal{E}\subseteq \mathcal{V}^k$ is the set of hyperedges. A vertex cover on $\mathcal{G}$ is a subset of vertices $S\subseteq \mathcal{V}$ such that $\forall e\in \mathcal{E}$, $S\cap e \neq \emptyset$. The language $k$-$\textsc{VertexCover}$ is a set consisting of tuples $(\mathcal{G},m)$, where $\mathcal{G}$ is a $k$-uniform hypergraph with a vertex cover of $\le m$ vertices. To \emph{decide} $k$-$\textsc{VertexCover}$ means to output ``yes'' if a given input tuple $(\mathcal{G},m)\in k$-$\textsc{VertexCover}$, and ``no'' if $(\mathcal{G},m)\not \in k$-$\textsc{VertexCover}$. \end{defn} \begin{defn}[oracle communication protocol, adapted from~\cite{DellvanMelkebeek}] An \emph{oracle communication protocol} for a decision problem is a communication protocol between two players. Player 1 is given the input $x$ and has to run in time polynomial in the length of the input. Player 2 (the oracle) is computationally unbounded but is not given any part of $x$. At the end of the protocol, the first player should be able to decide whether $x$ is accepted (i.e. the answer to the decision problem is yes). The cost of the protocol is the number of bits of communication from Player 1 to Player 2. \end{defn} \begin{lemma}[No compressed communication protocol to decide vertex cover, Theorem 2 of \cite{DellvanMelkebeek}] \label{lem:nocompression} If $\textnormal{\texttt{coNP}} \not\subseteq \textnormal{\texttt{NP/poly}}$, for any $k\ge 2$ and $\xi>0$, there is no oracle communication protocol of cost $O(n^{k-\xi})$ to decide $k$-$\textsc{VertexCover}$. This is true even when Player 1 is co-non-deterministic. \end{lemma} We also use Lemma~\ref{lem:PPgroundspace}, which bounds the error of groundspace projectors due to perturbations. This Lemma is first stated in Sec.~\ref{sec:comp-defns} and proved in Sec.~\ref{sec:PPgroundspace-proof}. We restate it here for convenience. { \renewcommand{\thelemma}{\ref{lem:PPgroundspace}} \begin{lemma}[Error bound on perturbed groundspace (restatement)] Let $\tilde{H}$ and $\tilde{H}'$ be two Hamiltonians. Per Def.~\ref{defn:gap}, let $\tilde{P}$ project onto a quasi-groundspace of $\tilde{H}$ with energy spread $\tilde{w}$ and quasi-spectral gap $\gamma$. Assume $\tilde{w}\le 1/2$ and $\|\tilde{H}' - \tilde{H}\| \le \kappa$, where $\kappa \le (1-\tilde{w})\gamma/8$.
Then there is a quasi-groundspace projector $\tilde{P}'$ of $\tilde{H}'$ with quasi-spectral gap at least $\gamma'$, comprised of eigenstates of $\tilde{H}'$ up to energy at most $\lambda_1(\tilde{H}') + \tilde{w}'\gamma'$, where \begin{equation} \gamma' > \gamma-2\kappa, \quad \tilde{w}'\gamma' \le \tilde{w}\gamma + 2\kappa, \quad \text{and} \quad \|\tilde{P}'-\tilde{P}\| < \frac{32\kappa}{\gamma}. \end{equation} \end{lemma} \addtocounter{lemma}{-1} } \begin{proof}[\textbf{Proof of Theorem~\ref{thm:imposs-dilute}}] Suppose for the sake of contradiction that there is a polynomial-time classical algorithm that, given a $k$-local $n$-qubit classical Hamiltonian $H$, runs in $O(\poly(n))$ time and finds an $[r,O(n^{k-\xi}),J]$-diluter of $H$ with unfaithfulness $\delta < 1/\sqrt{2}$, energy spread $\tilde{w}\le 1/2$, and some encoding $V$ described by $O(n^{k-\xi})$ classical bits. We can then construct an oracle communication protocol to decide $k$-$\textsc{VertexCover}$: \begin{enumerate} \item Player 1 takes the input $(\mathcal{G},m)$ for $k$-$\textsc{VertexCover}$, where $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is a $k$-uniform $n$-vertex hypergraph, and encodes $\mathcal{G}$ as a $k$-local $n$-qubit classical Hamiltonian $H$. Specifically, we can encode the problem as the following Hamiltonian \begin{gather} H = 2H_\text{con} + H_\text{count}, \nonumber \\ \text{where} \quad H_\text{con} = \sum_{(i_1,\ldots,i_k) \in \mathcal{E}} \ketbra{0}^{(i_1)}\otimes \ketbra{0}^{(i_2)}\otimes \cdots \otimes \ketbra{0}^{(i_k)} \quad \text{and} \quad H_\text{count} = \sum_{i=1}^n \ketbra{1}_i. \end{gather} The eigenstates of $H$ are classical strings $\ket{z_1z_2\cdots z_n}$, where $z_i=1$ means that vertex $i$ is chosen to be in the vertex cover. Note $H_\text{con}$ ensures that every hyperedge in $\mathcal{E}$ is covered by some vertex, and $H_\text{count}$ penalizes any extra vertices used to cover the hypergraph. Note that for any computational basis state with energy $E$ not representing a vertex cover (i.e. for some hyperedge $e\in \mathcal{E}$, $z_i=0$ for all $i\in e$), there is a state with energy at most $E-1$ (obtained by setting $z_i=1$ for some $i\in e$: covering the hyperedge $e$ removes a penalty of $2$ from $H_\text{con}$ at the cost of adding $1$ to $H_\text{count}$). Hence, the groundstates of $H$ represent minimum vertex covers on $\mathcal{G}$. Let $P=\sum_\mu \ketbra{g_\mu}$ be the projector onto groundstates $\ket{g_\mu}$ of $H$, where each $\ket{g_\mu}$ is written in the computational basis and represents a minimum vertex cover on $\mathcal{G}$. Observe that $H$ is spectrally gapped with energy spread $w=0$ and spectral gap $\gamma=1$, since any state not representing a minimum vertex cover would incur an energy penalty of at least 1. \item Player 1 then uses the supposed polynomial-time classical algorithm to generate a diluter $\tilde{H}$ of $H$ with encoding $V$, unfaithfulness $\delta$ and energy spread $\tilde{w}$, such that $\tilde{H}=\sum_{i=1}^M \tilde{H}_i$, where $\tilde{H}_i$ are $O(1)$-local and $M=O(n^{k-\xi})$. Furthermore, Player 1 takes each term $\tilde{H}_i$ and rewrites it as $\tilde{H}_i'$ in $s$-bit precision, producing $\tilde{H}'=\sum_i \tilde{H}_i'$. Noting that $\|\tilde{H}'-\tilde{H}\| \le O(M/2^s)$, we can simply choose $s=O(\log_2 (M/\kappa)) = O(\log_2 (n/\kappa))$ to ensure $\|\tilde{H}'-\tilde{H}\|\le \kappa$, for some \begin{equation} \kappa < \min\{(1-\tilde{w})/8, (1/\sqrt{2} - \delta)/64\}. \end{equation} Player 1 then communicates $(\tilde{H}', V, n, m)$ to Player 2, incurring a protocol cost of $O(Ms) = O(n^{k-\xi}\log n)$.
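For concreteness, the following minimal Python sketch (using a hypothetical toy hypergraph; it plays no role in the protocol itself) illustrates the encoding of step 1, confirming that the groundstates of $H = 2H_\text{con}+H_\text{count}$ are exactly the minimum vertex covers, with spectral gap $1$ and energy spread $0$:
\begin{verbatim}
# Toy check of the encoding in step 1: groundstates of H = 2*H_con + H_count
# are exactly the minimum vertex covers; the spectral gap is 1.
from itertools import product

n = 4
edges = [(0, 1), (1, 2), (2, 3)]    # a hypothetical 2-uniform hypergraph (a path)

def energy(z):                      # z[i] = 1  <=>  vertex i is in the cover
    uncovered = sum(all(z[i] == 0 for i in e) for e in edges)   # H_con
    return 2 * uncovered + sum(z)                               # 2*H_con + H_count

levels = sorted(set(energy(z) for z in product([0, 1], repeat=n)))
ground = [z for z in product([0, 1], repeat=n) if energy(z) == levels[0]]
print("minimum cover size:", levels[0])        # 2
print("groundstates:", ground)                 # the minimum vertex covers
print("spectral gap:", levels[1] - levels[0])  # 1
\end{verbatim}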
\item Player 2 uses their unbounded computational resources to diagonalize $\tilde{H}'$ and find all its eigenstates. Let us first bound the distance between the groundspace of $\tilde{H}'$ and that of $\tilde{H}$. To that end, let us denote $\tilde{P}$ as the quasi-groundspace projector of $\tilde{H}$ with quasi-spectral gap $\gamma=1$ and energy spread $\tilde{w}$. According to Lemma~\ref{lem:PPgroundspace}, since $\tilde{w}\le 1/2$ and $\|\tilde{H}'-\tilde{H}\|\le \kappa \le (1-\tilde{w})/8$, there is a quasi-groundspace projector $\tilde{P}'$ of $\tilde{H}'$ comprised of eigenstates of $\tilde{H}'$ with energy at most $\lambda_1(\tilde{H}')+ \tilde{w}+2 \kappa$, satisfying $\|\tilde{P}'-\tilde{P}\| < 32\kappa$. Furthermore, since $\tilde{H}$ is a diluter of $H$ with unfaithfulness $\delta$, we have $\|\tilde{P}-V P V^\dag \tilde{P}\|\le \delta$. Denoting $P'=V P V^\dag$, this implies $\|\tilde{P}'-P'\tilde{P}'\| \le \|\tilde{P}'-\tilde{P} + \tilde{P} - P'\tilde{P}+ P'\tilde{P} - P'\tilde{P}'\| \le \delta + 64\kappa$. Note since Player 1 had chosen $\kappa < (1/\sqrt{2}-\delta)/64$, we have $\delta+64\kappa < 1/\sqrt{2}$, which implies $\|\tilde{P}'-VPV^\dag \tilde{P}'\|^2 = \|\tilde{P}'- P' \tilde{P}'\|^2 < 1/2$. We now show how Player 2 can decide whether $(\mathcal{G},m) \in k$-$\textsc{VertexCover}$ by using any groundstate $\ket{\tilde{g}}$ of $\tilde{H}'$. Let us denote $Q_{\le m}^n$ as the projector onto all computational basis states on $n$ qubits with $\le m$ qubits in the state $\ket{1}$. Then we claim \begin{equation} (\mathcal{G},m) \in k\text{-}\textsc{VertexCover} ~ \Longleftrightarrow ~ \braket{\tilde{g}|V(Q_{\le m}^n \otimes \mathds{1}_\textnormal{anc})V^\dag |\tilde{g}} \ge \frac12 \text{ for any groundstate } \ket{\tilde{g}} \text{ of } \tilde{H}'. \label{eq:Player2-decision} \end{equation} This is because: \begin{itemize} \item[(a)] If $(\mathcal{G},m)\in k$-$\textsc{VertexCover}$, the original groundspace projector $P$ of $H$ contains only states with $\le m$ 1's on the original $n$ qubits. Then $Q_{\le m}^n - P$ is positive semi-definite, and $\braket{\psi|V Q_{\le m}^n V^\dag |\psi} \ge \braket{\psi|V P V^\dag|\psi}$ for any state $\ket{\psi}$. Note for any groundstate $\ket{\tilde{g}}$ of $\tilde{H}'$, we have $\tilde{P}'\ket{\tilde{g}}=\ket{\tilde{g}}$, which means \begin{gather} \frac12 > \|(\tilde{P}'-VPV^\dag \tilde{P}')\ket{\tilde{g}}\|^2 = 1 - \braket{\tilde{g}|VPV^\dag |\tilde{g}} \nonumber \\ \Longrightarrow \quad \braket{\tilde{g}|V Q_{\le m}^n V^\dag|\tilde{g}} \ge \braket{\tilde{g}|V P V^\dag|\tilde{g}} > \frac12. \end{gather} \item[(b)] If $(\mathcal{G},m)\not \in k$-$\textsc{VertexCover}$, the original groundspace projector $P$ contains only states with $> m$ 1's on the original $n$ qubits. Then $\mathds{1} - Q_{\le m}^n - P$ is positive semi-definite, and $\braket{\psi|Q_{\le m}^n|\psi} \le 1-\braket{\psi|P|\psi}$ for any state $\ket{\psi}$. For any groundstate $\ket{\tilde{g}}$ of $\tilde{H}'$, we have similarly \begin{gather} \frac12 > \|(\tilde{P}'-V P V^\dag\tilde{P}')\ket{\tilde{g}}\|^2 = 1-\braket{\tilde{g}|V P V^\dag|\tilde{g}} \ge \braket{\tilde{g}|V Q_{\le m}^n V^\dag|\tilde{g}} \nonumber \\ \Longrightarrow \quad \braket{\tilde{g}|V Q_{\le m}^n V^\dag|\tilde{g}} \not\ge \frac12.
\end{gather} \end{itemize} In other words, Player 2 may take any groundstate $\ket{\tilde{g}}$ of $\tilde{H}'$, compute its expectation value of $V Q_{\le m}^n V^\dag$, and transmit the decision yes to Player 1 if and only if $\braket{\tilde{g}|V Q_{\le m}^n V^\dag|\tilde{g}} \ge 1/2$. \item Player 1 receives the decision from Player 2, which decides whether $(\mathcal{G},m) \in k$-$\textsc{VertexCover}$. \end{enumerate} Since this oracle communication protocol can decide the vertex cover problem with $O(n^{k-\xi}\log n) = O(n^{k-\xi/2})$ cost for any $\xi > 0$, it directly contradicts Lemma~\ref{lem:nocompression}. Hence, for any $\delta < 1/\sqrt{2}$ and $\tilde{w}\le 1/2$, no polynomial-time classical algorithm exists to find diluters of classical Hamiltonians with $\delta$-unfaithfulness, energy spread $\tilde{w}$, and encoding $V$ that is described by $O(n^{k-\xi})$ classical bits. \end{proof} \section{Incoherent Degree-Reduction and Dilution} \subsection{Degree-Reduction of Classical Hamiltonians (Proposition~\ref{prop:classical-deg-reduct})\label{sec:classical-degree-reduction}} In the classical world, degree-reduction of Constraint Satisfaction Problems is famously used in the proof of the PCP theorem~\cite{dinur}. Here we show that the same construction can be used to degree-reduce $k$-local ``classical'' Hamiltonians, where all terms are diagonal in the computational basis, albeit \emph{incoherently}. { \renewcommand{\theprop}{\ref{prop:classical-deg-reduct}} \begin{prop}[Incoherent degree-reduction of classical Hamiltonian] Consider an $n$-qudit $k$-local \emph{classical} Hamiltonian $H = \sum_{S\subset \{1,\ldots, n\}} C_S$, where each $C_S:\{z_i:i\in S\} \to [0, 1]$ is a function of $d$-ary strings of length $|S|\le k$ representing states of qudits in $S$. Let the number of terms in $H$ be $M_0=|\{S\}|=O(n^k)$. Then there is a $k$-local $[3,O(kM_0),O(1)]$-degree-reducer of $H$ with $0$-unfaithfulness, no energy spread, and trivial encoding $V=\mathds{1}$. \end{prop} \addtocounter{prop}{-1} } \begin{proof} Since the original Hamiltonian $H$ is diagonalizable in the computational $z$-basis, from now on we will slightly abuse notation to use $z_i$ to denote both the operator $z_i=\sum_{z=0}^{d-1} z\ketbra{z}_i$ and (the state of) the $i$-th qudit interchangeably. To construct a degree-reducer of $H$, we first replace each original qudit $z_i$ with a cluster of $r_i=|\{S : S\ni i\}|$ qudits, each of which we denote $\tilde{z}_{i,S}$ for $S\ni i$, corresponding to the term $C_S$ in which qudit $i$ participates in the original Hamiltonian. We then add a ring of equality constraints to make sure that all $r_i$ qudits in each cluster $i$ have the same value in the computational basis. Subsequently, by substituting each $k$-local term in the original Hamiltonian with a $k$-local term that acts on one qudit from each cluster corresponding to an original qudit, we are able to produce an equivalent Hamiltonian with degree at most 3. More concretely, if we denote $\gamma$ as the spectral gap of the original Hamiltonian $H$, this construction yields the following sparsifier Hamiltonian: \begin{gather} \tilde{H} = H_\text{con} + H_\text{eq},\nonumber \\ \text{where} \quad H_\text{con} = \sum_S C_S(\{ \tilde{z}_{i,S} : i\in S\}) \quad \text{and} \quad H_\text{eq} = \frac{\gamma}{4} \sum_{i=1}^n \sum_{j=1}^{r_i-1} (\tilde{z}_{i,j}-\tilde{z}_{i, j+1})^2.
\end{gather} Note that in $H_\text{eq}$, we use $j$ as an index labeling the $r_i$ qudits in the $i$-th cluster corresponding to different $S\ni i$. It can then be seen that every qudit $\tilde{z}_{i,S}$ has degree at most 3, appearing in the term $C_S$ as well as in up to two 2-local terms in a ring of equality constraints. Since for each $k$-local term $C_S$ we introduce $k$ qudits, the number of terms (as well as the number of qudits) in the sparsifier is $M=O(kM_0)$. Additionally, since the constraint functions $\|C_S\|\le 1$ are bounded in the original Hamiltonian by assumption, both $\gamma$ and thus the strength of individual terms in the sparsifier $\tilde{H}$ are bounded by $J=O(1)$. It remains to show that $\tilde{H}$ gap-simulates $H$. Let us denote $\ket{\vect{z}} = \bigotimes_{i=1}^n \ket{z_i}$, for every $\vect{z}\in \mathds{Z}_d^n$, which are computational basis states of the original Hilbert space $\H$ and are also eigenstates of $H$. Then let us denote a corresponding state $\ket{\tilde{\vect{z}}} = \bigotimes_{i=1}^n \bigotimes_{S\ni i} \ket{\tilde{z}_{i,S} = z_i}$ in the extended Hilbert space. We can then show that for any groundstate $\ket{\vect{z}}$ of the original Hamiltonian, where $H\ket{\vect{z}}=E_{\vect{z}}\ket{\vect{z}}$ for $E_{\vect{z}} \le w\gamma$, the corresponding state $\ket{\tilde{\vect{z}}}$ is an eigenstate of $\tilde{H}$ with the same energy, $\tilde{H}\ket{\tilde{\vect{z}}} = E_{\vect{z}}\ket{\tilde{\vect{z}}}$. Any other state will have energy at least $\gamma$ since it must either correspond to an excited state of $H$ or violate a constraint in $H_\text{eq}$. Hence, $\tilde{H}$ reproduces the spectrum of $H$ up to $\gamma$ (we can easily increase the range of the spectrum it reproduces by increasing the strength of $H_\text{eq}$). Lastly, by identifying one qudit in the $i$-th cluster as the original qudit $i$ (and hence yielding a trivial encoding), we can easily see that $\delta \equiv \|\tilde{P}-P\tilde{P}\|=0$ where $P=\sum_{E_{\vect{z}}\le w\gamma} \ketbra{\vect{z}}$ and $\tilde{P} = \sum_{E_{\vect{z}}\le w\gamma} \ketbra{\tilde{\vect{z}}}$. Therefore, $\tilde{H}$ is a $k$-local $[3,O(kM_0),O(1)]$-degree-reducer of $H$ with zero unfaithfulness and identical low-energy spectrum. \end{proof} \subsection{Incoherent Tree-Graph Diluter and Degree-Reducer of $H_A$ (Proposition~\ref{prop:incoherent-tree})\label{sec:incoherenttree}} While Proposition~\ref{prop:classical-deg-reduct} shows that it is always possible to \emph{constructively} degree-reduce classical Hamiltonians incoherently, we know this is not in general possible for dilution due to Theorem~\ref{thm:imposs-dilute}. Nevertheless, in some cases, such as our example Hamiltonian $H_A$, we show that a non-trivial incoherent diluter exists. The key idea behind our construction of this sparsifier of $H_A$ is to use additional ancillas and constraints to simulate a counting operation, so that only states with fewer than two excitations on the original set of qubits avoid violating any constraints. By using ancilla qubits as memory and arranging them in a recursive, tree-like geometry, we are able to limit the maximum degree and the number of interactions required in the sparsified Hamiltonian to perform this counting operation.
\begin{figure}[htb] \centering \begin{tikzpicture}[level distance=10mm,grow'=up] \tikzstyle{level 1}=[sibling distance=40mm] \tikzstyle{level 2}=[sibling distance=20mm] \tikzstyle{level 3}=[sibling distance=10mm] \coordinate child { child { child {[fill] circle (2pt)} child {[fill] circle (2pt)} } child { child {[fill] circle (2pt)} child {[fill] circle (2pt)} } } child { child {[fill] circle (2pt)} child {[fill] circle (2pt)} }; \end{tikzpicture} \caption{\label{fig:tree}Example incoherent degree-reducer/diluter geometry for $H_A$ on 6 qubits, where $\bullet$ on the leaf nodes denote the original qubits, and the ancilla qubits are located at the internal nodes.} \end{figure} { \renewcommand{\theprop}{\ref{prop:incoherent-tree}} \begin{prop}[$0$-unfaithfulness incoherent dilution and DR for $H_A$] There is a 3-local incoherent $[2,n-1,1]$-diluter of $H_A$ with 0-unfaithfulness, energy spread $\tilde{w}=0$, and trivial encoding. This is also an incoherent $[2,n-1,1]$-degree-reducer of $H_A$. \end{prop} \addtocounter{prop}{-1} } \begin{proof} Let us now describe our construction. For the original Hamiltonian $H_A$ with $n$ qubits, the incoherent sparsifier we propose involves an additional $n-1$ ancilla qubits. We arrange them on a binary tree of height $\lceil\log_2 n\rceil$, with the original qubits placed on the leaf nodes, as shown in Fig.~\ref{fig:tree}. The sparsifier we propose consists of a sum of 3-local, commuting, and positive semi-definite terms, one per branching at each internal node: \begin{equation} \tilde{H}_A^\text{tree} = \sum_{i=1}^{n-1} \tilde{H}_{A,i}^\text{tree} \quad \text{with} \quad \tilde{H}_{A,i}^\text{tree} = \sum_{l_i,r_i,b_i=0}^1 h(l_i,r_i,b_i)\ketbra{l_ir_ib_i} \end{equation} where $\ket{b_i}$ is the qubit state at the $i$-th internal node, and $\ket{l_i}$ ($\ket{r_i}$) is the qubit state of its left (right) child node. The energy cost function $h(l,r,b)$ is \begin{equation} h\Bigg( \begin{tikzpicture}[baseline=1.6ex, level distance = 10mm, sibling distance = 8mm] \node {$b$}[grow'=up] {child {node {$l$}} child {node {$r$}}}; \end{tikzpicture} \Bigg) = \begin{cases} 0, & \begin{tikzpicture}[baseline=1.6ex, level distance = 7mm, sibling distance = 4mm] \node {$b$}[grow'=up] {child {node {$l$}} child {node {$r$}}}; \end{tikzpicture} \in \Big\{ \begin{tikzpicture}[baseline=1.6ex, level distance = 7mm, sibling distance = 4mm] \node {0}[grow'=up] {child {node {0}} child {node {0}}}; \end{tikzpicture}, \begin{tikzpicture}[baseline=1.6ex, level distance = 7mm, sibling distance = 4mm] \node {1}[grow'=up] {child {node {0}} child {node {1}}}; \end{tikzpicture}, \begin{tikzpicture}[baseline=1.6ex, level distance = 7mm, sibling distance = 4mm] \node {1}[grow'=up] {child {node {1}} child {node {0}}}; \end{tikzpicture} \Big\} \\ 1, & \text{otherwise} \end{cases} \end{equation} which imposes a constraint that forces the parent node of a branching to record an excitation whenever exactly one of its children is excited, and directly penalizes any branching whose two children are both excited. This information is passed down towards the root node at the bottom, and it is easy to see that this effectively counts the number of excitations among the original $n$ qubits: no constraint is violated if and only if there is at most one excitation. In other words, the zero-energy groundspace of $\tilde{H}_A^\text{tree}$ consists of the $n+1$ states corresponding to zero or one excitation among the original $n$ qubits. Since $\tilde{H}_A^\text{tree}$ is a sum of commuting terms, it is easy to solve for all the eigenstates.
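As an illustrative check of this construction (not part of the proof), the following short Python sketch enumerates all configurations for a hypothetical instance with $n=4$ leaves and $3$ internal ancillas, and confirms that the zero-energy eigenstates are exactly the $n+1$ configurations with at most one excitation among the leaves:
\begin{verbatim}
# Tree sparsifier check (n = 4 leaves, 3 internal ancillas): the zero-energy
# configurations are exactly those with <= 1 excitation on the original qubits.
from itertools import product

GOOD = {(0, 0, 0), (0, 1, 1), (1, 0, 1)}   # branchings (l, r, b) with h(l,r,b) = 0

def tree_energy(z, a):
    aL, aR, root = a                        # ancillas of the height-2 binary tree
    branchings = [(z[0], z[1], aL), (z[2], z[3], aR), (aL, aR, root)]
    return sum((l, r, b) not in GOOD for (l, r, b) in branchings)

ground = [(z, a) for z in product([0, 1], repeat=4)
                 for a in product([0, 1], repeat=3) if tree_energy(z, a) == 0]
print(len(ground))                          # 5 = n + 1 groundstates
print(sorted(z for z, _ in ground))         # leaf patterns with <= 1 excitation
\end{verbatim}
Note also that each groundstate here comes with a unique ancilla configuration, which is precisely the source of the incoherence discussed in the Remark below.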
By inspection, $\tilde{H}_A^\text{tree}$ is spectrally gapped with energy spread 0 and gap 1, where the excited manifold consists of both states with more than one excitation among the original qubits as well as ``illegal'' states that violate the constraints unnecessarily. The groundspace consists of $n+1$ states $\{\ket{\tilde{g}_i}\}_{i=0}^n$, where $\ket{\tilde{g}_0}$ has zero excitations among the original qubits, and $\ket{\tilde{g}_i}$ has the $i$-th original qubit excited. Some example groundstate configurations are shown below in Fig.~\ref{fig:treeincoherent}. Let us now analyze the performance of this sparsifier construction. Note that this construction contains $2n-1$ qubits, each involved in at most 2 Hamiltonian terms -- a maximum degree of $r=2$. Furthermore, there are only $M=n-1$ terms in the sparsifier Hamiltonian, compared to the $n(n-1)/2$ terms of the original Hamiltonian. Each term has bounded strength $J=\|\tilde{H}_{A,i}^\text{tree}\|= 1$. Additionally, $\tilde{H}_{A}^\text{tree}$ yields the same energy spread of $\tilde{w}=w=0$ and gap of $\gamma=1$ as the original. Furthermore, since the groundspace of $\tilde{H}_A^\text{tree}$ faithfully reproduces the original groundspace configuration on the $n$ original qubits, we have $P\tilde{P}=\tilde{P}$ and $\delta=\|\tilde{P}-P\tilde{P}\|=0$. Hence, $\tilde{H}_A^\text{tree}$ gap-simulates $H_A$ with 0-unfaithfulness. Therefore, our construction of $\tilde{H}_A^\text{tree}$ is a 3-local $[2,n-1,1]$-gap-simulator of $H_A$, with energy spread $\tilde{w}=0$ and zero unfaithfulness. \end{proof} \begin{figure}[h] \centering $\ket{\tilde{g}_3}=$ \begin{tikzpicture}[baseline=8ex,level distance=8mm,grow'=up] \tikzstyle{level 1}=[sibling distance=20mm] \tikzstyle{level 2}=[sibling distance=10mm] \tikzstyle{level 3}=[sibling distance=5mm] \node {1} child { node {1} child { node {0} child {node {\textbf0}} child {node {\textbf0}} } child { node {1} child {node {\textbf1}} child {node {\textbf0}} } } child { node {0} child {node {\textbf0}} child {node {\textbf0}} }; \end{tikzpicture} \hspace{15pt} $\ket{\tilde{g}_4}=$ \begin{tikzpicture}[baseline=8ex,level distance=8mm,grow'=up] \tikzstyle{level 1}=[sibling distance=20mm] \tikzstyle{level 2}=[sibling distance=10mm] \tikzstyle{level 3}=[sibling distance=5mm] \node {1} child { node {1} child { node {0} child {node {\textbf0}} child {node {\textbf0}} } child { node {1} child {node {\textbf0}} child {node {\textbf1}} } } child { node {0} child {node {\textbf0}} child {node {\textbf0}} }; \end{tikzpicture} \hspace{15pt} $\ket{\tilde{g}_6}=$ \begin{tikzpicture}[baseline=8ex,level distance=8mm,grow'=up] \tikzstyle{level 1}=[sibling distance=20mm] \tikzstyle{level 2}=[sibling distance=10mm] \tikzstyle{level 3}=[sibling distance=5mm] \node {1} child { node {0} child { node {0} child {node {\textbf0}} child {node {\textbf0}} } child { node {0} child {node {\textbf0}} child {node {\textbf0}} } } child { node {1} child {node {\textbf0}} child {node {\textbf1}} }; \end{tikzpicture} \caption{\label{fig:treeincoherent}Three example groundstates of $\tilde{H}_A^\text{tree}$ for 6 original qubits.
Note $\ket{\tilde{g}_3}$ and $\ket{\tilde{g}_4}$ have the same ancilla states, but $\ket{\tilde{g}_6}$ has a different ancilla state, so the sparsifier is incoherent.} \end{figure} \paragraph{Remark}--- Unfortunately, the incoherence of this sparsifier construction is unavoidably high, since different groundstates on the original $n$ qubits can be strongly correlated with different ancilla states, as seen in Fig.~\ref{fig:treeincoherent}. To lower bound the incoherence, we note that among the groundstates $\ket{\tilde{g}_i}$, only $\ket{\tilde{g}_{2\nu-1}}$ and $\ket{\tilde{g}_{2\nu}}$ share the same ancilla state, which we denote $\ket{a_\nu}^\textnormal{anc}$, for $\nu=1,\ldots,\lceil n/2 \rceil$. (Note when $n$ is odd, we simply set $\ket{g_{2\nu}}=0$ when $\nu=\lceil n/2 \rceil$.) Let us also denote $\ket{a_0}^\textnormal{anc} = \ket{0^{n-1}}^\textnormal{anc}$ as the ancilla configuration in the groundstate $\ket{\tilde{g}_0}$. Then, we write $P_\nu = \ketbra{g_{2\nu-1}} + \ketbra{g_{2\nu}}$ for $\nu=1,\ldots,\lceil n/2 \rceil$, and $P_0 = \ketbra{g_0}$ as projectors onto disjoint subsets of groundstates of the original Hamiltonian. From this, we can write $\tilde{P}=\sum_{\nu=0}^{\lceil n/2 \rceil} P_\nu\otimes \ketbra{a_\nu}$. Since $P_\nu P_{\nu'} = \delta_{\nu\nu'} P_\nu$, we note that any operator of the form $\sum_\nu P_\nu\otimes M_\nu$ has a block-diagonal structure, where each block $M_\nu$ is supported on the states in $P_\nu$. Hence, \begin{eqnarray*} \tilde{P}-P\otimes P_\textnormal{anc} &=& \sum_{\nu=0}^{\lceil n/2 \rceil} P_\nu\otimes(\ketbra{a_\nu} - P_\textnormal{anc}) = \bigoplus_\nu (\ketbra{a_\nu} - P_\textnormal{anc}) \\ \epsilon \ge \| \tilde{P}-P\otimes P_\textnormal{anc} \| &=& \max_\nu \|\ketbra{a_\nu} - P_\textnormal{anc}\|. \end{eqnarray*} Let us denote $\ket{a}$ as some state that $P_\textnormal{anc}$ projects onto ($P_\textnormal{anc}\ket{a}=\ket{a}$). Noting that $\braket{a_\nu|a_{\nu'}}=\delta_{\nu\nu'}$, we have \begin{eqnarray} \epsilon^2 &\ge& \max_\nu \|\ketbra{a_\nu} - P_\textnormal{anc}\|^2 \ge \min_{\ket{a}} \max_\nu \braket{a|(\ketbra{a_\nu}-1)^2|a} = \min_{\ket{a}} \max_\nu 1-\left|\braket{a|a_\nu}\right|^2 \nonumber \\ &\ge& \min_{\ket{a}} \frac{1}{1+\lceil n/2\rceil}\sum_{\nu=0}^{\lceil n/2 \rceil} \left(1-\left|\braket{a|a_\nu}\right|^2\right) = 1-\frac{\max_{\ket{a}} \sum_{\nu} \left|\braket{a|a_\nu}\right|^2}{1+\lceil n/2 \rceil} \ge 1 - \frac{1}{1+\lceil n/2\rceil} \end{eqnarray} where in the beginning of the last line, we used the fact that the maximum is at least as large as the average of a set of numbers. Hence, we have necessarily large (close-to-1) incoherence $\epsilon \ge \sqrt{1-1/(1+\lceil n/2 \rceil)} \approx 1 - O(1/n)$. \section{Coherent DR with Polynomial Interaction Strength} In this Appendix, we provide constructions that perform degree-reduction using interaction strength $J=\poly(n,\epsilon^{-1})$ for a large class of Hamiltonians. In Sec.~\ref{sec:degree-reduction-poly}, we describe how to perform coherent degree-reduction for any Hamiltonian whose spectral gap decays at most inverse-polynomially with system size; in fact, the resultant degree-reducer Hamiltonian not only gap-simulates, but also full-spectrum-simulates the original Hamiltonian. Later in Sec.~\ref{sec:prop-circuit}, we show that for our example $H_A$, coherent dilution is also possible with polynomial strength.
\subsection{Constructive Coherent DR with Polynomial Interaction Strength (Theorem~\ref{thm:degree-reduction-poly})\label{sec:degree-reduction-poly}} In this section, we show how to perform coherent degree-reduction for Hamiltonians whose quasi-spectral gap scales as $\gamma=\Omega(1/\poly(n))$. { \renewcommand{\thethm}{\ref{thm:degree-reduction-poly}} \begin{thm}[Coherent DR with polynomial interaction strength] Suppose $H$ is an $O(1)$-local Hamiltonian with a quasi-groundspace projector $P$, which has quasi-spectral gap $\gamma=\Omega(1/\poly(n))$ and energy spread $w$. Also assume $\|H\|=O(\poly(n))$. Then for every $\epsilon>0$, one can construct an $O(1)$-local $[O(1), O(\poly(n)/\epsilon^2), O(\poly(n,\epsilon^{-1}))]$-degree-reducer of $H$ with incoherence $\epsilon$, energy spread $w+O(1/\poly(n))$, and trivial encoding. \end{thm} \addtocounter{thm}{-1} } To prove the above Theorem, we first prove two smaller Lemmas~\ref{lem:circuit-ham-simul} and \ref{lem:circuit-idling} about different aspects of using Kitaev's circuit-to-Hamiltonian construction~\cite{KSV02} for Hamiltonian simulation. The following two concepts will be useful in the discussion: \begin{defn}[history states] Let $U=U_T \cdots U_2 U_1$ be a quantum circuit acting on $n+m$ qudits. Then for any input state $\ket{\psi_\mu}\in \mathds{C}^{d^n}$, the \emph{history state} with respect to $U$ and $\ket{\psi_\mu}$ is the following \begin{equation} \ket{\eta_\mu} = \frac{1}{\sqrt{T+1}}\sum_{t=0}^T \Big( U_{t}\cdots U_2 U_1 \ket{\psi_\mu}\ket{0^m}^\textnormal{anc} \Big)\ket{1^t 0^{T-t}}^\textnormal{clock} \end{equation} \end{defn} \begin{defn}[circuit degree] The degree of a quantum circuit $U=\prod_{t=1}^T U_t$ is $\deg(U) = \max_i |\{U_t: U_t \textnormal{ acts nontrivially on qudit } i\}|$. \end{defn} We now prove the first of the two Lemmas, which describes a circuit-to-Hamiltonian transformation that can be used for full-spectrum-simulation, assuming an appropriate energy penalty Hamiltonian $H_{out}$ can be constructed. \begin{lemma}[Circuit-Hamiltonian simulation] \label{lem:circuit-ham-simul} Consider an orthonormal basis of states $\{\ket{\psi_\mu}\}_{\mu=1}^{d^n}$ on $n$ qudits. Let $U=\prod_{t=1}^T U_t$ be a quantum circuit where $U_t$ is a $k$-local gate. Let $\L = \spn\{\ket{\eta_\mu}\}_{\mu=1}^{d^n}$ be the space of history states with respect to $U$ and $\{\ket{\psi_\mu}\}$, and let $H_\textnormal{eff} = \sum_\mu \lambda_\mu \ketbra{\eta_\mu}$. Suppose there exists a Hamiltonian $H_{out}$ such that $\|H_\textnormal{eff} - H_{out}|_{\L}\| \le \xi/2$. Then for any $\eta>0$, we can construct a Hamiltonian $\tilde{H}_\textnormal{circuit}$ from the description of $U$ such that $\tilde{H}_\textnormal{circuit}$ full-spectrum-simulates $H_\textnormal{eff}$ to precision $(\eta,\xi)$ below some energy cut-off $\Delta\ge O (\xi^{-1}\|H_{out}\|^2 + \eta^{-1}\|H_{out}\|)$, per Def.~\ref{defn:CMPsimul}. The constructed $\tilde{H}_\textnormal{circuit}$ is $(k+3)$-local, has $O(\deg(U))$ degree, $O(T)$ number of terms, and $O(\poly(n,T,\xi^{-1},\eta^{-1}, \|H_{out}\|))$ interaction strength. \end{lemma} \begin{proof} For a given circuit $U = U_T \cdots U_2 U_1$, where $T=O(\poly(n))$, the corresponding circuit-Hamiltonian is \begin{align} \tilde{H}_{\textnormal{circuit}} &= H_0 + H_{out}\\ \text{where} \quad H_0 &= J_{clock} H_{clock} + J_{prop} H_{prop} + J_{in} H_{in} \end{align} The role of $H_0$ is to isolate $\L=\spn\{\ket{\eta_\mu}\}$ as the zero-energy groundspace separated by a large spectral gap.
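Before specifying the terms of $H_0$, the following minimal NumPy sketch (a hypothetical one-qubit, two-gate toy circuit, used purely to fix conventions and not part of the construction) builds the history states defined above, with the unary clock register $\ket{1^t0^{T-t}}$, and verifies that history states built from orthonormal inputs are themselves orthonormal:
\begin{verbatim}
# History states |eta> = (1/sqrt(T+1)) sum_t (U_t...U_1 |psi>) |1^t 0^{T-t}>
# for a toy circuit U = U_2 U_1 on one qubit (no ancillas here, m = 0).
import numpy as np

Hg = np.array([[1.0, 1], [1, -1]]) / np.sqrt(2)   # U_1: Hadamard
Xg = np.array([[0.0, 1], [1, 0]])                 # U_2: Pauli X
gates = [Hg, Xg]
T = len(gates)

def clock(t):                       # unary clock state |1^t 0^{T-t}>
    e = np.zeros(2 ** T)
    e[int("1" * t + "0" * (T - t), 2)] = 1.0
    return e

def history(psi):
    terms, state = [np.kron(psi, clock(0))], psi.copy()
    for t, U in enumerate(gates, start=1):
        state = U @ state
        terms.append(np.kron(state, clock(t)))
    return sum(terms) / np.sqrt(T + 1)

eta0 = history(np.array([1.0, 0.0]))
eta1 = history(np.array([0.0, 1.0]))
print(np.vdot(eta0, eta0), np.vdot(eta0, eta1))   # 1.0 and 0.0: orthonormal
\end{verbatim}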
The first part of $H_0$ is \begin{equation} \label{eq:Hclock} H_{clock} = \sum_{t=1}^{T-1}\ketbra{01}^\textnormal{clock}_{t,t+1}, \end{equation} which sets the legal state configurations in the clock register to be of the form $\ket{t}^\textnormal{clock} \equiv \ket{1^t 0^{T-t}}^\textnormal{clock}$. Then, we simulate the state propagation under the circuit using \begin{eqnarray}\label{eq:Hprop} H_{prop} &=& \sum_{t=1}^T H_{prop,t},\\ \text{where} \quad H_{prop,t} &=& \mathds{1} \otimes \ketbra{100}^\textnormal{clock}_{t-1,t,t+1} - U_t\otimes\ketbrat{110}{100}^\textnormal{clock}_{t-1,t,t+1} \nonumber\\ && - U_t^\dag\otimes \ketbrat{100}{110}^\textnormal{clock}_{t-1,t,t+1} + \mathds{1}\otimes\ketbra{110}^\textnormal{clock}_{t-1,t,t+1} \quad \text{for} \quad 1<t<T, \nonumber\\ H_{prop,1} &=& \mathds{1}\otimes \ketbra{00}_{12}^\textnormal{clock} - U_1 \otimes\ketbrat{10}{00}_{12}^\textnormal{clock} - U_1^\dag \otimes \ketbrat{00}{10}_{12}^\textnormal{clock} + \mathds{1}\otimes\ketbra{10}_{12}^\textnormal{clock}, \nonumber\\ \text{and} \quad H_{prop,T} &=& \mathds{1}\otimes \ketbra{10}_{T-1,T}^\textnormal{clock} - U_T \otimes\ketbrat{11}{10}_{T-1,T}^\textnormal{clock} - U_T^\dag\otimes\ketbrat{10}{11}_{T-1,T}^\textnormal{clock} + \mathds{1}\otimes\ketbra{11}_{T-1,T}^\textnormal{clock}.\nonumber \end{eqnarray} These terms check that the propagation of states from time $t-1$ to $t$ is correct. Now, we also need to ensure that the ancilla qudits are in the state $\ket{0^{m}}^\textnormal{anc}$ when the clock register is $\ket{0^T}^\textnormal{clock}$, using \begin{gather} H_{in} = \sum_{i=1}^{m} (\mathds{1} - \ketbra{0})_i^\textnormal{anc}\otimes\ketbra{0}^\textnormal{clock}_{t_{\min}(i)}, \\ \text{where} \quad t_{\min}(i) = \min \{t: U_t \text{ acts nontrivially on ancilla qudit } i\}. \nonumber \end{gather} In other words, for each ancilla qudit $i$, $H_{in}$ penalizes the ancilla if it is not in the state $\ket{0}$ before it is first used by the $t_{\min}(i)$-th gate. Note that $\tilde{H}_\textnormal{circuit}$ has $O(T)$ terms, each of which is at most $(k+3)$-local when $U_t$ are $k$-local. It also has $O(\deg(U))$ degree, since each computational qudit is involved in at most $O(\deg(U))$ terms in $H_{prop}$, and each clock qubit is involved in $O(1)$ terms. It is easy to see that $H_0\L=0$. But we also need to lower bound the spectral gap of $H_0$, i.e. $\lambda_1(H_0|_{\L^\perp} )$. To that end, let us denote the following subspaces: \begin{align} \S_{clock} &= \spn\{ \ket{\psi}\ket{y}\ket{1^t 0^{T-t}}:\ket{\psi}\in \mathds{C}^{d^n} \text{ and } \ket{y} \in \mathds{C}^{d^m}, 0\le t\le T\}, \\ \S_{prop} &= \spn\{\ket{\eta_\mu, y} \equiv \frac{1}{\sqrt{T+1}}\sum_{t=0}^T \Big(U_t \cdots U_2 U_1 \ket{\psi_\mu}\ket{y} \Big) \ket{1^t 0^{T-t}}: 1\le \mu \le d^n, 0\le y \le d^{m}-1\}. \end{align} Note that $ \L \subset \S_{prop} \subset \S_{clock}$. Let us denote $\tilde{\mathcal{A}}=\mathcal{A}\cap \L^\perp$ for any subspace $\mathcal{A}$. Note $H_{clock} \S_{clock} = 0$, $H_{prop} \S_{prop} =0$, $H_{in}\L=0$. We will use the following Projection Lemma~\ref{lem:projection}: \begin{lemma}[Projection Lemma, adapted from \cite{KKR06}] \label{lem:projection} Let $H=H_1+H_2$ be a sum of two Hamiltonians operating on some Hilbert space $\S_0=\S\oplus\S^\perp$.
Assume that $H_2$ has a zero-energy eigenspace $\S\subseteq \S_0$ so that $H_2\S=0$, and that the minimum eigenvalue satisfies $\lambda_1(H_2|_{\S^\perp})\ge J > 2\|H_1\|$; then \begin{equation} \lambda_1(H_1|_\S) - \frac{\|H_1\|^2}{J-2\|H_1\|} \le \lambda_1 (H) \le \lambda_1 (H_1|_\S). \end{equation} In particular, if $J \ge K\|H_1\|^2 + 2\|H_1\|=O(K\|H_1\|^2)$, we have $ \lambda_1(H_1|_\S) - \frac{1}{K} \le \lambda_1 (H) \le \lambda_1 (H_1|_\S). $ \hfill\ensuremath{\blacklozenge} \end{lemma} Applying the above Lemma successively to $H_0$, we obtain \begin{align} \lambda_1(H_0|_{\L^\perp} ) &\ge \lambda_1\left[(J_{prop} H_{prop} + J_{in} H_{in} )|_{\tilde\S_{clock}} \right] - \frac{1}{K} \quad \text{if} \quad J_{clock} = O(K\|J_{prop} H_{prop} + J_{in} H_{in}\|^2) \\ &\ge \lambda_1\left[( J_{in} H_{in})|_{\tilde\S_{prop}} \right] - \frac{2}{K} \quad \text{if} \quad J_{prop}/T^2 = O(K\| J_{in} H_{in}\|^2) \end{align} where we used the fact that $\lambda_1(H_{clock}|_{\S_{clock}^\perp})\ge 1$, and $\lambda_1(H_{prop}|_{\S_{prop}^\perp}) \ge c/T^2$ for some constant $c$. We now lower bound the last term. Let us denote $\hat{n}=\mathds{1} - \ketbra{0}$. Then within $\S_{clock}$, we can rewrite \begin{align} H_{in}|_{\S_{clock}} &= \sum_{i=1}^m \hat{n}_{i}^\textnormal{anc} \otimes \sum_{0\le t \le t_{\min}(i)} \ketbra{t}^\textnormal{clock} = \sum_{t=0}^{\max_i t_{\min}(i)} H_{in,t} \\ \text{where} \quad H_{in,t} &= \sum_{\{i:~t \le t_{\min}(i)\}} \hat{n}_i^\textnormal{anc} \otimes \ketbra{t}^\textnormal{clock}. \nonumber \end{align} In particular, $H_{in,t=0}=\sum_{i=1}^m \hat{n}_i^\textnormal{anc} \otimes \ketbra{t=0}$. Thus, for any $\ket{\eta_\mu,y},\ket{\eta_\nu,y'}\in \tilde{\S}_{prop}$, where necessarily $y,y'>0$, we have \begin{eqnarray} \braket{\eta_\nu,y'|H_{in,t=0}|\eta_\mu,y} &=& \frac{1}{T+1} \bra{\psi_\nu}\bra{y'} H_{in,t=0} \ket{\psi_\mu}\ket{y} \nonumber \\ &=& \frac{1}{T+1} \delta_{\mu\nu} \braket{y'|\sum_{i=1}^{m}\hat{n}_i^\textnormal{anc} |y}= \frac{1}{T+1} \delta_{\mu\nu}\delta_{y,y'} \times w(y), \end{eqnarray} where $w(y)$ is the Hamming weight of $y$ in $d$-ary representation, which is at least 1 for any $y>0$. Hence, the minimum eigenvalue of $H_{in,t=0}|_{\tilde{\S}_{prop}}$ is $1/(T+1)$. Since $H_{in}$ consists of only positive semi-definite terms, we have $\lambda_1(H_{in}|_{\tilde{\S}_{prop}}) \ge \lambda_1( H_{in,t=0}|_{\tilde{\S}_{prop}}) \ge 1/(T+1)$. Thus, to ensure that $H_0$ has spectral gap $\lambda_1(H_0|_{\L^\perp}) \ge \Delta$, we simply choose $J_{in} = O(\Delta(T+1))$, $J_{prop} = O(K T^2 J_{in}^2 m^2)$, and $J_{clock} = O(KJ_{prop}^2 T^2)=O(\poly(n, T, \Delta))$. We now show that $\tilde{H}_\textnormal{circuit}$ full-spectrum-simulates $H_\textnormal{eff}$ with only polynomial overhead in energy. To this end, we use the following result regarding perturbative reductions adapted from Lemma 4 of \cite{BravyiHastingsSim} (also Lemma 35 of \cite{UniversalHamiltonian}): \begin{lemma}[First-order reduction, adapted from \cite{BravyiHastingsSim}] Suppose $\tilde{H}=H_0+H_1$, defined on Hilbert space $\tilde\H=\L \oplus \L^\perp$ such that $H_0\L=0$ and $\lambda_1(H_0|_{\L^\perp})\ge \Delta$. Suppose $H_\textnormal{eff}$ is a Hermitian operator and $V$ is an isometry such that $\| V H_\textnormal{eff} V^\dag - H_1|_{\L} \| \le \xi/2$, then $\tilde{H}$ full-spectrum-simulates $H_\textnormal{eff}$ to precision $(\eta,\xi)$ below energy cut-off $\Delta/2$, as long as $\Delta \ge O(\xi^{-1}\|H_1\|^2 + \eta^{-1}\|H_1\|)$, per Def.~\ref{defn:CMPsimul}.
In other words, $\|\tilde{H}_{\le\Delta/2} - \tilde{V} H_\textnormal{eff} \tilde{V}^\dag\| \le \xi$ for some isometry $\tilde{V}$ where $\|\tilde{V}-V\|\le \eta$. \hfill\ensuremath{\blacklozenge} \end{lemma} We apply the above Lemma with $H_1=H_{out}$ and $V=\mathds{1}$. Note that we are given in the premise of our Lemma~\ref{lem:circuit-ham-simul} that \begin{equation} \|H_\textnormal{eff} - H_1|_{\L}\| = \|H_\textnormal{eff} - H_{out}|_{\L}\| \le \xi/2. \end{equation} Hence, $\tilde{H}_\textnormal{circuit} = H_0+H_{out}$ full-spectrum-simulates $H_{\textnormal{eff}}$ to precision $(\eta,\xi)$ below energy cut-off $\Delta/2\ge O(\xi^{-1}\|H_{out}\|^2 + \eta^{-1}\|H_{out}\|)$. The maximum interaction strength in $\tilde{H}_\textnormal{circuit}$ is $J_{clock} = O(\poly(n, T, \Delta))= O(\poly(n,T,\xi^{-1},\eta^{-1}, \|H_{out}\|))$. This concludes the proof of our Lemma~\ref{lem:circuit-ham-simul}. \end{proof} We now prove the second Lemma, which shows that in order to ensure the circuit-Hamiltonian simulates a given quasi-groundspace coherently, one only needs to add $O(\poly(n)/\epsilon^2)$ identity gates to the end of a polynomial-sized circuit before transforming the circuit to a Hamiltonian. \begin{lemma}[Idling to enhance coherence] \label{lem:circuit-idling} Consider an uncomputed quantum circuit $U_{D}\cdots U_1=\mathds{1}$. Suppose we add $L$ identity gates to the end of the circuit, so that we obtain a new circuit $U=\mathds{1}^L U_D\cdots U_1$ with length $T=D+L$. Let $P=\sum_{\mu=1}^q \ketbra{\psi_\mu}$ and $Q=\sum_{\mu=1}^q \ketbra{\eta_\mu}$, where $\ket{\eta_\mu}$ are history states with respect to $U$ and $\ket{\psi_\mu}$. If we choose $L=O(D/\epsilon^2)$, then $\|Q-P\otimes P_\textnormal{anc}\| \le \epsilon$ for some ancilla projector $P_\textnormal{anc}$, regardless of $q$. \end{lemma} \begin{proof} Note that we can write \begin{align} \ket{\eta_\mu} &= \sqrt{1-\chi^2}\ket{\psi_\mu}\otimes\ket{\alpha} + \chi \ket{\beta_\mu} \\ \text{where} \quad \ket{\alpha} &= \frac{1}{\sqrt{L+1}} \ket{0^m}^\textnormal{anc} \otimes \sum_{t=D}^{D+L} \ket{1^t 0^{T-t}}^\textnormal{clock} \\ \ket{\beta_\mu} &= \frac{1}{\sqrt{D}}\sum_{t=0}^{D-1}\Big( U_{t}\cdots U_2 U_1 \ket{\psi_\mu}\ket{0^{m}}^\textnormal{anc} \Big) \ket{1^t 0^{T-t}}^\textnormal{clock}\\ \chi &= \sqrt{D/(D+L+1)}. \end{align} Observe that $\bra{\beta_\mu}(\ket{\psi_\nu}\ket{\alpha}) = 0$ since the clock register is supported on disjoint sets of times in the two states, and \begin{equation} \braket{\beta_\mu|\beta_\nu} = \frac{1}{D}\sum_{t=0}^{D-1}\braket{\psi_\mu|\psi_\nu} = \delta_{\mu\nu}. \end{equation} Hence, for any normalized state $\ket{\phi}\in Q$, we write $\ket{\phi}=\sum_\mu c_\mu\ket{\eta_\mu}$, and find that \begin{equation} \left\|(Q-P\otimes\ketbra{\alpha})\ket{\phi}\right\| = \left\|\chi\sum_{\mu} c_\mu \ket{\beta_\mu} \right\| = \chi \left\|\sum_{\mu} c_\mu \ket{\beta_\mu}\right\| = \chi. \label{eq:bound-in} \end{equation} We now use the technical Lemma proved in Appendix~\ref{sec:uniqueGS}, which we restate here: { \renewcommand{\thelemma}{\ref{lem:proj-diff}} \begin{lemma}[Projector Difference Lemma (restatement)] Consider two Hermitian projectors $\Pi_A$ and $\Pi_B$, such that $\rank(\Pi_A)\le \rank(\Pi_B)$. Suppose that for all normalized $\ket{\phi}\in \Pi_B$, $\|(\Pi_B-\Pi_A)\ket{\phi}\| \le \delta$. Then $\|\Pi_B-\Pi_A\| \le \sqrt{2}\delta/\sqrt{1-\delta^2}$.
\hfill\ensuremath{\blacklozenge} \end{lemma} \addtocounter{lemma}{-1} } Since $\rank(Q) = \rank(P\otimes \ketbra{\alpha})=q$, by Lemma~\ref{lem:proj-diff} (identifying $\Pi_B \to Q$, $\Pi_A\to P\otimes\ketbra{\alpha}$, and $\delta\to\chi$), we have \begin{equation} \|Q-P\otimes \ketbra{\alpha}\| \le \sqrt{2}\chi/\sqrt{1-\chi^2} \end{equation} To make sure $\sqrt{2}\chi/\sqrt{1-\chi^2} \le \epsilon$, it is sufficient to choose $L=O(D/\epsilon^2)$. \end{proof} We are now ready to prove our theorem: \begin{proof}[\textbf{Proof of Theorem~\ref{thm:degree-reduction-poly}}] Let us denote the normalized eigenstates of $H$ as $\ket{\psi_\mu}$, with corresponding eigenvalues $E_\mu$. We assume they are ordered such that $E_1 \le E_2 \le E_3 \le \cdots \le E_{d^n}$. Since the energy spread is $w$ and quasi-spectral gap is $\gamma$, we have $E_1\le E_\mu\le E_1 + w\gamma$ for $1\le \mu \le q$, and $E_\mu \ge E_1 + \gamma$ for $\mu \ge q+1$, where $q=\rank(P)$ is the quasi-groundspace degeneracy. \vspace{2pt} \noindent \textbf{Part I -- Energy measurement via Phase Estimation Circuit}--- Let us first consider the idealized version of the quantum phase estimation algorithm $U_{\rm PE}^\textnormal{ideal}$ for measuring energy with respect to the Hamiltonian $H$. Here, the circuit uses the evolution operator $u_j=e^{-iH\tau 2^{j-1}}$ under $H$, and writes the phases of the eigenvalues of $u_1=e^{-iH\tau}$ onto some ancilla qubits. The eigenvalues of $u_1$ are $e^{i2\pi\varphi_\mu}$, where $\varphi_\mu=E_\mu \tau/(2\pi)$; we choose $\tau$ to satisfy $\tau \le 2\pi/\|H\|$, so that $0 \le \varphi_\mu\le 1$ and we can write $\varphi_\mu = 0.\varphi_{\mu,1}\varphi_{\mu,2}\varphi_{\mu,3}\cdots$. Ideally, the action of the phase-estimation circuit on input states $\{\ket{\psi_\mu}\ket{0^m}\}_{\mu=1}^{d^n}$ is \begin{align} \label{eq:ideal-PE} U_{\rm PE}^\textnormal{ideal} \ket{\psi_\mu}\ket{0^m} = \ket{\psi_\mu}\ket{\tilde{E}_{\mu}} \ket{\textnormal{rest}_\mu}, \end{align} where $\ket{\tilde{E}_\mu}=\ket{\varphi_{\mu,1}\varphi_{\mu,2}\varphi_{\mu,3}\cdots\varphi_{\mu,s}}$ is the $s$-bit string representation of the eigenvalue phase $\varphi_\mu$. Correspondingly, let us denote $\tilde{E}_\mu=2\pi\tilde{\varphi}_\mu/\tau$, where $\tilde{\varphi}_\mu = 0.\varphi_{\mu,1}\varphi_{\mu,2}\varphi_{\mu,3}\cdots\varphi_{\mu,s}$, as approximate values of the energy $E_\mu$. In the ideal case, $E_\mu=\tilde{E}_\mu$ for some sufficiently large $s$. In reality, there are two sources of errors that cause the phase-estimation circuit to deviate from $U_{\rm PE}^\textnormal{ideal}$. The first is due to the fact that the energies do not generally have a finite-bit-precision representation, i.e., $|E_\mu-\tilde{E}_\mu|=O(2^{-s})$ is non-zero. In other words, since $\varphi_\mu \neq \tilde{\varphi}_\mu$, there is additional error from imprecise phase estimation. Let us consider a phase estimation circuit $U_{\rm PE}$ implemented to $p$-bit precision, where $p > s$. Let $b_\mu$ be the integer in the range $[0,2^{p}-1]$ such that $0\le \varphi_\mu - b_\mu/2^p \le 2^{-p}$.
It is well-known~\cite{NielsenChuang} that the action of $U_{\rm PE}$ on any input state $\ket{\psi_\mu}\ket{0^m}$ results in the following state \begin{align} U_{\rm PE} \ket{\psi_\mu}\ket{0^m} &= \ket{\psi_\mu}\ket{\textnormal{rest}_\mu'} \otimes \frac{1}{2^{p}}\sum_{k,\ell=0}^{2^p-1} e^{-i2\pi k \ell/2^p} e^{i 2\pi \varphi_\mu k} \ket{\ell} = \ket{\psi_\mu}\ket{\textnormal{rest}_\mu'}\sum_{\ell=0}^{2^p-1}\alpha_\ell^\mu \ket{\ell} \end{align} where $\ket{\ell}=\ket{\ell_1\cdots \ell_p}$ is the binary representation of $\ell$, and \begin{align} \alpha_\ell^\mu &= \frac{1}{2^p}\sum_{k=0}^{2^p-1}[e^{i 2\pi (\varphi_\mu-\ell/2^p)}]^k = \frac{1}{2^p} \left[\frac{1-e^{i 2\pi (2^{p}\varphi_\mu-\ell)}}{1-e^{i 2\pi (\varphi_\mu-\ell/2^p)}} \right] \end{align} The analysis from Sec.~5.2.1 in \cite{NielsenChuang} shows that the probability of obtaining an outcome more than a distance $e$ (in integer units) away from $b_\mu$ is \begin{equation} p_\mu^\text{error}(e) \equiv \sum_{|\ell-b_\mu| > e} |\alpha_\ell^\mu|^2 \le \frac{1}{2(e-1)} \end{equation} Note that we only care about the first $s<p$ bits, so we can choose $e=2^{p-s}-1$. Hence, \begin{align} U_{\rm PE} \ket{\psi_\mu}\ket{0^m} &= \ket{\psi_\mu}\ket{\textnormal{rest}_\mu'} \otimes \left[ \sum_{|\ell-b_\mu|\le e} \alpha_\ell^\mu \ket{\ell} + \sum_{|\ell-b_\mu|> e} \alpha_\ell^\mu \ket{\ell} \right] \nonumber \\ &= \ket{\psi_\mu}\ket{\textnormal{rest}_\mu'}\otimes\left(\sqrt{1-p_\mu^\text{error}}\ket{\tilde{E}_\mu}\ket{\textnormal{rest}^1_\mu} + \sqrt{p_\mu^\text{error}}\ket{\textnormal{rest}^2_\mu}\right) \end{align} Comparing this with the idealized output in Eq.~\eqref{eq:ideal-PE}, we can identify $\ket{\textnormal{rest}_\mu}=\ket{\textnormal{rest}_\mu'}\ket{\textnormal{rest}_\mu^1}$, and observe that \begin{align} (U_{\rm PE} - U_{\rm PE}^\textnormal{ideal}) \ket{\psi_\mu}\ket{0^m} = \ket{\psi_\mu}\ket{\text{error}_\mu}, \quad \text{where} \quad \left\|\ket{\text{error}_\mu}\right\|^2 \le 2p^\text{error}_\mu = O(2^{-(p-s)}) \end{align} Thus, for any normalized state $\ket{\psi}=\sum_\mu c_\mu \ket{\psi_\mu}\ket{0^m}$, we have \begin{align} \|(U_{\rm PE} - U_{\rm PE}^\textnormal{ideal})\sum_\mu c_\mu \ket{\psi_\mu}\ket{0^m}\|^2 = O(2^{-(p-s)}) = O(1/\poly(n)) \end{align} where we assume that we can choose, for example, $p=2s$ and $s=O(\log(n))$, and thus make this first source of error, due to imprecision, polynomially small. The second source of error is due to the fact that we need to implement the circuit using only local gates, in order to ensure the corresponding circuit-Hamiltonian is local. The only non-local gates that we need to address are $u_j=e^{-iH\tau_j}$, where $\tau_j=2^{j-1}\tau$. This can be implemented with local gates via ``Trotterization''. Specifically, we write $H=\sum_a H_a$, where $H_a$ is a $k$-local term, and implement $\tilde{u}_j=(\prod_a e^{-iH_a \tau_j/r_j})^{r_j}$ for some integer $r_j$, so that $\|\tilde{u}_j-u_j\| \le O(\tau_j^2/r_j)$. Assuming $s=O(\log n)$ and so $\tau_j = O(\poly(n))$, we can then choose $r_j = O(\tau_j^2 \poly(n))= O(\poly(n))$ to ensure each such error is polynomially small. The error from Trotterization is bounded by \begin{equation} \|U_{\rm PE}^\textnormal{local} - U_{\rm PE}\| \le \sum_{j=1}^s O(\tau_j^2/r_j) = O(1/\poly(n)).
\end{equation} In sum, we can choose any $\zeta=O(1/\poly(n)$, and construct the actual phase estimation circuit $U_{\rm PE}^\textnormal{local}$ in such a way that it is $\zeta$-close to $U_{\rm PE}^\textnormal{ideal}$ on any valid input state $\ket{\psi}\ket{0^m}$: \begin{align} \|(U_{\rm PE}^\textnormal{local} - U_{\rm PE}^\textnormal{ideal}) \ket{\psi}\ket{0^m} \| \le \zeta \equiv O(1/\poly(n)). \label{eq:PE-error} \end{align} \noindent \textbf{Part II -- Constructing degree-reducer Hamiltonian from circuit}--- We first replace $U_{\rm PE}^\textnormal{local}$ with a sparsified version, so that $\deg(U_{\rm PE}^\textnormal{local})=O(1)$. This can be done by adding swap gates and ancilla qudits after each gate, so that the computational states are mapped to the new ancilla qudits. Assuming each gate is $k$-local, this only increase the total number of gates and qudits by a factor of $k$, and the error from idealized phase-estimation is still bounded by Eq.~\eqref{eq:PE-error} Suppose the sparsified circuit $U_{\rm PE}^\textnormal{local}$ now has $t_0$ gates. Then we construct the following circuit \begin{align} U_\textnormal{circuit} = (\mathds{1})^L U_{\rm PE}^{\textnormal{local}\dag} (\mathds{1})^s U_{\rm PE}^\textnormal{local}. \end{align} Note we add $U_{\rm PE}^{\textnormal{local} \dag}$ for uncomputing and $s+L$ idling identity gates, making the entire circuit gate count $T=2t_0+s+L$. The $s$ identity gates are used for local measurements of energy to $s$-bit precision, and $L=O((2t_0+s)/\epsilon^2)=O(\poly(n)/\epsilon^2)$ identity gates are used to ensure $\epsilon$-incoherence as in Lemma~\ref{lem:circuit-idling}. The history states with respect to eigenstate $\ket{\psi_\mu}$ of $H$ and this circuit are \begin{align} \ket{\eta_\mu} =\frac{1}{\sqrt{T+1}}\sum_{t=0}^T \Big( U_t \cdots U_2 U_1 \ket{\psi_\mu}\ket{0^m} \Big) \ket{1^t 0^{T-t}} \end{align} We can convert the circuit to Hamiltonian $\tilde{H}_\textnormal{circuit}$ using the method described in Lemma~\ref{lem:circuit-ham-simul}, where $H_{out}$ is chosen to be \begin{align} H_{out} = (T+1)\sum_{b=1}^s \frac{2\pi}{\tau} 2^{-b}\ketbra{1}_b^\textnormal{anc} \otimes P^\textnormal{clock}(t=t_0+b). \end{align} Here, we denote $P^\textnormal{clock}(t)=\ketbra{110}_{t-1,t,t+1}^\textnormal{clock}$ as the effective projector onto legal clock states corresponding to time step $t$. To show that $\tilde{H}_\textnormal{circuit}$ simulates the original Hamiltonian $H$, we first show that $H_{out}$ restricted to the subspace of history states $\L=\spn\{\ket{\eta_\mu}:1\le \mu \le d^n \}$ can be approximated by the following effective Hamiltonian \begin{equation} H_\textnormal{eff} = \sum_\mu \tilde{E}_\mu \ketbra{\eta_\mu}. \end{equation} Consider arbitrary states $\ket{\eta}\in \L_-$. 
We write $\ket{\eta} = \sum_{\mu} a_\mu \ket{\eta_\mu}$, and observe \begin{align} \braket{\eta| H_{out}|\eta} &= \frac{2\pi}{\tau} \sum_{b=1}^s 2^{-b} \left[\sum_{\nu} a_\nu^* \bra{\psi_\nu}\bra{0^m} U_{\rm PE}^{\textnormal{local}\dag} \right] \ketbra{1}_b \left[\sum_{\mu} a_\mu U_{\rm PE}^{\textnormal{local}} \ket{\psi_\mu}\ket{0^m}\right] \nonumber \\ &= \frac{2\pi}{\tau} \sum_{b=1}^s 2^{-b} \left[\sum_{\nu} a_\nu^* \bra{\psi_\nu}\bra{\tilde{E}_\nu}\right] \ketbra{1}_b \left[\sum_{\mu} a_\mu \ket{\psi_\mu}\ket{\tilde{E}_\mu} \right] + O(\|H\|\|(U_{\rm PE}^{\textnormal{local}} - U_{\rm PE}^\text{ideal})\ket{\psi}\ket{0^m}\|) \nonumber \\ &= \sum_\mu |a_\mu|^2 \tilde{E}_\mu + O(\zeta\|H\|) \end{align} Note the extra factor of $\|H\|$ comes from $\tau=O(1/\|H\|)$. Since we can choose $\zeta=O(1/\poly(n))$ to be arbitrarily polynomially small, we write $\xi/2=O(\zeta\|H\|)=O(1/\poly(n))$. Hence, \begin{equation} \left|\braket{\eta | H_{out} - H_\textnormal{eff}|\eta}\right| \le \xi/2 \quad \forall \ket{\eta} \in \L_- \quad \Longrightarrow \quad \|H_\textnormal{eff} - H_{out}|_{\L_-} \| \le \xi/2 \end{equation} Thus by Lemma~\ref{lem:circuit-ham-simul}, for any $\eta>0$, the constructed $\tilde{H}_\textnormal{circuit}$ full-spectrum-simulates $H_\textnormal{eff}$ to precision $(\eta,\xi)$ below some energy cut-off $\Delta=O(\xi^{-1}\|H_{out}\|^2 + \eta^{-1}\|H_{out}\|)$. Finally, we show that a slightly rescaled $\tilde{H}_\textnormal{circuit}$ gap-simulates $H$. Note since we had chosen $O(log n)$-bit precision, $\|\tilde{E}_\mu - E_\mu|\le O(1/\poly(n)$. Thus, similar to the given Hamiltonian $H$, $H_{\textnormal{eff}}$ has the quasi-groundspace projector $Q=\sum_{\mu=1}^q \ketbra{\eta_\mu}$ with quasi-spectral gap $\gamma+O(1/\poly(n))$ and energy spread $w+O(1/\poly(n))$. Thus, we can choose $\xi=\Theta(\epsilon\gamma)=\Omega(\epsilon/\poly(n))$ and $\eta=\Theta(\epsilon)$ and $\alpha=\frac43 + O(1/\poly(n))$, and then use Lemma~\ref{lem:relate-CMP} to show that $\tilde{H}_\textnormal{circuit}' = \alpha \tilde{H}_\textnormal{circuit}$, gap-simulates $H_\textnormal{eff}$ with incoherence $\epsilon/2$, while the overall energy scale is still polynomial as $\Delta\ge (\xi^{-1}\|H_{out}\|^2 + \eta^{-1}\|H_{out}\|) = O(\poly(n,\epsilon^{-1}))$. In other words, if $\tilde{P}$ is the quasi-groundspace projector of $\tilde{H}_\textnormal{circuit}'$ with quasi-spectral gap $\gamma$, then $\|\tilde{P} - Q\| \le \epsilon/2$. Furthermore, by Lemma~\ref{lem:circuit-idling}, we have $\|Q-P\otimes P_\textnormal{anc}\|\le \epsilon/2$ since we have added $L=O(\poly(n)/\epsilon^2)$ identity gates to the end of the circuit. This implies that \begin{equation} \|\tilde{P} - P\otimes P_\textnormal{anc}\| \le \|\tilde{P} - Q\| + \|Q-P\otimes P_\textnormal{anc}\| \le \epsilon \end{equation} This means $\tilde{H}_\textnormal{circuit}'$ gap-simulates $(H,P)$ with incoherence $\epsilon$ and energy spread $w+O(1/\poly(n))$. Finally, we note that $\tilde{H}_\textnormal{circuit}$ has degree $O(\deg(U))=O(1)$, $O(T)=O(\poly(n)/\epsilon^2)$ number of terms, and $O(\poly(n,T,\xi^{-1},\eta^{-1}, \|H_{out}\|)) = O(\poly(n,\epsilon^{-1}))$ interaction strength. This concludes our proof of Theorem~\ref{thm:degree-reduction-poly}. 
\end{proof} \subsection{Coherent Dilution and DR of $H_A$ with Polynomial Interaction Strength (Proposition~\ref{prop:circuit})\label{sec:prop-circuit}} Recall that we have developed an incoherent diluter and degree-reducer of $H_A$ in Prop.~\ref{prop:incoherent-tree} from Appendix~\ref{sec:incoherenttree}. Although the construction only need bounded interaction strength, it was completely incoherent. As shown in Theorem~\ref{thm:degree-reduction-poly}, coherent degree-reduction is possible by allowing polynomial-strength interactions. It turns out that for some special cases such as $H_A$, coherent dilution is also possible with polynomial-strength interaction. This construction is done by constructing a similar circuit as in Prop.~\ref{prop:incoherent-tree} that counts the number of excitations locally and arranging them on a tree geometry so that individual qubit/qudit has constant degree. Since the circuit has $O(n)$ gates and constant degree, we can apply the circuit-to-Hamiltonian mapping as in Lemma~\ref{lem:circuit-ham-simul}, producing a coherent diluter and degree-reducer of $H_A$. { \renewcommand{\theprop}{\ref{prop:circuit}} \begin{prop}[constant-coherence dilution and DR for $H_A$ with polynomial interaction strength] There is a 6-local $[6, O(n/\epsilon^2),O(\poly(n,\epsilon^{-1}))]$-degree-reducer of $H_A$ with $\epsilon$-incoherence, energy spread $\tilde{w}=0$, and trivial encoding. This is also a $[6, O(n/\epsilon^2),O(\poly(n,\epsilon^{-1}))]$-diluter of $H_A$. \end{prop} \addtocounter{prop}{-1} } \begin{proof} First, let us construct a circuit to count the number of excitations. Similar to Fig.~\ref{fig:tree}, we add $n-1$ ancilla qutrits to the system of $n$ qubits and arrange them in a tree. In this arrangement, the original qubits are on the leaf nodes, and the ancilla qutrits are on the internal nodes. Note qutrits are particles with three possible states: $\ket{0},\ket{1},\ket{2}$. Starting from the internal nodes right below the leaf nodes, we label each internal node with an index $t$ and work our way down to the root node, so that no parent node has an index smaller than its children. For $t=1,2,\ldots,n-2$, we apply a gate $U_t$ for $t$-th internal node, where $U_t$ is a 3-local unitary satisfying \begin{gather} U_t\ket{00}\ket{z} = \ket{00}\ket{z},\quad U_t\ket{10}\ket{z} = \ket{10}\ket{z\oplus1},\quad U_t\ket{01}\ket{z} = \ket{01}\ket{z\oplus1} \\mathcal{U}_t\ket{xy}\ket{z} = \ket{xy}\ket{2-z} \quad \forall xy\neq00,01,10, \end{gather} where $\ket{lr}\ket{b}$ denote the state where the internal node qutrit is in the state $\ket{b}$, while its left (right) child node is in the state $\ket{l}$ ($\ket{r}$). Here, we denote $\oplus$ as addition modulo 3. For $t=n-1$, we apply $U_{n-1}$ on the root node and its two children where $U_{n-1}$ satisfies \begin{eqnarray} U_{n-1}\ket{xy}\ket{z}&=&\ket{xy}\ket{z} \quad \quad \text{for} \quad xy=00,01,10,\\ \text{and} \quad U_{n-1}\ket{xy}\ket{z}&=&\ket{xy}\ket{z\oplus1} \quad \textnormal{otherwise.} \end{eqnarray} Assuming the ancilla qutrits are initialized at $\ket{0}$, this circuit checks how many excitations (1s) are among the $n$ system qubits, and keep the root node qutrit at $\ket{0}$ if and only if the circuit accepts the input when less than two excitations are counted. Since there are $n-1$ internal nodes, there are also $n-1$ such gates. 
However, since we need to maintain coherence on the ancilla by un-computing, the full circuit should be \begin{equation} \label{eq:treecircuit} U_\textnormal{circuit} = U_1^\dag U_2^\dag \cdots U^\dag_{n-2} U_{n-1} U_{n-2}\cdots U_2 U_1. \end{equation} Note this circuit consists of $D\equiv 2(n-1)-1=2n-3$ gates, where $U_{n-1}$ is the only gate that acts on the root node. It is then clear that for input states of the form $\ket{x_1\cdots x_n}\otimes\ket{0^{n-1}}^\textnormal{anc}$, where $x_i\in\{0,1\}$ are states of the original qubits, the final output of the full circuit is $\ket{x_1\cdots x_n}\ket{0^{n-2}}\ket{0}$ if there are zero or one excitations amongst $x_i$, and $\ket{x_1\cdots x_n}\ket{0^{n-2}}\ket{1}$ otherwise. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{circuitsparsifier.pdf} \caption{\label{fig:circuit}Schematic diagram of a constant-degree, poly-strength diluter and degree-reducer of $H_A$ via circuit Hamiltonian on a tree, for $n=8$. $H_{prop,t}$ are 6-local term that connects all the qudits at each branching as well as the nearby clock qubits on the flow of time. The clock qubits are linked together in the blue line that represents flow of time through $H_{clock}$, which consists of 2-local terms acting on nearest neighbors on the line. Each ancilla qutrit is also connected to the clock qubit directly below through $H_{in}$ and $H_{out}$.} \end{figure} Per Lemma~\ref{lem:circuit-idling}, we add $L=O(D/\epsilon^2)$ identity gates to the end of $U_\textnormal{circuit}$ to ensure $\epsilon$-incoherence. We then convert the resultant circuit with $T=D+L$ gates to a Hamiltonian $\tilde{H}_A^\textnormal{circuit}=H_0 + H_A^{out}$ using the construction laid out in Lemma~\ref{lem:circuit-ham-simul}. To ensure that our circuit only accepts outputs where the root ancilla qutrit is $\ket{0}$ once the circuit reaches it (with $U_{n-1}$ at $t=n-1$), we use the following Hamiltonian \begin{equation} H_A^{out} = J_{out} (\ketbra{1}_{n-1}^\textnormal{anc}+\ketbra{2}_{n-1}^\textnormal{anc})\otimes \ketbra{1}^\textnormal{clock}_{n-1}. \end{equation} Let $\ket{\psi_\mu}$ be eigenstates of $H_A$. For $1\le \mu\le n+1$, let $\ket{\psi_\mu}$ be the groundstates of $H_A$. We consider the restriction to the subspace of history states, $\L=\spn\{\ket{\eta_\mu}:1\le\mu \le 2^n\}$. Let us write $\tilde{P}=\sum_{\mu=1}^{n+1} \ketbra{\eta_\mu}$ as the projector onto history states corresponding to groundstate input. It is easy to see that $\tilde{H}_A^\textnormal{circuit}\tilde{P} = H_A^{out} \tilde{P}=0$, so $\tilde{P}$ is in fact the groundspace projector of $\tilde{H}_A^\textnormal{circuit}$. Let us denote $U_{t\leftarrow0}=U_t\cdots U_2 U_1$. We note that for any $\ket{\psi_\mu},\ket{\psi_\nu} \in \L$ but $\perp \tilde{P}$, we have \begin{align} \braket{\eta_\mu|H_A^{out}|\eta_\nu} &= \frac{J_{out}}{T+1}\sum_{t',t=0}^T \left(\bra{\psi_\mu} \bra{0^{n-1}}\bra{t'}\right) U_{t'\leftarrow 0}^\dag H_{out} U_{t\leftarrow 0} \left(\ket{\psi_\nu}\ket{0^{n-1}}\ket{t}\right) \nonumber \\ &= \frac{J_{out}}{T+1}\sum_{t',t=n-1}^T \left(\bra{\psi_\mu} \bra{0^{n-1}}\right) U_{t'\leftarrow 0}^\dag U_{t\leftarrow 0} \left(\ket{\psi_\nu}\ket{0^{n-1}}\right) \delta_{t,t'}\nonumber \\ &= \frac{J_{out}}{T+1}\sum_{t=n-1}^T \braket{\psi'|\psi} = J_{out} \frac{T+2-n}{T+1}\braket{\psi'|\psi} = J_{out} \frac{n-1+L}{2(n-1)+L}\braket{\psi'|\psi}. 
\end{align} Consequently, $H_A^{out}|_{\L}$ is diagonal in the basis of history states, and we can write \begin{equation} H_A^{out}|_\L = \tilde\gamma (\mathds{1} - Q) \equiv H_\textnormal{eff}, \quad \text{where}\quad \tilde\gamma \equiv J_{out}\frac{n-1+L}{2(n-1)+L} \ge J_{out}/2 \end{equation} We can then apply Lemma~\ref{lem:circuit-ham-simul} to show that for any $\eta,\xi>0$, the constructed $\tilde{H}_A^\textnormal{circuit}$ full-spectrum-simulates $H_\textnormal{eff}$ to precision $(\eta,\xi)$ below energy cut-off $\Delta\ge O(\xi^{-1} J_{out}^2 + \eta^{-1}J_{out})$, with trivial encoding. By choosing $\xi=O(\epsilon J_{out})$ and $\eta=O(\epsilon$), we can apply Lemma~\ref{lem:relate-CMP} and show that $\frac{4}{3}\tilde{H}_A^\textnormal{circuit}$ gap-simulates $H_\textnormal{eff}$ with incoherence $\epsilon$ and trivial encoding. We note that by choosing $J_{out}=2$, we ensure that the spectral gap of $\tilde{H}_A^\textnormal{circuit}$ is $\ge 1$. Furthermore, we note that $\tilde{H}_A^\textnormal{circuit}$ in fact has energy spread $\tilde{w}=0$ since $\tilde{H}_A^\textnormal{circuit} \tilde{P}=0$. Since $\|\tilde{P}-P\otimes P_\textnormal{anc}\|\le \epsilon$ by Lemma~\ref{lem:circuit-idling}, we have shown that $\frac43 \tilde{H}_A^\textnormal{circuit}$ gap-simulates $H$ with incoherence $\epsilon$, energy spread $\tilde{w}=0$, and trivial encoding, whose maximum interaction strength is $O(\poly(n,\epsilon^{-1})$. A schematic illustrating the connectivity of the qubits/qutrits in $\tilde{H}_A^\textnormal{circuit}$ is shown in Fig.~\ref{fig:circuit}. By inspection we see that the computation qubits have max degree 2, the ancilla qutrits have max degree 5, and the clock qubits have max degree 6. The Hamiltonian consists of $M=O(n+T)=O(n/\epsilon^2)$ terms, each at most 6-local. Hence, $\tilde{H}_A^\textnormal{circuit}$ is a 6-local $[6, O(n/\epsilon^2),O(\poly(n,\epsilon^{-1}))]$-sparsifier of $H_A$ with $\epsilon$-incoherence. \end{proof} \section{Bounding error of perturbed groundspace (Lemma~\ref{lem:PPgroundspace} and \ref{lem:PPgsv2})\label{sec:PPgroundspace-proof}} In this Appendix, we prove Lemma~\ref{lem:PPgroundspace} which is used earlier in Appendix~\ref{sec:comp-defns} and \ref{sec:imposs-dilute}. In fact, we prove a more general version than the version previously stated. This is Lemma~\ref{lem:PPgsv2}, of which Lemma~\ref{lem:PPgroundspace} is a special case with the further restriction that $w\le1/2$. Note this Lemma uses essentially the same technique as Lemma A.1 from Ref.~\cite{OliveiraTerhal}. \begin{lemma}[Error bound on perturbed groundspace] \label{lem:PPgsv2} Let $H$ and $\tilde{H}$ be two Hamiltonians on the same Hilbert space. Per Def.~\ref{defn:gap}, let $P$ project onto a quasi-groundspace of $H$ with energy spread $w$ and quasi-spectral gap $\gamma$. If $\|\tilde{H}-H\| \le \kappa$, and $\kappa < (1-w)\gamma/4$, then there is a quasi-groundspace projector $\tilde{P}$ of $\tilde{H}$ with quasi-spectral gap at least $\tilde{\gamma}$, comprised of eigenstates of $\tilde{H}$ up to energy at most $\lambda_1(\tilde{H}) + \tilde{w}\tilde{\gamma}$, where $\tilde{\gamma} \ge \gamma-2\kappa$ and $\tilde{w}\tilde{\gamma} \le w\gamma + 2\kappa$. Furthermore, \begin{equation} \|\tilde{P}- P \| < \kappa \left(\frac{4}{(1-w)\gamma} + \frac{2(1+w)/(1-w)}{(1-w)\gamma-4\kappa} \right). \end{equation} In particular, if $w\le 1/2$ and $\kappa \le (1-w)\gamma/8$, then $\|\tilde{P}-P\| < 32\kappa/\gamma$. 
\end{lemma} To prove the above Lemma, we borrow the Green's function techniques from Ref.~\cite{KKR06,OliveiraTerhal} to bound error due to perturbations. First, let us establish some notations, similar to that in Ref.~\cite{KKR06,OliveiraTerhal}. We consider Hamiltonians of the form $\tilde{H} = H + V$, defined on some Hilbert space $\tilde{\H}$. (Note the symbol $V$ in this Appendix refers to a Hermitian operator, not an isometry.) Furthermore, we assume $H$ has a gap of width $\Delta>0$ in the spectrum centered at some value $\lambda_*$; in other words, no eigenvalue of $H$ lies between $\lambda_- = \lambda_* - \Delta/2$ and $\lambda_+ = \lambda_* + \Delta/2$. We decompose the Hilbert space $\tilde{\H}=\L_+\oplus\L_-$, where $\L_-$ is the low-energy subspace of eigenstates of $H$ with eigenvalue $\le \lambda_-$, and $\L_+$ corresponds to high-energy eigenstates of $H$ with eigenvalues $\ge \lambda_+$. Correspondingly, we denote $\Pi_\pm$ as projectors onto subspaces $\L_\pm$. Furthermore, let us denote the operator-valued Green's functions $G(z) = (z-H)^{-1}$ and $\tilde{G}(z) = (z - \tilde{H})^{-1}$. We can decompose all operators in the Hilbert space $\tilde{H}$ into four blocks according to $\L_{\pm}$: \begin{equation} \begin{aligned} H &= \begin{pmatrix} H_{+} & 0 \\ 0 & H_{-} \\ \end{pmatrix}, \quad &V &= \begin{pmatrix} V_{++} & V_{+-} \\ V_{-+} & V_{--} \\ \end{pmatrix}, &\quad \tilde{H} = \begin{pmatrix} \tilde{H}_{++} & \tilde{H}_{+-} \\ \tilde{H}_{-+} & \tilde{H}_{--} \end{pmatrix}, \\ G &= \begin{pmatrix} G_{+} & 0 \\ 0 & G_- \\ \end{pmatrix}, \quad &\tilde{G} &= \begin{pmatrix} \tilde{G}_{++} & \tilde{G}_{+-} \\ \tilde{G}_{-+} & \tilde{G}_{--} \end{pmatrix}. \end{aligned} \label{eq:gadget-convention-1} \end{equation} We denote $A_{\pm\pm}=\Pi_{\pm}A\Pi_{\pm}$ and $A_{\pm\mp}= \Pi_{\pm}A\Pi_{\mp}$ as parts of operator $A$ restricted to mapping between the corresponding subspaces. In cases when the operator is block diagonal in this basis, we simplify notation by denoting $G_+\equiv G_{++}$ for example. Finally, we define the self-energy $\Sigma_-(z)$ as the following operator acting on the subspace $\L_-$: \begin{equation} \Sigma_-(z) = z - \tilde{G}_{--}^{-1}(z) = H_{-} + V_{--} + \sum_{p=0}^\infty V_{-+} (G_+ V_{++})^p G_+ V_{+-} \label{eq:gadget-convention-2} \end{equation} where after the last equality, we wrote out the series expansion of $\Sigma_-(z)$ that will be very useful. Before we proceed to the proof of Lemma~\ref{lem:PPgsv2}, we first state a useful result proved in Ref.~\cite{KKR06}: \begin{lemma}[Error bound on perturbed eigenvalues, Theorem 3 of \cite{KKR06}] \label{lem:gadget-eigenvalue} Consider a Hamiltonian $\tilde{H}= H + V$. Let us denote a precision parameter $\mathcal{E}>0$, and assume the existence of a Hermitian operator $H_\textnormal{eff}$ whose eigenvalues lie in some range $[a,b]$. Suppose that all the following conditions are satisfied: \begin{itemize} \item For some constants $\Delta>0$ and $\lambda_* > b+\mathcal{E}$, $H$ has no eigenvalues between $\lambda_- = \lambda_* - \Delta/2$ and $\lambda_+ = \lambda_* + \Delta/2$. \item $\|V\| < \Delta/2$. \item For all $z\in [a-\mathcal{E},b+\mathcal{E}]$, the following inequality holds for the self-energy: \begin{equation} \|\Sigma_-(z) - H_\textnormal{eff}\| \le \mathcal{E} \end{equation} \end{itemize} Let $\tilde{H}|_{<\Delta/2}$ denote the operator $\tilde{H}$ restricted to eigenstates with eigenvalues $<\lambda_*$. 
Then \begin{equation} \left|\lambda_j\left(\tilde{H}|_{<\lambda_*}\right) - \lambda_j\left(H_\textnormal{eff}\right)\right| \le \mathcal{E} \quad \forall j \end{equation} where $\lambda_j(X)$ is the $j$-th eigenvalue of Hermitian operator $X$. \end{lemma} \begin{proof}[\textbf{Proof of Lemma~\ref{lem:PPgsv2}}] Let $E^g=\lambda_1(H)$. WLOG let us assume $E^g=0$, since otherwise we can simply redefine $H\mapsto H'=H-E^g$ and $\tilde{H} \mapsto \tilde{H}'=\tilde{H}-E^g$, which have the same spectrum as the original $H$ and $\tilde{H}$ except with the eigenvalues shifted by $E^g$. Note by Def.~\ref{defn:gap}, $H$ has no eigenvalue between $\lambda_- = w\gamma$ and $\lambda_+= \gamma$, so there is a gap $\Delta=(1-w)\gamma$ in the spectrum of $H$ centered at $\lambda_* = \frac12(\lambda_+ + \lambda_-) = (1+w\gamma)/2$. Thus let us decompose the Hilbert space $\H=\L_+ \oplus \L_-$, where $\L_+$ ($\L_-$) corresponds to eigenstates of $H$ with eigenvalue $\ge \lambda_+$ ($\le \lambda_-$). We note that $P=\Pi_-$. Now, let us denote $V = \tilde{H}-H$, which satisfies $\|V\|\le \kappa$ by assumption. We consider a region $R = \{z\in \mathds{C}:|z|\le (1+w)\gamma/2\}$ in the complex plane, which is a disk centered at $z=0$ with radius $r=(1+w)\gamma/2$. For any $z\in R$, we have $\|G_+(z)\| = \| \Pi_+(z-H)^{-1} \Pi_+ \| \le 2/[(1-w)\gamma] = 2/\Delta$. Thus, treating $H_-\equiv \Pi_- H \Pi_-$, which is $H$ restricted to $\L_-$, as the effective Hermitian operator $H_\textnormal{eff}=H_-$, we have \begin{eqnarray} \Sigma_-(z) - H_- &=& V_{--} + \sum_{p=0}^\infty V_{-+} (G_+ V_{++})^p G_+ V_{+-} \nonumber \\ \|\Sigma_-(z) - H_- \| &\le & \|V_{--}\|+ \sum_{p=0}^\infty \|V_{-+}\|^2 \|G_+\|^{p+1} \|V_{++}\|^p \nonumber \\ &\le& \kappa + \sum_{p=0}^\infty \frac{\kappa^{p+2}}{(\Delta/2)^{p+1}} = \frac{\kappa\Delta}{\Delta -2\kappa} \equiv \mathcal{E} \label{eq:PP-self-energy-bound} \end{eqnarray} Observe that the region $R$ includes the interval $[-\mathcal{E}, w\gamma+\mathcal{E}]$, if \begin{equation} \mathcal{E} < \frac12 (1-w)\gamma = \frac{\Delta}{2} \quad \Longleftrightarrow \quad \kappa < \frac{1}{4}(1-w)\gamma = \frac{\Delta}{4}, \label{eq:small-kappa} \end{equation} which is what we assumed in the premise of the Lemma. Since $\|\tilde{H}-H\|\le \kappa$, then by Weyl's inequality we have $|\lambda_j(\tilde{H})-\lambda_j(H)|\le \kappa$ for all $j$. Note $\lambda_1(H)=0$, so $|\lambda_1(\tilde{H})|\le \kappa$. Let us denote $\tilde{P}$ as the projector onto the corresponding eigenstates of $\tilde{H}$ with eigenvalue $< \lambda^*$. Note the eigenstates in $P$ of $H$ have maximum eigenvalue $\le \lambda_-$; consequently, the eigenstates in $\tilde{P}$ of $\tilde{H}$ has maximum eigenvalue $\le \lambda_-+\kappa$, and all other eigenstates of $\tilde{H}$ have eigenvalue $\ge \lambda^+-\kappa = \gamma - \kappa$. Hence, $\tilde{P}$ corresponds to a quasi-groundspace of $\tilde{H}$ with quasi-spectral gap $\tilde{\gamma}$ and energy spread $\tilde{w}$, given by: \begin{align} \tilde{\gamma} &\ge \lambda_+-\kappa -\lambda_1(\tilde{H}) \ge \gamma-2\kappa \\ \tilde{w}\tilde{\gamma} &\le \lambda_- + \kappa -\lambda_1(\tilde{H}) \le w\gamma + 2\kappa \end{align} Now, let us bound the error $\|\tilde{P}-P\|$ between the groundspace projectors, following the same idea in Ref.~\cite{OliveiraTerhal}. 
We first bound \begin{equation} \|\tilde{P}-\Pi_- \tilde{P}\Pi_-\| = \|\tilde{P} - \Pi_- \tilde{P} + \Pi_- \tilde{P} - \Pi_- \tilde{P}\Pi_-\| = \|\Pi_+ \tilde{P} + \Pi_- \tilde{P} \Pi_+\| \le 2\|\Pi_+ \tilde{P}\| \end{equation} Furthermore, we can bound the quantity $\|\Pi_+ H \tilde{P}\|$ from above: \begin{eqnarray} \|\Pi_+ H \tilde{P}\| &=& \|\Pi_+ (\tilde{H}-V) \tilde{P}\| \le \|\Pi_+ \tilde{H} \tilde{P}\| + \|\Pi_+ V \tilde{P}\| \le \|\Pi_+ \tilde{P} \tilde{H}\tilde{P}\| + \|V\| \nonumber \\ &\le& (\lambda_- + \mathcal{E})\|\Pi_+ \tilde{P}\| + \|V\|. \end{eqnarray} where we used the fact that $\|\tilde{P}\tilde{H}\tilde{P}\| \le \lambda_-+\mathcal{E}$. Using $\|\Pi_+ H\Pi_+\| \ge \lambda_+$, we can also bound $\|\Pi_+ H \tilde{P}\|$ from below: \begin{equation} \|\Pi_+ H \tilde{P}\| = \|\Pi_+ H \Pi_+ \tilde{P}\| \ge \lambda_+ \|\Pi_+ \tilde{P}\| \end{equation} Thus \begin{equation} \|\tilde{P}-\Pi_- \tilde{P}\Pi_-\| \le \frac{2\|V\|}{\lambda_+ - \lambda_- - \mathcal{E}} = \frac{2\|V\|}{\Delta - \mathcal{E}} \le \frac{2\kappa}{\Delta-\mathcal{E}} < \frac{4\kappa}{\Delta} = \frac{4\kappa}{(1-w)\gamma}, \label{eq:PP-bound-1} \end{equation} where we used the assumption in Eq.~\eqref{eq:small-kappa} that $\mathcal{E} < \Delta/2$, so $(\Delta-\mathcal{E})^{-1} < 2/\Delta$. Now, let us bound $\|\Pi_- \tilde{P} \Pi_- - P\|$. To this end, let us denote $C$ as the contour in the complex plane around the region $R$, centered at $z=0$ with radius $r=(1+w)\gamma/2$. Due to Lemma~\ref{lem:gadget-eigenvalue}, we know all eigenvalues of $\tilde{H}$ that correspond to $\tilde{P}$ are enclosed by $C$. Using the Cauchy integral formula, we have \begin{equation} \Pi_- \tilde{P} \Pi_- = \Pi_-\left(\frac{1}{2\pi i} \oint_C \tilde{G}(z) dz\right)\Pi_- = \frac{1}{2\pi i} \oint_C \tilde{G}_{--}(z)dz = \frac{1}{2\pi i} \oint_C (z-\Sigma_-(z))^{-1}dz. \end{equation} Also, observe that \begin{equation} P = \frac{1}{2\pi i} \oint_C G_-(z) = \frac{1}{2\pi i} \oint_C (z-H_-)^{-1} dz. \end{equation} To bound the difference between the two operators, we use the following identity \begin{equation} \|(A-B)^{-1} - A^{-1}\| = \|(\mathds{1}-A^{-1}B)^{-1}A^{-1} - A^{-1}\| \le \|A^{-1}\|\left( (1-\|A^{-1}\| \|B\|)^{-1} - 1\right), \end{equation} which is true if $\|A^{-1}\|\|B\|< 1$. By choosing $A=z-H_-$ and $B = \Sigma_-(z) - H_-$ for $z\in C$ (on the contour), we have $\|B\|\le \mathcal{E}$ by Eq.~\eqref{eq:PP-self-energy-bound}, and $\|A^{-1}\| \le (r-w\gamma)^{-1}=2/\Delta$ by inspection. Hence $\|A^{-1}\|\|B\| \le 2\mathcal{E}/\Delta$, which is $<1$ since we assumed $\mathcal{E} < \Delta/2$ as in Eq.~\eqref{eq:small-kappa}. Then, we can apply the aforementioned identity and bound \begin{eqnarray} \sup_{z\in C} \|(z-\Sigma_-(z))^{-1} - (z-H_\textnormal{eff})^{-1}\| \le \frac{4\mathcal{E}}{\Delta(\Delta-2\mathcal{E})}= \frac{4\kappa}{\Delta(\Delta-4\kappa)}. \end{eqnarray} Consequently, we have \begin{eqnarray} \|\Pi_-\tilde{P}\Pi_- - P\| &=& \left\| \frac{1}{2\pi i} \oint_C [(z-\Sigma_-(z))^{-1} - (z-H_\textnormal{eff})^{-1}]dz \right \| \nonumber \\ &\le& \frac{4\kappa r}{\Delta(\Delta-4\kappa)} = \frac{2\kappa(1+w)/(1-w)}{(1-w)\gamma-4\kappa} , \end{eqnarray} where we plugged in $r=(1+w)\gamma/2$ and $\Delta=(1-w)\gamma$. Combining with the first bound in Eq.~\eqref{eq:PP-bound-1}, we have \begin{equation} \|\tilde{P}- P \| < \kappa \left(\frac{4}{(1-w)\gamma} + \frac{2(1+w)/(1-w)}{(1-w)\gamma-4\kappa} \right) \end{equation} Let us now consider the particular case when $\kappa \le (1-w)\gamma/8$ and $w\le 1/2$. 
Given these constraints, we have $(1-w)^{-1}\le 2$ and $1/[(1-w)\gamma-4\kappa] \le 2/[(1-w)\gamma]$, which implies \begin{equation} \|\tilde{P}-P\| < \kappa \left(\frac{4}{(1-w)\gamma} + \frac{4(1+w)}{(1-w)^2\gamma} \right) \le \frac{32\kappa}{\gamma}. \end{equation} \end{proof} \section{General Coherent DR with Exponential Interaction Strength \\ (Theorem~\ref{thm:degree-reduction-exp})\label{sec:degree-reduction-unbounded} } In this Appendix, we prove Theorem~\ref{thm:degree-reduction-exp}, which shows that given unbounded interaction strength, one can perform degree-reduction for arbitrary local Hamiltonians. The proof makes heavy use of perturbative gadgets. Specifically, we use versions of subdivision, 3-to-2-to-local, and fork gadgets first presented in Ref.~\cite{OliveiraTerhal} to construct a 2-local coherent degree-reducer for any given local Hamiltonian. The analyses in Ref.~\cite{KKR06, CaoImprovedOTGadget} have also provided some inspirations. The proof can be divided into two sections. In Sec.~\ref{subsec:perturbativeGtools}, we will show that the three above mentioned perturbative gadget tools can indeed be used for gap-simulation (Definition~\ref{defn:hamsimul}). To this end we first prove Lemma~\ref{lem:gadget-ground-space} which is a cousin of Lemma~\ref{lem:PPgsv2} used previously, providing error bound on perturbed groundspace. Then, Claims~\ref{claim:subdiv}, \ref{claim:3-to-2} and \ref{claim:fork} prove the applicability of the three tools of perturbative gadgets to our coherent gap-simulation framework, respectively. Subsequently, in Sec.~\ref{sec:proof-theorem-gadget} we use these tools in a fairly straight-forward sequence of mappings to degree-reduce any $O(1)$-local Hamiltonian. \subsection{Gap-simulation by Perturbative Gadgets}\label{subsec:perturbativeGtools} Perturbative gadgets are Hamiltonians of the form $\tilde{H}=H_\textnormal{anc} + V$, where $V$ contains perturbations that act on highly degenerate groundstates of $H_\textnormal{anc}$ and produce effective interactions that mimic some target Hamiltonian $H_\textnormal{eff}$. (We emphasize that the symbol $V$ in this Appendix refers to a Hermitian operator, not an isometry.) Generally, the quality of how well the gadget Hamiltonian simulates the target Hamiltonian is given by a precision parameter $\mathcal{E} \ll 1$ that one can freely choose at the end of the construction (see for example the statement of Lemma~\ref{lem:gadget-eigenvalue}). To prove the results in this section, we use the same Green's function machinery described above in Appendix~\ref{sec:PPgroundspace-proof}, which studies perturbation theory on $\tilde{H}=H+V$. The notation we use is the same as in Eq.~\eqref{eq:gadget-convention-1}\eqref{eq:gadget-convention-2}, except we change $H\to H_\textnormal{anc}$. Note that, Lemma~\ref{lem:gadget-eigenvalue} already allow us to bound eigenvalues of $\tilde{H}$ relative to $H_\textnormal{eff}$, which can allow us to satisfy condition 1 of gap-simulation Definition~\ref{defn:hamsimul} We still need to bound errors of perturbed (quasi-)groundspace to satisfy condition 2 of the definition. To that end, we prove the following Lemma, which is a cousin of Lemma~\ref{lem:PPgsv2}. The proof uses essentially the same arguments as in Lemma A.1 from Ref.~\cite{OliveiraTerhal} and Lemma~\ref{lem:PPgsv2}, but adapted to prove statements more directly useful for the goals in the section. 
\begin{lemma}[Gadget groundspace error bound, modified from Ref.~\cite{OliveiraTerhal}] \label{lem:gadget-ground-space} Suppose we are given a target Hamiltonian $H_\textnormal{target}$ defined on Hilbert space $\H$, and let $E^g = \lambda_1(H_\textnormal{target})$. Let us denote its quasi-groundspace projector $P$, energy spread $w$, and quasi-spectral gap $\gamma$ per Def.~\ref{defn:gap}. Additionally, let us denote $q\equiv \textnormal{rank}(P)$ as the degeneracy of the quasi-groundspace. Now, consider a gadget Hamiltonian $\tilde{H} = H_\textnormal{anc} + V$ acting on Hilbert space $\tilde{\H}=\H\otimes \H_\textnormal{anc}$, and some precision parameter $\mathcal{E}$ such that $0<\mathcal{E} < (1-w)\gamma/2$. Suppose the following conditions are satisfied: \begin{itemize} \item $H_\textnormal{anc}$ acts trivially on $\H$. When restricted to the ancilla Hilbert space $\H_\textnormal{anc}$, we denote $P_\textnormal{anc}$ as the projector onto the eigenstates of $H_\textnormal{anc}$ with eigenvalue $\lambda_-=0$, and all other eigenvalues are $\ge \lambda_+ = \Delta$. In other words, the subspace $\L_- = \textnormal{range}(\mathds{1}\otimes P_\textnormal{anc})$. \item The conditions of Lemma~\ref{lem:gadget-eigenvalue} are satisfied with $H_\textnormal{eff} = (H_\textnormal{target} \otimes P_\textnormal{anc}) |_{\L_-}$ and precision parameter $\mathcal{E}$. \item Consider again the self energy $\Sigma_-(z) \equiv z - \tilde{G}_{--}^{-1}(z)$ now generalized for $z\in \mathds{C}$. For some constant $r$ satisfying $w\gamma + \mathcal{E} < r \le \frac12(1+w)\gamma$, we have for all $|z - E^g | \le r$, \begin{equation} \|\Sigma_-(z) - H_\textnormal{eff}\|\le \mathcal{E} \end{equation} \end{itemize} Let $\tilde{P}$ be the projector onto the $q$ lowest eigenstates of $\tilde{H}$. Then \begin{equation} \label{eq:ground-space-error} \| \tilde{P} - P\otimes P_\textnormal{anc}\| \le \frac{2 \|V\|}{\Delta- (|E^g| + w\gamma+\mathcal{E})} + \frac{\mathcal{E} r}{(r-w\gamma)(r-w\gamma-\mathcal{E})} \end{equation} In particular, if we choose $r=(1+w)\gamma/2$, then \begin{equation} \| \tilde{P} - P\otimes P_\textnormal{anc}\| \le \frac{2 \|V\|}{\Delta- (|E^g| + w\gamma+\mathcal{E})}+ 2\mathcal{E}\frac{1+w}{(1-w)[(1-w)\gamma-2\mathcal{E}]} \end{equation} \end{lemma} \begin{proof} Per our convention, we denote $\Pi_-=\mathds{1}\otimes P_\textnormal{anc}$ and $\Pi_+ = \mathds{1}-\Pi_-$ as projectors onto low- and high- energy subspace $\L_\mp$ of $H_\textnormal{anc}$. The proof proceeds in two parts: bounding $\|\tilde{P} - \Pi_- \tilde{P} \Pi_-\|$ and $\|\Pi_-\tilde{P} \Pi_- - P\otimes P_\textnormal{anc}\|$, which together yields Eq.~\eqref{eq:ground-space-error}. \emph{Part 1}--- Using the triangle inequality for the spectral norm, we can bound \begin{equation} \|\tilde{P} - \Pi_- \tilde{P} \Pi_-\| = \|\tilde{P} - \Pi_- \tilde{P} + \Pi_-\tilde{P} - \Pi_- \tilde{P}\Pi_- \| = \|\Pi_+\tilde{P} + \Pi_- \tilde{P} \Pi_+\| \le 2\|\Pi_+ \tilde{P}\|. \end{equation} Observe that since $\lambda_1(H_\textnormal{eff})=E^g$ and $\lambda_q(H_\textnormal{eff}) \le E^g + w\gamma$, Lemma~\ref{lem:gadget-eigenvalue} tells us $\lambda_1(\tilde{H}) \ge E^g -\mathcal{E}$ and $\lambda_q(\tilde{H}) \le E^g + w\gamma+\mathcal{E}$, which means $\|\tilde{P} \tilde{H} \tilde{P}\|\le \max\{|\lambda_1(\tilde{H})|, |\lambda_q(\tilde{H})| \} \le |E^g| + w\gamma + \mathcal{E}$. 
This allows us to bound the quantity from $\|\Pi_+ H_\textnormal{anc} \tilde{P}\|$ above: \begin{eqnarray} \|\Pi_+ H_\textnormal{anc} \tilde{P}\| &=& \|\Pi_+ (\tilde{H} - V)\tilde{P}\| \le \|\Pi_+ \tilde{H}\tilde{P}\| + \| \Pi_+ V \tilde{P} \| = \|\Pi_+ \tilde{P}\tilde{H}\tilde{P}\| + \| \Pi_+ V \tilde{P} \| \nonumber \\ &\le& (|E^g| + w\gamma + \mathcal{E})\|\Pi_+ \tilde{P}\| + \|V\|. \end{eqnarray} Also, we can bound the same quantity from below \begin{equation} \|\Pi_+ H_\textnormal{anc} \tilde{P}\| = \|\Pi_+ H_\textnormal{anc} \Pi_+ \tilde{P}\| \ge \lambda_+ \|\Pi_+ \tilde{P}\| = \Delta\|\Pi_+ \tilde{P}\|. \end{equation} Consequently, we obtain the bound \begin{equation} \|\tilde{P}- \Pi_- \tilde{P} \Pi_-\| \le \frac{2\|V\|}{\Delta - (|E^g| + w\gamma + \mathcal{E})} \end{equation} \emph{Part 2}--- Consider a circular contour $C$ in the complex plane centered at $z=E^g$ with radius $r$ satisfying the assumption $w\gamma + \mathcal{E} < r \le \frac12(1+w)\gamma$. Due to Lemma~\ref{lem:gadget-eigenvalue}, we know all eigenvalues of $\tilde{H}$ corresponding to $\tilde{P}$ is inside $C$. Using the Cauchy integral formula, we have \begin{equation} \Pi_- \tilde{P} \Pi_- = \Pi_-\left(\frac{1}{2\pi i} \oint_C \tilde{G}(z) dz\right)\Pi_- = \frac{1}{2\pi i} \oint_C \tilde{G}_{--}(z)dz = \frac{1}{2\pi i} \oint_C (z-\Sigma_-(z))^{-1}dz. \end{equation} Also, observe that \begin{equation} P\otimes P_\textnormal{anc} = \frac{1}{2\pi i} \oint_C (z-H_\textnormal{eff})^{-1} dz. \end{equation} To bound the difference between the two operators, we use the following identity \begin{equation} \|(A-B)^{-1} - A^{-1}\| = \|(\mathds{1}-A^{-1}B)^{-1}A^{-1} - A^{-1}\| \le \|A^{-1}\|\left( (1-\|A^{-1}\| \|B\|)^{-1} - 1\right), \end{equation} which is true if $\|A^{-1}\|\|B\|< 1$. Let us choosing $A=z-H_\textnormal{eff}$ and $B = \Sigma_-(z) - H_\textnormal{eff}$ for $z\in C$ (on the contour). Observe that we have $\|B\|\le \mathcal{E}$ by assumption. Since we assumed $r\le (1+w)\gamma/2$, we have $r-w\gamma \le \frac12 (1-w)\gamma \le \gamma - r$, and thus$\|A^{-1}\| \le (r-w\gamma)^{-1}$. Also, since we assumed $w\gamma+\mathcal{E} < r$, we have $\|A^{-1}\|\|B\| \le \mathcal{E}/(r-w\gamma) < 1$. Therefore, we can apply the aforementioned identity and bound \begin{eqnarray} \sup_{z\in C} \|(z-\Sigma_-(z))^{-1} - (z-H_\textnormal{eff})^{-1}\| \le \frac{\mathcal{E}}{(r-w\gamma)(r-w\gamma - \mathcal{E})}. \end{eqnarray} Consequently, using the estimation lemma for contour integrals, we have \begin{equation} \|\Pi_-\tilde{P}\Pi_- - P\otimes P_\textnormal{anc}\| = \left\| \frac{1}{2\pi i} \oint_C [(z-\Sigma_-(z))^{-1} - (z-H_\textnormal{eff})^{-1}]dz \right \| \le \frac{\mathcal{E} r}{(r-w\gamma)(r-w\gamma -\mathcal{E})}. \end{equation} Eq.~\eqref{eq:ground-space-error} thus follows. \end{proof} Now we will use the above Lemma~\ref{lem:gadget-eigenvalue} and \ref{lem:gadget-ground-space} to prove three Claims about how different gadget reductions can produce gap-simulating Hamiltonians. We will then use these Claims to prove Theorem~\ref{thm:degree-reduction-exp}. In the following, we denote $X\equiv \sigma_x$, $Y \equiv \sigma_y$, $Z\equiv \sigma_z$ for convenience. \begin{claim}[gap-simulation by subdivision gadget] \label{claim:subdiv} Given an $n$-qubit $k$-local Hamiltonian $H_\textnormal{target}$ with a quasi-groundspace projector $P$ with quasi-spectral gap $\gamma$ and energy spread $w$. 
Let us write it as \begin{equation} H_\textnormal{target} = H_\textnormal{else} + \sum_{\mu=1}^m c_{\mu} \sigma_{\mu_1}^{(s_{\mu_1})}\otimes \sigma_{\mu_2}^{(s_{\mu_2})} \otimes \cdots \otimes \sigma_{\mu_k}^{(s_{\mu_k})} , \quad \sigma_{\mu_i} \in \{ X, Y, Z\} \end{equation} where $H_\textnormal{else}$ is some $k'$-local for $k'\le \lceil k/2 \rceil + 1$. where $|c_\mu| \le J$. Let $A_\mu = \sigma_{\mu_1}^{(i_1)} \otimes \cdots \otimes \sigma_{\mu_{j}}^{(i_j)}$, and $B_\mu = \sigma_{\mu_{j+1}}^{(i_{j+1})}\otimes \cdots \otimes \sigma_{\mu_{k}}^{(i_k)}$, where $j=\lceil k/2\rceil$. Considering adding an ancilla qubit $a_\mu$ for each $1\le \mu \le m$, and write the following $\mathcal{E}$-precision $(\lceil k/2 \rceil +1)$-local \emph{subdivision gadget} Hamiltonian on $n+m$ qubits \begin{gather} \tilde{H}_\textnormal{gadget} = H_\textnormal{anc} + V, \quad H_\textnormal{anc} = \sum_{\mu} \Delta \ketbra{1}^{(a_\mu)}, \\ V = H_\textnormal{else} + \sum_\mu \left[ \sqrt{\frac{|c_\mu|\Delta}{2}} ( \sgn(c_\mu)A_\mu - B_\mu ) \otimes X^{(a_{\mu})} + |c_\mu| \right] \end{gather} assuming we choose $\mathcal{E} \ll (1-w)\gamma$ and \begin{equation} \Delta = O\left(\frac{m^2 J(m^4J^2 + \|H_\textnormal{else}\|)}{\mathcal{E}^2}\right) \end{equation} Let $c=\gamma/(\gamma-2\mathcal{E})=O(1)$, then $\tilde{H} = c\tilde{H}_\textnormal{gadget}$ gap-simulates $(H_\textnormal{target},P)$ with incoherence $\epsilon=\O(\mathcal{E}/(1-w)^2\gamma)$ and energy spread $\tilde{w}\le w+2\mathcal{E}/\gamma$. \end{claim} \begin{proof} Let us denote $P_\textnormal{anc} = \bigotimes_\mu \ketbra{0}^{(a_\mu)}=\ketbra{\vect{a}=0}$ which projects onto the ancilla state described by the binary string $\vect{a}=0$. It is also the groundspace projector of $H_\textnormal{anc}$. Let us denote $\Pi_- = \mathds{1}\otimes P_\textnormal{anc}$ and $\Pi_+ = \mathds{1}-\Pi_-$ be two projectors that partition the full Hilbert space into $\L_-$ and $\L_+$ respectively. We now follow the same convention laid out in Eq.~\eqref{eq:gadget-convention-1} and \eqref{eq:gadget-convention-2}. Note $G_+(z) = \Pi_+ (z-H_\textnormal{anc})^{-1} \Pi_+ = \sum_{\vect{a}\neq 0} \ketbra{\vect{a}}/(z-h(\vect{a})\Delta)$, where $h(\vect{a})$ is the Hamming weight of the binary string $\vect{a}$. Observe that \begin{eqnarray} V_{--} &=& (H_\textnormal{else} + \sum_\mu |c_\mu|)\otimes P_\textnormal{anc} \\ V_{-+} G_+ V_{+-} &=& \sum_\mu \frac{|c_\mu|\Delta}{2(z-\Delta)} ( \sgn(c_\mu) A_\mu - B_\mu )^2 \otimes P_\textnormal{anc} = \frac{\Delta}{z-\Delta} \sum_\mu (|c_\mu|-c_\mu A_\mu \otimes B_\mu ) \otimes P_\textnormal{anc} \nonumber \\ &=& \sum_{\mu} (c_\mu A_\mu \otimes B_\mu - |c_\mu|)\otimes P_\textnormal{anc} + \frac{z}{z-\Delta} \sum_\mu (|c_\mu|-c_\mu A_\mu \otimes B_\mu ) \otimes P_\textnormal{anc}. \end{eqnarray} We used the fact that in the second-order perturbation $V_{-+} G_+ V_{+-}$, there's no ``cross-gadget'' term because $\sum_{\mu,\mu'} \Pi_- X^{(a_\mu)} G_+ X^{(a_\mu')}\Pi_- = \delta_{\mu,\mu'}\Pi_-/(z-\Delta)$. Noting that $H_{\textnormal{anc},-}=0$, we have \begin{eqnarray} \Sigma_-(z) = H\otimes P_\textnormal{anc} +\underbrace{\frac{z}{z-\Delta} \sum_\mu (|c_\mu|-c_\mu A_\mu \otimes B_\mu ) \otimes P_\textnormal{anc} }_{E_1} + \underbrace{\sum_{p=1}^\infty V_{-+}(G_+ V_{++})^p G_+ V_{+-}}_{E_2} \end{eqnarray} We want to show $\|\Sigma_-(z) - H_\textnormal{eff}\|< \mathcal{E}$ for an appropriate range of $z$ if $\Delta$ is sufficiently large. 
Consider $|z| \le \|H\|+\mathcal{E}$, which is sufficient for applying Lemma~\ref{lem:gadget-eigenvalue} and \ref{lem:gadget-ground-space}. Let us assume we choose $\Delta \gg 2\|H\|$, and consequently $\Delta \gg J$. Then we can bound the first error term $\|E_1\| \le O(mJ/\Delta)$. Note we have $\|G_+(z)\| \le 1/(\Delta - \|H\|) \le 2/\Delta$, $\|V_{-+}\| \le O(m \sqrt{J\Delta})$ and $\|V_{++}\| \le \|V\| = \|H_\textnormal{else}\| + O(mJ) + O(m\sqrt{J\Delta}) = \|H_\textnormal{else}\| + O(m\sqrt{J\Delta})$, and thus \begin{equation} \|E_2\| \le \sum_{p=1}^\infty \frac{\|V^{(0)}_{-+}\|^2\|V^{(0)}_{++}\|^p}{(\Delta/2)^{p+1}} = \frac{4 \|V_{-+}\|^2 \|V_{++}\|}{\Delta(\Delta - 2\|V_{++}\|)} \le O\left(\frac{m^3 J^{3/2}}{\Delta^{1/2}} \right) + O\left(\frac{m^2 J \|H_\textnormal{else}\|}{\Delta}\right) \end{equation} To make sure $\|E_1\| + \|E_2\| \le \mathcal{E}$, we need $\Delta = \Omega(m^6 J^3/\mathcal{E}^2)$ and $\Delta=\Omega(m^2 J \|H_\textnormal{else}\|/\mathcal{E})$. Hence, a sufficient choice for $\Delta$ is \begin{equation} \Delta = O\left(\frac{m^2 J(m^4J^2 + \|H_\textnormal{else}\|)}{\mathcal{E}^2}\right) \end{equation} Note this choice would also ensure $\|V\|/\Delta \ll \mathcal{E}$. Let us denote $\tilde{P}$ as the quasi-groundspace projector of $\tilde{H}_\textnormal{gadget} $ corresponding to the lowest $\rank(P)$ eigenstates. By applying Lemma~\ref{lem:gadget-eigenvalue}, we can see that the corresponding $H_\textnormal{target}$ and $\tilde{H}_\textnormal{gadget}$ differ by at most $\mathcal{E}$. By rescaling $\tilde{H}_\textnormal{gadget}\mapsto \tilde{H} = c \tilde{H}_\textnormal{gadget}$, where $c=\gamma/(\gamma-2\mathcal{E})$, we can ensure $\tilde{H}$ has $\tilde{P}$ as a quasi-groundspace projector with quasi-spectral gap at least $\gamma$. Furthermore, assuming $\mathcal{E} < (1-w)\gamma/2$, we can bound the energy spread of $\tilde{P}$ in $\tilde{H}$ is by \begin{equation} \tilde{w} \le \frac{w \gamma + 2\mathcal{E}}{\gamma} = w + \frac{2\mathcal{E}}{\gamma}. \end{equation} By applying Lemma~\ref{lem:gadget-ground-space} with $r=(1+w)\gamma/2$, noting that $|E^g| \le \|H_\textnormal{target}\| \ll \Delta$ and $\mathcal{E} \ll (1-w)\gamma/2$, we can bound incoherence by \begin{equation} \|\tilde{P} - P\otimes P_\textnormal{anc}\| \le \O\left(\frac{\mathcal{E}}{(1-w)^2\gamma}\right). \end{equation} \end{proof} \begin{claim}[gap-simulation by 3-to-2-local gadget] \label{claim:3-to-2} Given a $n$-qubit 3-local Hamiltonian $H_\textnormal{target}$ with a quasi-groundspace projector $P$ with quasi-spectral gap $\gamma$ and energy spread $w$. Let us write it as \begin{equation} H_\textnormal{target} = H_\textnormal{else} + \sum_{\mu=1}^m c_\mu A_\mu^{(i_\mu)} \otimes B_\mu^{(j_\mu)} \otimes C_\mu^{(k_\mu)} \quad A_\mu, B_\mu, C_\mu \in \{X,Y,Z\} \end{equation} where $|c_\mu|\le \sqrt{\Delta_0}$ for some $\Delta_0$. We assume $H_\textnormal{else}$ contains only 2-local terms and $\|H_\textnormal{else}\| \le O(m \Delta_0)$. Consider adding an ancilla qubit $a_\mu$ for each $1\le \mu \le m$. 
Then consider the following $\mathcal{E}$-precision 2-local gadget Hamiltonian on $n+m$ qubits \begin{equation} \begin{aligned} & \tilde{H}_\textnormal{gadget} = H_\textnormal{anc} + V, \quad H_\textnormal{anc} = \sum_\mu \Delta \ketbra{1}^{(a_\mu)}, \quad V = V_1 + V_2 \\ & V_1 = H_\textnormal{else} + \sum_\mu \left[ \frac12 \Delta^{1/3}(A_\mu^{(i_\mu)} - B_\mu^{(j_\mu)})^2 + c_\mu C_\mu^{(k_\mu)} \otimes \ketbra{0}^{(a_\mu)} \right], \\ & V_2 = \Delta^{2/3} \sum_\mu \left[\frac{1}{\sqrt{2}} (A_\mu^{(i_\mu)} - B_\mu^{(j_\mu)})\otimes X^{(a_\mu)} - c_\mu C_\mu^{(k_\mu)} \otimes \ketbra{1}^{(a_\mu)}\right] \\ \end{aligned} \end{equation} where $\Delta = O(m^{12} \Delta_0^3 /\mathcal{E}^3)$, where we assume $\mathcal{E} \ll (1-w)\gamma/2$. Let $c=\gamma/(\gamma-2\mathcal{E})=O(1)$, then $\tilde{H}=c\tilde{H}_\textnormal{gadget}$ gap-simulates $(H_\textnormal{target}, P)$ with incoherence $\epsilon = \O(\mathcal{E}/(1-w)^2\gamma)$ and energy spread $\tilde{w} \le w + 2\mathcal{E}/\gamma$, \end{claim} \begin{proof} Let us denote $P_\textnormal{anc} = \bigotimes_\mu \ketbra{0}^{(a_\mu)}=\ketbra{\vect{a}=0}$ which projects onto the ancilla state described by the binary string $\vect{a}=0$. It is also the groundspace projector of $H_\textnormal{anc}$. Let us denote $\Pi_- = \mathds{1}\otimes P_\textnormal{anc}$ and $\Pi_+ = \mathds{1}-\Pi_-$ be two projectors that partition the full Hilbert space into $\L_-$ and $\L_+$ respectively. We now follow the same convention laid out in Eq.~\eqref{eq:gadget-convention-1} and \eqref{eq:gadget-convention-2}. Note $G_+(z) = \Pi_+ (z-H_\textnormal{anc})^{-1} \Pi_+ = \sum_{\vect{a}\neq 0} \ketbra{\vect{a}}/(z-h(\vect{a})\Delta)$, where $h(\vect{a})$ is the Hamming weight of the binary string $\vect{a}$. In the following, we will simplify notation by denoting $A_\mu=A_\mu^{(i_\mu)}, B_\mu\equiv B_\mu^{(j_\mu)}, C_\mu \equiv C_\mu^{(k_\mu)}$, and $X_\mu \equiv X_\mu^{(a_\mu)}$. Observe that \begin{eqnarray} V_{--} &=& V_{1,--} = H_\textnormal{else} \otimes P_\textnormal{anc} + \sum_\mu \left[ \frac12 \Delta^{1/3}(A_\mu - B_\mu)^2 + c_\mu C_\mu \right]\otimes P_\textnormal{anc} \\ V_{-+} &=& \Delta^{2/3} \sum_\mu \left[\frac{1}{\sqrt{2}} (A_\mu - B_\mu)\otimes \ketbrat{0}{1}^{(a_\mu)}\right] \end{eqnarray} Then \begin{eqnarray} V_{-+} G_+ V_{+-} &=& \frac{\Delta^{4/3}}{z-\Delta} \sum_\mu \frac{1}{2}(A_\mu-B_\mu)^2\otimes P_\textnormal{anc} \nonumber\\ V_{--} + V_{-+} G_+ V_{+-} &=& \left[H_\textnormal{else} + \sum_\mu c_\mu C_\mu\right]\otimes P_\textnormal{anc} + \underbrace{\frac{z\Delta^{1/3}}{2(z-\Delta)} \sum_\mu (A_\mu - B_\mu)^2 \otimes P_\textnormal{anc}}_{E_1} \end{eqnarray} At the third order perturbation theory, the only allowed virtual transition on the ancilla qubits are of the form $\ket{0\cdots0}\to\ket{0\cdots010\cdots0} \to\ket{0\cdots010\cdots0} \to \ket{0\cdots0}$. In other words, the only non-zero terms at this order involve exciting some ancilla $a_\mu$ from $\ket{0}$ to $\ket{1}$ by $V_{+-}$, keeping it at $\ket{1}$ by $V_{++}$, and then return it to $\ket{0}$ by $V_{-+}$. 
Hence \begin{eqnarray*} && V_{-+} G_+ V_{++} G_+ V_{+-} = -\frac{\Delta^2}{2(z-\Delta)^2}\sum_\mu c_\mu (A_\mu-B_\mu)^2C_\mu \otimes P_\textnormal{anc} \nonumber \\ && + \underbrace {\frac{1}{(z-\Delta)^2}\Bigg[ \frac{\Delta^{4/3}}{2} \sum_\mu (A_\mu-B_\mu) H_\textnormal{else} (A_\mu - B_\mu)+\sum_{\mu,\mu'} \frac{\Delta^{5/3}}{4} (A_\mu - B_\mu)^2(A_{\mu'} - B_{\mu'})^2 \Bigg]\otimes P_\textnormal{anc}}_{E_2} \end{eqnarray*} Let us denote $\xi = \Delta^2/[(z-\Delta)^2]$, then \begin{eqnarray} && V_{--} + V_{-+} G_+ V_{+-} + V_{-+} G_+ V_{++} G_+ V_{+-}\nonumber \\ &=& \left[H_\textnormal{else} + \sum_\mu \left(c_\mu C_\mu -\frac{\xi}{2} c_\mu (A_\mu-B_\mu)^2C_\mu\right)\right] \otimes P_\textnormal{anc} + E_1 + E_2 \nonumber \\ &=& \left[H_\textnormal{else} + \xi \sum_\mu c_\mu A_\mu B_\mu C_\mu + (1-\xi)\sum_\mu c_\mu C_\mu \right]\otimes P_\textnormal{anc} + E_1 + E_2 \nonumber \\ &=& \underbrace{[H_\textnormal{else} + \sum_\mu c_\mu A_\mu B_\mu C_\mu]}_{H_\textnormal{target}} \otimes P_\textnormal{anc} + \underbrace{(1-\xi) \sum_\mu c_\mu(C_\mu - A_\mu B_\mu C_\mu)\otimes P_\textnormal{anc}}_{E_3} + E_1 + E_2 \end{eqnarray} Thus, we can write the self-energy as \begin{eqnarray} \Sigma_-(z) &=& H_\textnormal{target}\otimes P_\textnormal{anc} + E_1 + E_2 + E_3 + \underbrace{\sum_{p=2}^\infty V_{-+}(G_+ V_{++})^p G_+ V_{+-}}_{E_4} \end{eqnarray} Suppose we choose the relevant range of $|z|\le z_{\max} = \|H_\textnormal{else}\| = O(m\Delta_0)$. Assuming we will choose $\Delta \gg \|H_\textnormal{else}\|$, we have \begin{eqnarray} \|E_1\| &\le& O\left(m \frac{ \|H_\textnormal{else}\| }{\Delta^{2/3}} \right) = O\left(\frac{m^2\Delta_0}{\Delta^{2/3}}\right) \\ \|E_2\| &\le& O\left(\frac{m z_{\max}}{\Delta^{2/3}}\right) + O\left(\frac{m^2}{\Delta^{1/3}}\right) = O\left(\frac{m^2\Delta_0}{\Delta^{2/3}}\right) + O\left(\frac{m^2}{\Delta^{1/3}}\right)\\ \|E_3\| &\le& 2m \sqrt{\Delta_0}|1-\xi| = 2m \sqrt{\Delta_0} \frac{(2\Delta - \|H_\textnormal{else}\| )\|H_\textnormal{else}\| }{(\Delta-\|H_\textnormal{else}\| )^2} \le O \left(\frac{m^2\Delta_0^{3/2}}{\Delta}\right) \\ \|E_4\| &\le& \sum_{p=2}^\infty \frac{\|V_{-+}\|^2 \|V_{++}\|^p}{(\Delta/2)^{p+1}} = \frac{8\|V_{-+}\|^2 \|V_{++}\|^2}{\Delta^2(\Delta-2\|V_{++}\|)} \le O\left(\frac{m^4 \Delta_0}{\Delta^{1/3}}\right)+ O\left(\frac{m^4\Delta_0^2}{\Delta^{5/3}}\right) \end{eqnarray} where we used the fact that $\|V_{-+}\|= O(m\Delta^{2/3})$ and $\|V_{++}\|\le \|V\| = O(m \sqrt{\Delta_0} \Delta^{2/3}) + \|H_\textnormal{else}\| = O(m\sqrt{\Delta_0} \Delta^{2/3}) + O(m\Delta_0)$. Hence, as sufficient choice for $\Delta$ to ensure that $\|\Sigma_-(z) - H_\textnormal{eff}\|\le \mathcal{E}\|$ would be \begin{equation} \Delta = O(m^{12} \Delta_0^3 /\mathcal{E}^3) \end{equation} Let us denote $\tilde{P}$ as the quasi-groundspace projector of $\tilde{H}_\textnormal{gadget}$ corresponding to the lowest $\rank(P)$ eigenstates. By applying Lemma~\ref{lem:gadget-eigenvalue}, we can see that the corresponding $H_\textnormal{target}$ and $\tilde{H}_\textnormal{gadget}$ differ by at most $\mathcal{E}$. By rescaling $\tilde{H}_\textnormal{gadget}\mapsto \tilde{H} = c \tilde{H}_\textnormal{gadget}$, where $c=\gamma/(\gamma-2\mathcal{E})$, we can ensure $\tilde{H}$ has $\tilde{P}$ as a quasi-groundspace projector with quasi-spectral gap at least $\gamma$. 
Furthermore, assuming $\mathcal{E} < (1-w)\gamma/2$, we can bound the energy spread of $\tilde{P}$ in $\tilde{H}$ is by \begin{equation} \tilde{w} \le \frac{w \gamma + 2\mathcal{E}}{\gamma} = w + \frac{2\mathcal{E}}{\gamma}. \end{equation} By applying Lemma~\ref{lem:gadget-ground-space} with $r=(1+w)\gamma/2$, noting that $|E^g| \le \|H_\textnormal{target}\| \ll \Delta$ and $\mathcal{E} \ll (1-w)\gamma/2$, we can bound incoherence by \begin{equation} \|\tilde{P} - P\otimes P_\textnormal{anc}\| \le \O\left(\frac{\mathcal{E}}{(1-w)^2\gamma}\right). \end{equation} \end{proof} \begin{claim}[degree-reduction via fork gadget] \label{claim:fork} Consider a 2-local Hamiltonian $H_\textnormal{target}$ of the form \begin{equation} H_\textnormal{target} = H_\textnormal{else} + \sum_{i=1}^n \sum_{\alpha=x,y,z}\sum_{\kappa_{\alpha,i}=1}^{r_{\alpha,i}} \lambda_{\kappa_{\alpha,i}} \sigma^{(i)}_{\alpha} \otimes X^{(\kappa_{\alpha,i})} \end{equation} which contains $n$ original qubits interacting only with ancilla qubits $\kappa_{\alpha,i}$ through $\sigma_\alpha\otimes X$. Let $r_0 \equiv \max_{i,\alpha}r_{\alpha,i} = O(\poly n)$ be the maximum ``Pauli degree'', and $|\lambda_{\kappa_{\alpha,i}}| \le \sqrt{\Delta_0}$. We assume $H_\textnormal{else}$ does not act on the $n$ original qubits, and contains $O(n r_0)$ terms with interaction strength at most $\Delta_0$. Lastly, we assume the ancilla (non-original) qubits in $H_\textnormal{target}$ has degree at most 5. Now let $P$ be a quasi-groundspace projector of $H_\textnormal{target}$ with energy spread $w$ and quasi-spectral gap $\gamma$. Then for some precision parameter $\mathcal{E} \ll (1-w)\gamma$, there is a Hamiltonian $\tilde{H}$ that gap-simulates $(H_\textnormal{target}, P)$ with incoherence $\epsilon = O((\mathcal{E}/\gamma) \log n)$ and energy spread $\tilde{w} \le w + O((\mathcal{E}/\gamma) \log n)$, and has maximum degree 6, $O(nr_0)$ terms with interaction strength $J=O((\poly(n) \Delta_0/\mathcal{E}^2)^{\poly(n)})$. \end{claim} \begin{proof} We can reduce the degrees of original qubits with serial application of the ``fork gadget''~\cite{OliveiraTerhal}. We will apply the fork gadget in $S=O(\log_2(r_0))=O(\log(n))$ iterative steps, starting with the target Hamiltonian $H_\textnormal{target}^{(1)} = H_\textnormal{target}$, producing gadget Hamiltonian that gap-simulates the target, which then becomes the target Hamiltonian for the next step: \begin{equation} [H_\textnormal{target} \equiv H_\textnormal{target}^{(1)} ]\to [\tilde{H}_\textnormal{gadget}^{(1)} \equiv H_\textnormal{target}^{(2)} ] \to \cdots \to [\tilde{H}_\textnormal{gadget}^{(S-1)} \equiv H_\textnormal{target}^{(S-1)}] \to \tilde{H}_\textnormal{gadget}^{(S)} = \tilde{H} \end{equation} At each step $s=1,\ldots,S$, our target Hamiltonian is of the form \begin{eqnarray} H_\textnormal{target}^{(s)} \equiv \tilde{H}_\textnormal{gadget}^{(s-1)} = H_\textnormal{else}^{(s)} + \sum_{i=1}^n \sum_{\alpha=x,y,z}\sum_{\kappa_{\alpha,i}^s=1}^{r_{\alpha,i}^{(s)}} \lambda_{\kappa_{\alpha,i}^s} \sigma^{(i)}_{\alpha} \otimes X^{(\kappa_{\alpha,i}^s)} \end{eqnarray} where $H_\textnormal{else}^{(s)}$ contains all terms that are not 2-local terms that involve an original qubit. Note in the last sum, $\kappa_{\alpha,i}^s$ indexes all ancilla qubits that interacts with the original qubit $i$ with $\sigma_\alpha$ at the beginning of step $s$. 
We (self-consistently) assume that the degrees of these ancilla qubits $\kappa_{\alpha,i}^s$ are at most 5 in $H_{\textnormal{target}}^{(s)}$, and furthermore (denoting $M_0=nr_0$) that \begin{equation} \text{for } 1\le s \le S, \quad \Delta_s \gg M_0 \Delta_{s-1}, \quad \|H_\textnormal{else}^{(s)}\| \le \|H_\textnormal{target}^{(s)}\| \le O(M_0 \Delta_{s-1}), \quad \lambda_{\kappa_{\alpha,i}^s} = O(\sqrt{\Delta_{s-1}}). \label{eq:fork-self-consistent-assumption} \end{equation} where $\Delta_s$ is thought of as the interaction strength of $\tilde{H}_\textnormal{gadget}^{(s)}$. To simplify notation, we will denote $\kappa\equiv\kappa_{\alpha,i}^s$ from now on with the implicit understanding that the index $\kappa$ depends on the Pauli type $\alpha$, original qubit index $i$, as well as step index $s$. Then, the \emph{fork gadget} Hamiltonian that roughly halves the Pauli degrees of original qubits is \begin{gather} \tilde{H}_\textnormal{gadget}^{(s)} = H_\textnormal{anc}^{(s)} + V^{(s)}, \quad H_\textnormal{anc}^{(s)} = \Delta_s \sum_{i=1}^n \sum_{\alpha=x,y,z} \sum_{\kappa=1}^{\lfloor r_{\alpha,i}^{(s)} /2 \rfloor} \ketbra{1}^{()} , \quad V^{(s)} = V_1^{(s)}+V_2^{(s)}\\ V_1^{(s)} = H_\textnormal{else}^{(s)}{}' + \sum_{i,\alpha}\sum_{\kappa=1}^{\lfloor r_{\alpha,i}^{(s)} /2 \rfloor} \left[ \lambda_{2\kappa-1}\lambda_{2\kappa} X^{(2\kappa-1)} X^{(2\kappa)} + \frac12(\lambda_{2\kappa-1}^2 + \lambda_{2\kappa}^2 + 1)\right]\\ V_2^{(s)} = \sqrt{\frac{\Delta_s}{2}} \sum_{i,\alpha}\sum_{\kappa=1}^{\lfloor r_{\alpha,i}^{(s)} /2 \rfloor} (\sigma_\alpha^{(i)} - \lambda_{2\kappa-1} X^{(2\kappa-1)} - \lambda_{2\kappa} X^{(2\kappa)})\otimes X^{(a_\kappa)} \end{gather} Here, we introduce an extra ancilla qubit $a_\kappa$ for each pair of relevant ancilla qubit $(2\kappa-1, 2\kappa)$ where the fork gadget is applied, and it has degree 3. For every odd $r_{\alpha,i}^{(s)}$, we add the left-over term $\lambda_{\kappa} \sigma_{\alpha}^{(i)}\otimes X^{(\kappa)}$ for $\kappa = r_{\alpha,i}^{(s)}$ to $H_\textnormal{else}^{(s)}$, which gives us $H_\textnormal{else}^{(s)}{}'$. Each original qubit $i$ thus has its $\alpha$-Pauli-degree reduced to $r_{\alpha,i}^{(s+1)} = \lceil r_{\alpha,i}^{(s)} /2 \rceil$. However, the pre-existing ancilla qubits $2\kappa-1$ and $2\kappa$ acquire an extra ``edge'' (interaction term) in $V_1^{(s)}$, so their degrees increase by one, but will not increase further as they are unaffected by subsequent gadget applications. Therefore, since we assumed these qubits have degree at most 5 in $H_\textnormal{target}^{(s)}$, their degree in $\tilde{H}_\textnormal{gadget}^{(s)}$ is at most 6. Note when we consider $\tilde{H}_\textnormal{gadget}^{(s)}\equiv H_\textnormal{target}^{(s+1)}$, the last two of the three self-consistent assumptions we made in Eq.~\eqref{eq:fork-self-consistent-assumption} are satisfied. \textit{Effective simulation of $H_\textnormal{target}^{(s)}$}--- Now we've written down the gadget Hamiltonian, we want to show that it reproduces the effective Hamiltonian $H_\textnormal{eff}^{(s)} = H_\textnormal{target}^{(s)}\otimes P_\textnormal{anc}|_{\L_-}$ in its self-energy $\Sigma_-(z)$. Let us denote $P_\textnormal{anc}^{(s)}=\bigotimes_{a_\kappa}\ketbra{0}^{(a_\kappa)}$ as the ground space projector of $H_\textnormal{anc}^{(s)}$ restricted to Hilbert space of newly added ancilla. 
We also denote $\Pi_-^{(s)}=\mathds{1}\otimes P_\textnormal{anc}$, $\Pi_+^{(s)} = \mathds{1} - \Pi_-^{(s)}$ as projectors the partition the full Hilbert space into low and high energy subspace with respect to $H_\textnormal{anc}^{(s)}$. Following the same convention outlined earlier in Eq.~\eqref{eq:gadget-convention-1}, we denote $V_{\pm\pm}^{(s)}$, $V_{\pm\mp}^{(s)}$, and $G_+^{(s)}(z)\equiv \Pi_+^{(s)}(z-H_\textnormal{anc}^{(s)})^{-1}\Pi_+^{(s)}$, etc. We then apply the perturbation series expansion for self-energy in Eq.~\eqref{eq:gadget-convention-2} to find \begin{equation} \label{eq:self-energy-fork} \Sigma_-^{(s)}(z) = H^{(s)}_{\textnormal{anc},-} + V^{(s)}_{--} + V^{(s)}_{-+} G^{(s)}_+ V^{(s)}_{+-} + \sum_{p=1}^\infty V^{(s)}_{-+} (G^{(s)}_{+} V^{(s)}_{++})^p G^{(s)}_+ V^{(s)}_{+-} \end{equation} Here $H_{\textnormal{anc},-}^{(s)}=0$, but \begin{eqnarray} && V_{--}^{(s)}+ V_{-+}^{(s)}G^{(s)}_+ V^{(s)}_{+-} = V_1^{(s)}\otimes P_\textnormal{anc}^{(s)} + \frac{\Delta_s}{2(z-\Delta_s)} \sum_{i,\alpha}\sum_{\kappa=1}^{\lfloor r_{\alpha,i}^{(s)} /2 \rfloor} (\sigma_\alpha^{(i)} - \lambda_{2\kappa-1} X^{(2\kappa-1)} - \lambda_{2\kappa} X^{(2\kappa)})^2 \otimes P_\textnormal{anc}^{(s)} \nonumber \\ &=& H_\textnormal{target}^{(s)} \otimes P_\textnormal{anc}^{(s)} + \underbrace{\frac{z}{2(z-\Delta_s)} \sum_{i,\alpha}\sum_{\kappa=1}^{\lfloor r_{\alpha,i}^{(s)} /2 \rfloor} (\sigma_\alpha^{(i)} - \lambda_{2\kappa-1} X^{(2\kappa-1)} - \lambda_{2\kappa} X^{(2\kappa)})^2 \otimes P_\textnormal{anc}^{(s)}}_{E_1} \label{eq:second-order-fork} \end{eqnarray} where we used the fact that there is no ``cross-gadget'' term because $\sum_{\kappa,\kappa'} \Pi_- X^{(a_\kappa)} G_+ X^{(a_\kappa')}\Pi_- = \delta_{\kappa,\kappa'}\Pi_-/(z-\Delta_s)$. \textit{Optimizing $\Delta_s$}--- We need to find the optimal choice of interaction strength $\Delta_s$ that allows $\tilde{H}_\textnormal{gadget}^{(s)}$ to gap-simulate $H_\textnormal{target}^{(s)}$. To that end, we want to show $\|\Sigma_-^{(s)}(z) - H_\textnormal{eff}^{(s)}\|\le \mathcal{E}$, for some range of $|z| \le z_{\max} = \|H_\textnormal{target}^{(s)}\|$ and appropriately large $\Delta_s$. Recall our self-consistent assumptions in Eq.~\eqref{eq:fork-self-consistent-assumption} that: \begin{equation} \Delta_s \gg M_0 \Delta_{s-1}, \quad \|H_\textnormal{else}^{(s)}\| \le \|H_\textnormal{target}^{(s)}\| \le O(M_0 \Delta_{s-1}), \quad \lambda_{\kappa_{\alpha,i}^s} = O(\sqrt{\Delta_{s-1}}) \end{equation} Note the error term $E_1$ in Eq.~\eqref{eq:second-order-fork} can be bounded by \begin{equation} \|E_1\| \le O(M_0\sqrt{\Delta_{s-1}} z_{\max}/\Delta_s) = O(M_0^2\Delta_{s-1}^{3/2}/\Delta_s). \end{equation} We also have $\|V_{-+}^{(s)}\| , \|V_{2,++}^{(s)}\| \le \|V^{(s)}\| = O(M_0\sqrt{\Delta_s\Delta_{s-1}})$, $ \|V_{1,++}^{(s)}\| = O(M_0\Delta_{s-1})$, and $\|G_+^{(s)}\| \le 1/(\Delta_s - z_{\max}) \le 2/\Delta_s$. We are now ready to bound the higher order terms ($p\ge 1$) in Eq.~\eqref{eq:self-energy-fork}. In order to obtain a better overall error bound, let us bound the $p=1$ term (third-order perturbation) in Eq.~\eqref{eq:self-energy-fork} separately. Observe that at this order, with only three possible applications of $V$, the only possible virtual transition on the ancilla level is of the form $\ket{0\cdots0}\to\ket{0\cdots010\cdots0} \to\ket{0\cdots010\cdots0} \to \ket{0\cdots0}$. 
Therefore, \begin{eqnarray} V_{-+}^{(s)} G_+^{(s)} V_{++}^{(s)} G_+^{(s)} V_{+-}^{(s)} &=& V_{-+}^{(s)} G_+^{(s)} V_{1,++}^{(s)} G_+^{(s)} V_{+-}^{(s)}, \nonumber \\ \|E_2\| \equiv \|V_{-+}^{(s)} G_+^{(s)} V_{++}^{(s)} G_+^{(s)} V_{+-}^{(s)}\| &\le & \frac{\|V_{-+}^{(s)} \|^2 \|V_{1,++}^{(s)}\|}{(\Delta_s - z_{\max})^2} = O\left(\frac{M_0^3\Delta_{s-1}^2}{\Delta_s}\right). \end{eqnarray} Also, $\|V_{++}^{(s)}\| \le \|V_{1,++}^{(s)}\| + \|V_{2,++}^{(s)}\| = O(M_0\sqrt{\Delta_s\Delta_{s-1}})$. Consequently, we can bound the remaining terms in Eq.~\eqref{eq:self-energy-fork}: \begin{eqnarray} \|E_3\| \equiv\left\|\sum_{p=2}^\infty V^{(s)}_{-+} (G^{(s)}_{+} V^{(s)}_{++})^p G^{(s)}_+ V^{(s)}_{+-}\right\| &\le& \sum_{p=2}^\infty \frac{\|V^{(s)}_{-+}\|^2\|V^{(s)}_{++}\|^p}{(\Delta_s/2 )^{p+1}} = \frac{8\|V_{-+}^{(s)}\|^2 \|V_{++}^{(s)}\|^2}{\Delta_s^2(\Delta_s- 2\|V_{++}^{(s)}\|)} \nonumber \\ &\le& O\left(\frac{M_0^4 \Delta_{s-1}^2}{\Delta_s}\right). \end{eqnarray} Hence, for $H_\textnormal{eff}^{(s)} = H_\textnormal{target}^{(s)}\otimes P_\textnormal{anc}^{(s)}$, we have \begin{equation} \|\Sigma_-^{(s)}(z) - H_\textnormal{eff}^{(s)}\|\le \|E_1\|+\|E_2\| + \|E_3\| =O\left(\frac{M_0^4 \Delta_{s-1}^2}{\Delta_s}\right) \end{equation} Furthermore, in order to bound the groundspace projector error by $\O(\mathcal{E})$ per Lemma~\ref{lem:gadget-ground-space}, we also need $\|V^{(s)}\|/\Delta_s \le\O(M_0\sqrt{\Delta_{s-1}/\Delta_s}) \le \mathcal{E}_s$. Hence, a sufficient choice of $\Delta_s$ that satisfies all these bounds is \begin{equation} \label{eq:gadget-norm2} \Delta_s = O \left(\frac{M_0^4\Delta_{s-1}^2}{\mathcal{E}^2}\right). \end{equation} \textit{Analysis of gap-simulation}--- We now analyze the gap-simulation of $H_\textnormal{target}^{(s)}$ by $\tilde{H}_\textnormal{gadget}^{(s)}$, for $s=1,\ldots, S$. By Lemma~\ref{lem:gadget-eigenvalue}, one can see that corresponding eigenvalues of $\tilde{H}_\textnormal{gadget}^{(s)}$ and $H_\textnormal{target}^{(s)}$ differ by at most $\mathcal{E}$. Let $\tilde{P}^{(0)}=P$ be the given quasi-groundspace projector of $H_\textnormal{target} = H_\textnormal{target}^{(1)}$ with energy spread $w$ and quasi-spectral gap $\gamma$. We let $\tilde{P}^{(s)}$ be the quasi-groundspace projector onto the $\rank(P)$ lowest eigenstates of $\tilde{H}_\textnormal{gadget}^{(s)}$, with energy spread $\tilde{w}_s$ and quasi-spectral gap $\gamma_s$. We also generalize to the case of $s=0$ by denoting $\tilde{w}_0 = w$ and $\gamma_0=\gamma$. To ensure $\gamma_s \ge \gamma$, we simply scale $\tilde{H}_\textnormal{gadget}^{(s)} \mapsto c \tilde{H}_\textnormal{gadget}^{(s)}$ where $c = \frac{\gamma}{\gamma-2\mathcal{E}}=O(1)$. Assuming $\mathcal{E} < (1-\tilde{w}_{s-1})\gamma/2$, the energy spread of $\tilde{H}_\textnormal{gadget}^{(s)}$ can be bounded by \begin{equation} \tilde{w}_s\le \frac{\tilde{w}_{s-1}\gamma + 2 \mathcal{E}}{\gamma} =\tilde{w}_{s-1} + \frac{2\mathcal{E}}{\gamma}. \label{eq:fork-energy-spread} \end{equation} Let $E^g_0$ be the groundstate energy of $H_\textnormal{target}$ and $E^g_s$ be the groundstate energy of $\tilde{H}_\textnormal{gadget}^{(s)}$; then Lemma~\ref{lem:gadget-eigenvalue} tells us that $|E^g_s| \le |E^g_0| + \sum_{r=1}^{s} \mathcal{E}_r \le O(\|H_\textnormal{target}\|) + \O(S \mathcal{E}) \ll \Delta_s$.
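For later reference, let us also record how the recursion in Eq.~\eqref{eq:gadget-norm2} unrolls (a brief sketch; abbreviating $C=\Theta(M_0^4/\mathcal{E}^2)$, so that $\Delta_s \le C\Delta_{s-1}^2$): \begin{equation*} \Delta_S \le C\Delta_{S-1}^2 \le C^{1+2}\Delta_{S-2}^{4} \le \cdots \le C^{2^S-1}\Delta_0^{2^S} = (C\Delta_0)^{2^S-1}\,\Delta_0, \end{equation*} so for $S=O(\log n)$ the required interaction strength grows like $(\poly(n)\Delta_0/\mathcal{E}^2)^{\poly(n)}$, which is the bound quoted at the end of this proof.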
Since our choice of $\Delta_s$ satisfies $\|V^{(s)}\|/\Delta_s \le \O(\mathcal{E}_s)$, we can use Lemma~\ref{lem:gadget-ground-space} with $r=(1+\tilde{w}_{s-1})\gamma/2$ and $\mathcal{E} \ll (1-\tilde{w}_{s-1})\gamma/2$ to bound the error in the ground space projector by \begin{equation} \label{eq:fork-ground-space} \|\tilde{P}^{(s)} - \tilde{P}^{(s-1)} \otimes P_\textnormal{anc}^{(s)}\| \le \O\left(\frac{\mathcal{E}_s}{(1-\tilde{w}_{s-1})^2\gamma}\right). \end{equation} \textit{Analysis of output Hamiltonian}--- After $S=O(\log r_0) = O(\log n)$ iterations of fork gadgets, we obtain the final gadget Hamiltonian $\tilde{H} = \tilde{H}_\textnormal{gadget}^{(S)}$. Here, each original qubit has a final Pauli-$\alpha$-degree of at most $2$. Since the different Pauli couplings are handled independently with different groups of ancilla qubits, the maximum degree of each original qubit is $2\times3=6$. As noted earlier, each ancilla qubit has degree at most 6 in $\tilde{H}_\textnormal{gadget}^{(s)}$. Hence $\tilde{H}$ has maximum degree 6. Moreover, since we assumed that $H_\textnormal{target}$ has $O(n r_0)$ terms, so does $\tilde{H}$. By Eq.~\eqref{eq:fork-energy-spread}, $\tilde{H}$ gap-simulates $(H_\textnormal{target}, P)$ with energy spread \begin{equation} \tilde{w}_S \le w + \sum_{s=1}^S \frac{2\mathcal{E}}{\gamma} = w + \frac{2S\mathcal{E}}{\gamma} = w + O(\frac{\mathcal{E}}{\gamma} \log n) \end{equation} By choosing $P_\textnormal{anc}=\bigotimes_{s=1}^S P_\textnormal{anc}^{(s)}$, we can bound the incoherence of the gap-simulation by \begin{equation} \|\tilde{P}^{(S)} - P\otimes P_\textnormal{anc}\| \le \sum_{s=1}^{S} \O\left(\frac{\mathcal{E}}{(1-\tilde{w}_{s-1})^2\gamma} \right) \lesssim \O(S\mathcal{E}/\gamma) = O(\mathcal{E} \log n/\gamma). \end{equation} Furthermore, solving the recursive relation in Eq.~\eqref{eq:gadget-norm2} yields the maximum required interaction strength in $\tilde{H}$: \begin{equation} \Delta_{S} = O\left(\left(\frac{M_0^4\Delta_0}{\mathcal{E}^2}\right)^{2^S-1} \right) = O\left(\left(\frac{M_0^4\Delta_0}{\mathcal{E}^2}\right)^{\poly(n)} \right) = O\left(\left(\frac{\poly(n) \Delta_0}{\mathcal{E}^2}\right)^{\poly(n)}\right). \end{equation} Note that in this final gadget Hamiltonian, the resultant geometry is a cluster of $O(r_0)$ qubits arranged in a tree-like graph that mediates all $r_0$ interactions between the original qubit and the rest. \end{proof} \subsection{Proof of Theorem~\ref{thm:degree-reduction-exp}\label{sec:proof-theorem-gadget}} Now that we have established that perturbative gadgets can be used for gap-simulation, we can use them constructively for degree-reduction of any local Hamiltonian. This is Theorem~\ref{thm:degree-reduction-exp}, which we restate here for convenience: { \renewcommand{\thethm}{\ref{thm:degree-reduction-exp}} \begin{thm}[Coherent DR with exponential interaction strength] Let $H$ be an $n$-qubit $O(1)$-local Hamiltonian with $M_0$ terms, each with bounded norm. Suppose $H$ has quasi-spectral gap $\gamma$ and energy spread $w$ according to Def.~\ref{defn:gap}. For any $\epsilon>0$, one can construct a $2$-local $[O(1), O(M_0), O ((\gamma\epsilon)^{-\poly (n)} )]$-degree-reducer of $H$ with incoherence $\epsilon$, energy spread $w+\O(\epsilon)$, and trivial encoding. \end{thm} \addtocounter{thm}{-1} } \begin{proof} Let $H$ be the given $k$-local $n$-qubit Hamiltonian, where $k=O(1)$.
Note we can always write $H$ in the form \begin{equation} H = \sum_{\mu=1}^{M_0} \alpha_{\mu} \sigma_{\mu_1}^{(s_{\mu_1})}\otimes \sigma_{\mu_2}^{(s_{\mu_2})} \otimes \cdots \otimes \sigma_{\mu_k}^{(s_{\mu_k})} , \quad \sigma_{\mu_i} \in \{\mathds{1}, X, Y, Z\}, \end{equation} where $M_0 = O(n^{k})$ and $|\alpha_\mu|=O(1)$. We call these $n$ qubits the ``original qubits''; they have maximum degree $d_0 = O(n^{k-1})$. Let us denote the groundspace projector of $H$ by $P$, with spectral gap $\gamma$ and energy spread $w$. We now want to construct a degree-reducer of $H$ using gadgets that gap-simulate $(H,P)$, and our construction proceeds in four parts: \begin{enumerate} \item Reduce locality to 3 by $O(\log k)$ serial applications of the subdivision gadget (use Claim~\ref{claim:subdiv}). \item Reduce locality to 2 by one parallel application of the 3-to-2 local gadget (use Claim~\ref{claim:3-to-2}). \item Isolate each original qubit by one application of the subdivision gadget (use Claim~\ref{claim:subdiv}). \item Reduce the maximum degree to 6 by $O(\log n)$ serial applications of the fork gadget (use Claim~\ref{claim:fork}). \end{enumerate} \paragraph{Part I}--- We apply the subdivision gadget $K$ times to reduce locality to 3. At iteration $q = 1,\ldots,K$, we have \begin{equation} [H\equiv \tilde{H}_{\textnormal{gadget},0} \equiv H_{\textnormal{target},1}] \to [\tilde{H}_{\textnormal{gadget},1} \equiv H_{\textnormal{target},2}] \to \cdots \to \tilde{H}_{\textnormal{gadget},K}. \end{equation} Let us denote the locality of $\tilde{H}_{\textnormal{gadget},q}$ by $k_q$, its energy spread by $\tilde{w}_q$, its incoherence relative to $H$ by $\epsilon_q$, and by $\Delta_q$ the parameter chosen as in Claim~\ref{claim:subdiv}. We denote $k_0=k$, $\tilde{w}_0=w$, $\epsilon_0=0$, and $\Delta_0=O(1)$. By Claim~\ref{claim:subdiv}, we have $k_q \le \lceil k_{q-1}/2\rceil +1$, hence $K=O(\log k) = O(1)$ iterations suffice to reduce the locality at the end to $k_K=3$. Note that at any given iteration, the number of terms whose locality needs to be reduced is $m=O(M_0)$. At iteration $q$, let us denote the interaction strength in front of the $k_q$-local terms of $H_{\textnormal{target},q}$ by $J_q$, where $J_1=\max_\mu |\alpha_\mu| = O(1)$. Then $\Delta_1 = O(M_0^2J_1(M_0^4 J_1^2 + M_0)/\mathcal{E}^2)= O(M_0^6/\mathcal{E}^2)$. For $q\ge 2$, we have \begin{equation} J_q = O(\sqrt{J_{q-1}\Delta_{q-1}}), \quad \Delta_q = O\left(\frac{M_0^2J_q(M_0^4 J_q^2 + M_0 \Delta_{q-1}) }{\mathcal{E}^2}\right) \end{equation} Since $K=O(1)$ and $M_0 = O(n^k) = \poly(n)$, we have \begin{equation} J_K, \Delta_K = O(\poly(n, \mathcal{E}^{-1})) \end{equation} We note that after these iterative applications, we have added $O(k)$ ancilla qubits for each $k$-local term. A total of $O(k M_0)$ ancilla qubits are added, since there are $O(M_0)$ $k$-local terms. The added ancilla qubits have degree at most 2. The original qubits still have degree $O(r_0)$, since for every $k$-local term they were a part of, they still need to interact with some other qubit to effectively generate the interactions. By Claim~\ref{claim:subdiv}, $\tilde{H}_I := \tilde{H}_{\textnormal{gadget},K}$ gap-simulates $(H,P)$ with incoherence $\epsilon_I$ and energy spread $\tilde{w}_I$, where \begin{equation} \begin{aligned} \epsilon_I &= \O(K\mathcal{E}/\gamma) = \O(\log k \mathcal{E}/\gamma) = \O(\mathcal{E}/\gamma), \\ \tilde{w}_I &= w + 2K\mathcal{E}/\gamma = w + \O(\log k \mathcal{E}/\gamma) = w + \O(\mathcal{E}/\gamma). \end{aligned} \end{equation} The interaction strength of the $(>1)$-local terms in $\tilde{H}_I$ is $\sqrt{\Delta_I} := \sqrt{J_K\Delta_K} = O(\poly(n,\mathcal{E}^{-1}))$. \paragraph{Part II}--- We apply the 3-to-2 local gadget to reduce the locality of $\tilde{H}_{\textnormal{gadget},K}$ to 2. We note that there are $O(kM_0)=O(n^k)$ 3-local terms after the previous part. In particular, since we can apply the subdivision gadget to even 3-local terms in the previous part, we can ensure that any 3-local term in $\tilde{H}_{\textnormal{gadget}, K}$ contains at least one ancilla qubit, on which the 3-local term acts with $X$. Hence, we can apply Claim~\ref{claim:3-to-2} for every 3-local term $\mu$ simultaneously, while choosing $C_\mu=X$. The parameters in the premise of Claim~\ref{claim:3-to-2} in this context are $m=O(n^k)$ and $\Delta_0 =\Delta_I = O(\poly(n,\mathcal{E}^{-1}))$. This allows us to generate a 2-local Hamiltonian $\tilde{H}_{II}$ with interaction strength $\Delta_{II} = O(m^{12}\Delta_0^3/\mathcal{E}^3) = O(\poly(n,\mathcal{E}^{-1}))$. $\tilde{H}_{II}$ gap-simulates $(H,P)$ with incoherence $\epsilon_{II}$ and energy spread $\tilde{w}_{II}$, where \begin{equation} \epsilon_{II} \le \epsilon_{I} + \O(\mathcal{E}/\gamma) = \O(\mathcal{E}/\gamma),\quad \tilde{w}_{II} \le \tilde{w}_{I} + 2\mathcal{E}/\gamma = w + \O(\mathcal{E}/\gamma). \end{equation} Importantly, in this construction, all the original qubits will interact with ancillas only in the form $\sigma_\alpha^{(i)}\otimes X^{(a_i)}$, where $i$ is an original qubit and $a_i$ is some ancilla qubit. (They still might interact with other original qubits via some arbitrary 2-local term.) The ancilla qubits will have maximum degree 4 (they had maximum degree 2 in the 3-local Hamiltonian of Part I, and the 3-to-2-local gadget adds at most degree 2 per conversion of a 3-local term). This is useful to keep in mind because it satisfies the assumptions in Claim~\ref{claim:fork}. \paragraph{Part III}--- Now we want to isolate the original qubits from each other with the subdivision gadget, so that each original qubit $i$ only interacts with some set of ancilla qubits $\{a_i\}$ in the form $\sigma_\alpha^{(i)}\otimes X^{(a_i)}$. The idea is that we can write \begin{equation} H_{\textnormal{target},III} = \tilde{H}_{II} = H_{\textnormal{else},III} + \sum_{\nu} c_\nu \sigma_{\nu_1} ^{(i_1)} \otimes \sigma_{\nu_2}^{(i_2)} \end{equation} where $(i_1, i_2)$ are pairs of original qubits, $|c_\nu|\le \Delta_{II}^{2/3} = O(\poly(n,\mathcal{E}^{-1}))$, and $H_{\textnormal{else},III}$ contains all other terms (i.e. 1-local terms, 2-local terms coupling an ancilla to an original qubit through $\sigma\otimes X$, or an ancilla to an ancilla). We note that $\|H_{\textnormal{else},III}\| \le O(M_0\Delta_{II})$. Again, since we have not reduced the degree of the original qubits, there are $m=O(M_0)$ interactions between the original qubits that we need to address.
Thus, we can use the following gadget Hamiltonian \begin{gather} \begin{aligned} &\tilde{H}_{III} = H_{\textnormal{anc},III} + V_{III} , \quad H_{\textnormal{anc},III} = \Delta_{III} \sum_\nu \ketbra{1}^{(a_{\nu})}, \\ &V_{III} = H_{\textnormal{else},III} + \sum_\nu \left[ \sqrt{\frac{|c_\nu| \Delta_{III}}{2}} ( \sgn(c_\nu) \sigma_{\nu_1}^{(i_1)} - \sigma_{\nu_2}^{(i_2)} ) \otimes X^{(a_{\nu})} + |c_\nu| \right], \label{eq:VIII} \end{aligned} \end{gather} so that the original qubits $i_1$ and $i_2$ no longer interact directly, but instead each interacts with a new ancilla qubit $a_\nu$. By Claim~\ref{claim:subdiv}, it is sufficient to choose $\Delta_{III} = \poly(n,\mathcal{E}^{-1})$ to ensure that $\tilde{H}_{III}$ gap-simulates $(H,P)$ with incoherence $\epsilon_{III}$ and energy spread $\tilde{w}_{III}$, where \begin{equation} \epsilon_{III} \le \epsilon_{II} + \O(\mathcal{E}/\gamma) = \O(\mathcal{E}/\gamma),\quad \tilde{w}_{III} \le \tilde{w}_{II} + 2\mathcal{E}/\gamma = w +\O(\mathcal{E}/\gamma). \end{equation} Note that the added ancilla qubits have degree at most 2, while the ancilla qubits in $H_{\textnormal{else}, III}$ have degree at most 3. \paragraph{Part IV}--- Note that the Hamiltonian $\tilde{H}_{III}$ from the previous part satisfies the assumptions for applying Claim~\ref{claim:fork}. This is because the original qubits only interact with ancilla qubits through the form $\sigma\otimes X$ in $V_{III}$, as seen in Eq.~\eqref{eq:VIII}, and implicitly so in $H_{\textnormal{else},III}$ due to our construction in Part II. Note the maximum Pauli degree is $r_0 \le d_0=O(n^k)$. By choosing $\Delta_0 \equiv \Delta_{0,IV} = O(|c_\nu|\Delta_{III}) = O(\Delta_{II}^{2/3}\Delta_{III}) = \poly(n,\mathcal{E}^{-1})$, the rest of the assumptions in Claim~\ref{claim:fork} are satisfied. Therefore, by Claim~\ref{claim:fork}, there is a Hamiltonian $\tilde{H}_{IV}$ that gap-simulates $(H,P)$ with incoherence $\epsilon_{IV}$ and energy spread $\tilde{w}_{IV}$, where \begin{equation} \epsilon_{IV} \le \epsilon_{III} + O(\frac{\mathcal{E}}{\gamma} \log n) = O(\frac{\mathcal{E}}{\gamma}\log n), \quad \tilde{w}_{IV} \le \tilde{w}_{III} + O(\frac{\mathcal{E}}{\gamma} \log n) = w + O(\frac{\mathcal{E}}{\gamma}\log n). \end{equation} To ensure that $\epsilon_{IV} \le \epsilon$ for some constant $\epsilon$, we need to have chosen \begin{equation} \mathcal{E} \le O(\frac{\gamma \epsilon}{\log n}) \quad \Longrightarrow \quad \epsilon_{IV} \le \epsilon \quad \text{and} \quad \tilde{w}_{IV} \le w + \O(\epsilon). \end{equation} By Claim~\ref{claim:fork}, this gap-simulating Hamiltonian has maximum degree $6$ and $O(nr_0)=O(n^k)=O(M_0)$ terms, each with norm (interaction strength) bounded by \begin{equation} J = O\left(\left(\frac{\poly(n) \Delta_{0,IV}}{\mathcal{E}^2}\right)^{\poly(n)}\right) = O\left(\left(\poly(n,\mathcal{E}^{-1})\right)^{\poly(n)}\right) = O\left(\left(\poly(n)/(\gamma\epsilon)\right)^{\poly(n)}\right). \end{equation} This concludes our construction. \end{proof} \section{Connection to Quantum PCP\label{sec:qPCP}} In this Appendix, we draw a connection between our notion of gap-simulation and reductions of Hamiltonians in the context of quantum PCP. As explained in Sec.~\ref{sec:other-results}, there is a very important distinction between gap-simulating degree-reductions and degree-reductions used in the PCP context. In a nutshell, this distinction boils down to the difference between spectral gap and promise gap. Nevertheless, there is a meaningful setting in which this difference can be bridged.
We note that classical PCP reduction algorithms are usually {\it constructive}: they map not only CSPs to CSPs but also assignments to assignments \cite{dinurgoldreich,BenSassonPCP}. Moreover, if the CSP is satisfiable, then any of its non-satisfying assignments is mapped to one with at least as many violations. In other words, they preserve the {\it properties of the assignment}. In Sec.~\ref{sec:qPCP-implication} below, we suggest a definition of qPCP reductions which extends this notion to the quantum world. Specifically, we require that the reduction preserve groundstate properties in a similar sense to gap-simulations, and map excited states of $H$ to high-energy states of $\tilde{H}$. Very importantly, we do \emph{not} require preservation of the spectral gap, as this is the essence of the difference between the qPCP and gap-simulating settings. We discuss the connection between this definition and gap-simulation in Sec.~\ref{sec:qPCP-gap-simulation}. With these restrictions, it is possible to connect the worlds of spectral gap and promise gap. In Sec.~\ref{sec:imposs-qPCP-degree}, we prove Theorem~\ref{thm:imposs-qPCP}, which shows the impossibility of qPCP-DR and qPCP-dilution with close-to-perfect coherence, based on ideas similar to those in Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute}. Unfortunately, these impossibility results hold only for inverse polynomial incoherence. One would hope to improve these results to {\it constant} incoherence, similar to Theorem \ref{thm:main}. Alas, this remains open; see Sec.~\ref{sec:proof-sketch-main} for a discussion of the difficulty in strengthening to constant $\epsilon$, which is the relevant regime in the context of PCPs. \subsection{Definitions of Quantum PCP Reductions\label{sec:qPCP-implication}} To derive implications of our gap-simulation framework for quantum PCP, we need to define quantum PCP reductions, and then restrict them in a way that will enable bridging the differences between the spectral-gap and promise-gap worlds. We start by defining general quantum PCP reductions. In gap-simulation, one considers reductions of Hamiltonians that preserve all groundstate properties {\it including their spectral gaps}. In the context of quantum NP, it is the so-called \emph{promise gap}, and not the spectral gap, that must be controlled through the reductions. Formally: \begin{defn}[$\mu$-qPCP-reduction] \label{defn:qPCP-reduction0} An algorithm $\mathcal{A}$ is a $\mu$-\emph{qPCP-reduction} if it takes as its input any $n$-qubit, $O(1)$-local, positive semi-definite Hamiltonian $H$ and two numbers $a,b$ with $0\le a<2^{-\Omega(n)}$ and \emph{promise gap} $b-a>1/\poly(n)$, and its output $\tilde{H}=\mathcal{A}(H,a,b)$ is an $O(1)$-local, positive semi-definite Hamiltonian on $n+\poly(n)$ qubits such that (1) if $\lambda_1(H) \le a$, then $\lambda_1(\tilde{H}) \le \mu a$, and (2) if $\lambda_1(H) \ge b$, then $\lambda_1(\tilde{H}) \ge \mu b$. \end{defn} The definition of a classical PCP reduction can easily be deduced. Note that $\mathcal{A}$ above is not required to preserve any properties of the groundspace. Thus, as explained in the introduction, there is no hope of proving information-theoretic impossibility in this setting.
Hence, we consider the following restrictions on quantum PCP reductions, which we believe extend the properties of constructive PCP reductions in the classical context to the quantum world: \begin{defn}[$(\mu,\delta, \epsilon)$-qPCP*-reduction] \label{defn:qPCP-reduction} A $\mu$-qPCP-reduction $\mathcal{A}$ is a $(\mu,\delta, \epsilon)$-\emph{qPCP*-reduction} with encoding $V$ ($V$ is an isometry) if there exists a projector $P_\textnormal{anc}$ on some ancilla so that: \begin{enumerate} \vspace{-5pt} \item Let $P$ be the projector onto eigenstates of $H$ with eigenvalue $\le a$. Let $\tilde{P}$ be the projector onto eigenstates of $\tilde{H}=\mathcal{A}(H,a,b)$ with eigenvalue $\le \mu a$. Then they satisfy $\|\tilde{P} - V (P\otimes \mathds{1}_\textnormal{anc})V^\dag \tilde{P}\| \le \delta$, and $\|\tilde{P} - V({P}\otimes P_\textnormal{anc})V^\dag \| \le \epsilon$. We call $\delta$ the \emph{unfaithfulness} and $\epsilon$ the \emph{incoherence}, as in Definitions~\ref{defn:hamsimul} and \ref{defn:hamsimul-incoherent}. \item If $\ket{\psi}$ is an eigenstate of $H$ with eigenvalue $\ge b$, then $\forall \ket\alpha\in P_\textnormal{anc}$, the state $\ket{\bar{\psi}} = V \ket{\psi}\ket{\alpha}$ must satisfy $\braket{\bar{\psi}|\tilde{H}|\bar{\psi}} \ge \mu b$. \end{enumerate} \end{defn} Essentially, our definition of the analogously ``constructive'' qPCP reduction requires the eigenstates of $H$ serving as satisfying assignments to be faithfully and/or coherently mapped to low-energy eigenstates of the output Hamiltonian $\tilde{H}$. Furthermore, we require any high-energy eigenstates (violating assignments) of $H$ with eigenvalues above the promise gap to also be mapped to high-energy states of $\tilde{H}$. As a sanity check, note that a $\mu$-qPCP-reduction with the additional condition that its output $\tilde{H}=\mathcal{A}(H,a=0,b)$ gap-simulates $H$ with $\delta$-unfaithfulness, $\epsilon$-incoherence, and energy spread $\tilde{w}=0$ is a $(\mu,\delta,\epsilon)$-qPCP*-reduction, if the output promise gap is not larger than the spectral gap (see Lemma~\ref{lem:connecting-qPCP-gap} in Sec.~\ref{sec:qPCP-gap-simulation}). We now aim to study whether we can rule out DR and dilution in the qPCP context. We first define these notions: \begin{defn}[qPCP-DR and qPCP-dilution] Consider any $n$-qubit input Hamiltonians of the form $H=\sum_{i=1}^{M_0} H_i$, which is a sum of $M_0=M_0(n)$ terms, each of which is $O(1)$-local. \begin{itemize} \vspace{-5pt} \item A $(\mu,\delta, \epsilon)$-qPCP*-reduction $\mathcal{A}$ is a $(\mu,\delta, \epsilon)$-\emph{qPCP-DR} if $\mathcal{A}(H,a,b)$ has $O(1)$ degree. \vspace{-5pt} \item A $(\mu,\delta, \epsilon)$-qPCP*-reduction $\mathcal{A}$ is a $(\mu,\delta, \epsilon)$-\emph{qPCP-dilution} if $\mathcal{A}(H,a,b)$ has $o(M_0(n))$ local terms. \vspace{-5pt} \end{itemize} \end{defn} We note that the construction used in Proposition~\ref{prop:classical-deg-reduct}, which is based on DR for classical PCP \cite{dinur}, directly implies a $(1,0,1)$-qPCP-DR by the above definition, for all classical Hamiltonians.
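As another elementary consistency check of Definition~\ref{defn:qPCP-reduction} (a minimal example, included for orientation), the identity map $\mathcal{A}(H,a,b)=H$ with trivial encoding $V=\mathds{1}$ and trivial $P_\textnormal{anc}$ is a $(1,0,0)$-qPCP*-reduction: here $\tilde{P}=P$, so \begin{equation*} \|\tilde{P} - V(P\otimes \mathds{1}_\textnormal{anc})V^\dag\tilde{P}\| = \|P-P^2\| = 0, \qquad \|\tilde{P} - V(P\otimes P_\textnormal{anc})V^\dag\| = 0, \end{equation*} and any eigenstate $\ket{\psi}$ of $H$ with eigenvalue $\ge b$ trivially satisfies $\braket{\psi|H|\psi}\ge b=\mu b$ with $\mu=1$. Of course, this reduction performs no degree-reduction or dilution whatsoever; the content of the definitions below is to additionally demand low degree or few terms.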
\subsection{Impossibility Result on qPCP-DR and qPCP-dilution (Theorem~\ref{thm:imposs-qPCP})\label{sec:imposs-qPCP-degree}} Recall our example family of 2-local $n$-qubit Hamiltonians that was previously used: \begin{equation} H_A = \left(\mathcal{J}_z+\frac{n}{2} \right)\left(\mathcal{J}_z+\frac{n}{2}-1\right), \end{equation} whose $n+1$ groundstates are \begin{equation} \ket{00\cdots00}, \ket{00\cdots01}, \ket{00\cdots10}, \ldots, \ket{10\cdots00}. \end{equation} Using the above Hamiltonian, we show that {\it generic} quantum $(\mu,\delta,\epsilon)$-qPCP-DR and $(\mu,\delta,\epsilon)$-qPCP-dilution are impossible, assuming the encoding $V$ is unitary and localized. Unfortunately, we are only able to show this for polynomially small $\epsilon$: \begin{thm}[Limitation on qPCP-DR and qPCP-dilution] \label{thm:imposs-qPCP} For any localized encoding $V$, it is impossible to have a $(\mu,\delta,\epsilon)$-qPCP-DR or $(\mu,\delta,\epsilon)$-qPCP-dilution algorithm with encoding $V$ that works on the $n$-qubit Hamiltonian $H_A$ and outputs $\tilde{H}_A$ with $\|\tilde{H}_A\|=O(n^p)$ and incoherence $\epsilon(n) \le o(\sqrt{\mu(n)/n^{p}})$. \end{thm} \vspace{-5pt} In particular, if we restrict the output Hamiltonians to have degree $d$ on $n+m$ qubits, with $O(1)$-norm terms and encoded by some localized $V$, then no $(\mu,\delta,\epsilon)$-qPCP-DR or $(\mu,\delta,\epsilon)$-qPCP-dilution algorithm exists that works on $H_A$ with $\mu=\Omega(1)$ and $\epsilon \le o (1/\sqrt{d(n+m)})$. We prove this Theorem using the same idea as in Lemma~\ref{lem:TotalCoherentImpossible} (which was used to prove Lemma~\ref{lem:imposs1-DR} and Theorem~\ref{thm:imposs1-dilute}), which in the current context yields: \begin{lemma} \label{lem:imposs-qPCP} Suppose there exists a $(\mu,\delta,\epsilon)$-qPCP*-reduction algorithm $\mathcal{A}$ with trivial encoding $V=\mathds{1}$. Let $\tilde{H}_A = \mathcal{A}(H_A,a,b)$ be its output, with $0\le a < b \le 1$. If either (1) $\epsilon=0$ and $b>2a$, or (2) $\|\tilde{H}_A\| < \frac{\mu (b - 2a - 4\epsilon)}{2\epsilon^2}$, then for every pair of qubits $(i,j)$, $\tilde{H}_A$ must contain a term that acts nontrivially on both qubits. \end{lemma} \begin{proof} For the sake of contradiction, suppose $\tilde{H}_A$ contains no term that acts nontrivially on both qubit $i$ and qubit $j$. This means we can decompose $\tilde{H}_A$ into two parts: $\tilde{H}_A=\tilde{H}_{A,i}+\tilde{H}_{A,j}$, where $\tilde{H}_{A,i}$ acts trivially on qubit $i$ and $\tilde{H}_{A,j}$ acts trivially on qubit $j$. In other words, $[\tilde{H}_{A,i},\sigma_i]=0$ for any Pauli operator $\sigma_i$ on qubit $i$, and similarly for $\tilde{H}_{A,j}$. Let us denote the states $\ket{g_0}=\ket{0\cdots0}$ and $\ket{g_i}=X_i\ket{g_0}=\ket{0\cdots01_i0\cdots0}$. Let us denote $P=\sum_{i=0}^n \ketbra{g_i}$, which is a projector onto eigenstates of $H_A$ with eigenvalue $\le a $. We also denote by $\tilde{P}$ the projector onto eigenstates of $\tilde{H}_A$ with eigenvalue $\le \mu a$. By Definition~\ref{defn:qPCP-reduction} there exists $P_\textnormal{anc}$ such that $\|\tilde{P} - P\otimes P_\textnormal{anc}\| \le \epsilon$. Let $\ket{\alpha}\in P_\textnormal{anc}$, and denote $\ket{\bar{g}_i} \equiv \ket{g_i}\ket{\alpha}$ for $0\le i \le n$.
Observe that \begin{equation} \tilde{P} \ket{\bar{g}_i}= (\tilde{P} - P \otimes P_\textnormal{anc} + P \otimes P_\textnormal{anc})\ket{g_i}\ket{\alpha} = (\tilde{P} - P \otimes P_\textnormal{anc})\ket{g_i}\ket{\alpha} + \ket{g_i}\ket{\alpha} = \ket{\epsilon_i} + \ket{\bar{g}_i} \end{equation} where $\ket{\epsilon_i} = (\tilde{P}-P\otimes P_\textnormal{anc})\ket{\bar{g}_i}$ satisfies $\|\ket{\epsilon_i}\|\le \epsilon$. Thus, \begin{eqnarray} \braket{\bar{g}_i | \tilde{H}_A |\bar{g}_i} &=& (\bra{\bar{g}_i} \tilde{P} - \bra{\epsilon_i}) \tilde{H}_A (\tilde{P}\ket{\bar{g}_i} - \ket{\epsilon_i}) \nonumber\\ &=& \braket{\bar{g}_i |\tilde{P}\tilde{H}_A \tilde{P}|\bar{g}_i} + \braket{\epsilon_i |\tilde{H}_A|\epsilon_i} - 2\Re \braket{\bar{g}_i |\tilde{P}\tilde{H}_A |\epsilon_i} \nonumber \\ &\le& \braket{\epsilon_i |\tilde{H}_A|\epsilon_i} + \mu a(1+2\epsilon) \le \epsilon^2 \|\tilde{H}_A\| + \mu a(1+2\epsilon), \end{eqnarray} where we used the fact that $\|\tilde{P}\tilde{H}_A \| =\|\tilde{P} \tilde{H}_A \tilde{P}\| \le \mu a$. Now consider the eigenstate $\ket{e_{ij}}=X_iX_j\ket{g_0}$ of $H_A$ with eigenvalue $1 \ge b$, and let $\ket{\bar{e}_{ij}} = \ket{e_{ij}}\ket{\alpha}$. By the second condition in Definition~\ref{defn:qPCP-reduction} of a $(\mu,\delta,\epsilon)$-qPCP*-reduction algorithm, we must have \begin{equation} \braket{\bar{e}_{ij}|\tilde{H}_A|\bar{e}_{ij}} \ge \mu b \end{equation} In addition, observe that \begin{eqnarray} \braket{\bar{e}_{ij}|\tilde{H}_A|\bar{e}_{ij}} &=& \braket{\bar{g}_0|X_iX_j (\tilde{H}_{A,i}+\tilde{H}_{A,j})X_iX_j|\bar{g}_0} = \braket{\bar{g}_0|X_i\tilde{H}_{A,j}X_i|\bar{g}_0} + \braket{\bar{g}_0|X_j\tilde{H}_{A,i}X_j|\bar{g}_0} \nonumber \\ &=& \braket{\bar{g}_i|\tilde{H}_{A,j}|\bar{g}_i} + \braket{\bar{g}_j|\tilde{H}_{A,i}|\bar{g}_j} \nonumber\\ &=& \braket{\bar{g}_i|\tilde{H}_{A}|\bar{g}_i} + \braket{\bar{g}_j|\tilde{H}_{A}|\bar{g}_j} - \braket{\bar{g}_i|\tilde{H}_{A,i}|\bar{g}_i} - \braket{\bar{g}_j|\tilde{H}_{A,j}|\bar{g}_j} \nonumber \\ &=& \braket{\bar{g}_i|\tilde{H}_{A}|\bar{g}_i} + \braket{\bar{g}_j|\tilde{H}_{A}|\bar{g}_j} - \braket{\bar{g}_0|X_i\tilde{H}_{A,i}X_i|\bar{g}_0} - \braket{\bar{g}_0|X_j\tilde{H}_{A,j}X_j|\bar{g}_0} \nonumber \\ &=& \braket{\bar{g}_i|\tilde{H}_{A}|\bar{g}_i} + \braket{\bar{g}_j|\tilde{H}_{A}|\bar{g}_j} - \braket{\bar{g}_0|\tilde{H}_{A}|\bar{g}_0} \le 2 \epsilon^2 \|\tilde{H}_A\| + 2\mu a(1+2\epsilon), \end{eqnarray} where we used the fact that $\braket{\bar{g}_0|\tilde{H}_{A}|\bar{g}_0}\ge 0$ (because $\tilde{H}_A$, as the output of a qPCP*-reduction, is positive semi-definite). This contradicts the previous equation whenever \begin{equation} 2\epsilon^2 \|\tilde{H}_A\| + 2\mu a(1+2\epsilon) < \mu b, \quad \text{for which (using $a\le 1$) it suffices that} \quad \begin{dcases} b>2a, & \text{if } \epsilon = 0 \\ \|\tilde{H}_A\| < \frac{\mu (b - 2a - 4\epsilon)}{2\epsilon^2}, & \text{if } \epsilon > 0 \end{dcases} \label{eq:HA-norm-small} \end{equation} Hence, if we assume (1) $\epsilon=0$ and $b>2a$, or (2) $\|\tilde{H}_A\| < [\mu (b - 2a - 4\epsilon)]/(2\epsilon^2)$, then $\tilde{H}_A$ must contain a term that acts nontrivially on both qubit $i$ and qubit $j$. \end{proof} \begin{proof}[\textbf{Proof of Theorem~\ref{thm:imposs-qPCP}}] We know $\lambda_1(H_A) = 0$. Suppose $\mathcal{A}$ is a $(\mu,\delta,\epsilon)$-qPCP-DR or $(\mu,\delta,\epsilon)$-qPCP-dilution algorithm with some localized encoding $V$. Since $V$ is an encoding supplied by $\mathcal{A}$, we can define $\mathcal{A}'(H,a,b)=V^\dag \mathcal{A}(H,a,b) V$, which effectively has trivial encoding.
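Let us briefly justify this step (a routine verification, sketched here): Theorem~\ref{thm:imposs-qPCP} assumes $V$ is unitary, so $\tilde{H}_A'=V^\dag \tilde{H}_A V$ is unitarily equivalent to $\tilde{H}_A$ and has the same spectrum, and the projector onto its eigenstates with eigenvalue $\le \mu a$ is $\tilde{P}'=V^\dag\tilde{P}V$. Since the operator norm is unitarily invariant, \begin{equation*} \|\tilde{P}' - P\otimes P_\textnormal{anc}\| = \left\|V^\dag\big(\tilde{P} - V(P\otimes P_\textnormal{anc})V^\dag\big)V\right\| = \|\tilde{P} - V(P\otimes P_\textnormal{anc})V^\dag\| \le \epsilon, \end{equation*} and the unfaithfulness bound and condition 2 of Definition~\ref{defn:qPCP-reduction} transform in the same way. Hence $\mathcal{A}'$ satisfies the same definitions with $V=\mathds{1}$, and Lemma~\ref{lem:imposs-qPCP} applies to its output.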
We run the qPCP*-reductions with $a=0$, $b=1$, obtaining $\tilde{H}_A = \mathcal{A}(H_A,0,1)$ and $\tilde{H}_A'=\mathcal{A}'(H_A,0,1) = V^\dag \tilde{H}_A V$. However, if we require $\|\tilde{H}_A\| =\|\tilde{H}_A'\| = O(n^p)$ and $\epsilon(n)\le o(\sqrt{\mu(n)/n^p})$, where $\mu(n)$ and $\epsilon(n)$ are parameters of $\mathcal{A}$ that we allow to vary with the system size $n$, then $[\mu(1-4\epsilon)]/(2\epsilon^2) = \omega(n^p)$, so that $\|\tilde{H}_A'\| = O(n^p) < [\mu(1-4\epsilon)]/(2\epsilon^2)$ for sufficiently large $n$, and the hypothesis of Lemma~\ref{lem:imposs-qPCP} is satisfied. Then by Lemma~\ref{lem:imposs-qPCP}, for sufficiently large $n$, each qubit in $\tilde{H}_A'$ must interact with at least $n-1$ other qubits in $\tilde{H}_A'$. Since $\tilde{H}_A$ is assumed to be $O(1)$-local, so is $\tilde{H}_A'$ when $V$ is a localized encoding. Thus $\tilde{H}_A'$ must have degree $r'=\Omega(n)$, and $M'=\Omega(n^2)$ terms to cover all $\binom{n}{2}$ required pairwise interactions between qubits. As $V$ maps local terms in $\tilde{H}_A$ to local terms in $\tilde{H}_A'$, their maximum degrees and numbers of terms are related by constant factors. Hence, $\tilde{H}_A$ also must have degree $r=\Theta(r')=\Omega(n)$ and $M=\Theta(M')=\Omega(n^2)$ local terms. Therefore, $\tilde{H}_A$ cannot be the output of a qPCP-DR or a qPCP-dilution algorithm with localized encoding, given the assumptions of a polynomial bound on $\|\tilde{H}_A\|$ and inverse polynomially small incoherence $\epsilon$.\end{proof} \subsection{Relationship of Gap-Simulation to qPCP*-reduction\label{sec:qPCP-gap-simulation}} \begin{lemma} \label{lem:connecting-qPCP-gap} Fix $a=0$. Suppose $\mathcal{A}$ is a $\mu$-qPCP-reduction such that for all input $H$, its output $\tilde{H}=\mathcal{A}(H,a=0,b)$ also gap-simulates $H$ with encoding $V$, unfaithfulness $\delta$, incoherence $\epsilon$, and energy spread $\tilde{w}=0$. Then $\mathcal{A}$ is also a $(\mu,\delta,\epsilon)$-qPCP*-reduction with encoding $V$, as long as the output promise gap $\mu b \le \gamma(1-\delta^2)$, where $\gamma$ is the spectral gap of $H$. \end{lemma} \begin{proof} Let us consider the two cases, where $\lambda_1(H) = 0$ or $\lambda_1(H) \ge b$. To clarify notation, let us denote by $P_q$ the projector onto eigenstates of $H$ with eigenvalue $0$, and by $\tilde{P}_q$ the projector onto eigenstates of $\tilde{H}$ with eigenvalue $0$ (either eigenspace may be empty). These are not to be confused with the (quasi-)groundspace projectors $P$ and $\tilde{P}$ in the gap-simulation context, although we'll show that under certain conditions they coincide. \textbf{Case 1: $\lambda_1(H) \ge b$. }--- Because $\lambda_1(H) \ge b > 0$, there is no eigenstate of $H$ with eigenvalue $\le 0$, so the projector $P_q=0$. Also, since $\lambda_1(H) \ge b \Longrightarrow \lambda_1(\tilde{H})\ge \mu b$ according to Def.~\ref{defn:qPCP-reduction0}, there is also no eigenstate of $\tilde{H}$ with eigenvalue $\le \mu\times 0 =0$, so $\tilde{P}_q=0$. Trivially, $\|\tilde{P}_q - V P_q V^\dag \tilde{P}_q\| = 0\le \delta$, and $\|\tilde{P}_q - V (P_q\otimes P_\textnormal{anc})V^\dag \|=0\le \epsilon$. Thus, condition 1 of Def.~\ref{defn:qPCP-reduction} is satisfied. Condition 2 is trivially satisfied because $\lambda_1(\tilde{H}) \ge \mu b$. \textbf{Case 2: $\lambda_1(H) = 0$. }--- Let us denote by $P$ the groundspace projector of $H$ at energy $0$, and let $\gamma$ be its spectral gap.
Since $\lambda_1(H) = 0 \Longrightarrow \lambda_1(\tilde{H})=0$ by Def.~\ref{defn:qPCP-reduction0}, let us denote by $\tilde{P}$ the groundspace projector of $\tilde{H}$ at energy $0$. Note that $P_q=P$ and $\tilde{P}_q=\tilde{P}$. Since, by the definition of gap-simulation, we have $\|\tilde{P} - VPV^\dag \tilde{P}\|\le \delta$ and $\|\tilde{P} - V(P\otimes P_\textnormal{anc}) V^\dag \| \le \epsilon$ for some $P_\textnormal{anc}$, condition 1 of Def.~\ref{defn:qPCP-reduction} is satisfied. Let us now check whether condition 2 of Def.~\ref{defn:qPCP-reduction} is satisfied. Consider any $\ket{\alpha}\in P_\textnormal{anc}$, and let $\ket{g^\perp}$ be any eigenstate of $H$ with eigenvalue $\ge b$ (which satisfies $P\ket{g^\perp}=0$). Setting $\ket{\bar\psi}= V \ket{g^\perp}\ket{\alpha}$, we have \begin{equation} \tilde{P}^\perp \ket{\bar{\psi}} = \ket{\bar{\psi}} - \tilde{P}\ket{\bar{\psi}} = \ket{\bar{\psi}} - \tilde{P} V\ket{g^\perp}\ket{\alpha} = \ket{\bar{\psi}} - (\tilde{P}-\tilde{P}V P V^\dag) V\ket{g^\perp}\ket{\alpha} = \ket{\bar{\psi}} - \ket{\delta}, \end{equation} where we denoted $\ket{\delta}\equiv(\tilde{P}-\tilde{P}VPV^\dag )\ket{\bar{\psi}}$, satisfying $\|\ket{\delta}\|\le \delta$. Also observe that $\tilde{P}^\perp \ket{\delta}=0$, which means $\tilde{P}^\perp\ket{\bar\psi}$ and $\ket{\delta}$ are orthogonal. Then, using $1=\|\ket{\bar\psi}\|^2 = \|\tilde{P}^\perp\ket{\bar\psi}\|^2 + \|\ket{\delta}\|^2$, we have $\|\tilde{P}^\perp\ket{\bar\psi}\| \ge \sqrt{1-\delta^2}$. Furthermore, note that $[\tilde{P},\tilde{H}]=0$ implies $[\tilde{P}^\perp, \tilde{H}]=0$. Hence, the energy of $\ket{\bar\psi}$ with respect to $\tilde{H}$ can be lower-bounded: \begin{eqnarray} \braket{\bar{\psi} |\tilde{H} |\bar{\psi}} &=& \braket{\bar{\psi}|\tilde{P}^\perp \tilde{H}\tilde{P}^\perp|\bar{\psi}} + 2 \Re\braket{\delta| \tilde{H}\tilde{P}^\perp|\bar{\psi}} + \braket{\delta|\tilde{H}|\delta} \nonumber\\ &=& \braket{\bar{\psi}|\tilde{P}^\perp \tilde{H}\tilde{P}^\perp|\bar{\psi}} + 2 \Re\braket{\delta| \tilde{P}^\perp \tilde{H}\tilde{P}^\perp|\bar{\psi}} + \braket{\delta|\tilde{H}|\delta} = \braket{\bar{\psi}|\tilde{P}^\perp \tilde{H}\tilde{P}^\perp|\bar{\psi}}+ \braket{\delta|\tilde{H}|\delta} \nonumber \\ &\ge& \braket{\bar{\psi}|\tilde{P}^\perp \tilde{H}\tilde{P}^\perp|\bar{\psi}} = \braket{\bar{\psi}|\tilde{P}^\perp(\tilde{P}^\perp \tilde{H} \tilde{P}^\perp+\gamma\tilde{P})\tilde{P}^\perp|\bar{\psi}}, \end{eqnarray} where in the third step we dropped $\braket{\delta|\tilde{H}|\delta}\ge 0$ (recall that $\tilde{H}$ is positive semi-definite), and in the last step we added a term $\gamma \tilde{P}$ (which evaluates to zero here) inside the parenthesis in order to use the fact that the operator $\tilde{P}^\perp \tilde{H} \tilde{P}^\perp+\gamma\tilde{P}$ has minimum eigenvalue $\gamma$, by Def.~\ref{defn:hamsimul} of gap-simulation (recall $\tilde{E}^g=\lambda_1(\tilde{H})=0$ here). Normalizing $\tilde{P}^\perp \ket{\bar\psi}$ by its norm, we can apply this eigenvalue lower bound to obtain a lower bound on $\braket{\bar{\psi}|\tilde{H}|\bar{\psi}}$: \begin{eqnarray} \braket{\bar{\psi} |\tilde{H} |\bar{\psi}} \ge \frac{\braket{\bar{\psi}|\tilde{P}^\perp(\tilde{P}^\perp \tilde{H} \tilde{P}^\perp+\gamma\tilde{P})\tilde{P}^\perp|\bar{\psi}}}{\left\|\tilde{P}^\perp\ket{\bar{\psi}}\right\|^2 } \left\|\tilde{P}^\perp\ket{\bar{\psi}}\right\|^2 \ge \gamma (1-\delta^2). \end{eqnarray} Condition 2 of Def.~\ref{defn:qPCP-reduction} is thus satisfied as long as $\gamma(1-\delta^2)\ge \mu b$. This completes the proof of the Lemma.
\end{proof} \section{Weak Gap-Simulation and Coherent Weak Dilution of $H_A$ with Constant Interaction Strength\label{sec:weak-sparsifier}} In this Appendix we introduce the definition of weak gap-simulation, which is an even weaker version of Hamiltonian simulation than our gap-simulation. In certain instances, we find that it can be helpful to allow the Hamiltonian $\tilde{H}$ to simulate the original $H$ not in its ``groundspace'' but in an excited subspace. In particular, we can consider allowing $\tilde{P}$ to project onto a subspace that is not necessarily the lowest-energy one, but rather one isolated in the spectrum by a spectral gap of $\gamma$ from both above and below. This may have physical applications, for example, in Floquet Hamiltonian engineering \cite{floquet,FloquetEngineering, LindnerFloquetTopo, ChoiDynamicalEngineering}, where the system is driven periodically in time, and thus (quasi-)energies are only well-defined modulo the driving frequency. Hence, this motivates the following definition of a weaker version of gap-simulation: \begin{defn}[weak gap-simulation of Hamiltonian] \label{defn:weaksimul} Let $H$ and $\tilde{H}$ be two Hamiltonians, defined on Hilbert spaces $\H$ and $\tilde{\H}$, respectively. Let $V: \H\otimes \H_\textnormal{anc} \to \tilde{\H}$ be an isometry ($V^\dag V=\mathds{1}$), where $\H_\textnormal{anc}$ is some ancilla Hilbert space. Denote $\tilde{E}^g \equiv \lambda_1(\tilde{H})$. Per Definition~\ref{defn:gap}, let $P$ be a quasi-groundspace projector of $H$ and $\gamma$ its quasi-spectral gap. We say that $\tilde{H}$ \emph{weakly gap-simulates} $(H,P)$ with \emph{encoding} $V$, \emph{incoherence} $\epsilon\ge 0$ and \emph{energy spread} $0\le\tilde{w}<1$ if the following conditions are both satisfied: \begin{enumerate} \item There exists a Hermitian projector $\tilde{P}$ projecting onto a subspace of eigenstates of $\tilde{H}$ such that \begin{gather} [\tilde{H}, \tilde{P}]= 0, \quad \|\tilde{P}(\tilde{H} -\tilde{E}^g)\tilde{P}\|\le \tilde{w}\gamma, \quad \textnormal{and} \quad \left|\lambda_j(\tilde{P}^\perp (\tilde{H} - \tilde{E}^g )\tilde{P}^\perp + \gamma \tilde{P})\right| \ge \gamma \quad \forall j. \label{eq:weaksimul} \end{gather} I.e., $\tilde{P}$ projects onto a quasi-groundspace of $\tilde{H}$ with quasi-spectral gap not smaller than that of $P$ in $H$, and energy spread $\tilde{w}$. \item There exists a Hermitian projector $P_\textnormal{anc}$ acting on $\H_\textnormal{anc}$, so that \end{enumerate} \begin{flalign} \textnormal{[bounded incoherence]} && \|\tilde{P} - V(P\otimes P_\textnormal{anc})V^\dag \| \le \epsilon && \phantom{\textnormal{(incoherence)}} \end{flalign} When $P$ projects onto the groundspace of $H$, rather than onto a quasi-groundspace, we usually omit $P$ and simply say that $\tilde{H}$ \emph{weakly gap-simulates} $H$. \end{defn} \noindent The only difference between Definitions~\ref{defn:hamsimul} and \ref{defn:weaksimul} is that we replaced Eq.~\eqref{eq:strongsimul} with Eq.~\eqref{eq:weaksimul}. Correspondingly, any degree-reducer (diluter) that only weakly gap-simulates is called a \emph{weak} degree-reducer (diluter).
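Let us informally point out what the modulus in Eq.~\eqref{eq:weaksimul} buys us: the condition $|\lambda_j(\cdot)|\ge\gamma$ only requires the remaining spectrum of $\tilde{H}$ to stay at \emph{distance} at least $\gamma$ from the simulating subspace, so eigenvalues of $\tilde{H}$ are now permitted both above and below $\tilde{P}$. This is precisely the situation in the star-graph construction of Proposition~\ref{prop:star} below, where the quasi-degenerate simulating subspace sits near energy $0$ while the states $\ket{E_m^-}$ lie far below it, at energies $\approx -\Delta-m-m^2$.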
\begin{figure} \centering \includegraphics[height=5cm]{weakgapsimuldef.pdf} \caption{\label{fig:weakgapsimul}Visualizing weak gap-simulation of $H$ by $\tilde{H}$.} \end{figure} \begin{prop}[weak star-graph diluter of $H_A$] \label{prop:star} There is a 2-local, weak $[n, O(n), O(1/\epsilon^2)]$-diluter of $H_A$ with $\epsilon$-incoherence, energy spread $\tilde{w}=\O(\epsilon^2)$ and trivial encoding $V=\mathds{1}$, using one additional ancilla qubit that interacts with all original qubits in a star-graph geometry. \end{prop} \begin{proof} Let us denote the number operator $\hat{N}_e=\sum_{i=1}^n \ketbra{1}^{(i)}$. Note that \begin{equation} H_A = -\hat{N}_e+\hat{N}_e^2 \end{equation} Consider the following Hamiltonian on the $n$ system qubits and 1 ancilla qubit: \begin{equation} \tilde{H}_A^\text{star} = -\Delta \ketbra{0}^\textnormal{anc} - \hat{N}_e + \sqrt{\Delta}\hat{N}_e\otimes \sigma_x^\textnormal{anc} \end{equation} Note that, when expanded in terms of 2-local terms, $\tilde{H}_A^\text{star}$ has $O(n)$ terms, and has maximum degree $n$ at the ancilla qubit. Since $[\hat{N}_e, \tilde{H}_A^\text{star}]=0$, we can replace $\hat{N}_e$ with its eigenvalue $m$ in $\tilde{H}_A^\text{star}$, where $m=0,1,\ldots,n$. The Hamiltonian can now be rewritten as \begin{equation} \tilde{H}_A^\text{star} = -\Delta/2-m + (m\sqrt{\Delta},0,-\Delta/2)\cdot \vec\sigma^\textnormal{anc}, \end{equation} using $\ketbra{0}=(\mathds{1}+\sigma_z)/2$. The energy eigenvalues are \begin{eqnarray} E_m^\pm = -\Delta/2-m \pm \sqrt{m^2\Delta + \Delta^2/4} = \begin{cases} -m+m^2-O(m^4/\Delta)\\ -\Delta-m-m^2+O(m^4/\Delta) \end{cases} \end{eqnarray} The corresponding eigenstates are \begin{eqnarray} \ket{E_m^+} = \ket{m_e}\otimes(\cos\frac{\theta_m}{2}\ket{1}+\sin\frac{\theta_m}{2}\ket{0})_\textnormal{anc} \equiv \ket{m_e}\otimes \ket{\theta_m^+}\\ \ket{E_m^-} = \ket{m_e}\otimes(\sin\frac{\theta_m}{2}\ket{1}-\cos\frac{\theta_m}{2}\ket{0})_\textnormal{anc} \equiv \ket{m_e}\otimes \ket{\theta_m^-} \end{eqnarray} where $\ket{m_e}$ is any eigenvector of $\hat{N}_e$ with eigenvalue $m$, and $\tan\theta_m=m\sqrt{\Delta}/(\Delta/2)=2m/\sqrt{\Delta}$. In the small-$m$ sector, \begin{eqnarray*} E^+_0 &=& 0\\ E^+_1 &=& - 1/\Delta + O(1 /\Delta^2) \\ E^+_2 &=& 2 - 16/\Delta + O(1/\Delta^2) \end{eqnarray*} So the $m=0,1$ states are quasi-degenerate up to $1/\Delta$, separated by a gap of $\gamma\approx2$ from the $m=2$ states. More explicitly, the states \begin{eqnarray*} \ket{E_0^+} &=& \ket{0_e}\otimes\ket{1}_\textnormal{anc} \\ \ket{E_1^+} &=& \ket{1_e}\otimes(\cos\frac{\theta_1}{2}\ket{1}+ \sin\frac{\theta_1}{2}\ket{0})_\textnormal{anc} \end{eqnarray*} span a quasi-degenerate ``groundspace'' of zero energy. From below, these states are gapped by $\approx\Delta$ from the $\ket{E_m^-}$ states. From above, these states are gapped by $E_2^+\simeq 2$ from states like \begin{equation} \ket{E_2^+} = \ket{2_e}\otimes(\cos\frac{\theta_2}{2}\ket{1}+\sin\frac{\theta_2}{2}\ket{0})_\textnormal{anc} \end{equation} The projector onto this ``groundspace'' of $\tilde{H}_A^\text{star}$ is \begin{equation} \tilde{P} = P_{0e}\otimes\ketbra{1}_\textnormal{anc} + P_{1e} \otimes \ketbra{\theta_1^+}_\textnormal{anc} \end{equation} where $P_{me}$ projects onto $\{\ket{m_e}:\hat{N}_e\ket{m_e}=m\ket{m_e}\}$, the set of states containing $m$ excitations. The original Hamiltonian's groundspace projector is $P=P_{0e} + P_{1e}$.
We can guess $P_\textnormal{anc} = \ketbra{1}$, and thus \begin{eqnarray*} \tilde{P}-P\otimes P_\textnormal{anc} &=& P_{1e} \otimes \left(\ketbra{\theta_1^+} - \ketbra{1}\right) \\ &=& P_{1e}\otimes \left(\sin^2\frac{\theta_1}{2}(\ketbra{0}-\ketbra{1}) + \frac{1}{2}\sin\theta_1(\ketbrat{0}{1}+\ketbrat{1}{0})\right) \end{eqnarray*} The incoherence of the $m=0,1$ ``groundspace'' is therefore \begin{equation} \|\tilde{P}-P\otimes P_\textnormal{anc}\| = \sin\frac{\theta_1}{2} = \sqrt{\frac{2/\Delta}{1+4/\Delta+\sqrt{1+4/\Delta}}} \le \sqrt{\frac{2}{\Delta}} \end{equation} Hence, by choosing $\Delta=O(1/\epsilon^2)$, $\tilde{H}_A^\text{star}$ is a 2-local weak $[n,O(n),O(1/\epsilon^2)]$-diluter of $H_A$ with $\epsilon$-incoherence and energy spread $\tilde{w}=1/\Delta=\O(\epsilon^2)$. \end{proof} \end{appendices} \bibliographystyle{hieeetr}
\section{Properties of the support and uniqueness of the solution} Let $X$ be a complex manifold of complex dimension $n$ and $T$ be a $\ol\partial$-exact $(0,1)$-current on $X$. We will describe some relations between the support of the current $T$ and the support of the solution $S$ of the Cauchy-Riemann equation $\ol\partial S=T$. \begin{prop}\label{support} Let $X$ be a complex manifold of complex dimension $n$ and $T$ be a $\ol\partial$-exact $(0,1)$-current on $X$. If $\Omega^c$ denotes a connected component of $X\setminus{\rm supp}~T$ and if $S$ is a distribution on $X$ such that $\ol\partial S=T$, then either ${\rm supp}~S\cap\Omega^c=\emptyset$ or $\Omega^c\subset{\rm supp}~S$. \end{prop} \begin{proof} Note that, since $\ol\partial S=T$, $S$ is a holomorphic function on $X\setminus{\rm supp}~T$ and in particular on the connected set $\Omega^c$. Assume that the support of $S$ does not contain $\Omega^c$; then $S$ vanishes on a non-empty open subset of $\Omega^c$ and, by analytic continuation, $S$ vanishes on $\Omega^c$, which means that ${\rm supp}~S\cap\Omega^c=\emptyset$. \end{proof} \begin{cor} Let $X$ be a complex manifold of complex dimension $n$ and $T$ be a $\ol\partial$-exact $(0,1)$-current on $X$. Assume that $X\setminus{\rm supp}~T$ is connected; then, if $S$ is a distribution on $X$ such that $\ol\partial S=T$, either ${\rm supp}~S={\rm supp}~T$ or ${\rm supp}~S=X$. \end{cor} \begin{proof} The support of $T$ is always contained in the support of $S$. If ${\rm supp}~S\neq X$, then the other inclusion holds by Proposition~\ref{support}, since $X\setminus{\rm supp}~T$ is connected. \end{proof} Note that the difference between two solutions of the equation $\ol\partial S=T$ is a holomorphic function on $X$. Then analytic continuation implies the following uniqueness result. \begin{prop}\label{uniqueness} Assume that the complex manifold $X$ is connected. Let $T$ be a $\ol\partial$-exact $(0,1)$-current on $X$ such that $X\setminus{\rm supp}~T\neq\emptyset$, and let $S$ and $U$ be two distributions such that $$\ol\partial S=\ol\partial U=T$$ and such that there is a connected component $\Omega^c$ of $X\setminus{\rm supp}~T$ which meets neither the support of $S$ nor the support of $U$; then $S=U$. In particular, the equation $\ol\partial S=T$ admits at most one solution $S$ such that ${\rm supp}~S={\rm supp}~T$. \end{prop} \begin{rem} The equation $\ol\partial S=T$ may have no solution $S$ with ${\rm supp}~S={\rm supp}~T$. Consider for example a relatively compact domain $D$ with ${\mathcal C}^\infty$-smooth boundary in a complex manifold $X$ and a function $F\in{\mathcal C}^\infty(\overline D)$, $F\not\equiv 0$, which is holomorphic in $D$. Denote by $f$ the restriction of $F$ to the boundary of $D$ and set $S=F\chi_D$, where $\chi_D$ is the characteristic function of the domain $D$. Then, by the Stokes formula, $\ol\partial S=f[\partial D]^{0,1}$, where $[\partial D]^{0,1}$ is the part of bidegree $(0,1)$ of the integration current over the boundary of $D$. Clearly the support of $T=f[\partial D]^{0,1}$ is contained in the boundary of $D$, but, by Proposition \ref{uniqueness}, $S$ is the unique solution of $\ol\partial S=T$ whose support is contained in $\overline D$. So there is no solution whose support is equal to the support of $T$. \end{rem} Let us end this section by considering the regularity of the solutions. \begin{prop}\label{reg} Let $X$ be a complex manifold and $f$ a $(0,1)$-form with coefficients in $\mathcal C^k(X)$, $0\leq k\leq +\infty$ (resp.
$L^p_{loc}(X)$, $1\leq p\leq +\infty$), which is $\ol\partial$-exact in the sense of currents. Then any solution $g$ of the equation $\ol\partial g=f$ is in $\mathcal C^k(X)$, $0\leq k\leq +\infty$ (resp. $L^p_{loc}(X)$, $1\leq p\leq +\infty$). \end{prop} \begin{proof} By the regularity of the Cauchy-Riemann operator (injectivity of the Dolbeault isomorphism \cite{HeLe2} and \cite{LaLp}), if $f$ has coefficients in $\mathcal C^k(X)$, $0\leq k\leq +\infty$ (resp. $L^p_{loc}(X)$, $1\leq p\leq +\infty$), then, since $f$ is $\ol\partial$-exact in the sense of currents, the equation $\ol\partial S=f$ has a solution in $\mathcal C^k(X)$, $0\leq k\leq +\infty$ (resp. $L^p_{loc}(X)$, $1\leq p\leq +\infty$). Since the difference between two solutions of the equation $\ol\partial S=f$ is a holomorphic function on $X$, all the solutions have the same regularity. \end{proof} Combining Proposition \ref{uniqueness} and Proposition \ref{reg}, we get: \begin{cor} Assume that the complex manifold $X$ is connected. If $f$ is a $(0,1)$-form such that $X\setminus {\rm supp}~f\neq\emptyset$, then the equation $\ol\partial g=f$ has at most one solution such that ${\rm supp}~g={\rm supp}~f$, and this solution has the same regularity as $f$. \end{cor} \section{Solving $\overline{\partial}$ with prescribed support} Let $X$ be a connected complex manifold and $\Omega$ a domain such that $\overline\Omega$ is strictly contained in $X$ and the interior of $\overline\Omega$ coincides with $\Omega$. We set $\Omega^c=X\setminus \overline\Omega$; it is a non-empty open subset of $X$. Let us denote by $H^{0,1}_{\overline\Omega,\infty}(X)$ (resp. $H^{0,1}_{\overline\Omega,cur}(X)$, $H^{0,1}_{\overline\Omega,\mathcal C^k}(X)$, $H^{0,1}_{\overline\Omega,L^p_{loc}}(X)$) the Dolbeault cohomology group of bidegree $(0,1)$ for smooth forms (resp. currents, $\mathcal C^k$-forms, $k\geq 0$, $L^p_{loc}$-forms, $1\leq p\leq +\infty$) with support in $\overline\Omega$. The vanishing of these groups means that one can solve the $\ol\partial$ equation with prescribed support in $\overline\Omega$ in the smooth category (resp. in the space of currents, the space of $\mathcal C^k$-forms, the space of $L^p_{loc}$-forms). It follows from Proposition \ref{uniqueness}, Proposition \ref{reg} and from the Dolbeault isomorphism with support conditions (Corollary 2.15 in \cite{HeLe2} and Proposition 1.2 in \cite{LaLp}) that \begin{prop}\label{injective} The natural morphisms from $H^{0,1}_{\overline\Omega,\infty}(X)$ (resp. $H^{0,1}_{\overline\Omega,\mathcal C^k}(X)$, $k\geq 0$, $H^{0,1}_{\overline\Omega,L^p_{loc}}(X)$, $1\leq p\leq +\infty$) into $H^{0,1}_{\overline\Omega,cur}(X)$ are injective. In particular, if $H^{0,1}_{\overline\Omega,cur}(X)=0$, then $H^{0,1}_{\overline\Omega,\infty}(X)=0$, $H^{0,1}_{\overline\Omega,\mathcal C^k}(X)=0$ and $H^{0,1}_{\overline\Omega,L^p_{loc}}(X)=0$. \end{prop} In the next sections, examples are given proving that there exist domains in $\mathbb C^2$ and $\mathbb C P^2$ such that $H^{0,1}_{\overline\Omega,\infty}(X)=0$, but $H^{0,1}_{\overline\Omega,cur}(X)\neq 0$. We will now consider the link between the vanishing of the group $H^{0,1}_{\overline\Omega,cur}(X)$ and the extension properties of some holomorphic functions in $\Omega^c$. \begin{prop}\label{CNcur} Assume $H^{0,1}_{\overline\Omega,cur}(X)=0$; then any holomorphic function on $\Omega^c=X\setminus \overline\Omega$, which is the restriction to $\Omega^c$ of a distribution on $X$, extends as a holomorphic function to $X$.
\end{prop} \begin{proof} Let $f\in \mathcal O(\Omega^c)$ and let $S_f\in\mathcal D'(X)$ be a distribution such that ${S_f}_{|_{\Omega^c}}=f$. Consider the $(0,1)$-current $\ol\partial S_f$; it is $\ol\partial$-closed and has support in $\overline\Omega$. Since $H^{0,1}_{\overline\Omega,cur}(X)=0$, there exists $U\in\mathcal D'(X)$, with support in $\overline\Omega$, such that $\ol\partial U=\ol\partial S_f$ in $X$. Set $h=S_f-U$; it is a holomorphic function on $X$ and $h_{|_{\Omega^c}}=S_{f|_{\Omega^c}}=f$. \end{proof} In the same way, we can prove \begin{prop}\label{CNLp} Assume $H^{0,1}_{\overline\Omega,L^p_{loc}}(X)=0$, $p\geq 1$; then any holomorphic function on $\Omega^c=X\setminus \overline\Omega$, which is the restriction to $\Omega^c$ of a function in $W^{1,p}_{loc}(X)$, extends as a holomorphic function to $X$. \end{prop} \begin{prop}\label{CNCk} Assume $H^{0,1}_{\overline\Omega,\mathcal C^k}(X)=0$, $k\geq 0$; then any holomorphic function on $\Omega^c=X\setminus \overline\Omega$, which is of class $\mathcal C^{k+1}$ on $X\setminus\Omega=\overline{\Omega^c}$, extends as a holomorphic function to $X$. \end{prop} \begin{prop}\label{CNsmooth} Assume $H^{0,1}_{\overline\Omega,\infty}(X)=0$; then any holomorphic function on $\Omega^c=X\setminus \overline\Omega$, which is smooth on $X\setminus\Omega=\overline{\Omega^c}$, extends as a holomorphic function to $X$. \end{prop} \begin{cor}\label{CSconnected} Assume $H^{0,1}_{\overline\Omega,\infty}(X)=0$; then $\Omega^c=X\setminus \overline\Omega$ is connected. \end{cor} \begin{proof} Assume $\Omega^c$ is not connected. Let $f$ be the holomorphic function which is identically equal to $1$ on one connected component of $\Omega^c$ and vanishes identically on all the other ones. By analytic continuation, $f$ cannot be the restriction to $\Omega^c$ of a holomorphic function on $X$, and by Proposition \ref{CNsmooth} we get $H^{0,1}_{\overline\Omega,\infty}(X)\neq 0$. \end{proof} \begin{rem} Note that, by Proposition \ref{support}, $H^{0,1}_{\overline\Omega,cur}(X)\neq 0$ if and only if there exists at least one $\ol\partial$-exact $(0,1)$-current $T$ with support contained in $\overline\Omega$ such that the support of each solution of the equation $\ol\partial S=T$ contains at least one connected component of $\Omega^c$. \end{rem} Let us give a partial converse to Corollary \ref{CSconnected}. Let $H^{0,1}_c(X) $ denote the Dolbeault cohomology group for $(0,1)$-forms with compact support in $X$. \begin{prop}\label{CNconnected} Assume $\Omega$ is relatively compact in a non-compact complex manifold $X$ such that $H^{0,1}_c(X)=0$. If $\Omega^c=X\setminus \overline\Omega$ is connected, then $$H^{0,1}_{\overline\Omega,cur}(X)=H^{0,1}_{\overline\Omega,\infty}(X)=H^{0,1}_{\overline\Omega,\mathcal C^k}(X)=H^{0,1}_{\overline\Omega,L^p_{loc}}(X)=0.$$ \end{prop} \begin{proof} By Proposition \ref{reg}, it suffices to prove that $H^{0,1}_{\overline\Omega,cur}(X)=0$. This vanishing result follows directly from Proposition \ref{support}. More precisely, if $T$ is a $\ol\partial$-closed $(0,1)$-current on $X$ with support contained in $\overline\Omega$, there exists a distribution $S$ with compact support such that $\ol\partial S=T$, since $H^{0,1}_c(X)=0$. Then the support of $S$ cannot contain the connected set $\Omega^c$, otherwise $X=\overline\Omega\cup {\rm supp}~S$ would be compact, and hence ${\rm supp}~S$ is contained in $\overline\Omega$.
\end{proof} In particular, if $X$ is a Stein manifold with ${\rm dim}_\mathbb C~X\geq 2$ and $\Omega$ a relatively compact domain in $X$, then \centerline{$H^{0,1}_{\overline\Omega,cur}(X)=H^{0,1}_{\overline\Omega,\infty}(X)=H^{0,1}_{\overline\Omega,\mathcal C^k}(X)=H^{0,1}_{\overline\Omega,L^p_{loc}}(X)=0$~$\Leftrightarrow$~$\Omega^c$ is connected.} \medskip An immediate corollary of Proposition \ref{CNconnected} and Proposition \ref{CNcur} is the following: \begin{cor}\label{extension-holo} Let $X$ be a non-compact, connected complex manifold such that $H^{0,1}_c(X)=0$, and $\Omega$ a relatively compact, open subset of $X$ with connected complement; then any holomorphic function on $\Omega^c$ extends as a holomorphic function to $X$. \end{cor} \begin{proof} It is sufficient to apply Proposition \ref{CNconnected} and Proposition \ref{CNcur} to a neighborhood $D$ of $\overline\Omega$ with connected complement and to conclude by analytic continuation. \end{proof} Corollary \ref{extension-holo} is the classical Hartogs extension phenomenon. Note that all the previous results remain true if we replace the family of all compact subsets of a non-compact manifold by any family $\Phi$ of supports in a manifold $X$, different from the family of all closed subsets of $X$ (see e.g. \cite{Se} for the definition of a family of supports). \begin{prop}\label{CSsmooth} Assume the complex manifold $X$ satisfies $H^{0,1}(X)=0$. If any holomorphic function on $\Omega^c$, which is smooth on $X\setminus\Omega=\overline{\Omega^c}$, extends as a holomorphic function to $X$, then $H^{0,1}_{\overline\Omega,\infty}(X)=0$. \end{prop} \begin{proof} Let $f$ be a smooth $\ol\partial$-closed $(0,1)$-form on $X$ with support contained in $\overline\Omega$. Since $H^{0,1}(X)=0$, there exists a function $g\in{\mathcal C}^\infty(X)$ such that $\ol\partial g=f$. Since the support of $f$ is contained in $\overline\Omega$, $g$ is holomorphic in $\Omega^c$ and by the extension property it extends as a holomorphic function $\widetilde g$ to $X$. Set $h=g-\widetilde g$; then the support of $h$ is contained in $\overline\Omega$ and $\ol\partial h=f$. \end{proof} Similarly, since $H^{0,1}(X)=0$ implies $H^{0,1}_{\mathcal C^k}(X)=H^{0,1}_{L^p_{loc}}(X)=H^{0,1}_{cur}(X)=0$ by the Dolbeault isomorphism, we have \begin{prop}\label{CSCk} Assume the complex manifold $X$ satisfies $H^{0,1}(X)=0$. If any holomorphic function on $\Omega^c$, which is of class $\mathcal C^{k}$, $k\geq 0$, on $X\setminus\Omega=\overline{\Omega^c}$, extends as a holomorphic function to $X$, then $H^{0,1}_{\overline\Omega,\mathcal C^k}(X)=0$. \end{prop} \begin{prop}\label{CSLp} Assume the complex manifold $X$ satisfies $H^{0,1}(X)=0$. If any holomorphic function on $\Omega^c=X\setminus \overline\Omega$, which is the restriction to $\Omega^c$ of a function in $L^p_{loc}(X)$, $p\geq 1$, extends as a holomorphic function to $X$, then $H^{0,1}_{\overline\Omega,L^p_{loc}}(X)=0$. \end{prop} \begin{prop}\label{CScur} Assume the complex manifold $X$ satisfies $H^{0,1}(X)=0$. If any holomorphic function on $\Omega^c=X\setminus \overline\Omega$, which is the restriction to $\Omega^c$ of a distribution on $X$, extends as a holomorphic function to $X$, then $H^{0,1}_{\overline\Omega,cur}(X)=0$. \end{prop} Let us end this section with a characterization of pseudoconvexity in $\mathbb C^2$ by means of the Dolbeault cohomology with prescribed support. \begin{thm} Let $D$ be a bounded domain in $\mathbb C^2$ with Lipschitz boundary.
Then the following assertions are equivalent: (\text{i}) $D$ is a pseudoconvex domain; (\text{ii}) $H^{0,1}_{\overline D,\infty}(\mathbb C^2)=0$ and $H^{0,2}_{\overline D,\infty}(\mathbb C^2)$ is Hausdorff. \end{thm} \begin{proof} By Serre duality (\cite{Ca} or Theorem 2.7 in \cite{LaShdualiteL2}) assertion (ii) implies that $\check{H}^{2,q}(D)$ is Hausdorff, for all $1\leq q\leq 2$, and moreover $\check{H}^{2,1}(D)=0$ as the dual space to $H^{0,1}_{\overline D,\infty}(\mathbb C^2)$. Let us prove now that the condition $\check{H}^{2,1}(D)=0$ implies that $D$ is pseudoconvex. We will follow the methods used by Laufer \cite{Lf} for the usual Dolbeault cohomology and argue by contradiction. Assume $D$ is not pseudoconvex; then there exists a domain $\widetilde D$ strictly containing $D$ such that any holomorphic function on $D$ extends holomorphically to $\widetilde D$. Since ${\rm int}(\overline D)=D$, after a translation and a rotation we may assume that $0\in\widetilde D\setminus\overline D$ and there exists a point $z_0$ in the intersection of the plane $\{(z_1,z_2)\in\mathbb C^2~|~z_1=0\}$ with $D$ which belongs to the same connected component of the intersection of that plane with $\widetilde D$ as the origin. Let us denote by $B(z_1,z_2)$ the $(2,1)$-form defined by $$B(z_1,z_2)=\frac{\overline z_1~d\overline z_2-\overline z_2~d\overline z_1}{|z|^4}\wedge dz_1\wedge dz_2.$$ It is derived from the Bochner-Martinelli kernel in $\mathbb C^2$ and is a $\ol\partial$-closed form on $\mathbb C^2\setminus\{0\}$. A direct computation gives $\ol\partial\big(\frac{\overline z_2}{|z|^2}\big)=\frac{z_1(\overline z_1~d\overline z_2-\overline z_2~d\overline z_1)}{|z|^4}$, so the $(2,0)$-form $\frac{\overline z_2}{|z|^2}~dz_1\wedge dz_2$, whose coefficient is in $L^1_{loc}$, defines a current in $\mathbb C^2$ which satisfies $$\ol\partial\Big(\frac{\overline z_2}{|z|^2}~dz_1\wedge dz_2\Big)=z_1B(z_1,z_2)\qquad \text{ on }\mathbb C^2\setminus\{0\}.$$ On the other hand, if $\check{H}^{2,1}(D)=0$, there exists an extendable $(2,0)$-current $v$ such that $\ol\partial v=B$ on $D$ and by the regularity of $\ol\partial$ in bidegree $(2,1)$, $v$ is smooth on $D$, since $B$ is smooth on $\mathbb C^2\setminus\{0\}$. Set $$F=z_1v-\frac{\overline z_2}{|z|^2}~dz_1\wedge dz_2.$$ Then $F$ is a holomorphic $(2,0)$-form on $D$, so its coefficient $F_{12}$ should extend holomorphically to $\widetilde D$, but we have $F_{12}(0,z_2)=-\frac{1}{z_2}$ on $D\cap\{z_1=0\}$, which is holomorphic and singular at $z_2=0$. This gives the contradiction since $0\in\widetilde D\setminus \overline D$. This proves that (ii) $\Rightarrow$ (i). For the converse, first note that if $D$ is a pseudoconvex domain in $\mathbb C^2$, then $\mathbb C^2\setminus D$ is connected and by Proposition \ref{CNconnected}, we have $H^{0,1}_{\overline D,\infty}(\mathbb C^2)=0$. Then we apply Theorem 5 in \cite{CS2012} to get that if $D$ is pseudoconvex with Lipschitz boundary, then $H^{0,1}_\infty(\mathbb C^2\setminus D)$ is Hausdorff. Let us prove that if $H^{0,1}_\infty(\mathbb C^2\setminus D)$ is Hausdorff, then $H^{0,2}_{\overline D,\infty}(\mathbb C^2)$ is Hausdorff. Let $f$ be a $\ol\partial$-closed $(0,2)$-form on $\mathbb C^2$ with support contained in $\overline D$ such that for any $\ol\partial$-closed $(2,0)$-current $T$ on $D$ extendable as a current to $\mathbb C^2$, we have $<T,f>=0$. Since $H^{0,2}(\mathbb C^2)=0$, there exists a smooth $(0,1)$-form $g$ on $\mathbb C^2$ such that $\ol\partial g=f$ on $\mathbb C^2$, in particular $\ol\partial g=0$ on $\mathbb C^2\setminus\overline D$.
Let $S$ be any $\ol\partial$-closed $(2,1)$-current on $\mathbb C^2$ with compact support in $\mathbb C^2\setminus D$; then, since $H^{2,1}_c(\mathbb C^2)=0$, there exists a compactly supported $(2,0)$-current $U$ on $\mathbb C^2$ such that $\ol\partial U=S$ and in particular $\ol\partial U=0$ on $D$. Thus $$<S,g>=<\ol\partial U,g>=<U,\ol\partial g>=<U,f>=0,$$ by hypothesis on $f$. Therefore the Hausdorff property of $H^{0,1}_{\infty}(\mathbb C^2\setminus D)$ implies there exists a smooth function $h$ on $\mathbb C^2\setminus D$ such that $\ol\partial h=g$. Let $\widetilde h$ be a smooth extension of $h$ to $\mathbb C^2$, then $u=g-\ol\partial\widetilde h$ is a smooth form with support in $\overline D$ and $$\ol\partial u=\ol\partial (g-\ol\partial\widetilde h)=\ol\partial g=f.$$ This proves that $H^{0,2}_{\overline D,\infty}(\mathbb C^2)$ is Hausdorff, which proves that (i) $\Rightarrow$ (ii). \end{proof} \section{The case of the unbounded Hartogs triangle in $\mathbb C^2$} In $\mathbb C^2$, let us define the domains $\mathbb H^+$ and $\mathbb H^-$ by \begin{align*} \mathbb H^+&=\{(z,w)\in\mathbb C^2~|~|z|<|w|\}\\ \mathbb H^-&=\{(z,w)\in\mathbb C^2~|~|z|>|w|\} \end{align*} then $\mathbb H^+\cap\mathbb H^-=\emptyset$ and $\overline\mathbb H^+\cup\overline\mathbb H^-=\mathbb C^2$. Let us denote by $H^{0,1}_{\overline\mathbb H^-,\infty}(\mathbb C^2)$ (resp. $H^{0,1}_{\overline\mathbb H^-,cur}(\mathbb C^2)$, $H^{0,1}_{\overline\mathbb H^-,L^2_{loc}}(\mathbb C^2)$, $H^{0,1}_{\overline\mathbb H^-,\mathcal C^k}(\mathbb C^2)$) the Dolbeault cohomology group of bidegree $(0,1)$ for smooth forms (resp. currents, $L^2_{loc}$-forms, $\mathcal C^k$-forms) with support in $\overline\mathbb H^-$. The vanishing of these groups means that one can solve the $\ol\partial$ equation with prescribed support in $\overline\mathbb H^-$ in the smooth category (resp. the space of currents, the space of $L^2_{loc}$-forms, the space of $\mathcal C^k$-forms). \medskip We can apply Propositions \ref{CNsmooth} and \ref{CSsmooth} for $\Omega=\mathbb H^-$, since $H^{0,1}(\mathbb C^2)=0$, and we get \begin{prop}\label{smooth} We have $H^{0,1}_{\overline\mathbb H^-,\infty}(\mathbb C^2)=0$ if and only if any holomorphic function on $\mathbb H^+$ which is smooth on $\overline\mathbb H^+$ extends as a holomorphic function to $\mathbb C^2$. \end{prop} \begin{prop}\label{ext} Any holomorphic function on $\mathbb H^+$ which is smooth on $\overline\mathbb H^+$ extends as a holomorphic function to $\mathbb C^2$. \end{prop} \begin{proof} Let $f\in{\mathcal C}^\infty(\overline\mathbb H^+)\cap \mathcal O(\mathbb H^+)$. By Sibony's result (\cite{Si}, page 220), for any $R>0$, the restriction of $f$ to $\mathbb H^+\cap\Delta(0,R)\times\Delta(0,R)$ extends holomorphically to the bidisc $\Delta(0,R)\times\Delta(0,R)$ and then by analytic continuation $f$ extends holomorphically to $\mathbb C^2$. \end{proof} It follows immediately from Proposition \ref{smooth} and Proposition \ref{ext} that \begin{cor}\label{vanishsmooth} $H^{0,1}_{\overline\mathbb H^-,\infty}(\mathbb C^2)=0$. \end{cor} Let us consider now the case of currents. We can apply Proposition \ref{CNCk} to get \begin{prop}\label{cur} Assume we have $H^{0,1}_{\overline\mathbb H^-,\mathcal C^k}(\mathbb C^2)=0$, $k\geq 0$, then any holomorphic function on $\mathbb H^+$, which is of class $\mathcal C^{k+1}$ on $\overline\mathbb H^+$, extends as a holomorphic function to $\mathbb C^2$.
\end{prop} \begin{thm}\label{ck} For any $k\geq 0$, $H^{0,1}_{\overline\mathbb H^-,\mathcal C^k}(\mathbb C^2)\neq 0$, and $H^{0,1}_{\overline\mathbb H^-,cur}(\mathbb C^2)\neq 0$. \end{thm} \begin{proof} Let us consider the function $h$ defined on $\mathbb H^+$ by $h(z,w)=z^l (\frac{z}{w})$, $l\geq 0$. It is of class $\mathcal C^{k+1}$ on $\overline\mathbb H^+$, if $l\geq k+2$, but does not extend as a holomorphic function to $\mathbb C^2$. In fact, if $h$ admitted a holomorphic extension $\widetilde h$ to $\mathbb C^2$, we would have $$\widetilde h(z,w)=z^l(\frac{z}{w})\quad {\rm on}\quad \mathbb C^2\setminus\{w=0\},$$ which is not bounded near $\{(z,w)\in\mathbb C^2~|~z\neq 0, w=0\}$. By Proposition \ref{cur}, we get $H^{0,1}_{\overline\mathbb H^-,\mathcal C^k}(\mathbb C^2)\neq 0$. Then, using Proposition \ref{injective}, it follows that $H^{0,1}_{\overline\mathbb H^-,cur}(\mathbb C^2)\neq 0$. \end{proof} Proposition \ref{smooth} still holds if we replace smooth forms by $W^1_{loc}$-forms (for $D\subset\mathbb C^2$, $W^1_{loc}(\overline D)$ is the space of functions which are in $W^1(\overline D\cap B(0,R))$ for any $R>0$) in the following way \begin{prop}\label{L2} We have $H^{0,1}_{\overline\mathbb H^-,L^2_{loc}}(\mathbb C^2)=0$ if and only if any function $f\in\mathcal O(\mathbb H^+)\cap W^1_{loc}(\overline\mathbb H^+)$, which is the restriction to $\overline\mathbb H^+$ of a function in $W^1_{loc}(\mathbb C^2)$, extends as a holomorphic function to $\mathbb C^2$. \end{prop} \begin{thm} $H^{0,1}_{\overline\mathbb H^-,L^2_{loc}}(\mathbb C^2)\neq 0$. \end{thm} \begin{proof} Let us consider the function $h$ defined on $\mathbb H^+$ by $h(z,w)=z^3 (\frac{z}{w})$. It is of class $\mathcal C^2$ on $\overline\mathbb H^+$ and it is in $W^{1}_{loc}(\overline\mathbb H^+)$ and extends as a $\mathcal C^2$ function to $\mathbb C^2$ by the Whitney extension theorem, but does not extend as a holomorphic function to $\mathbb C^2$. In fact, if $h$ admitted a holomorphic extension $\widetilde h$ to $\mathbb C^2$, we would have $$\widetilde h=z^3(\frac{z}{w})\quad {\rm on}\quad \mathbb C^2\setminus\{w=0\},$$ which is not bounded near $\{(z,w)\in\mathbb C^2~|~z\neq 0, w=0\}$. By Proposition \ref{L2}, we get $H^{0,1}_{\overline\mathbb H^-,L^2_{loc}}(\mathbb C^2)\neq 0$. \end{proof} {\parindent=0pt{\bf Remark}: Note that if we replace $\mathbb H^-$ by the classical Hartogs triangle $\mathbb T^-=\mathbb H^-\cap\Delta\times\Delta$, where $\Delta$ is the unit disc in $\mathbb C$, then by Proposition \ref{CNconnected} we have $$H^{0,1}_{\overline\mathbb T^-,cur}(\mathbb C^2)=H^{0,1}_{\overline\mathbb T^-,L^2_{loc}}(\mathbb C^2)=H^{0,1}_{\overline\mathbb T^-,\infty}(\mathbb C^2)=0.$$ So, for solving the $\ol\partial$-equation with prescribed support, it makes a real difference whether the support is a bounded or an unbounded domain.} \section{The case of the Hartogs triangles in ${\mathbb{C}}\mathbb P^2$} In ${\mathbb{C}}\mathbb P^2$, we denote the homogeneous coordinates by $[z_0:z_1:z_2]$. On the domain where $z_0\neq 0$, we set $z=\frac{z_1}{z_0}$ and $w=\frac{z_2}{z_0}$. Let us define the domains $\mathbb H^+$ and $\mathbb H^-$ by \begin{align*} \mathbb H^+&=\{[z_0:z_1:z_2]\in{\mathbb{C}}\mathbb P^2~|~|z_1|<|z_2|\}\\ \mathbb H^-&=\{[z_0:z_1:z_2]\in{\mathbb{C}}\mathbb P^2~|~|z_1|>|z_2|\} \end{align*} then $\mathbb H^+\cap\mathbb H^-=\emptyset$ and $\overline\mathbb H^+\cup\overline\mathbb H^-={\mathbb{C}}\mathbb P^2$.
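Let us also record two elementary facts about this geometry: the two domains are exchanged by the holomorphic involution $[z_0:z_1:z_2]\mapsto[z_0:z_2:z_1]$ of ${\mathbb{C}}\mathbb P^2$, and they share the common boundary $$b\mathbb H^+=b\mathbb H^-=\{[z_0:z_1:z_2]\in{\mathbb{C}}\mathbb P^2~|~|z_1|=|z_2|\}.$$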
These domains are called Hartogs' triangles in ${\mathbb{C}}\mathbb P^2$. The boundary of the Hartogs triangles provides an example of a non-Lipschitz Levi-flat hypersurface (see \cite{HI}). For $k\geq 0$ or $k=\infty$, we denote by $H^{0,1}_{\overline\mathbb H^-,\mathcal C^k}({\mathbb{C}}\mathbb P^2)$ (resp. $H^{0,1}_{\overline\mathbb H^-,cur}({\mathbb{C}}\mathbb P^2)$, $H^{0,1}_{\overline\mathbb H^-,L^2}({\mathbb{C}}\mathbb P^2)$) the Dolbeault cohomology group of bidegree $(0,1)$ for $\mathcal C^k$-smooth forms (resp. currents, $L^2$-forms) with support in $\overline\mathbb H^-$. Again, the vanishing of these groups means that one can solve the $\ol\partial$ equation with prescribed support in $\overline\mathbb H^-$ in the $\mathcal C^k$-smooth category (resp. the space of currents, the space of $L^2$-forms). \medskip We can also apply Propositions \ref{CNsmooth} and \ref{CSsmooth} for $\Omega=\mathbb H^-$, since $H^{0,1}({\mathbb{C}}\mathbb P^2)=0$, and we get \begin{prop} We have, for $k\geq 0$ and for $k=\infty$, $H^{0,1}_{\overline\mathbb H^-,\mathcal C^k}({\mathbb{C}}\mathbb P^2)=0$ if and only if any holomorphic function on $\mathbb H^+$ which is $\mathcal C^{k+1}$-smooth on $\overline\mathbb H^+$ extends as a holomorphic function to ${\mathbb{C}}\mathbb P^2$. \end{prop} \begin{prop} Any holomorphic function on $\mathbb H^+$ which is continuous on $\overline\mathbb H^+$ is constant. \end{prop} \begin{proof} Let $f\in \mathcal C(\overline\mathbb H^+)\cap \mathcal O(\mathbb H^+)$. Notice that the boundary $b \mathbb H^+$ of $\mathbb H^+$ is foliated by a family of compact complex curves described in non-homogeneous coordinates by \begin{equation} S_\theta =\{ z=e^{i\theta} w\},\qquad \theta\in \mathbb R.\end{equation} For each fixed $\theta$, the restriction of $f$ to $S_\theta$ is a continuous $CR$ function on the compact Riemann surface $S_\theta$. Thus $f$ must be a constant on each $S_\theta$. Since every Riemann surface $S_\theta$ contains the point $(0,0)$, this implies $f$ must be constant on $b \mathbb H^+$. Since $f$ is holomorphic on $\mathbb H^+$ and continuous up to the boundary, the maximum principle then implies that $f$ is constant on $\overline\mathbb H^+$. \end{proof} Note that in the case of the unbounded Hartogs triangle in $\mathbb C^2$, the function $f$ needs to be of class ${\mathcal C}^\infty$ on $\overline\mathbb H^+$ to be extendable as a holomorphic function to $\mathbb C^2$ (see Proposition \ref{smooth} and the beginning of the proof of Theorem \ref{ck}). But in ${\mathbb{C}}\mathbb P^2$, in contrast to $\mathbb C^2$, we get from the previous propositions (compare Corollary \ref{vanishsmooth} and Theorem \ref{ck}) that \begin{cor} For each $k\geq 0$, $H^{0,1}_{\overline\mathbb H^-,\mathcal C^k}({\mathbb{C}}\mathbb P^2)=0$ and $H^{0,1}_{\overline\mathbb H^-,\infty}({\mathbb{C}}\mathbb P^2)=0$. \end{cor} As in the case of $\mathbb C^2$, we get for extendable currents \begin{prop}\label{curproj} Suppose that $H^{0,1}_{\overline\mathbb H^-,cur}({\mathbb{C}}\mathbb P^2)=0$. Then any holomorphic function on $\mathbb H^+$, which is extendable in the sense of currents, is constant. \end{prop} \begin{thm} $H^{0,1}_{\overline\mathbb H^-,cur}({\mathbb{C}}\mathbb P^2)$ does not vanish and is Hausdorff. \end{thm} \begin{proof} Let us consider the function $h$ defined on the open subset $\mathbb H^+$ of ${\mathbb{C}}\mathbb P^2$ by $h([z_0:z_1:z_2])=\frac{z_1}{z_2}$. It is holomorphic and bounded and hence defines an extendable current, but it is not constant, so by Proposition \ref{curproj}, we get $H^{0,1}_{\overline\mathbb H^-,cur}({\mathbb{C}}\mathbb P^2)\neq 0$.
By the Serre duality, to prove that $H^{0,1}_{\overline\mathbb H^-,cur}({\mathbb{C}}\mathbb P^2)$ is Hausdorff, it is sufficient to prove that $H^{2,2}_\infty(\overline\mathbb H^-)=0$. Let $f$ be a smooth $(2,2)$-form on $\overline\mathbb H^-$ and let $U$ be a neighborhood of $\overline\mathbb H^-$; we can choose $U$ such that $\overline U$ is a connected proper subset of ${\mathbb{C}}\mathbb P^2$. Then $f$ extends as a smooth $(2,2)$-form on $U$, denoted $\widetilde f$. By Malgrange's theorem, the top degree Dolbeault cohomology group $H^{2,2}(U)$ vanishes since $U$ is a non-compact connected complex manifold. Thus there exists a smooth $(2,1)$-form $u$ on $U$ such that $\ol\partial u=\widetilde f$ on $U$. Then $v=u_{|_{\overline\mathbb H^-}}$ is a smooth form on $\overline\mathbb H^-$ which satisfies $\ol\partial v=f$ on $\mathbb H^-$. \end{proof} Let us now consider the $L^2$ Dolbeault cohomology with prescribed support in a Hartogs triangle in ${\mathbb{C}}\mathbb P^2$. As usual we endow $\mathbb H^+$ with the restriction of the Fubini-Study metric of ${\mathbb{C}} \mathbb{P}^2$. The following proposition was already proved as Proposition 6 in \cite{CS2012}. \begin{prop}\label{holoL2} Let $\mathbb H^+\subset{\mathbb{C}}\mathbb P^2$ be the Hartogs' triangle. Then we have the following: \begin{enumerate} \item The Bergman space of $L^2$ holomorphic functions $L^2(\mathbb H^+)\cap\mathcal{O}(\mathbb H^+)$ on the domain $\mathbb H^+$ separates points in $\mathbb H^+$. \item There exist nonconstant functions in the space $W^1(\mathbb H^+)\cap\mathcal{O}(\mathbb H^+)$. However, this space does not separate points in $\mathbb H^+$ and is not dense in the Bergman space $L^2(\mathbb H^+)\cap \mathcal{O}(\mathbb H^+)$. \item Let $f\in W^2(\mathbb H^+)\cap \mathcal{O}(\mathbb H^+)$ be a holomorphic function on $\mathbb H^+$ which is in the Sobolev space $ W^2(\mathbb H^+)$. Then $f$ is a constant. \end{enumerate} \end{prop} \begin{prop}\label{extW1} Let $\mathbb H^+\subset{\mathbb{C}}\mathbb P^2$ be the Hartogs' triangle. Any function $f\in W^1(\mathbb H^+)\cap\mathcal{O}(\mathbb H^+)$ can be extended to a function in $W^1({\mathbb{C}}\mathbb{P}^2)$. \end{prop} \begin{proof} In the non-homogeneous holomorphic coordinates $(z,w)$ for $\Bbb H^+$, any function $f\in W^1(\mathbb H^+)\cap\mathcal{O}(\mathbb H^+)$ is a linear combination of the functions (see Proposition 6 in \cite{CS2012}) $$f_k(z,w)= \left( \frac zw\right)^k, \qquad k\in \mathbb N.$$ It suffices to prove the proposition for each $f_k(z,w)$. Let $\chi\in C^\infty(\mathbb R)$ be a function such that $\chi(t)=0$ if $t\le 0$ and $\chi(t)=1$ if $t\ge 1$. Let $\tilde f_k$ be the function defined by \begin{equation} \tilde f_k(z,w)= \chi\left(1+\frac 13(1- \frac {|z|^2 }{|w|^2 })\right)f_k (z,w).\end{equation} On $|z|<|w|$, it is easy to see that $\tilde f_k=f_k$. Thus $\tilde f_k$ is an extension of $f_k$ to ${\mathbb{C}}\mathbb{P}^2$. To see that $\tilde f_k$ is in $W^1({\mathbb{C}}\mathbb{P}^2)$, we first note that the function $$\chi\left(1+\frac 13(1- \frac {|z|^2 }{|w|^2 })\right)=0$$ when restricted to $\{|z|\ge 2|w|\}$. Thus it is supported in $\{|z|\le 2|w|\}$. On its support, the function $\frac {|z|}{|w|}$ is bounded. Using this fact and the chain rule, we have that \begin{equation}\label{deriv} |\nabla \chi\left(1+\frac 13(1- \frac {|z|^2 }{|w|^2 })\right)|\le C( \sup|\chi'|) \frac 1{|w|}\le C\frac 1{|w|}.\end{equation} Arguing as before, we see that the function $\frac 1{|w|}$ is in $L^2$ on $\{|z|\le 2|w|\}$.
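For the reader's convenience, here is the computation behind this integrability claim, written in the affine chart (near the singular point the Fubini--Study volume is comparable to the Euclidean volume $dV$): $$\int_{\{|z|\le 2|w|,\,|w|\le 1\}}\frac{dV(z,w)}{|w|^{2}}=\int_{\{|w|\le 1\}}\frac{1}{|w|^{2}}\Big(\int_{\{|z|\le 2|w|\}}dA(z)\Big)\,dA(w)=\int_{\{|w|\le 1\}}4\pi\,dA(w)=4\pi^{2}<\infty,$$ while on $\{|w|\ge 1\}$ the function $\frac 1{|w|}$ is bounded.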
Since the function $f_k$ is bounded on the set $\{|z|\le 2|w|\}$, we conclude from \eqref{deriv} that the derivatives of $\tilde f_k$ are in $L^2({\mathbb{C}}\mathbb{P}^2)$. Thus $\tilde f_k$ is an extension in $W^1({\mathbb{C}}\mathbb{P}^2)$ of $f_k$. \end{proof} {\parindent=0pt{\bf Remark.} If $D$ is a bounded domain with Lipschitz boundary, then any function $f\in W^1(D)$ extends as a function in $W^1({\mathbb{C}}\mathbb P^2)$. It is not known if this is true for the Hartogs triangle $\mathbb H^+$. In the proof of Proposition \ref{extW1}, we have used the fact that the functions $f_k$ are in $W^1(\mathbb H^+)$ and {\it bounded} on $\mathbb H^+$.} \begin{thm}\label{infinite} Let $\mathbb H^-\subset{\mathbb{C}}\mathbb P^2$ be the Hartogs' triangle. Then the cohomology group $H^{0,1}_{\overline\mathbb H^-, L^2}({\mathbb{C}}\mathbb P^2)\neq 0$ and is infinite dimensional. \end{thm} \begin{proof} We recall that $\mathbb H^+= {\mathbb{C}}\mathbb{P}^2\setminus \overline \mathbb H^-.$ From Proposition \ref{holoL2}, the space of holomorphic functions in $W^1(\mathbb H^+)\cap\mathcal{O}(\mathbb H^+)$ is infinite dimensional. In the non-homogeneous coordinates, consider the holomorphic functions of the type $f_k=( \frac zw)^k$, $k\in \mathbb N$. We define the operator $\ol\partial_{\tilde c}$ as the weak minimal realization of $\ol\partial$; then the domain of $\ol\partial_{\tilde c}$ is the space of $L^2$ forms $f$ in ${\mathbb{C}}\mathbb P^2$ with support in $\overline\mathbb H^-$ such that $\ol\partial f$ is also an $L^2$ form in ${\mathbb{C}}\mathbb P^2$. Using Proposition \ref{extW1}, each holomorphic function $f_k$ can be extended to a function $\tilde f_k\in W^1({\mathbb{C}} \mathbb P^2)$. Suppose that $H^{0,1}_{\overline\mathbb H^-, L^2}({\mathbb{C}}\mathbb P^2)=0$. Then we can solve $\bar \partial_{\tilde c} u_k= \overline{\partial} \tilde f_k$ in ${\mathbb{C}}\mathbb P^2$ with prescribed support for $u_k$ in $\overline \mathbb H^-$. Let $H_k=\tilde f_k-u_k.$ Then $H_k$ is a holomorphic function in ${\mathbb{C}}\mathbb P^2$, hence a constant. But $H_k=f_k$ on $\mathbb H^+$, a contradiction. This implies that the space $H^{0,1}_{\overline\mathbb H^-, L^2}({\mathbb{C}}\mathbb P^2)$ is non-trivial. Next we prove that $H^{0,1}_{\overline\mathbb H^-, L^2}({\mathbb{C}}\mathbb P^2)$ is infinite dimensional. Each function $\tilde f_k$ corresponds to a (0,1)-form $ \overline{\partial} \tilde f_k$. We set $g_k=\overline{\partial} \tilde f_k$. Then $g_k$ is in $\text{Dom}(\overline{\partial}_{\tilde c})$ and satisfies $\overline{\partial}_{\tilde c} g_k=0.$ Thus it induces an element $[g_k]$ in $H^{0,1}_{\overline\mathbb H^-, L^2}({\mathbb{C}}\mathbb P^2)$. To see that the $[g_k]$'s are linearly independent, let $N>1$ be a positive integer and $F_N=\sum_{k=1}^{N} c_k f_k$, where $c_k$ are constants. Set $G_N=\sum_{k=1}^{N} c_k g_k.$ Suppose that $[G_N]=0$; then we can solve $\overline{\partial}_{\tilde c} u= G_N$, and the function $F_N$, holomorphic in $\mathbb H^+$, extends holomorphically to ${\mathbb{C}}\mathbb P^2$. Thus $F_N$ must be a constant and $c_1=\cdots=c_N=0.$ Thus the $[g_k]$'s are linearly independent. This proves that $H^{0,1}_{\overline\mathbb H^-, L^2}({\mathbb{C}}\mathbb P^2)$ is infinite dimensional.
\end{proof} {\parindent=0pt{\bf Remark.} It follows from Proposition \ref{injective} and Theorem \ref{infinite} that $H^{0,1}_{\overline\mathbb H^-, cur}({\mathbb{C}}\mathbb P^2)$ is also infinite dimensional.} \begin{lem}\label{range} The range of the strong $L^2$ closure of $\ol\partial$ \begin{equation}\overline{\partial}_s:L^2_{2,1}(\mathbb H^-)\to L^2_{2,2}(\mathbb H^-) \end{equation} is closed and equal to $L^2_{2,2}(\mathbb H^-)$. \end{lem} \begin{proof} We show that $\overline{\partial}_s$ has closed range in the top degree and that the range is all of $L^2_{2,2}(\mathbb H^-)$. Let $f\in L^2_{2,2}(\mathbb H^-)$. We extend $f$ to be zero outside $ \mathbb H^-$. Let $U$ be an open neighbourhood of $\overline \mathbb H^-$; then $f$ is in $L^2_{2,2}(U)$. We can choose $U$ such that $\overline U$ is a proper subset of ${\mathbb{C}}\mathbb P^2$ and $U$ has Lipschitz boundary. Since one can solve the $\overline{\partial}$ equation for top degree forms on $U$, there exists $u\in L^2_{2,1}(U)$ such that $$\overline{\partial} u=f$$ in the weak sense. It suffices to show that $f$ is in the range of $\overline{\partial}_s$. Since $U$ has Lipschitz boundary, using Friedrichs' lemma, there exists a sequence $u_\nu\in C^\infty(\overline U)$ such that $u_\nu\to u$ and $\overline{\partial} u_\nu\to f$ in $L^2_{2,2}(U)$. Restricting $u_\nu$ to $\overline \mathbb H^-$, we have that $u$ is in the domain of $\overline{\partial}_s$ and $$\overline{\partial}_s u=f.$$ Thus the range of $\overline{\partial}_s$ is equal to $L^2_{2,2}(\mathbb H^-)$. The lemma is proved. \end{proof} \begin{cor}\label{hausdorff} The cohomology group $H^{0,1}_{\overline\mathbb H^-, L^2}({\mathbb{C}}\mathbb P^2)$ is Hausdorff and infinite dimensional. \end{cor} \begin{thm} Let us consider the Hartogs' triangle $\mathbb H^-\subset{\mathbb{C}}\mathbb{P}^2$. Then the cohomology group $H^{2,1}_{\overline{\partial}_s, L^2}(\mathbb H^-)$ is infinite dimensional. \end{thm} \begin{proof} Suppose that $\overline{\partial}_s:L^2_{2,0}(\mathbb H^-)\to L^2_{2,1}(\mathbb H^-)$ does not have closed range. Then $H^{2,1}_{\overline{\partial}_s, L^2}(\mathbb H^-) $ is non-Hausdorff, hence infinite dimensional. Suppose that $\overline{\partial}_s:L^2_{2,0}(\mathbb H^-)\to L^2_{2,1}(\mathbb H^-)$ has closed range. Using Lemma \ref{range}, $\overline{\partial}_s:L^2_{2,1}(\mathbb H^-)\to L^2_{2,2}(\mathbb H^-)$ has closed range. From the $L^2$ Serre duality, $\overline{\partial}_{\tilde c} :L^2(\mathbb H^-) \to L^2_{0,1} (\mathbb H^-)$ and $\overline{\partial}_{\tilde c} :L^2_{0,1}(\mathbb H^-) \to L^2_{0,2} (\mathbb H^-)$ both have closed range. Furthermore, \begin{equation} H^{2,1}_{\overline{\partial}_s, L^2}(\mathbb H^-) \cong H^{0,1}_{\overline\mathbb H^-, L^2}({\mathbb{C}}\mathbb P^2).\end{equation} Thus from Theorem \ref{infinite}, it is infinite dimensional. \end{proof} \bigskip \noindent{\bf Remarks:} \begin{enumerate} \item Let $\mathbb T=\{(z_1,z_2)\in {\mathbb{C}}^2\mid |z_2|< |z_1|<1\}$ be the Hartogs triangle in ${\mathbb{C}}^2$. Then by Proposition \ref{CNconnected}, $$H^{0,1}_{\overline{\partial}_{\tilde c}, L^2}(\mathbb T) =H^{0,1}_{ \overline {\mathbb T}, L^2}({\mathbb{C}}^2)=0.$$ This is in sharp contrast to Corollary \ref{hausdorff}. It is well-known that $H^{0,1}(\mathbb T)=0$ since $\mathbb T$ is pseudoconvex, but $H^{0,1}_\infty(\overline{\mathbb T})$ (cohomology with forms smooth up to the boundary) is infinite dimensional (see \cite{Si}). In fact, $H^{0,1}_\infty(\overline{\mathbb T})$ is even non-Hausdorff (see \cite{LaSh2}).
We also refer the reader to the recent survey paper on the Hartogs triangle \cite{Sh}. \item If $D$ is a pseudoconvex domain in ${\mathbb{C}}\mathbb P^n$ with $C^2$ boundary, then we have $L^2$ existence theorems for $\overline{\partial}$ on $D$ for all degrees (see \cite{BC}, \cite{HI}, \cite{CSW}). This follows from the existence of bounded plurisubharmonic functions on pseudoconvex domains in ${\mathbb{C}}\mathbb P^n$ with $C^2$ boundary (see \cite{OS}). This is even true if $D$ has only Lipschitz boundary (see \cite{Ha}). \item If $D$ is a pseudoconvex domain in ${\mathbb{C}}\mathbb P^n$ with Lipschitz boundary, then $H^{p,q}_{L^2}(D)=0$ for all $q>0$. By the $L^2$ Serre duality (see \cite{CS2012}), we have $H^{0,1}_{\overline{\partial}_c, L^2} (D)= H^{0,1}_{\overline D, L^2}({\mathbb{C}}\mathbb P^n)=0.$ Corollary \ref{hausdorff} shows that the Lipschitz condition cannot be removed. \item From a result of Takeuchi \cite{Ta}, $\mathbb H^-$ is Stein. It is well-known that for any $p$, $0\le p\le 2$, $\overline{\partial} :L^2_{p,0}(\mathbb H^-,\text{loc}) \to L^2_{p,1}(\mathbb H^-,\text{loc})$ has closed range (see \cite{Ho}) and the cohomology $H^{p,1}_{L^2_{\text{loc}}}(\mathbb H^-)$ in the Fr\'echet space $L^2_{p,1}(\mathbb H^-,\text{loc})$ is trivial. \item For $(0,1)$-forms, the (weak) $L^2$ theory holds for any pseudoconvex domain without any regularity assumption on the boundary. The (weak) $L^2$ Cauchy-Riemann operator $\overline{\partial}:L^2(\mathbb H^-)\to L^2_{0,1}(\mathbb H^-)$ has closed range and $H^{0,1}_{L^2}(\mathbb H^-)=0$ (see \cite{HI} or \cite{CSW}). \item For $p=1$ or $p=2$, it is not known if the Cauchy-Riemann operator $\overline{\partial}:L^2_{p,0}(\mathbb H^-)\to L^2_{p,1}(\mathbb H^-)$ has closed range. It is also not known if $\overline{\partial}$ in the weak sense is equal to $\overline{\partial}_s$. \item It is not known if the strong $L^2$ Cauchy-Riemann operator $\overline{\partial}_s:L^2_{2,0}(\mathbb H^-)\to L^2_{2,1}(\mathbb H^-)$ has closed range. \end{enumerate}
\section{Introduction}\label{introduction} In the study of Edgeworth type expansions for the limiting distribution of the rightmost eigenvalue from Gaussian Random Matrix Ensembles, we are led to find large $n$ expansions of many key functions. For the derivation of the Tracy-Widom distribution, one needs the large $n$ limits of these functions, and they can all be expressed in terms of the pair $q$ and $p$, where $q$ is the Hastings-McLeod solution of the Painlev\'e II equation, behaving at infinity as the Airy function. The frequent appearance of these functions in the study of the largest eigenvalue of the Gaussian and Laguerre Random Matrix Ensembles points to the necessity of a study of these functions in their own right. We hope this will shed light on some derivations related to this aspect of Random Matrix Theory and on related fields making use of such functions. If one tries to read through a proof of an expansion relating various asymptotic functions, it is easy to get lost. But if the related functions are well known, the reader will probably have a different experience and therefore a better understanding of the techniques and tools used for the derivation. We present in this paper the derivations of those functions arising in the study of the largest eigenvalue for the Gaussian Ensemble of Random Matrix Theory, in the hope of achieving the goal set above. Before stating our results, we need to define our functions.\\ For a Gaussian ensemble of $n \times n$ matrices, the probability density that the eigenvalues lie in infinitesimal intervals about the points $x_{1}\> < \> \ldots \> < \> x_{n}$ is given by \begin{equation}\label{jpdfeig} \bP_{n,\beta}(x_{1},\cdots,x_{n})\; = \textrm{C}_{n\beta}\, \textrm{exp}\left(-\frac{\beta}{2}\, \sum_{1}^{n}x_{j}^{2}\right)\, \prod_{j<k}|x_{j}-x_{k}|^{\beta}, \end{equation} where $\beta=1$ corresponds to the Gaussian Orthogonal Ensemble (GOE$_n$), $\beta=2$ to the Gaussian Unitary Ensemble (GUE$_{n}$), and $\beta=4$ to the Gaussian Symplectic Ensemble (GSE$_{n}$). \\ Equation \eqref{jpdfeig} can also be represented as a determinant involving the variables $x_{i}$. For the simplest case $\beta=2$, we have \begin{equation}\label{guedist} \bP_{n,2}(x_{1},\cdots,x_{n})\; = \frac{1}{n!}(\det [\varphi_{j-1}(x_{i})])^{2}=\frac{1}{n!}\det[K_{n,2}(x_{i},x_{j})]_{i,j=1,\cdots,n} \end{equation} with \begin{equation}\label{guekernel} K_{n,2}(x,y)=\sum_{k=0}^{n-1}\varphi_{k}(x)\varphi_{k}(y)= \sqrt{\frac{n}{2}}\frac{\varphi_{n}(x)\varphi_{n-1}(y)-\varphi_{n-1}(x)\varphi_{n}(y)}{x-y} \end{equation} and \begin{equation*} \varphi_{k}(x)= {1\over (2^{k}k! \sqrt{\pi})^{1/2}} \, H_k(x)\, e^{-x^2/2} \quad \textrm{with} \quad H_{k}(x) \quad \textrm{the Hermite polynomials} \end{equation*} obtained by orthogonalizing the sequence $\{x^{k},\> k=0,\cdots,n-1\}$ with respect to $e^{-x^{2}}$ over $\mathbb{R}$. Using this representation, it can be shown that the probability distribution function of the largest eigenvalue $\lambda_{max}$ is given by the Fredholm determinant of the operator with kernel $K_{n,2}$ acting on the interval $(t,\> \infty)$, \begin{equation}\label{largeigvaldist} F_{n,2}(t)=\bP(\lambda_{max} \> < \> t)=\det(I-K_{n,2}). \end{equation} In finding the Edgeworth type expansion of $F_{n,2}$, one needs the large $n$ expansion of \eqref{guekernel} or, what amounts to the same thing, the large $n$ expansion of $\varphi_{n}$.
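For later reference, let us also recall the standard series representation behind the Fredholm determinant in \eqref{largeigvaldist}, $$\det(I-K_{n,2})\>=\>\sum_{k=0}^{\infty}\frac{(-1)^{k}}{k!}\int_{t}^{\infty}\!\cdots\int_{t}^{\infty}\det\left[K_{n,2}(x_{i},x_{j})\right]_{i,j=1}^{k}\,dx_{1}\cdots dx_{k},$$ which shows that any large $n$ expansion of the kernel \eqref{guekernel} translates into an expansion of $F_{n,2}(t)$.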
In \cite{Choup2}, we derived the following expression.\\ Let the rescaling function $\tau$ be defined by \begin{equation}\label{scaling} \tau (X)=\sqrt{2(n+c)}+ 2^{-\frac{1}{2}}n^{-\frac{1}{6}}X, \end{equation} then \begin{displaymath} \varphi_{n}(\tau(X))=n^{\frac{1}{6}} \left\{ \airy(X) + \frac{(2c-1)}{2} \airy^{\prime}(X) n^{-\frac{1}{3}} + \right. \left[ (10\,c^{2}-10\,c +\frac{3}{2})\, X \airy(X)\right. \end{displaymath} \begin{equation}\label{phi(n)} +\>\> X^2 \airy^{\prime}(X)\biggr]\frac{n^{-\frac{2}{3}}}{20} + O(n^{-1}) \airy(X) \biggr\} \end{equation} and \begin{displaymath} \varphi_{n-1}(\tau(X))=n^{\frac{1}{6}} \left\{ \airy(X) + \frac{(2c+1)}{2} \airy^{\prime}(X) n^{-\frac{1}{3}} + \right. \left[ (10\,c^{2}+10\,c +\frac{3}{2})\, X \airy(X) \right. \end{displaymath} \begin{equation}\label{phi(n-1)} + \>\> X^2 \airy^{\prime}(X)\biggr]\frac{n^{-\frac{2}{3}}}{20} + O(n^{-1}) \airy(X) \biggr\} \end{equation} $\airy$ being the Airy function. These two functions enable us to obtain the following expansion of the GUE kernel. \begin{eqnarray*} K_{n,2}(\tau(X),\tau(Y))\,d\tau(X) = \tau^{'}\,K_{n,2}(\tau(X),\tau(Y))\,dX = \biggl\{ K_{\airy}(X,Y) -c\airy(X)\airy(Y) n^{-\frac{1}{3}} + \end{eqnarray*} \begin{eqnarray*} \frac{1}{20}\biggl[(X + Y)\airy^{\prime}(X)\airy ^{\prime}(Y) - (X^2+XY+Y^2)\airy(X)\airy(Y) + \end{eqnarray*} \begin{equation}\label{hermitekernel} \left. \frac{-20c^2 +3 }{2}(\airy ^{\prime}(X)\airy(Y) + \airy(X)\airy ^{\prime}(Y)) \right]n^{-\frac{2}{3}} +O(n^{-1}) E(X,Y)\biggr\}dX. \end{equation} In deriving the finite but large $n$ probability distribution function of the largest eigenvalue using \eqref{hermitekernel} and the representation \eqref{largeigvaldist}, we have to factor out of \eqref{hermitekernel} the constant term (with respect to $n$) to obtain the representation \begin{equation*} F_{n,2}(\tau(t))=\det \biggl( \left(\,I\>-\>K_{\airy}(X,Y)\right)\cdot\biggl\{ \,I \> + \left(\,I\>-\>K_{\airy}(X,Y)\right)^{-1}\biggl[ \> c \airy(X)\airy(Y) n^{-\frac{1}{3}} - \end{equation*} \begin{equation*} \frac{1}{20}\biggl[(X + Y)\airy^{\prime}(X)\airy ^{\prime}(Y) - (X^2+XY+Y^2)\airy(X)\airy(Y) + \end{equation*} \begin{equation}\label{eq1} \left. \left.\left. \left.\frac{-20c^2 +3 }{2}(\airy ^{\prime}(X)\airy(Y) + \airy(X)\airy ^{\prime}(Y)) \right]n^{-\frac{2}{3}} +O(n^{-1}) E(X,Y) \right]\right\}\right). \end{equation} This Fredholm determinant is computed over the interval $(t,\> \infty)$. Thus to complete the determination of $F_{n,2}(\tau(t))$ we need to determine the action of the integral operator $(I-K_{\airy})^{-1}$ on $x^{i}\airy(x)$ and $x^{i}\airy^{'}(x)$, where $i=0,1,\cdots$. These are the special functions in the GUE case; they are independent of $n$ and well known in the literature (see for example \cite{Choup1,Choup2,Choup3,Trac1,Trac2,Trac3,Trac4,Trac5,Trac7,Trac8}). We simply recall the definitions of these $n$ independent functions here and then introduce their $n$ dependent counterparts. \begin{equation}\label{notation} K_{\airy}(X,Y)\>=\> \frac{\airy(X)\,\airy^{'}(Y)\>-\>\airy(Y)\,\airy^{'}(X)}{X-Y}=\int_{0}^{\infty}\airy(X+Z)\,\airy(Y+Z)\,dZ. \end{equation} \begin{equation}\label{rho} \rho(X,Y;s)=(I-K_{\airy})^{-1}(X,Y;s),\quad R(X,Y;s)=\rho(X,Y;s) \cdot K_{\airy}(X,Y) \end{equation} this last product is operator multiplication.
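A small identity, immediate from \eqref{rho} and used implicitly throughout, is the resolvent identity $$\rho\>=\>(I-K_{\airy})^{-1}\>=\>I+(I-K_{\airy})^{-1}K_{\airy}\>=\>I+R,$$ so that the kernels $\rho(X,Y;s)$ and $R(X,Y;s)$ differ only by the identity operator.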
\begin{equation}\label{Q} Q_{i}(x;s)\, =\, \int_{s}^{\infty}\rho(x,y;s)\,y^{i}\airy(y)\>dy, \end{equation} \begin{equation}\label{P} P_{i}(x;s)\, =\, \int_{s}^{\infty}\rho(x,y;s)\,y^{i}\airy^{'}(y)\>dy, \end{equation} \begin{equation}\label{q} q_{i}(s)\>=\> Q_{i}(s;s), \>\>\> p_{i}(s)\>=\> P_{i}(s;s), \>\> q_{0}(s):=q(s),\>\>p_{0}(s):=p(s) \end{equation} \begin{equation}\label{u} u_{i}(s)\,=\, (Q_{i},\airy),\quad v_{i}(s)\,=\, (P_{i},\airy),\>\>u_{0}(s):=u(s),\>\>v_{0}(s):=v(s) \end{equation} \begin{equation} \tilde{v}_{i}(s)=(Q_{i},\airy^{'}),\quad w_{i}(s)\,= \, (P_{i},\airy^{'}),\>\> w_{0}(s):=w(s),\>\> \textrm{and}\>\>\tilde{v}_{0}(s):=\tilde{v}(s). \end{equation} Here $(\,\cdot \,,\cdot \,)$ denotes the inner product on $L^{2}(s,\infty)$ and $i=0,1,2,\cdots$. These are all well known functions; this paper is concerned with the $n$ dependent counterparts, whose definitions are similar in nature. The changes needed here concern the kernel definition. The operator kernel is of the same form as \eqref{guekernel}, \begin{equation}\label{guelkernel} K_{n}(x,y)=\frac{\varphi(x) \psi(y)-\psi(x)\varphi(y)}{x-y} \end{equation} with \begin{equation*} \varphi(x)=\sqrt[4]{\frac{n}{2}}\>\varphi_{n}(x)\>\>\>\textrm{and}\>\>\> \psi(x)=\sqrt[4]{\frac{n}{2}}\varphi_{n-1}(x). \end{equation*} Relating this to the previous set of functions, $\varphi$ and $\psi$ here play the roles of $\airy$ and $\airy^{'}$. We have the following functions, \begin{equation}\label{rhon} \rho_{n}(x,y;t):=(I-K_{n})^{-1}(x,y;t), \quad R_{n}(x,y;t):=\int_{t}^{\infty}\rho_{n}(x,z;t)\> K_{n}(z,y;t)\>dz \end{equation} These are kernels of integral operators on $(t,\> \infty)$. \begin{equation}\label{QniPni} Q_{n,i}(x;t):=\int_{t}^{\infty}\rho_{n}(x,y;t)y^{i}\varphi(y)\>dy ,\quad P_{n,i}(x;t):=\int_{t}^{\infty}\rho_{n}(x,y;t)y^{i}\psi(y) \>dy \end{equation} or \begin{equation*} Q_{n}(x;t):=(\rho_{n}, \varphi)_{(t,\>\infty)}\quad P_{n}(x;t):=(\rho_{n},\psi)_{(t,\>\infty)}. \end{equation*} And the other functions are \begin{equation}\label{qn} q_{n,i}(t)\>=\> Q_{n,i}(t;t), \>\>\> p_{n,i}(t)\>=\> P_{n,i}(t;t), \>\> q_{n,0}(t):=q_{n}(t),\>\>p_{n,0}(t):=p_{n}(t) \end{equation} \begin{equation}\label{un} u_{n,i}(t)\,=\, (Q_{n,i},\varphi),\quad v_{n,i}(t)\,=\, (P_{n,i},\varphi),\>\>u_{n,0}(t):=u_{n}(t),\>\>v_{n,0}(t):=v_{n}(t) \end{equation} \begin{equation}\label{tildev} \tilde{v}_{n,i}(t)=(Q_{n,i},\psi),\quad w_{n,i}(t)\,= \, (P_{n,i},\psi),\>\> w_{n,0}(t):=w_{n}(t),\>\> \textrm{and}\>\>\tilde{v}_{n,0}(t):=\tilde{v}_{n}(t). \end{equation} Here $(\,\cdot \,,\cdot \,)$ denotes the inner product on $L^{2}(t,\infty)$ and $i=0,1,2,\cdots$. \\ We would like to point out the following ambiguity in these definitions: the $n$-independent functions carry a subscript $i$, whereas the $n$ dependent ones carry the subscript $n$. We were not able to find a suitable notation distinguishing the set of functions attached to the ensemble of $n\times n$ matrices, but the choice of keeping with the original Tracy and Widom notation was made in part to help the reader go through the topic without too much confusion. Thus, whenever we use the subscript $n$, it refers to the size of the underlying matrix ensemble, and when $i$ is used, it refers to the exponent of the variable $x$ appearing in the definition of that specific function, with $i$ taking values in $0,1,2,\cdots$.
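As an illustration of this convention, taking $i=1$ in \eqref{QniPni} and \eqref{qn} gives $$q_{n,1}(t)\>=\>Q_{n,1}(t;t)\>=\>\int_{t}^{\infty}\rho_{n}(t,y;t)\>y\>\varphi(y)\>dy,$$ where the subscript $n$ records the size of the underlying ensemble and the subscript $1$ the power of the variable in the integrand.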
One exception is when we use a second subscript to distinguish between the three beta ensembles $\beta=1,2,4$; in this case we will remind the reader of the significance of those values.\\ In deriving the probability distribution function of the largest eigenvalue $F_{n,1}(t)$ for the orthogonal ensemble, and $F_{n,4}(t)$ for the symplectic ensemble, we encounter new sets of functions obeying the same set of relations.\\ If we define $\ve$ to be the integral operator with kernel $\varepsilon(x,y)=\frac{1}{2}\textrm{sign} (x-y)$, then \begin{equation}\label{Qepsilon} Q_{n,\ve}(x;t):=\int_{t}^{\infty}\rho_{n}(x,y;t)\ve(\varphi)(y)\>dy, \quad q_{n,\ve}(t):=Q_{n,\ve}(t;t) \end{equation} \begin{equation} P_{n,\varepsilon}(x;t):=\int_{t}^{\infty}\rho_{n}(x,y;t)\varepsilon(\psi)(y)\> dy, \quad p_{n,\varepsilon}(t):=P_{n,\varepsilon}(t;t). \end{equation} In a similar way we define \begin{equation}\label{unepsilon} u_{n,\varepsilon}(t):=\int_{t}^{\infty}Q_{n,\varepsilon}(x;t) \> \varphi(x)\>dx, \quad v_{n,\varepsilon}(t):=\int_{t}^{\infty}P_{n,\varepsilon}(x;t)\>\varphi(x)\>dx. \end{equation} \begin{equation}\label{tildevnepsilon} \tilde{v}_{n, \varepsilon}(t):=\int_{t}^{\infty}Q_{n,\varepsilon}(x;t)\>\psi(x)\> dx, \quad \textrm{and} \quad w_{n,\varepsilon}(t)=\int_{t}^{\infty}P_{n,\varepsilon}(x;t)\>\psi(x)\>dx. \end{equation} And finally we also have for the Gaussian Orthogonal Ensemble \begin{equation}\label{calRn1} \mathcal{R}_{n,1}(t):=\int_{-\infty}^{t} R_{n}(x,t;t)\>dx,\quad \mathcal{P}_{n,1}(t):=\int_{-\infty}^{t} P_{n}(x;t)\>dx,\quad \mathcal{Q}_{n,1}(t):=\int_{-\infty}^{t} Q_{n}(x;t)\>dx \end{equation} (Note that the second subscript refers to $\beta$ being $1$ for the orthogonal ensemble and has nothing to do with the previous discussion of $i$ and $n$.) For the Gaussian Symplectic Ensemble we have \begin{equation}\label{calRn4} \mathcal{R}_{n,4}(t):=\int_{-\infty}^{\infty}\varepsilon(x,t)R_{n}(x,t;t)\>dx,\quad \mathcal{P}_{n,4}(t)=\int_{-\infty}^{\infty}\varepsilon(x,t)\>P_{n}(x;t)\>dx,\quad \textrm{and} \quad \end{equation} \begin{equation*} \mathcal{Q}_{n,4}(t):=\int_{-\infty}^{\infty}\varepsilon(x,t)\>Q_{n}(x;t)\>dx, \end{equation*} and the $4$ refers to $\beta$ being $4$ for the Gaussian Symplectic Ensemble.\\ We have the large $n$ expansion of most of these functions from previous work. What is new in this paper are the large $n$ expansions of $Q_{n,i}$ and $P_{n,i}$; these can be used to derive expansions for $u_{n,i},\>\>v_{n,i},\>\>\tilde{v}_{n,i},\>\>w_{n,i}$. We also have closed formulas for $u_{n,\ve},\>\>\tilde{v}_{n,\ve},\>\>q_{n,\ve},\>\> \mathcal{Q}_{n,1},\>\> \mathcal{P}_{n,1},\>\> \mathcal{R}_{n,1},\>\> \mathcal{Q}_{n,4},\>\> \mathcal{P}_{n,4}$, and $\mathcal{R}_{n,4}$.\\ In the second section we will give a brief justification of $Q_{n,i}$ and $P_{n,i}$, and follow in the third section with the justification of these last $9$ functions. Again, the motivation for the derivation of these functions is their appearance in the Edgeworth type expansion of the largest eigenvalue probability distribution function for the Gaussian Orthogonal and Symplectic Ensembles.
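Before proceeding, we record for later use how $\ve$ acts on an integrable function $f$; directly from the definition of its kernel, $$\ve(f)(x)\>=\>\int_{-\infty}^{\infty}\varepsilon(x,y)\,f(y)\>dy\>=\>\frac{1}{2}\left(\int_{-\infty}^{x}f(y)\>dy-\int_{x}^{\infty}f(y)\>dy\right),$$ and this is the form in which $\ve(\varphi)$ and $\ve(\psi)$ enter \eqref{Qepsilon}--\eqref{tildevnepsilon}.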
\\ \section{Epsilon independent functions} Building on \eqref{phi(n)}, \eqref{phi(n-1)} and \eqref{hermitekernel} we find that \begin{displaymath} Q_{n,i}(x):=((I-K_{n,2})^{-1}(x,y;t),y^{i}\varphi(y))=\int_{t}^{\infty}(I-K_{n,2})^{-1}(x,y;t)\>y^{i}\varphi(y)\>dy \end{displaymath} \begin{displaymath} P_{n,i}(x):=((I-K_{n,2})^{-1}(x,y;t),y^{i}\psi(y))=\int_{t}^{\infty}(I-K_{n,2})^{-1}(x,y;t)\>y^{i}\psi(y)\>dy \end{displaymath} Therefore we need to find $\rho_{n}(x,y;t)=(I-K_{n,2})^{-1}(x,y;t)$ in order to find expressions for these two functions. But \begin{equation*} (I-K_{n,2})^{-1}(\tau(X),\tau(Y);\tau(s))= \biggl\{ \,I \> + \left(\,I\>-\>K_{\airy}\right)^{-1}(X,Y;s)\biggl[ \> c \airy(X)\airy(Y) n^{-\frac{1}{3}} - \end{equation*} \begin{equation*} \frac{1}{20}\biggl[(X + Y)\airy^{\prime}(X)\airy ^{\prime}(Y) - (X^2+XY+Y^2)\airy(X)\airy(Y) + \end{equation*} \begin{equation}\label{eq2} \left. \left. \left.\frac{-20c^2 +3 }{2}(\airy^{\prime}(X)\airy(Y) + \airy(X)\airy ^{\prime}(Y))\right]n^{-\frac{2}{3}} +O(n^{-1}) E(X,Y) \right]\right\}^{-1}\cdot\left(\,I\>-\>K_{\airy}(X,Y)\right)^{-1} \end{equation} \begin{displaymath} = \left\{I\>+\>cQ(X)\>\airy (Y) n^{-\frac{1}{3}}- \frac{1}{20}\biggl[ (P_{1}(X)+ Y P(X))\airy ^{\prime}(Y) - \right. \end{displaymath} \begin{displaymath} \left.(Q_{2}(X)+Y Q_{1}(X)+Y^{2} Q(X))\>\airy(Y) + \frac{-20c^2 +3 }{2}(P(X)\airy(Y) + Q(X)\airy ^{\prime}(Y))\right]n^{-\frac{2}{3}} \end{displaymath} \begin{displaymath} \left. \left. +O(\frac{1}{n}) E(X,Y)\right]\right\}^{-1}\cdot\left(\,I\>-\>K_{\airy}(X,Y)\right)^{-1} \end{displaymath} \begin{displaymath} = \left\{I\>-\>cQ(X)\>\airy (Y) n^{-\frac{1}{3}}+ \frac{1}{20}\biggl[ (P_{1}(X)+ Y P(X))\airy ^{\prime}(Y) - \right. \end{displaymath} \begin{displaymath} (Q_{2}(X)+Y Q_{1}(X)+Y^{2} Q(X))\>\airy(Y) + \frac{-20c^2 +3 }{2}(P(X)\airy(Y) + Q(X)\airy ^{\prime}(Y)) \end{displaymath} \begin{displaymath} \left. \left. +20c^{2}Q(X;s)u(s)\airy(Y)\biggr]n^{-\frac{2}{3}} +O(\frac{1}{n}) E(X,Y)\right]\right\}\cdot\left(\,I\>-\>K_{\airy}(X,Y)\right)^{-1}=\rho_{n}(\tau(X),\tau(Y);\tau(s)) \end{displaymath} Note that with this representation of $\rho_{n}$, the expansions of the $Q_{n,i}$ and $P_{n,i}$ will not be series in powers of $n^{-\frac{1}{3}}$ alone; only $Q_{n,0}:=Q_{n}$ and $P_{n,0}:=P_{n}$ are. In \cite{Choup2} we find that \begin{equation*} Q_{n}(\tau(X);\tau(s))=n^{\frac{1}{6}}\biggl[ Q(X;s)+ \left[\frac{2c-1}{2}P(X;s)-c Q(X;s)u(s)\right]n^{-\frac{1}{3}} \end{equation*} \begin{equation*} +\left[(10c^{2}-10c+\frac{3}{2})Q_{1}(X;s)+P_{2}(X;s) + (-30c^{2}+10c+\frac{3}{2})Q(X;s) v(s) \right. \end{equation*} \begin{equation*} + P_{1}(X;s) v(s) +P(X;s) v_{1}(s)-Q_{2}(X;s) u(s)-Q_{1}(X;s) u_{1}(s)-Q(X;s) u_{2}(s) \end{equation*} \begin{equation} + \left.(-10c^{2}+\frac{3}{2})P(X;s) u(s) +20c^{2}Q(X;s) u^{2}(s) \right]\frac{n^{-\frac{2}{3}}}{20} +O(n^{-1})E_{q}(X;s)\biggr], \end{equation} and \begin{equation*} P_{n}(\tau(X);\tau(s))=n^{\frac{1}{6}}\biggl[ Q(X;s)+ \left[\frac{2c+1}{2}P(X;s)-c Q(X;s)u(s)\right]n^{-\frac{1}{3}} \end{equation*} \begin{equation*} +\left[(10c^{2}+10c+\frac{3}{2})Q_{1}(X;s)+P_{2}(X;s) + (-30c^{2}-10c+\frac{3}{2})Q(X;s) v(s) \right. \end{equation*} \begin{equation*} + P_{1}(X;s) v(s) +P(X;s) v_{1}(s)-Q_{2}(X;s) u(s)-Q_{1}(X;s) u_{1}(s)-Q(X;s) u_{2}(s) \end{equation*} \begin{equation} + \left.(-10c^{2}+\frac{3}{2})P(X;s) u(s) +20c^{2}Q(X;s) u^{2}(s) \right]\frac{n^{-\frac{2}{3}}}{20} +O(n^{-1})E_{p}(X;s)\biggr].
\end{equation} Using \begin{displaymath} Q_{n,i}(\tau(X);\tau(s))=(\rho_{n}(\tau(X),\tau(Y);\tau(s)),(\tau(Y))^{i}\varphi(\tau(Y)))_{(\tau(s),\>\infty)}= \end{displaymath} \begin{displaymath} \sum_{k=0}^{i}\frac{i!}{k! \> (i-k)!}\frac{2^{\frac{i}{2}}(n+c)^{\frac{i-k}{2}}}{2^{k}n^{\frac{k}{6}}}(\rho_{n}(\tau(X),\tau(Y);\tau(s)),Y^{k}\varphi(\tau(Y)))_{(\tau(s),\>\infty)}. \end{displaymath} and \begin{displaymath} X^{k}\varphi(\tau(X))=n^{\frac{1}{6}} \left\{ X^{k} \airy(X) + \frac{(2c-1)}{2} X^{k}\airy^{\prime}(X) n^{-\frac{1}{3}} + \right. \left[ (10\,c^{2}-10\,c +\frac{3}{2})\, X^{k+1} \airy(X)\right. \end{displaymath} \begin{equation}\label{Xkphi} +\>\> X^{k+2} \airy^{\prime}(X)\biggr]\frac{n^{-\frac{2}{3}}}{20} + O(n^{-1}) \airy(X) \biggr\} \end{equation} we find that \begin{displaymath} \rho(X,Y;s)\cdot X^{k}\varphi(\tau(X))=n^{\frac{1}{6}} \left\{ Q_{k}(X;s) + \frac{(2c-1)}{2} P_{k}(X) n^{-\frac{1}{3}} + \right. \left[ (10\,c^{2}-10\,c +\frac{3}{2})\, Q_{k+1}(X)\right. \end{displaymath} \begin{equation}\label{rhoXkphi} +\>\> P_{k+2}(X)\biggr]\frac{n^{-\frac{2}{3}}}{20} + O(n^{-1}) Q_{k}(X) \biggr\}. \end{equation} Combining this with the action of the first factor on the right of \eqref{eq2} gives the following expression for $(\rho_{n}(\tau(X),\tau(Y);\tau(s)),Y^{k}\varphi(\tau(Y)))$ \begin{displaymath} n^{\frac{1}{6}}\left\{Q_{k}(X;s)+\left[\frac{2c-1}{2}P_{k}(X;s)-cu_{k}(s)Q(X;s)\right]n^{-\frac{1}{3}}+\left[(10c-20c^{2})\tilde{v}_{k}(s)Q(X;s)+\right.\right. \end{displaymath} \begin{displaymath} (10c^{2}-10c+\frac{3}{2})Q_{k+1}(X;s)+P_{k+2}(X;s) +P_{1}(X;s)v_{k}(s)+P(X;s)(Y\airy^{'}(Y), Q_{k}(Y))_{(s,\>\infty)} \end{displaymath} \begin{displaymath} -u_{k}(s)Q_{2}(X;s) -Q_{1}(X;s)(Y\airy(Y),Q_{k}(Y;s))_{(s,\>\infty)}-Q(X;s)(Y^{2}\airy(Y),Q_{k}(Y;s))_{(s,\>\infty)} \end{displaymath} \begin{displaymath} \left.+\frac{-20c^{2}+3}{2}P(X;s)u_{k}(s) +\frac{-20c^{2}+3}{2}Q(X;s)v_{k}(s)+20c^{2}Q(X;s)u(s)u_{k}(s)\right]\frac{n^{-\frac{2}{3}}}{20} \end{displaymath} \begin{displaymath} \left. +O(\frac{1}{n})E(X;s)\right\}. \end{displaymath} To simplify the inner product in this last expression, we use the following recurrence relation derived in \cite{Trac4}\\ $Q_{k}(X;s)=X^{k}Q(X;s)-\sum_{i+j=k-1;i,j\geq 0}(v_{j}Q_{i}-u_{j}P_{i})$ to have \begin{displaymath} (Q_{k}(X;s),X\airy(X))_{(s,\>\infty)}=\int_{s}^{\infty}\int_{s}^{\infty}X\airy(X)\rho(X,Y;s)Y^{k}\airy(Y)\>dY\> dX \end{displaymath} \begin{displaymath} =(Q_{1}(X;s),X^{k}\airy(X))=(X Q(X;s)+u(s)P(X;s)-v(s)Q(X;s),X^{k}\airy(X))= \end{displaymath} \begin{displaymath} u_{k+1}(s)+u(s)\tilde{v}_{k}(s)-v(s)u_{k}(s) \end{displaymath} and \begin{displaymath} (Q_{k}(X;s),X^{2}\airy(X))_{(s,\>\infty)}= (Q_{2}(X;s),X^{k}\airy(X))_{(s,\>\infty)}= \end{displaymath} \begin{displaymath} (X^{2}Q(X;s)-v(s)(X Q(X;s)+u(s)P(X;s)-v(s)Q(X;s))-u(s)(XP(X;s) \end{displaymath} \begin{displaymath} -w(s)Q(X;s)+v(s)P(X;s))-v_{1}(s)Q(X;s)+u_{1}(s)P(X;s),X^{k}\airy(X)) \end{displaymath} \begin{displaymath} =u_{k+2}(s)-v(s)u_{k+1}(s) -v(s)u(s)\tilde{v}_{k}(s)+v(s)^{2}u_{k}(s)-u(s)\tilde{v}_{k+1}(s) \end{displaymath} \begin{displaymath} +u(s)w(s)u_{k}(s)-u(s)v(s)\tilde{v}_{k}(s)-v_{1}(s)u_{k}(s)+u_{1}(s)\tilde{v}_{k}(s). \end{displaymath} We also have \begin{displaymath} (Q_{k}(X;s),X\airy^{'}(X))_{(s,\>\infty)}=(P_{1}(X;s),X^{k}\airy(X))= \end{displaymath} \begin{displaymath} \tilde{v}_{k+1}(s)+v(s)\tilde{v}_{k}(s)-w(s)u_{k}(s). \end{displaymath} We therefore have \begin{displaymath} Q_{n,i}(\tau(X);\tau(s))= \sum_{k=0}^{i}\frac{i!}{k!
\> (i-k)!}\frac{2^{\frac{i}{2}-k}(n+c)^{\frac{i-k}{2}}}{n^{\frac{k-1}{6}}}\biggl\{Q_{k}(X;s)+ \end{displaymath} \begin{displaymath} \left[\frac{2c-1}{2}P_{k}(X;s)-cu_{k}(s)Q(X;s) \right]n^{-\frac{1}{3}}+\biggl[(10c-20c^{2})\tilde{v}_{k}(s)Q(X;s)+ \end{displaymath} \begin{displaymath} (10c^{2}-10c+\frac{3}{2})Q_{k+1}(X;s)+P_{k+2}(X;s) +P_{1}(X;s)v_{k}(s)+P(X;s)\biggl(\tilde{v}_{k+1}(s)+v(s)\tilde{v}_{k}(s)-w(s)u_{k}(s) \biggr) \end{displaymath} \begin{displaymath} -u_{k}(s)Q_{2}(X;s) -Q_{1}(X;s)\biggl(u_{k+1}(s)+u(s)\tilde{v}_{k}(s)-v(s)u_{k}(s)\biggr) \end{displaymath} \begin{displaymath} -Q(X;s)\biggl(u_{k+2}(s)-v(s)u_{k+1}(s) -v(s)u(s)\tilde{v}_{k}(s)+v(s)^{2}u_{k}(s)-u(s)\tilde{v}_{k+1}(s) \end{displaymath} \begin{displaymath} +u(s)w(s)u_{k}(s)-u(s)v(s)\tilde{v}_{k}(s)-v_{1}(s)u_{k}(s)+u_{1}(s)\tilde{v}_{k}(s)\biggr) \end{displaymath} \begin{displaymath} \left.+\frac{-20c^{2}+3}{2}P(X;s)u_{k}(s) +\frac{-20c^{2}+3}{2}Q(X;s)v_{k}(s)+20c^{2}Q(X;s)u(s)u_{k}(s)\right]\frac{n^{-\frac{2}{3}}}{20} \end{displaymath} \begin{equation}\label{Qni} \left. +O(\frac{1}{n})E(X;s)\right\}. \end{equation} In a similar way we have \begin{displaymath} P_{n,i}(\tau(X);\tau(s))=(\rho_{n}(\tau(X),\tau(Y);\tau(s)),(\tau(Y))^{i}\psi(\tau(Y)))_{(\tau(s),\>\infty)}= \end{displaymath} \begin{displaymath} \sum_{k=0}^{i}\frac{i!}{k! \> (i-k)!}\frac{2^{\frac{i}{2}}(n+c)^{\frac{i-k}{2}}}{2^{k}n^{\frac{k}{6}}}(\rho_{n}(\tau(X),\tau(Y);\tau(s)),Y^{k}\psi(\tau(Y)))_{(\tau(s),\>\infty)}. \end{displaymath} And $(\rho_{n}(\tau(X),\tau(Y);\tau(s)),Y^{k}\psi(\tau(Y)))$ is equal to \begin{displaymath} n^{\frac{1}{6}}\left\{Q_{k}(X;s)+\left[\frac{2c+1}{2}P_{k}(X;s)-cu_{k}(s)Q(X;s)\right]n^{-\frac{1}{3}}+\left[-(10c+20c^{2})\tilde{v}_{k}(s)Q(X;s)+\right.\right. \end{displaymath} \begin{displaymath} (10c^{2}+10c+\frac{3}{2})Q_{k+1}(X;s)+P_{k+2}(X;s) +P_{1}(X;s)v_{k}(s)+P(X;s)(Y\airy^{'}(Y), Q_{k}(Y))_{(s,\>\infty)} \end{displaymath} \begin{displaymath} -u_{k}(s)Q_{2}(X;s) -Q_{1}(X;s)(Y\airy(Y),Q_{k}(Y;s))_{(s,\>\infty)}-Q(X;s)(Y^{2}\airy(Y),Q_{k}(Y;s))_{(s,\>\infty)} \end{displaymath} \begin{displaymath} \left.+\frac{-20c^{2}+3}{2}P(X;s)u_{k}(s) +\frac{-20c^{2}+3}{2}Q(X;s)v_{k}(s)+20c^{2}Q(X;s)u(s)u_{k}(s)\right]\frac{n^{-\frac{2}{3}}}{20} \end{displaymath} \begin{displaymath} \left. +O(\frac{1}{n})E(X;s)\right\}. \end{displaymath} This therefore gives \begin{displaymath} P_{n,i}(\tau(X);\tau(s))= \sum_{k=0}^{i}\frac{i!}{k! \> (i-k)!}\frac{2^{\frac{i}{2}-k}(n+c)^{\frac{i-k}{2}}}{n^{\frac{k-1}{6}}}\times \end{displaymath} \begin{displaymath} \left\{Q_{k}(X;s)+\left[\frac{2c+1}{2}P_{k}(X;s)-cu_{k}(s)Q(X;s)\right]n^{-\frac{1}{3}}+\left[-(10c+20c^{2})\tilde{v}_{k}(s)Q(X;s)+\right.\right.
\end{displaymath} \begin{displaymath} (10c^{2}+10c+\frac{3}{2})Q_{k+1}(X;s)+P_{k+2}(X;s) +P_{1}(X;s)v_{k}(s)+ \end{displaymath} \begin{displaymath} P(X;s)\biggl(\tilde{v}_{k+1}(s)+v(s)\tilde{v}_{k}(s)-w(s)u_{k}(s) \biggr) \end{displaymath} \begin{displaymath} -u_{k}(s)Q_{2}(X;s) -Q_{1}(X;s)\biggl(u_{k+1}(s)+u(s)\tilde{v}_{k}(s)-v(s)u_{k}(s)\biggr) \end{displaymath} \begin{displaymath} -Q(X;s)\biggl(u_{k+2}(s)-v(s)u_{k+1}(s) -v(s)u(s)\tilde{v}_{k}(s)+v(s)^{2}u_{k}(s)-u(s)\tilde{v}_{k+1}(s) \end{displaymath} \begin{displaymath} +u(s)w(s)u_{k}(s)-u(s)v(s)\tilde{v}_{k}(s)-v_{1}(s)u_{k}(s)+u_{1}(s)\tilde{v}_{k}(s)\biggr) \end{displaymath} \begin{displaymath} \left.+\frac{-20c^{2}+3}{2}P(X;s)u_{k}(s) +\frac{-20c^{2}+3}{2}Q(X;s)v_{k}(s)+20c^{2}Q(X;s)u(s)u_{k}(s)\right]\frac{n^{-\frac{2}{3}}}{20} \end{displaymath} \begin{equation}\label{Pni} \left. +O(\frac{1}{n})E(X;s)\right\}. \end{equation} When we set $i$ to zero we recover $Q_{n}(X;s)$ and $P_{n}(X;s)$. We see immediately that these two series representations of $Q_{n,i}$ and $P_{n,i}$ are not in terms of $n^{-\frac{1}{3}}$ when $i$ is not zero. We can use \eqref{phi(n)}, \eqref{phi(n-1)}, \eqref{Qni} and \eqref{Pni} to derive an expansion for $u_{n,i},\>\>v_{n,i},\>\>\tilde{v}_{n,i}$ and $w_{n,i}$ from their representations \begin{displaymath} u_{n,i}(t)=(Q_{n,i}(x;t), \varphi(x))_{(t,\>\infty)} \quad v_{n,i}(t)=(P_{n,i}(x;t), \varphi(x))_{(t,\>\infty)} \end{displaymath} \begin{displaymath} \tilde{v}_{n,i}(t)=(Q_{n,i}(x;t),\psi(x))_{(t,\>\infty)}\quad \textrm{and} \quad w_{n,i}(t)=(P_{n,i}(x;t), \psi(x))_{(t,\>\infty)}. \end{displaymath} We would like to note that \eqref{Qni} and \eqref{Pni} are the new quantities in this section; as an additional corollary we obtain $q_{n,i}(t)$ and $p_{n,i}(t)$.\\ \begin{displaymath} q_{n,i}(\tau(s))= \sum_{k=0}^{i}\frac{i!}{k! \> (i-k)!}\frac{2^{\frac{i}{2}-k}(n+c)^{\frac{i-k}{2}}}{n^{\frac{k-1}{6}}}\biggl\{q_{k}(s)+ \end{displaymath} \begin{displaymath} \left[\frac{2c-1}{2}p_{k}(s)-cu_{k}(s)q(s) \right]n^{-\frac{1}{3}}+\biggl[(10c-20c^{2})\tilde{v}_{k}(s)q(s)+ \end{displaymath} \begin{displaymath} (10c^{2}-10c+\frac{3}{2})q_{k+1}(s)+p_{k+2}(s) +p_{1}(s)v_{k}(s)+p(s)\biggl(\tilde{v}_{k+1}(s)+v(s)\tilde{v}_{k}(s)-w(s)u_{k}(s) \biggr) \end{displaymath} \begin{displaymath} -u_{k}(s)q_{2}(s) -q_{1}(s)\biggl(u_{k+1}(s)+u(s)\tilde{v}_{k}(s)-v(s)u_{k}(s)\biggr) \end{displaymath} \begin{displaymath} -q(s)\biggl(u_{k+2}(s)-v(s)u_{k+1}(s) -v(s)u(s)\tilde{v}_{k}(s)+v(s)^{2}u_{k}(s)-u(s)\tilde{v}_{k+1}(s) \end{displaymath} \begin{displaymath} +u(s)w(s)u_{k}(s)-u(s)v(s)\tilde{v}_{k}(s)-v_{1}(s)u_{k}(s)+u_{1}(s)\tilde{v}_{k}(s)\biggr) \end{displaymath} \begin{displaymath} \left.+\frac{-20c^{2}+3}{2}p(s)u_{k}(s) +\frac{-20c^{2}+3}{2}q(s)v_{k}(s)+20c^{2}q(s)u(s)u_{k}(s)\right]\frac{n^{-\frac{2}{3}}}{20} \end{displaymath} \begin{equation}\label{qni} \left. +O(\frac{1}{n})e_{q}(s)\right\}, \end{equation} \begin{displaymath} p_{n,i}(\tau(s))= \sum_{k=0}^{i}\frac{i!}{k! \> (i-k)!}\frac{2^{\frac{i}{2}-k}(n+c)^{\frac{i-k}{2}}}{n^{\frac{k-1}{6}}}\times \end{displaymath} \begin{displaymath} \left\{q_{k}(s)+\left[\frac{2c+1}{2}p_{k}(s)-cu_{k}(s)q(s)\right]n^{-\frac{1}{3}}+\left[-(10c+20c^{2})\tilde{v}_{k}(s)q(s)+\right.\right.
\end{displaymath} \begin{displaymath} (10c^{2}+10c+\frac{3}{2})q_{k+1}(s)+p_{k+2}(s) +p_{1}(s)v_{k}(s)+ \end{displaymath} \begin{displaymath} p(s)\biggl(\tilde{v}_{k+1}(s)+v(s)\tilde{v}_{k}(s)-w(s)u_{k}(s) \biggr) \end{displaymath} \begin{displaymath} -u_{k}(s)q_{2}(s) -q_{1}(s)\biggl(u_{k+1}(s)+u(s)\tilde{v}_{k}(s)-v(s)u_{k}(s)\biggr) \end{displaymath} \begin{displaymath} -q(s)\biggl(u_{k+2}(s)-v(s)u_{k+1}(s) -v(s)u(s)\tilde{v}_{k}(s)+v(s)^{2}u_{k}(s)-u(s)\tilde{v}_{k+1}(s) \end{displaymath} \begin{displaymath} +u(s)w(s)u_{k}(s)-u(s)v(s)\tilde{v}_{k}(s)-v_{1}(s)u_{k}(s)+u_{1}(s)\tilde{v}_{k}(s)\biggr) \end{displaymath} \begin{displaymath} \left.+\frac{-20c^{2}+3}{2}p(s)u_{k}(s) +\frac{-20c^{2}+3}{2}q(s)v_{k}(s)+20c^{2}q(s)u(s)u_{k}(s)\right]\frac{n^{-\frac{2}{3}}}{20} \end{displaymath} \begin{equation}\label{pni} \left. +O(\frac{1}{n})e_{p}(s)\right\}. \end{equation} In \cite{Choup2}, we found an expression for $R_{n}(x,y)=\rho_{n}(x,y) \cdot K_{n,2}(x,y)$; this also follows from \eqref{hermitekernel} and \eqref{eq2}. Note that the following representation gives the same result: \begin{equation}R_{n}(x,y;t)=\frac{Q_{n}(x;t)P_{n}(y;t)-P_{n}(x;t)Q_{n}(y;t)}{x-y}.\end{equation} \begin{equation*} R_{n}(\tau(X),\tau(Y);\tau(s))\,d\tau(X)= \left[R(X,Y;s)-c\,Q(X;s) Q(Y;s)\,n^{-\frac{1}{3}}\right. \end{equation*} \begin{equation*} + \frac{n^{-\frac{2}{3}}}{20} \biggl[P_{1}(X;s) P(Y;s) +P(X;s) P_{1}(Y;s) \end{equation*} \begin{equation*} - Q_{2}(X;s) Q(Y;s) - Q_{1}(X;s) Q_{1}(Y;s) - Q(X;s) Q_{2}(Y;s) + 20 c^{2} u_{0}(s) Q(X;s) Q(Y;s) \end{equation*} \begin{equation}\label{eq3} +\left. \left. \frac{3-20c^{2}}{2}\left(P(X;s) Q(Y;s) + Q(X;s) P(Y;s)\,\right) \right] + O(n^{-1})e_{n}(X,Y)\right]dX. \end{equation} \section{Epsilon dependent functions} The corresponding epsilon functions come from the study of the largest eigenvalue for the GOE and GSE. We present here the systems of equations satisfied by those functions and solutions to these systems leading to our desired functions.\\ To simplify notations we define \begin{equation} V_{n,\varepsilon}(t)=1-\tilde{v}_{n,\varepsilon}(t),\quad \textrm{and} \quad \tilde{\mathcal{R}}_{n,1}(t)=1-\mathcal{R}_{n,1}(t). \end{equation} With this notation, the system for the epsilon functions is \begin{equation} \frac{d}{dt}\left( \begin{array}{c} u_{n,\varepsilon}(t) \\ V_{n,\varepsilon}(t) \\ q_{n,\varepsilon}(t) \\ \end{array} \right) =\left( \begin{array}{ccc} 0 & 0 & -q_{n}(t) \\ 0 & 0 & p_{n}(t) \\ -p_{n}(t) & q_{n}(t) & 0 \\ \end{array} \right)\,\cdot\, \left( \begin{array}{c} u_{n,\varepsilon}(t) \\ V_{n,\varepsilon}(t) \\ q_{n,\varepsilon}(t) \\ \end{array} \right), \end{equation} the boundary conditions in this case being \begin{equation} \left( \begin{array}{c} u_{n,\varepsilon}(\infty) \\ V_{n,\varepsilon}(\infty) \\ q_{n,\varepsilon}(\infty) \\ \end{array} \right) =\left( \begin{array}{c} 0\\ 1\\ c_{\varphi} \\ \end{array} \right).
\end{equation} For the orthogonal ensemble \begin{equation} \frac{d}{dt}\left( \begin{array}{c} \mathcal{Q}_{n,1}(t) \\ \mathcal{P}_{n,1}(t) \\ \tilde{\mathcal{R}}_{n,1}(t) \\ \end{array} \right) =\left( \begin{array}{ccc} 0 & 0 & q_{n}(t) \\ 0 & 0 & p_{n}(t) \\ p_{n}(t) & q_{n}(t) & 0 \\ \end{array} \right)\,\cdot\, \left( \begin{array}{c} \mathcal{Q}_{n,1}(t) \\ \mathcal{P}_{n,1}(t) \\ \tilde{\mathcal{R}}_{n,1}(t) \\ \end{array} \right), \end{equation} the boundary conditions in this case being \begin{equation} \left( \begin{array}{c} \mathcal{Q}_{n,1}(\infty) \\ \mathcal{P}_{n,1}(\infty) \\ \tilde{\mathcal{R}}_{n,1}(\infty) \\ \end{array} \right) =\left( \begin{array}{c} 2c_{\varphi} \\ 0 \\ 1 \\ \end{array} \right) \quad \textrm{ for } n \textrm{ even}. \end{equation} We also have for the symplectic ensemble \begin{equation} \frac{d}{dt}\left( \begin{array}{c} \mathcal{Q}_{n,4}(t) \\ \mathcal{P}_{n,4}(t) \\ \tilde{\mathcal{R}}_{n,4}(t) \\ \end{array} \right) =\left( \begin{array}{ccc} 0 & 0 & -q_{n}(t) \\ 0 & 0 & -p_{n}(t) \\ -p_{n}(t) & -q_{n}(t) & 0 \\ \end{array} \right)\,\cdot\, \left( \begin{array}{c} \mathcal{Q}_{n,4}(t) \\ \mathcal{P}_{n,4}(t) \\ \tilde{\mathcal{R}}_{n,4}(t) \\ \end{array} \right). \end{equation} where $ \tilde{\mathcal{R}}_{n,4}(t) =1+\mathcal{R}_{n,4}(t)$, with corresponding boundary conditions \begin{equation} \left( \begin{array}{c} \mathcal{Q}_{n,4}(\infty) \\ \mathcal{P}_{n,4}(\infty) \\ \tilde{\mathcal{R}}_{n,4}(\infty) \\ \end{array} \right) =\left( \begin{array}{c} -c_{\varphi} \\ -c_{\psi} \\ 1 \\ \end{array} \right)\,=\, \left( \begin{array}{c} 0 \\ -c_{\psi} \\ 1 \\ \end{array} \right)\quad \textrm{ for } n \textrm{ odd}. \end{equation} The first two systems of equations were solved in \cite{Choup3}; here we give the general solution from the series expansion derived there. We will not go back into the derivation, but would like to point out that this is a direct consequence of those matrix exponentials. Our goal here is to give a closed formula for those functions.\\ We define \begin{equation}\label{a(t)} a(t)=\int_{t}^{\infty}q_{n}(x)\>dx \quad \textrm{and} \quad b(t)=\int_{t}^{\infty}p_{n}(x)\>dx. \end{equation} We note that these two functions scale (under the transformation $\tau$) in the large $n$ limit to the same function \begin{equation*} \frac{1}{\sqrt{2}}\int_{s}^{\infty}q(x)\> dx\>\>=\>\> \frac{1}{\sqrt{2}}\> \mu(s). \end{equation*} We give this to justify the notation used below; it says that for very large $n$, the argument of all the hyperbolic functions is real. With this notation, we have \begin{equation}\label{une} u_{n,\varepsilon}(t)=\frac{a(t)}{2b(t)}[1-\cosh\sqrt{2a(t)b(t)}]+c_{\varphi}\sqrt{\frac{a(t)}{2b(t)}} \sinh\sqrt{2a(t)b(t)}, \end{equation} \begin{equation} V_{n,\varepsilon}(t)=\frac{1}{2}[1+\cosh\sqrt{2a(t)b(t)}]-c_{\varphi}\sqrt{\frac{b(t)}{2a(t)}} \sinh\sqrt{2a(t)b(t)}, \end{equation} or \begin{equation}\label{tildevne} \tilde{v}_{n,\varepsilon}(t)=1-\frac{1}{2}[1+\cosh\sqrt{2a(t)b(t)}]+c_{\varphi}\sqrt{\frac{b(t)}{2a(t)}} \sinh\sqrt{2a(t)b(t)} \end{equation} and \begin{equation}\label{qne} q_{n,\varepsilon}(t)=-\sqrt{\frac{a(t)}{2b(t)}} \sinh\sqrt{2a(t)b(t)} +c_{\varphi} \cosh\sqrt{2a(t)b(t)}.
\end{equation} This result is valid\footnote{The computation for the GOE assumes $n$ to be even and that for the GSE assumes $n$ to be odd.} for the GOE, where $c_{\varphi} \neq 0$, and for the GSE, where $c_{\varphi}=0$. In the same way we find that in the GOE case the calligraphic functions are \begin{equation}\label{calQn1} \mathcal{Q}_{n,1}(t)=c_{\varphi}[1+\cosh\sqrt{2a(t)b(t)}]-\sqrt{\frac{a(t)}{2b(t)}} \sinh\sqrt{2a(t)b(t)}, \end{equation} \begin{equation}\label{calPn1} \mathcal{P}_{n,1}(t)=c_{\varphi}\frac{b(t)}{a(t)}[\cosh\sqrt{2a(t)b(t)}-1]-\sqrt{\frac{b(t)}{2a(t)}} \sinh\sqrt{2a(t)b(t)}, \end{equation} and \begin{equation}\label{caltildeRn1} \tilde{\mathcal{R}}_{n,1}(t)=-2c_{\varphi}\sqrt{\frac{b(t)}{2a(t)}}\sinh\sqrt{2a(t)b(t)}+ \cosh\sqrt{2a(t)b(t)} \end{equation} or \begin{equation}\label{calRn1} \mathcal{R}_{n,1}(t)=1+2c_{\varphi}\sqrt{\frac{b(t)}{2a(t)}}\sinh\sqrt{2a(t)b(t)}- \cosh\sqrt{2a(t)b(t)} \end{equation} where \begin{equation}\label{c varphi} c_{\varphi}=(\pi\,n)^{1/4}2^{-3/4 -n/2}\frac{(n!)^{1/2}}{(n/2)!}. \end{equation} A large $n$ expansion for $v_{n,\varepsilon}$ and $q_{n,\varepsilon}$ is given in \cite{Choup3} on page $17$. A large $n$ expansion of $\mathcal{P}_{n,1}$ is given in equation $(3.58)$, and one of $\mathcal{R}_{n,1}$ in equation $(3.59)$, of the same work. We will therefore give here an expression for $u_{n,\varepsilon}$ and $\mathcal{Q}_{n,1}$ for large $n$. Substitution of $a(t)=\int_{t}^{\iy}q_{n}(x)\>dx$ and $b(t) =\int_{t}^{\iy}p_{n}(x)\>dx$ into \eqref{une} and \eqref{calQn1} yields the following results. \begin{thm} \label{large n une} For $s$ bounded away from minus infinity, \begin{equation*} u_{n,\varepsilon}(\tau(s))\>=\> \frac{1}{2}(1-e^{-\mu(s)}) \>+\> \left(\frac{\nu(s)}{4\mu(s)}(e^{-\mu(s)}+\cosh(\mu(s))-2)-\frac{cq(s)}{2}e^{-\mu(s)}\right)n^{-\frac{1}{3}}+ \end{equation*} \begin{equation*} \frac{1}{32 \mu(s)^2}\left(e^{-\mu(s)} \left(\nu(s)^2 \left(-\left(-1+e^{\mu(s)}\right) \left(-5-12 c+(3+4 c) e^{\mu(s)}\right)-\right.\right.\right. \end{equation*} \begin{equation*} \left.2 \mu(s) \left(1+6 c-2 c e^{2 \mu(s)}+4 c^2 \mu(s)\right)\right)+ \end{equation*} \begin{equation*} 4 c \nu(s)\left(3-4 e^{\mu(s)}-e^{2 \mu(s)} (-1+\mu(s))+3 \mu(s)+4 c \mu(s)^2\right) \int_{s}^{\iy} q[x] u[x] \, dx+ \end{equation*} \begin{equation*} 8 \mu(s) \left(-10 c \left(-3+e^{\mu(s)}\right) \left(-1+e^{\mu(s)}\right) \left(\int_{s}^{\iy} q[x] v[x] \, dx-\int_{s}^{\iy} q_1[x] \, dx\right)+\right. \end{equation*} \begin{equation*} \mu(s) \left(\left(3-20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx+3 \int_{s}^{\iy} q[x] v[x] \, dx+\right. \end{equation*} \begin{equation*} 2 \int_{s}^{\iy} v[x] p_1[x] \, dx+2 \int_{s}^{\iy} p_2[x] \, dx+3 \int_{s}^{\iy} q_1[x] \, dx-c^2 \left((\int_{s}^{\iy} q[x] u[x] \, dx)^2-\right. \end{equation*} \begin{equation*} \left.20 \left(2 \int_{s}^{\iy} q[x] u[x]^2 \, dx-3 \int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right)\right)-2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\right.
\end{equation*} \begin{equation*} \left.\left.\left.\left.\int_{s}^{\iy} q_1[x] u_1[x] \, dx+\int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right)\right)\right) n^{-\frac{2}{3}} +O(n^{-1}) \end{equation*} We also have \begin{equation*} \tilde{v}_{n,\ve}(\tau(s))\>=\>\frac{1}{2} \left(1-e^{-\mu(s)}\right) +(\frac{\nu(s)}{4\mu(s)}\sinh(\mu(s))+\frac{cq(s)}{2}e^{-\mu(s)})n^{-\frac{1}{3}}+ \end{equation*} \begin{equation*} \frac{1}{16 \mu(s)^2} \left\{\left(4 c \nu(s) \int_{s}^{\iy} q[x] u[x] \, dx \left(-\cosh[\mu(s)] \mu(s)+2 c e^{-\mu(s)} \mu(s)^2+ \sinh[\mu(s)]\right)\right.\right. \end{equation*} \begin{equation*} +\nu(s)^2 (\cosh[\mu(s)] \mu(s) \left.\left(-1+4 c-4 c^2 \mu(s)\right)+\left(1-4 c+\mu(s)+4 c^2 \mu(s)^2\right) \sinh[\mu(s)]\right)- \end{equation*} \begin{equation*} 4 \mu(s) \left(e^{-\mu(s)} \mu(s) \left(\left(-3+20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx-\right.\right. 3 \int_{s}^{\iy} q[x] v[x] \, dx-2 \int_{s}^{\iy} v[x] p_1[x] \, dx \end{equation*} \begin{equation*} -2 \int_{s}^{\iy} p_2[x] \, dx-3 \int_{s}^{\iy} q_1[x] \, dx+ c^2 \left((\int_{s}^{\iy} q[x] u[x] \, dx)^2-20 \left(2 \int_{s}^{\iy} q[x] u[x]^2 \, dx \right.\right. \end{equation*} \begin{equation*} \left.\left.-3 \int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right)\right)+ 2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\int_{s}^{\iy} q_1[x] u_1[x] \, dx \right. + \end{equation*} \begin{equation*} \left. \left. \int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right)+ \left.\left.20 c \left(\int_{s}^{\iy} q[x] v[x] \, dx-\int_{s}^{\iy} q_1[x] \, dx\right) \sinh[\mu(s)]\right)\right\} n^{-\frac{2}{3}} \end{equation*} \begin{equation*} +O(n^{-1}) \end{equation*} We also have \begin{equation*} q_{n,\varepsilon}(\tau(s))\>=\>\frac{e^{-\mu(s)}}{\sqrt{2}}+(\frac{\nu(s)}{2\sqrt{2}\mu(s)}\sinh\mu(s)+\frac{cq(s)}{\sqrt{2}}e^{-\mu(s)})n^{-\frac{1}{3}}+ \end{equation*} \begin{equation*} \left\{\frac{1}{8 \sqrt{2} \mu(s)^2} \left(\left(-4 c \nu(s) (\int_{s}^{\iy} q[x] u[x] \, dx) \left(\mu(s) \left(\cosh[\mu(s)]+2 c e^{-\mu(s)} \mu(s)\right)- \sinh[\mu(s)]\right) \right. \right. \right.+ \end{equation*} \begin{equation*} \nu(s)^2 (\cosh[\mu(s)] \mu(s) \left.\left(1+4 c+4 c^2 \mu(s)\right)-\left(1+4 c+\mu(s)+4 c^2 \mu(s)^2\right) \sinh[\mu(s)]\right)+ \end{equation*} \begin{equation*} 4 \mu(s) \left(e^{-\mu(s)} \mu(s) \left(\left(-3+20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx-\right.\right. 3 \int_{s}^{\iy} q[x] v[x] \, dx-2 \int_{s}^{\iy} v[x] p_1[x] \, dx- \end{equation*} \begin{equation*} 2 \int_{s}^{\iy} p_2[x] \, dx-3 \int_{s}^{\iy} q_1[x] \, dx+ c^2 \left((\int_{s}^{\iy} q[x] u[x] \, dx)^2-20 \left(2 \int_{s}^{\iy} q[x] u[x]^2 \, dx- \right.\right. \end{equation*} \begin{equation*} \left. \left.3 \int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right)\right)+ 2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\int_{s}^{\iy} q_1[x] u_1[x] \, dx \right. \end{equation*} \begin{equation*} \left.
\left.+\int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right)+ \end{equation*} \begin{equation*} \left.\left.20 c \left(-\int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right) \sinh[\mu(s)]\right)\right\} n^{-\frac{2}{3}} \end{equation*} \end{thm} and the GOE$_{n}$ calligraphic variables are \begin{thm} \label{large n Qn1} For $s$ bounded away from minus infinity, \begin{equation*} \mathcal{Q}_{n,1}(\tau(s))\>=\> \frac{1}{\sqrt{2}}(1+e^{-\mu(s)}) \>+\> \left(\frac{\nu(s)}{2\sqrt{2}\mu(s)}\sinh(\mu(s))+\frac{cq(s)}{\sqrt{2}}e^{-\mu(s)}\right)n^{-\frac{1}{3}}+ \end{equation*} \begin{equation*} \frac{1}{8 \sqrt{2} \mu(s)^2} \left(\left(-4 c \nu(s) \int_{s}^{\iy} q[x] u[x] \, dx \left(\mu(s) \left(\cosh[\mu(s)]+2 c e^{-\mu(s)} \mu(s)\right)-\right.\right.\right. \end{equation*} \begin{equation*} \sinh\mu(s))+\nu(s)^2 (\cosh[\mu(s)] \mu(s) \left.\left(1+4 c+4 c^2 \mu(s)\right)-\left(1+4 c+\mu(s)+4 c^2 \mu(s)^2\right) \sinh\mu(s)\right)+ \end{equation*} \begin{equation*} 4 \mu(s) \left(e^{-\mu(s)} \mu(s) \left(\left(-3+20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx-\right.\right. 3 \int_{s}^{\iy} q[x] v[x] \, dx-2 \int_{s}^{\iy} v[x] p_1[x] \, dx \end{equation*} \begin{equation*} -2 \int_{s}^{\iy} p_2[x] \, dx-3 \int_{s}^{\iy} q_1[x] \, dx+ c^2 \left((\int_{s}^{\iy} q[x] u[x] \, dx)^2-20 \left(2 \int_{s}^{\iy} q[x] u[x]^2 \, dx\right. \right. \end{equation*} \begin{equation*} \left. \left. -3 \int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right)\right)+ \left. 2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\int_{s}^{\iy} q_1[x] u_1[x] \, dx +\right. \right. \end{equation*} \begin{equation*} \left. \left. \int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right)+ \left.\left.20 c \left(\int_{s}^{\iy} q_1[x] \, dx-\int_{s}^{\iy} q[x] v[x] \, dx\right) \sinh\mu(s)\right)\right) n^{-\frac{2}{3}} \end{equation*} \begin{equation*} + O(n^{-1}), \end{equation*} we also have \begin{equation*} \mathcal{P}_{n,1}(\tau(s))\>=\> \left\{\frac{-1+e^{-\mu(s)}}{\sqrt{2}}+ \left(\frac{\nu(s)}{2\sqrt{2} \mu(s)}(e^{-\mu(s)}+\cosh\mu(s))+\frac{cq(s)}{\sqrt{2}}e^{-\mu(s)} \right)n^{-\frac{1}{3}} \right. \end{equation*} \begin{equation*} - \frac{1}{16 \left(\sqrt{2} \mu(s)^2\right)}\left(\left(e^{-\mu(s)} \left(\nu(s)^2 \left(\left(-1+e^{\mu(s)}\right) \left(5-12 c+(-3+4 c) e^{\mu(s)}\right)-\right.\right.\right.\right. \end{equation*} \begin{equation*} \left.2 (\mu(s)) \left(1+2 c \left(e^{2 \mu(s)}-3\right)+4 c^2 \mu(s)\right)\right)+4 c \nu(s) \left(4 e^{\mu(s)}-3+e^{2 \mu(s)} (\mu(s)-1)-3 \mu(s)+4 c \mu(s)^2\right) \end{equation*} \begin{equation*} \int_{s}^{\iy} q[x] u[x] \, dx+8 \mu(s) \left(10 c \left(-3+e^{\mu(s)}\right) \left(-1+e^{\mu(s)}\right) \right. \left(\int_{s}^{\iy} q[x] v[x] \, dx-\int_{s}^{\iy} q_1[x] \, dx\right)+ \end{equation*} \begin{equation*} \mu(s) \left(\left(3-20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx+3 \right. \int_{s}^{\iy} q[x] v[x] \, dx+2 \int_{s}^{\iy} v[x] p_1[x] \, dx+2 \int_{s}^{\iy} p_2[x] \, dx+ \end{equation*} \begin{equation*} 3 \int_{s}^{\iy} q_1[x] \, dx-c^2 \left((\int_{s}^{\iy} q[x] u[x] \, dx)^2-20 \left(2 \int_{s}^{\iy} q[x] u[x]^2 \, dx-3 \int_{s}^{\iy} q[x] v[x] \, dx + \right.\right. \end{equation*} \begin{equation*} \left. \left.\int_{s}^{\iy} q_1[x] \, dx\right)\right)- \end{equation*} \begin{equation*} \left.\left.\left.\left.
2\left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\int_{s}^{\iy} q_1[x] u_1[x] \, dx+\int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right)\right)\right)\right) n^{-\frac{2}{3}} \end{equation*} \begin{equation*} + O(n^{-1}) \end{equation*} and the last of the GOE$_{n}$ functions is \begin{equation*} \mathcal{R}_{n,1}(\tau(s))\>=\> (1-e^{-\mu(s)}) +\left(\frac{\nu(s)}{2\mu(s)}\sinh\mu(s) -cq(s)e^{-\mu(s)}\right)n^{-\frac{1}{3}} \>\>+ \end{equation*} \begin{equation*} \frac{1}{8 \mu(s)^2} \left(\left(4 c \nu(s) \mu(s) \left(-\cosh[\mu(s)] \mu(s)+2 c e^{-\mu(s)} \mu(s)^2+\right.\right.\right. \sinh[\mu(s)])+ \end{equation*} \begin{equation*} \nu(s)^2 (\cosh[\mu(s)] \mu(s) \left.\left(-1+4 c-4 c^2 \mu(s)\right)+\left(1-4 c+\mu(s)+4 c^2 \mu(s)^2\right) \sinh[\mu(s)]\right)- \end{equation*} \begin{equation*} 4 \mu(s) \left(e^{-\mu(s)} \mu(s) \left(\left(-3+20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx-\right.\right. 3 \int_{s}^{\iy} q[x] v[x] \, dx-2 \int_{s}^{\iy} v[x] p_1[x] \, dx \end{equation*} \begin{equation*} -2 \int_{s}^{\iy} p_2[x] \, dx-3 \int_{s}^{\iy} q_1[x] \, dx+ c^2 \left((\int_{s}^{\iy} q[x] u[x] \, dx)^2-20 \left(2 \int_{s}^{\iy} q[x] u[x]^2 \, dx \right. \right. \end{equation*} \begin{equation*} \left. \left.-3 \int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right)\right)+ \left.2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\int_{s}^{\iy} q_1[x] u_1[x] \, dx+ \right. \right. \end{equation*} \begin{equation*} \left. \left. \int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right)+ \end{equation*} \begin{equation*} \left.\left.20 c \left(\int_{s}^{\iy} q[x] v[x] \, dx-\int_{s}^{\iy} q_1[x] \, dx\right) \sinh[\mu(s)]\right)\right) n^{-\frac{2}{3}}+ O(n^{-1}) \end{equation*} \end{thm} For the symplectic ensemble, the calligraphic variables were mentioned in \cite{Choup3}. The similarity of the corresponding system of differential equations to the GOE calligraphic system makes our derivation simpler; the coefficient matrices are the same up to sign. Keeping with the same notation, we see that \begin{equation} \left( \begin{array}{c} \mathcal{Q}_{n,4}(t) \\ \mathcal{P}_{n,4}(t) \\ \tilde{\mathcal{R}}_{n,4}(t) \\ \end{array} \right) =\left( \begin{array}{ccc} \frac{1}{2}(1+\cosh\sqrt{2ab}) & \frac{a}{2b}(\cosh\sqrt{2ab}-1) & \sqrt{\frac{a}{2b}}\sinh\sqrt{2ab} \\ \frac{b}{2a}(\cosh\sqrt{2ab}-1) & \frac{1}{2}(1+\cosh\sqrt{2ab}) & \sqrt{\frac{b}{2a}}\sinh\sqrt{2ab} \\ \sqrt{\frac{b}{2a}}\sinh\sqrt{2ab} & \sqrt{\frac{a}{2b}}\sinh\sqrt{2ab} & \cosh\sqrt{2ab} \\ \end{array} \right)\,\cdot\, \left( \begin{array}{c} 0 \\ -c_{\psi} \\ 1 \\ \end{array} \right). \end{equation} We dropped the $t$ dependence of $a$ and $b$ in the above matrix for aesthetic reasons. This gives \begin{equation}\label{calQn4} \mathcal{Q}_{n,4}(t)=-c_{\psi}\frac{a(t)}{2b(t)}[\cosh\sqrt{2a(t)b(t)}-1]+\sqrt{\frac{a(t)}{2b(t)}} \sinh\sqrt{2a(t)b(t)}, \end{equation} \begin{equation}\label{calPn4} \mathcal{P}_{n,4}(t)=-c_{\psi}\frac{1}{2}[1+\cosh\sqrt{2a(t)b(t)}]+\sqrt{\frac{b(t)}{2a(t)}} \sinh\sqrt{2a(t)b(t)}, \end{equation} and \begin{equation}\label{caltildeRn4} \tilde{\mathcal{R}}_{n,4}(t)=-c_{\psi}\sqrt{\frac{a(t)}{2b(t)}}\sinh\sqrt{2a(t)b(t)}+ \cosh\sqrt{2a(t)b(t)} \end{equation} or \begin{equation}\label{calRn4} \mathcal{R}_{n,4}(t)=-c_{\psi}\sqrt{\frac{a(t)}{2b(t)}}\sinh\sqrt{2a(t)b(t)}+ \cosh\sqrt{2a(t)b(t)}-1. \end{equation} For the GSE we have the corresponding formulas for $u_{n,\varepsilon}$, $\tilde{v}_{n,\varepsilon}$ and $q_{n,\varepsilon}$, recorded in the following theorem.
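As a quick consistency check of the expansions below (a sketch, not a derivation, using only \eqref{a(t)} and the observation above that $a(t)$ and $b(t)$ both scale to $\mu(s)/\sqrt{2}$ under $\tau$): setting $c_{\varphi}=0$ in \eqref{une}, \eqref{tildevne} and \eqref{qne} and replacing $\sqrt{2a(t)b(t)}$ by $\mu(s)$ and $a(t)/b(t)$ by $1$ gives, at leading order, \begin{equation*} u_{n,\varepsilon}\approx\tilde{v}_{n,\varepsilon}\approx\frac{1}{2}\bigl(1-\cosh\mu(s)\bigr)=-\sinh^{2}\Bigl(\frac{\mu(s)}{2}\Bigr) \quad \textrm{and} \quad q_{n,\varepsilon}\approx-\frac{1}{\sqrt{2}}\sinh\mu(s), \end{equation*} which are precisely the constant terms of the expansions in the theorem below.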
\begin{thm} \label{large n une gse} For $s$ bounded away from minus infinity, \begin{equation*} u_{n,\varepsilon}(\tau(s))\>=\> -\sinh^{2}(\frac{\mu(s)}{2})+\> \left(\frac{\nu(s)}{2\mu(s)}(\cosh(\mu(s))-1)-\frac{cq(s)}{2}\sinh(\mu(s))\right)n^{-\frac{1}{3}} + \end{equation*} \begin{equation*} \frac{1}{16 \mu(s)^2}\left(\left(8 c \nu(s) \int_{s}^{\iy} q[x] u[x] \, dx \left(-1+\cosh[\mu(s)] \left(1+c \mu(s)^2\right)-\right.\right.\right. \end{equation*} \begin{equation*} \mu(s) \sinh[\mu(s)])+\nu(s)^2 (4+8 c- \left.4 \cosh[\mu(s)] \left(1+2 c+c^2 \mu(s)^2\right)+(1+8 c) \mu(s) \sinh[\mu(s)]\right)+ \end{equation*} \begin{equation*} 4 \mu(s) \left(40 c (-1+\cosh[\mu(s)]) \left(-\int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right)+\right. \end{equation*} \begin{equation*} \mu(s) \left(-c^2 \cosh[\mu(s)] (\int_{s}^{\iy} q[x] u[x] \, dx)^2+\left(\left(-3+20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx-\right.\right. \end{equation*} \begin{equation*} 3 \int_{s}^{\iy} q[x] v[x] \, dx-2 \int_{s}^{\iy} v[x] p_1[x] \, dx-2 \int_{s}^{\iy} p_2[x] \, dx-3 \int_{s}^{\iy} q_1[x] \, dx- \end{equation*} \begin{equation*} 20 c^2 \left(2 \int_{s}^{\iy} q[x] u[x]^2 \, dx-3 \int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right)+2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\right. \end{equation*} \begin{equation*} \left.\left.\left.\left.\left.\int_{s}^{\iy} q_1[x] u_1[x] \, dx+\int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right) \sinh[\mu(s)]\right)\right)\right) n^{-\frac{2}{3}} \end{equation*} \begin{equation*} +O(n^{-1}) \end{equation*} and \begin{equation*} \tilde{v}_{n,\varepsilon}(\tau(s))\>=\> -\sinh^{2}(\frac{\mu(s)}{2})+\> \frac{cq(s)}{2}\sinh(\mu(s))n^{-\frac{1}{3}} + \end{equation*} \begin{equation*} \frac{1}{16 \mu(s)}\left(\left(-4 c^2 \cosh[\mu(s)] \mu(s) (\nu(s)-\int_{s}^{\iy} q[x] u[x] \, dx)^2+\right.\right. \end{equation*} \begin{equation*} \left(\nu(s)^2+4 \mu(s) \left(\left(-3+20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx-3 \int_{s}^{\iy} q[x] v[x] \, dx-\right.\right. \end{equation*} \begin{equation*} 2 \int_{s}^{\iy} v[x] p_1[x] \, dx-2 \int_{s}^{\iy} p_2[x] \, dx-3 \int_{s}^{\iy} q_1[x] \, dx-20 c^2 \left(2 \int_{s}^{\iy} q[x] u[x]^2 \, dx-\right. \end{equation*} \begin{equation*} \left.3 \int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right)+2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\int_{s}^{\iy} q_1[x] u_1[x] \, dx+\right. \end{equation*} \begin{equation*} \left.\left.\left.\left.\int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right)\right) \sinh[\mu(s)]\right) n^{-\frac{2}{3}}+ O(n^{-1}) \end{equation*} We also have \begin{equation*} q_{n,\varepsilon}(\tau(s))\>=\> -\frac{1}{\sqrt{2}}\sinh(\mu(s))+\> \left(\frac{\nu(s)}{2\sqrt{2}\mu(s)}\sinh(\mu(s))+\frac{cq(s)}{\sqrt{2}}\cosh(\mu(s))\right)n^{-\frac{1}{3}} + \end{equation*} \begin{equation*} \frac{1}{8 \sqrt{2} \mu(s)^2} \left(\left(\cosh[\mu(s)] \mu(s) \left((1+4 c) \nu(s)^2-4 c \nu(s) \int_{s}^{\iy} q[x] u[x] \, dx+\right.\right.\right. \end{equation*} \begin{equation*} 4 \mu(s) \left(\left(-3+20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx-3 \int_{s}^{\iy} q[x] v[x] \, dx-2 \int_{s}^{\iy} v[x] p_1[x] \, dx-2 \right.
\end{equation*} \begin{equation*} \int_{s}^{\iy} p_2[x] \, dx-3 \int_{s}^{\iy} q_1[x] \, dx-20 c^2 \left(2 \int_{s}^{\iy} q[x] u[x]^2 \, dx-3 \int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right)+ \end{equation*} \begin{equation*} \left.\left.2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\int_{s}^{\iy} q_1[x] u_1[x] \, dx+\int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right)\right)- \end{equation*} \begin{equation*} \left(\nu(s)^2 \left(1+4 c+4 c^2 \mu(s)^2\right)-4 c \nu(s) \right. \left(1+2 c \mu(s)^2\right) \int_{s}^{\iy} q[x] u[x] \, dx+ \end{equation*} \begin{equation*} \left.4 c \mu(s) \left(c \mu(s) (\int_{s}^{\iy} q[x] u[x] \, dx)^2+20 \left(\int_{s}^{\iy} q[x] v[x] \, dx-\int_{s}^{\iy} q_1[x] \, dx\right)\right)\right) \sinh[\mu(s)]) n^{-\frac{2}{3}} \end{equation*} \begin{equation*} +O(n^{-1}). \end{equation*} \end{thm} For the GSE calligraphic functions we have the following expansions. \begin{thm} \label{large n Qn4} For $s$ bounded away from minus infinity, \begin{equation*} \mathcal{Q}_{n,4}(\tau(s))\>=\>\frac{1}{2\sqrt{2}}\left(1-\cosh\mu(s)+2\sinh\mu(s)\right)\> +\> \end{equation*} \begin{equation*} \left(\frac{\nu(s)}{2\sqrt{2}\mu(s)}(e^{\mu(s)}-1)-\frac{cq(s)}{2\sqrt{2}}(2\cosh\mu(s)-\sinh\mu(s))\right)n^{-\frac{1}{3}} + \end{equation*} \begin{equation*} \frac{1}{32 \sqrt{2} \mu(s)^2}\left(e^{-\mu(s)} \left(\nu(s)^2 \left(-2 \left(-1+e^{\mu(s)}\right) \left(-3-8 c+e^{\mu(s)}\right)-\right.\right.\right. \end{equation*} \begin{equation*} \left.\left(3+16 c+e^{2 \mu(s)}\right) \mu(s)+4 c^2 \left(-3+e^{2 \mu(s)}\right) \mu(s)^2\right)-8 c \nu(s) \end{equation*} \begin{equation*} \left(2 \left(-1+e^{\mu(s)}\right)+\mu(s) \left(-2+c \left(-3+e^{2 \mu(s)}\right) \mu(s)\right)\right) \int_{s}^{\iy} q[x] u[x] \, dx+ \end{equation*} \begin{equation*} 4 \mu(s) \left(80 c \left(-1+e^{\mu(s)}\right) \left(\int_{s}^{\iy} q[x] v[x] \, dx-\int_{s}^{\iy} q_1[x] \, dx\right)+\mu(s) \right. \end{equation*} \begin{equation*} \left(-\left(-3+20 c^2\right) \left(3+e^{2 \mu(s)}\right) \int_{s}^{\iy} p[x] u[x] \, dx+c^2 \left(-3+e^{2 \mu(s)}\right) (\int_{s}^{\iy} q[x] u[x] \, dx)^2+\right. \end{equation*} \begin{equation*} \left(3+e^{2 \mu(s)}\right) \left(40 c^2 \int_{s}^{\iy} q[x] u[x]^2 \, dx+\left(3-60 c^2\right) \int_{s}^{\iy} q[x] v[x] \, dx+\right. \end{equation*} \begin{equation*} 2 \int_{s}^{\iy} v[x] p_1[x] \, dx+2 \int_{s}^{\iy} p_2[x] \, dx+\left(3+20 c^2\right) \int_{s}^{\iy} q_1[x] \, dx-2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\right. \end{equation*} \begin{equation*} \left.\left.\left.\left.\left.\int_{s}^{\iy} q_1[x] u_1[x] \, dx+\int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right)\right)\right)\right) n^{-\frac{2}{3}} +O(n^{-1}) \end{equation*} The next function is \begin{equation*} \mathcal{P}_{n,4}(\tau(s))\>=\> \frac{1}{2\sqrt{2}}(2\sinh\mu(s)-\cosh\mu(s)-1) \end{equation*} \begin{equation*} +\left(\frac{\nu(s)}{2\sqrt{2}\mu(s)}\sinh\mu(s)-\frac{cq(s)}{2\sqrt{2}}\left(2\cosh\mu(s)-\sinh\mu(s)\right)\right)n^{-\frac{1}{3}} + \end{equation*} \begin{equation*} \frac{1}{16 \sqrt{2} \mu(s)^2}((8 c \nu(s) \int_{s}^{\iy} q[x] u[x] \, dx (-\cosh[\mu(s)] \mu(s)+ \end{equation*} \begin{equation*} \left.c \mu(s)^2 (\cosh[\mu(s)]-2 \sinh[\mu(s)])+\sinh[\mu(s)]\right)+ \end{equation*} \begin{equation*} \nu(s)^2 \left(-2 \cosh[\mu(s)] \mu(s) \left(1-4 c+2 c^2 \mu(s)\right)+\right. \end{equation*} \begin{equation*} \left.\left(2-8 c+\mu(s)+8 c^2 \mu(s)^2\right) \sinh[\mu(s)]\right)- \end{equation*}
\begin{equation*} 4 \mu(s) \left(\mu(s) \left(c^2 (\int_{s}^{\iy} q[x] u[x] \, dx)^2 (\cosh[\mu(s)]-2 \sinh[\mu(s)])+\right.\right. \end{equation*} \begin{equation*} \left(-3+20 c^2\right) (\int_{s}^{\iy} p[x] u[x] \, dx) (2 \cosh[\mu(s)]-\sinh[\mu(s)])- \end{equation*} \begin{equation*} \left(40 c^2 \int_{s}^{\iy} q[x] u[x]^2 \, dx+\left(3-60 c^2\right) \int_{s}^{\iy} q[x] v[x] \, dx+2 \int_{s}^{\iy} v[x] p_1[x] \, dx+\right. \end{equation*} \begin{equation*} 2 \int_{s}^{\iy} p_2[x] \, dx+\left(3+20 c^2\right) \int_{s}^{\iy} q_1[x] \, dx-2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\int_{s}^{\iy} q_1[x] u_1[x] \, dx+\right. \end{equation*} \begin{equation*} \left.\left.\left.\int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right) (2 \cosh[\mu(s)]-\sinh[\mu(s)])\right)+ \end{equation*} \begin{equation*} \left.\left.40 c \left(\int_{s}^{\iy} q[x] v[x] \, dx-\int_{s}^{\iy} q_1[x] \, dx\right) \sinh[\mu(s)]\right)\right) n^{-\frac{2}{3}}+O(n^{-1}) \end{equation*} The last of our functions is \begin{equation*} \mathcal{R}_{n,4}(\tau(s))\>=\> \left( \cosh\mu(s)-\frac{1}{2}\sinh\mu(s) -1\right)+ \end{equation*} \begin{equation*} \left(\frac{\nu(s)}{4\mu(s)}\sinh\mu(s) +\frac{cq(s)}{2}\left(\cosh\mu(s)-2\sinh\mu(s)\right)\right)n^{-\frac{1}{3}} + \end{equation*} \begin{equation*} \frac{1}{16 \mu(s)^2}((-4 c \nu(s) (\int_{s}^{\iy} q[x] u[x] \, dx) (\cosh[\mu(s)] \mu(s) \end{equation*} \begin{equation*} \left.(1+4 c \mu(s))-\left(1+2 c \mu(s)^2\right) \sinh[\mu(s)]\right)+ \end{equation*} \begin{equation*} \nu(s)^2 \left(\cosh[\mu(s)] \mu(s) \left(1+4 c+8 c^2 \mu(s)\right)-\right. \end{equation*} \begin{equation*} \left.\left(1+4 c+2 \mu(s)+4 c^2 \mu(s)^2\right) \sinh[\mu(s)]\right)+4 \mu(s) \end{equation*} \begin{equation*} \left(\mu(s) \left(\left(-3+20 c^2\right) \int_{s}^{\iy} p[x] u[x] \, dx (\cosh[\mu(s)]-2 \sinh[\mu(s)])-\right.\right. \end{equation*} \begin{equation*} \left(40 c^2 \int_{s}^{\iy} q[x] u[x]^2 \, dx+\left(3-60 c^2\right) \int_{s}^{\iy} q[x] v[x] \, dx+2 \int_{s}^{\iy} v[x] p_1[x] \, dx+\right. \end{equation*} \begin{equation*} 2 \int_{s}^{\iy} p_2[x] \, dx+\left(3+20 c^2\right) \int_{s}^{\iy} q_1[x] \, dx-2 \left(\int_{s}^{\iy} u[x] q_2[x] \, dx+\int_{s}^{\iy} q_1[x] u_1[x] \, dx+\right. \end{equation*} \begin{equation*} \left.\left.\int_{s}^{\iy} q[x] u_2[x] \, dx-\int_{s}^{\iy} p[x] v_1[x] \, dx\right)\right) (\cosh[\mu(s)]-2 \sinh[\mu(s)])+ \end{equation*} \begin{equation*} \left.c^2 (\int_{s}^{\iy} q[x] u[x] \, dx)^2 (2 \cosh[\mu(s)]-\sinh[\mu(s)])\right)+ \end{equation*} \begin{equation*} \left.\left.20 c \left(-\int_{s}^{\iy} q[x] v[x] \, dx+\int_{s}^{\iy} q_1[x] \, dx\right) \sinh[\mu(s)]\right)\right) n^{-\frac{2}{3}} +O(n^{-1}) \end{equation*} \end{thm} \clearpage \vspace{3ex} \noindent\textbf{\large Acknowledgements: } The author would like to thank Alice and Aimee Choup for their continued affection and the Department of Mathematical Sciences at the University of Alabama in Huntsville.
\section{Introduction} Let $K$ be a field that is algebraically closed and complete with respect to a non-archimedean non-trivial absolute value. Given a closed subvariety $X$ of a toric variety $Y$ over $K$, one can associate a so-called tropical variety $\Trop(X)$ which is a polyhedral complex. Note, however, that $\Trop(X)$ is not an invariant of $X$, but depends on the embedding into $Y$. In good situations $\Trop(X)$ can retain a lot of information about $X$. Let us mention here work by Katz, Markwig and Markwig on the $j$-invariant of elliptic curves \cite{KMM1, KMM2} and work by Itenberg, Katzarkov, Mikhalkin and Zharkov on recovering Hodge numbers in degenerations of complex projective varieties \cite{IKMZ}. In the latter work a smoothness condition for tropical varieties in arbitrary codimension appears: a tropical variety is called \emph{smooth} if it is locally isomorphic to the Bergman fan of a matroid. (See Definition \ref{defn smooth} for an equivalent definition for curves.) For tropical hypersurfaces, this is equivalent to the associated subdivision of the Newton polytope being a primitive triangulation, which is the definition of smoothness that is generally used for tropical hypersurfaces \cite[Remark p. 24]{IKMZ}. The definition in \cite{IKMZ} is motivated by complex analytic geometry. A complex variety is smooth if it is locally isomorphic to an open subset of $\C^n$ in the analytic topology. Bergman fans of matroids are the local models for linear spaces in tropical geometry; thus it makes sense to call a tropical variety smooth if it is locally isomorphic to the Bergman fan of a matroid. This smoothness condition has been shown to imply many tropical analogues of classical theorems from complex and algebraic geometry, for example an intersection theory, Poincar\'e duality and a Lefschetz $(1,1)$-theorem \cite{Shaw:IntMat, JSS, JRS}. In this paper, we treat the question for which smooth projective curves there exist closed embeddings $\varphi$ into toric varieties such that $\Trop_{\varphi}(X) := \Trop(\varphi(X))$ is smooth. The answer turns out to be Mumford curves (see Definition \ref{defn Mumford}). Indeed, we show that for these curves we can ``repair'' any given embedding by passing to a refinement (see Definition \ref{defn refinement}). \begin{theointro} [Theorem \ref{main theorem}, Theorem \ref{main theorem II}] \label{thmA} Let $X$ be a smooth projective curve of positive genus. Then the following are equivalent: \begin{enumerate} \item $X$ is a Mumford curve. \item There exists a closed embedding $\varphi \colon X \to Y$ into a toric variety $Y$ that meets the dense torus such that $\Trop(\varphi(X))$ is a smooth tropical curve. \item Given a closed embedding $\varphi \colon X \to Y$ of $X$ into a toric variety $Y$ that meets the dense torus, there exists a refinement $\varphi' \colon X \to Y'$ of $\varphi$ such that $\Trop(\varphi'(X))$ is a smooth tropical curve. \end{enumerate} \end{theointro} Denote by $\Xan$ the Berkovich analytification of $X$ \cite{BerkovichSpectral}. We give equivalent characterizations of Mumford curves in terms of $\Xan$ in Remark \ref{bem Mumford}. Theorem \ref{thmA}, specifically the equivalence of i) and ii), may be viewed as an equivalent characterization that is purely tropical. Payne showed in \cite[Theorem 4.2]{Payne} that we have a homeomorphism \begin{align} \Xan = \varprojlim_{\varphi \colon X \to Y} \Trop_{\varphi}(X).
\end{align} Theorem \ref{thmA} shows that if $X$ is a Mumford curve we can let the limit on the right hand side run only over closed embeddings $\varphi$ such that $\Trop_{\varphi}(X)$ is a smooth tropical curve, meaning that the smoothness on the left hand side is reflected on the right hand side. Another often used property of tropicalizations is faithfulness. For curves this means that given a finite skeleton $\Gamma$ of $\Xan$, one requires that $\varphi_{\trop} := \trop \circ \varphi^{\an}$ is a homeomorphism from $\Gamma$ onto its image, preserving the piecewise linear structure. Existence of faithful tropicalizations was proved by Baker, Payne and Rabinoff for curves and generalized to higher dimension by Gubler, Rabinoff and Werner \cite{BPR, GRW}. For further work on faithful tropicalizations see for example \cite{Manjunath, KY}. Baker, Payne and Rabinoff also introduced so-called completed extended skeleta for curves. For a smooth projective curve $X$, these are metric subgraphs $\Sigma$ of $\Xan$, potentially with edges of infinite length, that come with a canonical retraction $\tau \colon \Xan \to \Sigma$. Given a closed embedding $\varphi \colon X \to Y$ for $Y$ a toric variety with dense torus $T$, there exists an \emph{associated completed skeleton} $\Sigma(\varphi)$, which has the property that $\varphi_{\trop}$ factors through the retraction $\tau \colon \Xan \to \Sigma(\varphi)$ (see Definition \ref{defn minimal skeleton}). Set $X^\circ := \varphi^{-1}(T)$. We call $\varphi_{\trop}$ \emph{fully faithful} if $\varphi_{\trop}$ maps $\Sigma(\varphi)$ homeomorphically onto its image and is an isometry when restricted to $\Sigma(\varphi) \cap X^{\circ, \an}$. Note that this is much stronger than a faithful tropicalization, since by definition the image of $\Sigma(\varphi)$ is $\Trop_{\varphi}(X)$. We prove the following fully faithful tropicalization result. \begin{theointro} [Theorem \ref{prop edge}] \label{thmB} Let $X$ be a Mumford curve and $\varphi \colon X \to Y$ a closed embedding into a toric variety $Y$ that meets the dense torus. Then there exists a refinement $\varphi'$ of $\varphi$ that is fully faithful. \end{theointro} As a direct consequence of Theorem \ref{thmB} we obtain that there is a section to $\varphi'_{\trop}$ by composing the inverse of $\varphi'_{\trop}|_{\Sigma(\varphi')}$ with the inclusion of $\Sigma(\varphi')$ into $\Xan$ (see Corollary \ref{cor section}). Such sections, though only defined on subsets, were also constructed in \cite[Theorem 5.24]{BPR} and \cite[Theorem 8.15]{GRW2}. We prove Theorem \ref{thmB} as a first step to prove Theorem \ref{thmA}, more precisely that i) implies iii) therein. Our techniques to prove these results are based on the following lifting theorem for rational functions on metric graphs, which is a variant of a theorem by Baker and Rabinoff \cite[Theorem 1.1]{BRab}. The relevant notions are recalled in Section \ref{functions and divisors}. \begin{theointro} [Theorem \ref{lifting theorem}] \label{thmC} Let $X$ be a Mumford curve and $\Gamma$ be a finite skeleton with retraction $\tau$. Let $D \in \Div(X)$ be a divisor of degree $g$ and let $B = p_1 + \dots + p_g \in \Div(\Gamma)$ be a break divisor such that $\tau_* D - B$ is a principal divisor on $\Gamma$. Assume that $B$ is supported on two-valent points of $\Gamma$. Then there exist $x_i \in X(K)$ such that $\tau_* x_i = p_i$ and such that $D - \sum_{i=1}^g x_i$ is a principal divisor on $X$.
\end{theointro} Theorem \ref{thmC} is of independent interest since, given a skeleton of $X$, it enables one to construct closed embeddings with nice tropicalizations. We treat an example of this in Example \ref{tate curves} for a genus $1$ Mumford curve (also called a Tate curve). We give an idea of the proof of Theorem \ref{thmB}, which is carried out in Section \ref{section ff trop}. Given an edge $e$ of $\Sigma(\varphi)$, using Theorem \ref{thmC}, we construct a rational function $f_e \in K(X)^*$ in such a way that $\log \vert f_e \vert$ has slope $1$ along $e$. Considering the embedding $\varphi' := (\varphi, f_e) \colon X \to Y \times \Pbb^1$, this ensures that $\varphi'_{\trop}$ maps $e$ homeomorphically onto its image and that the corresponding stretching factor equals $1$ (see Definition \ref{weight} for the definition of stretching factor). Using a good choice of $D \in \Div(X)$ and $B \in \Div(\Gamma)$, Theorem \ref{thmC} moreover allows us to construct $f_e$ in such a way that the same holds for all edges of $\Sigma(\varphi')$ that are not contained in $\Sigma(\varphi)$. Doing so for every edge of $\Sigma(\varphi)$, we obtain Theorem \ref{thmB}. In Section \ref{section smooth trop}, we proceed similarly for smoothness and thus prove that i) implies iii) in Theorem \ref{thmA}. In Section \ref{section smooth mumford} we prove that, for a smooth projective curve $X$, the existence of a closed embedding with a smooth tropicalization already implies that $X$ is a Mumford curve. The key result we use is a theorem by Katz and Payne, which Katz attributes to Mikhalkin and Ziegler, that states that over a trivially valued field, a variety whose tropicalization is a tropical linear space is actually a linear space (see Theorem \ref{duck theorem}). We also show that if $\Trop_\varphi(X)$ is smooth then $\varphi_{\trop}$ is necessarily fully faithful (see Theorem \ref{smooth implies ff}). \subsection*{Tropicalization of closed subvarieties of tori} We assume that the reader is familiar with tropicalizations of closed subvarieties of algebraic tori \cite{MaclaganSturmfels, Gubler2}. Here we consider tropicalizations of closed subvarieties of toric varieties, which may be seen as a compactification of the latter. We quickly sketch the relation: Given a closed embedding $\varphi \colon X \to Y$ of a smooth projective curve $X$ into a toric variety $Y$ that meets the dense torus $T$, set $X^\circ := \varphi^{-1}(T)$. Then $\Trop_\varphi(X^\circ)$ is a dense open subset of $\Trop_\varphi(X)$ and we obtain the latter from the former by putting points at the end of the unbounded edges. We give a consequence of Theorem \ref{main theorem} for closed embeddings into tori in Corollary \ref{main theorem very affine}. \begin{ack} The author was inspired to reconsider the questions in this paper by a question asked by Hannah Markwig during an open problem session at the program ``Tropical geometry, amoebas and polytopes'' at the Institute Mittag-Leffler. He would like to thank Hannah Markwig for the encouragement and the Institute Mittag-Leffler for the wonderful working conditions. He would also like to thank Matt Baker, Walter Gubler, Yoav Len, Hannah Markwig, Sam Payne, Joe Rabinoff, Veronika Wanner and Annette Werner for helpful discussions and comments, and the anonymous referee for their precise report and detailed comments.
\end{ack} \subsection*{Conventions} Throughout, $K$ will be an algebraically closed field that is complete with respect to a non-archimedean non-trivial absolute value $\abs_K$. We denote the value group by $\Lambda := \log \vert K^\times \vert_K$ and the residue field by $\Ktilde$. A variety over $K$ is a separated reduced irreducible scheme of finite type and a curve is a one-dimensional variety. $X$ will be a smooth projective curve over $K$. We will denote finite skeleta of $X$ by $\Gamma$ and completed extended skeleta in the sense of \cite{BPR2} by $\Sigma$. We will generally denote toric varieties by $Y$ and their dense tori by $T$. \section{Preliminaries} \subsection{Tropical toric varieties and tropical curves} Let $N$ be a free abelian group of rank $n$, $M := \Hom_\Z(N, \Z)$ its dual, $N_\R := N \otimes \R$ and $\Delta$ a rational pointed fan in $N_\R$. We write $\T := \R \cup \{-\infty\}$. For $\sigma \in \Delta$ we define the monoid $S_\sigma := \{ \varphi \in M \mid \varphi(v) \geq 0 \text{ for all } v \in \sigma \}$ and write $N(\sigma) := N_\R / \langle \sigma \rangle_\R$, where $\langle \sigma \rangle_{\R}$ denotes the real vector space spanned by $\sigma$. We write \begin{align*} N_\Delta = \coprod \limits_{\sigma \in \Delta} N(\sigma). \end{align*} We endow $N_\Delta$ with a topology in the following way: For $\sigma \in \Delta$ write $N_\sigma = \coprod \limits_{\tau \prec \sigma} N(\tau)$. This is naturally identified with $\Hom_{\Monoids}(S_\sigma, \T)$. We give $N_\sigma$ the subspace topology of $\T^{S_\sigma}$. For $\tau \prec \sigma$, the space $\Hom(S_\tau, \T)$ is naturally identified with the open subspace of $\Hom_{\Monoids}(S_\sigma, \T)$ of maps that map $\tau^{\perp} \cap M$ to $\R$. We define the topology of $N_\Delta$ to be the one obtained by gluing along these identifications. \begin{defn} We call the space $N_\Delta$ a \emph{tropical toric variety}. \end{defn} The space $N_\Delta$ is sometimes called the canonical compactification of $N_\R$ with respect to $\Delta$. Note that $N_\Delta$ contains $N_\R$ as a dense open subset. \begin{Ex} \label{example tropical toric} Let $N = \Z^n$ with basis $x_1,\dots,x_n$ and $\Delta$ be the complete fan whose rays are spanned by $-x_1,\dots,-x_n$ and $x_0 := \sum x_i$. For any $d$ of these rays there is a cone $\sigma$ of dimension $d$ that contains exactly these rays. Then $N(\sigma)$ is an $(n-d)$-dimensional vector space. The topology is such that $N_\Delta$ is homeomorphic to an $n$-simplex, where $N(\sigma)$ is identified with the relative interior of an $(n-d)$-dimensional simplex in the boundary. For example, $N_\R$ corresponds to the vertex at the origin in $\Delta$ and to the interior of $N_\Delta$ when we view $N_\Delta$ as a simplex. However, we will heavily use the structure of $N_\R$ as a vector space, so we generally view $N_\Delta$ as a compactification of $N_\R$ by strata that are infinitely far away. \end{Ex} \begin{defn} Let $\CC$ be a one-dimensional $\Lambda$-rational polyhedral complex in $N_\R$. For an edge $e$ (i.e.~a one-dimensional polyhedron) of $\CC$ we denote by $\Linear(e) = \{\lambda (u_1 - u_2) \mid u_1, u_2 \in e, \lambda \in \R \}$ the \emph{linear space of $e$}. Since $\CC$ is $\Lambda$-rational, $\Linear(e)$ contains a canonical lattice which we denote by $\Z(e)$. For a vertex $v$ of $e$ we denote by $w_{v,e}$ the unique generator of $\Z(e)$ that points inside of $e$ from $v$.
We call $\CC$ \emph{weighted} if every edge is equipped with a positive integral weight $m(e)$ and \emph{balanced} if for every vertex $v$ of $\CC$ we have \begin{align*} \sum_{e \colon v \prec e} m(e) w_{v,e} = 0. \end{align*} The \emph{local cone} at $v$ is the one-dimensional fan whose rays are spanned by the $w_{v,e}$ and given weight $m(e)$ for $v \prec e$. \end{defn} \begin{defn} \label{defn tropical curve} A \emph{tropical curve in $N_\R$} is a one-dimensional $\Lambda$-rational polyhedral complex equipped with weights on its edges that satisfies the balancing condition, up to the equivalence relation generated by subdivision of edges preserving the weights. A \emph{tropical curve $X$ in a tropical toric variety $N_\Delta$} is the closure in $N_\Delta$ of a tropical curve $X^\circ$ in $N_\R$. \end{defn} $X \setminus X^\circ$ is a finite set, whose points we consider as vertices of $X$ and call the \emph{infinite vertices}. The edges of $X$ are the closures of the edges of $X^\circ$. A \emph{$\Lambda$-metric graph} (which we will often just call a \emph{metric graph}) is roughly speaking a finite graph in which every edge $e$ has a positive length $l_e \in \Lambda \cup \{\infty\}$. We allow loop edges, meaning edges whose endpoints agree, and half-open edges, meaning edges which have only one endpoint. If $l_e \in \Lambda_{>0}$, we view $e$ as an interval of length $l_e$. Half-open edges are identified with $\R_{\geq 0}$. Leaf edges are the only edges that are allowed to have infinite length and are identified with $[0, \infty]$ with the topology of a closed interval. For a more precise account of metric graphs, we refer the reader to \cite[Section 2.1]{ABBR}. By an \emph{edge} of a metric graph $\Gamma$ we mean an edge in some graph model $G$ of $\Gamma$. For an edge $e$ of $\Gamma$ we denote by $\mathring e$ the relative interior of $e$, meaning $e$ with its endpoints removed. For two points $x,y \in \mathring e$ we denote by $d_e(x,y)$ their distance in $\mathring e$. (Note that this might not be the distance in $\Gamma$, as there might be a shorter path that leaves $\mathring e$.) We call a metric graph \emph{finite} if all its edges are of finite length. \begin{Ex} A tropical curve in $N_\R$ has a canonical structure as a metric graph where the length of an edge is given by the \emph{lattice length}, meaning that the primitive vector $w_{v,e}$ has length $1$. \end{Ex} A tropical curve $X$ in a tropical toric variety $N_\Delta$ is not necessarily a metric graph since two infinite rays might meet at infinity, creating a vertex at infinity which does not have valence $1$. However, $X$ is a metric graph if every point in $X \setminus X^\circ$ has exactly one adjacent edge. \begin{defn} \label{defn smooth} An edge in a tropical curve is \emph{smooth} if its weight is $1$. A finite vertex $v$ is \emph{smooth} if $\langle w_{v,e} \mid v \prec e \rangle_\Z$ is a saturated lattice of rank $\val(v)-1$ in $N$, where $\val(v)$ is the number of edges adjacent to $v$. An infinite vertex is smooth if it has one adjacent edge. A vertex that is not smooth is called \emph{singular}. A tropical curve is \emph{smooth} if all its edges and vertices are smooth. \end{defn} \begin{bem} Following \cite{IKMZ} a tropical variety is \emph{smooth} if it is locally isomorphic to the Bergman fan of a matroid.
A one-dimensional weighted fan in $\R^n$ is the Bergman fan of a matroid if and only if it is isomorphic to the fan whose rays are spanned by $x_1,\dots,x_n$ and $- \sum_{i=1}^n x_i$ and all weights are $1$. Thus Definition \ref{defn smooth} agrees with the one in \cite{IKMZ} for the case of curves. \end{bem} \begin{Ex} Consider the tropical curves in Figure \ref{Figure smooth}. Each of them depicts a vertex in a tropical curve in $\R^2$ with lattice $N = \Z^2$. In the leftmost picture, the outgoing directions are $(-1,0), (0,-1)$ and $(1,1)$, which span $\Z^2$, thus $v_1$ is a smooth vertex. In the picture in the middle, the span of the primitive vectors is again $\Z^2$, but there are $4$ edges adjacent to $v_2$, thus $v_2$ is not smooth. In the picture on the right, the outgoing directions are $(2,-1), (-1,2)$ and $(-1,-1)$. The span of these vectors is $\{ (x,y) \in \Z^2 \mid x-y \text{ is divisible by } 3\}$. This has rank $2$, but is not saturated in $\Z^2$, thus $v_3$ is not a smooth vertex. \end{Ex} \begin{figure} \begin{tikzpicture} \draw (0,0) -- (-1,0); \draw (0,0) -- (0,-1); \draw (0,0) -- (1,1); \node [above] at (0,0) {$v_1$}; \fill (0,0) circle (2pt); \draw (4,-1) -- (4,1); \draw (3,0) -- (5,0); \node[below] at (4.2,0) {$v_2$}; \fill (4,0) circle (2pt); \draw (8,0) -- ++(2,-1); \draw (8,0) -- ++(-1, 2); \draw (8,0) -- ++(-1,-1); \node[right] at (8,0) {$v_3$}; \fill (8,0) circle (2pt); \end{tikzpicture} \caption{The types of vertices in tropical curves in $\R^2$. The vertex on the left is smooth, the other two vertices are not smooth.} \label{Figure smooth} \end{figure} \subsection{Berkovich curves and their extended skeleta} Let $X$ be a variety over $K$. The associated Berkovich space \cite{BerkovichSpectral} is \begin{align*} \Xan := \{ x = (p_x, \abs_x) \mid p_x \in X, \abs_x \text{ is an absolute value on } k(p_x) \text{ extending } \abs_K\} \end{align*} with the topology such that the canonical forgetful map $\Xan \to X$ is continuous and for all open subsets $U$ of $X$ and $f \in \Ocal(U)^\times$ the map $U^{\an} \to \R, (p_x, \abs_x) \mapsto \vert f(p_x) \vert_x$ is continuous. We will often write $\vert f(x) \vert := \vert f(p_x) \vert_x$. If $X = \Spec(A)$ is an affine variety, then \begin{align*} \Xan = \{\abs \text{ multiplicative seminorm on } A \text{ extending } \abs_K\} \end{align*} with the topology such that for all $f \in A$ the map $\Xan \to \R; \abs \mapsto \vert f \vert$ is continuous. For a morphism $\varphi \colon X \to Y$ of $K$-varieties we obtain a morphism $\varphi^{\an} \colon \Xan \to Y^{\an}$. Now let $X$ be a curve over $K$. For $x \in \Xan$ we denote by $\Hscr(x)$ the completion of $k(p_x)$ with respect to $\abs_x$ and by $\Htilde$ its residue field. Following Berkovich and Thuillier \cite{BerkovichSpectral, Thuillier} we say $x$ is of type I if $p_x \in X(K)$ and of type II if $p_x$ is the generic point of $X$ and $\trdeg [ \Htilde : \Ktilde ] = 1$. If $x$ is of type I, then $\abs_x = \abs_K$, thus the forgetful map $\Xan \to X$ induces a bijection from the set of type I points of $\Xan$ onto $X(K)$. We will thus identify $X(K)$ with the subset of $\Xan$ that consists of type I points. If $x$ is of type II, then we denote by $C_x$ the smooth projective $\Ktilde$-curve with function field $\Htilde$ and by $g(x)$ its genus, which we call the \emph{genus of $x$}. We now recall the notion of completed skeleta of $\Xan$, which is due to Baker, Payne and Rabinoff \cite{BPR2}. \begin{defn} We consider $\A^1 = \Spec K [T]$.
For $- \infty \leq s < r \in \R$ denote \begin{align*} B(r) = \{ x \in \A^{1, \an} \mid \log \vert T \vert_x < r\} \text{ and } A(r,s) = \{ x \in \A^{1, \an} \mid s < \log \vert T \vert_x < r \}. \end{align*} We call $B(r)$ an \emph{open disc} of logarithmic radius $r$ and $A(r,s)$ a \emph{generalized open annulus} of logarithmic radii $s$ and $r$. We call $A(r,s)$ an \emph{annulus} with logarithmic radii $s$ and $r$ if $s \in \R$ and a \emph{punctured disc} of radius $r$ if $s = -\infty$. We call $r - s \in \R \cup \{\infty\}$ the \emph{length of $A(r,s)$}. We denote by $\rho_{B(t)}$ the element of $B(t)$ defined by $\left \vert \sum a_i T^i \right \vert_{\rho_{B(t)}} = \max_i \vert a_i \vert t^i$ and call the set \begin{align*} \Sigma(A(r,s)) = \left\{ \rho_{B(t)} \mid s < \log t < r \right\} \end{align*} the \emph{skeleton} of $A(r,s)$. There is a canonical retraction $\tau \colon A(r,s) \to \Sigma(A(r,s))$ which is a strong deformation retraction. \end{defn} \begin{defn} Let $X$ be a smooth projective curve over $K$. A \emph{completed semistable vertex set} $V$ of $X$ is a finite subset of $\Xan$ consisting of type I and II points such that $\Xan \setminus V$ is isomorphic to a disjoint union of finitely many generalized open annuli and infinitely many open discs. \end{defn} For a completed semistable vertex set $V$ of $\Xan$ there is a canonical associated subspace $\Sigma(V)$ of $\Xan$, called the \emph{completed skeleton}, which is a metric graph. There is a canonical retraction $\tau_V \colon \Xan \to \Sigma(V)$ such that $\Sigma(V)$ is a strong deformation retract of $\Xan$. As the name suggests, the vertex set of $\Sigma(V)$ is $V$. The edges are the skeleta of the generalized open annuli that are connected components of $\Xan \setminus V$. The length of such an edge is the length of the corresponding annulus. If $X$ is projective and $V$ is a completed semistable vertex set that only consists of type II points, we call $V$ a \emph{semistable vertex set} and $\Sigma(V)$ a \emph{finite skeleton} of $X$. A finite skeleton is a finite metric graph and we will often denote it by $\Gamma$. Let $V$ be a completed semistable vertex set of $X$. Then the set of type II points in $V$ forms a semistable vertex set for $X$. We call the associated finite skeleton the \emph{finite part} of $\Sigma(V)$ and denote it by $\Sigma(V)_{\fin}$. \begin{defn} \label{defn Mumford} A smooth projective curve of genus $g > 0$ is called a \emph{Mumford curve} if for some semistable vertex set $V$ the finite skeleton $\Sigma(V)$ has first Betti number equal to $g$. \end{defn} \begin{bem} \label{bem Mumford} Note that since $\Sigma(V)$ is a deformation retract of $\Xan$, the first Betti number of $\Sigma(V)$ is independent of $V$. Thus we may replace ``some'' by ``every'' in Definition \ref{defn Mumford}. Furthermore $X$ is a Mumford curve if and only if $g(x) = 0$ for all type II points $x$ in $\Xan$. Another equivalent definition of Mumford curve is that any point $x \in \Xan$ has a neighborhood that is isomorphic to an open subset of $\Pbb^{1,\an}$ \cite[Proposition 2.26 \& Theorem 2.28]{JellWanner}. \end{bem} \subsection{Tropicalization of curves} Let $Y$ be a toric variety with dense torus $T$. Let $N$ be the cocharacter lattice of $T$, $N_\R := N \otimes \R$ and $\Delta$ the fan in $N_\R$ associated to $Y$. \begin{defn} The \emph{tropicalization of $Y$} is \begin{align*} \Trop(Y) := N_\Delta.
\end{align*} \end{defn} There is a canonical tropicalization map $\trop \colon Y^{\an} \to \Trop(Y)$, which is a continuous proper map of topological spaces \cite[Section 3]{Payne}. \begin{Ex} If $Y = \G_m^n$ is a torus of dimension $n$ with fixed coordinates, then $\Delta$ is only the origin in $\R^n$ and we have $\Trop(Y) = \R^n$. The restriction of the map $\trop \colon \G_m^{n, \an} \to \R^n$ to $\G_m^n(K) = (K^*)^n$ is the usual tropicalization map $(K^*)^n \to \R^n; x \mapsto (\log \vert x_1 \vert_K,\dots,\log \vert x_n \vert_K)$. If $Y = \Pbb^1$, then Example \ref{example tropical toric} shows that $\Trop(\Pbb^1)$ is homeomorphic to a closed interval. Since it contains a one-dimensional vector space as a dense open subset, a good point of view is $\Trop(\Pbb^1) = [-\infty, \infty]$ with the topology of a closed interval. The map $\trop \colon \Pbb^{1, \an} \to \Trop(\Pbb^1)$ is then given by $(p, \abs_x) \mapsto \log \vert z(p) \vert_x$, where $z$ is the coordinate function on $\Pbb^1$. \end{Ex} \begin{bem} For two toric varieties $Y_1$ and $Y_2$, we have $\Trop(Y_1 \times Y_2) = \Trop(Y_1) \times \Trop(Y_2)$. This holds because the fan of $Y_1 \times Y_2$ is the product of the fans of $Y_1$ and $Y_2$. \end{bem} Let $X$ be a curve over $K$. For a closed embedding $\varphi \colon X \to Y$ we write $\varphi_{\trop} := \trop \circ \varphi^{\an}$ and call $\Trop_\varphi(X) := \varphi_{\trop}(\Xan)$ the associated tropicalization of $X$. One can define canonical weights on $\Trop_{\varphi}(X)$ that make it into a tropical curve in $\Trop(Y)$ in the sense of Definition \ref{defn tropical curve}. We will define these weights in Definition \ref{weight}. \begin{defn} \label{defn refinement} If $Y'$ is another toric variety, $\varphi' \colon X \to Y'$ is another closed embedding and $\pi \colon Y' \to Y$ is a morphism of toric varieties, there exists a canonical map $\Trop(Y') \to \Trop(Y)$, which is linear on the dense subset $N_\R$ and maps $\Trop_{\varphi'}(X)$ onto $\Trop_{\varphi}(X)$. We call $\varphi'$ a \emph{refinement} of $\varphi$. \end{defn} \subsection{Factorization skeleta} \label{extended skeleton of an embedding} Let $\varphi \colon X \to Y$ be a closed embedding of a smooth projective curve $X$ into a toric variety $Y$ that meets the dense torus $T$. Denote by $X^\circ := \varphi^{-1}(T)$ the preimage of the dense torus. \begin{defn} \label{defn minimal skeleton} Let $\Sigma(\varphi)$ be the set of points in $\Xan$ that do not have an open neighborhood that is isomorphic to an open disc and contained in $(X^\circ)^{\an}$. We call $\Sigma(\varphi)$ the \emph{completed skeleton associated to $\varphi$}. \end{defn} The set $\Sigma(\varphi)$ is indeed a completed skeleton for $X$ \cite[Theorem 4.22]{BPR2}. We denote by $\tau_\varphi \colon \Xan \to \Sigma(\varphi)$ the retraction. Baker, Payne and Rabinoff show that we have a commutative diagram \begin{align} \label{dia factorization} \begin{xy} \xymatrix{ \Xan \ar[rr]^{\varphi_{\trop}} \ar[rd]_{\tau_{\varphi}} && \Trop_\varphi(X) \\ & \Sigma(\varphi) \ar[ru]_{\varphi_{\trop}|_{\Sigma(\varphi)}} } \end{xy} \end{align} and that $\varphi_{\trop}|_{\Sigma(\varphi)}$ is linear on each edge of $\Sigma(\varphi)$ \cite[Lemma 5.3 \& Proposition 5.4 (1)]{BPR}. We can subdivide $\Trop_{\varphi}(X)$ and $\Sigma(\varphi)$ in such a way that each edge of $\Sigma(\varphi)$ is either contracted to a point or mapped homeomorphically to an edge of $\Trop_{\varphi}(X)$ \cite[Proposition 5.4 (2)]{BPR}. Let $e$ be an edge in $\Trop_\varphi(X)$.
Let $e_1,\dots,e_k$ be the edges of $\Sigma(\varphi)$ mapping homeomorphically to $e$. For each $i$, we fix $x_i \neq y_i \in \mathring e_i$. \begin{defn} \label{weight} We call \begin{align*} m(e_i) = \frac {d_e(\varphi_{\trop}(x_i),\varphi_{\trop}(y_i))} {d_{e_i}(x_i,y_i)} \text{ and } m(e) = \sum_{i =1}^k m(e_i) \end{align*} the \emph{stretching factor} of $\varphi_{\trop}|_{e_i}$ and \emph{the weight of $e$}, respectively. \end{defn} The definition of weight agrees with the usual definition (see for example \cite[Definition 3.14]{Gubler2}) by \cite[Corollary 5.9]{BPR}. \begin{prop} \label{prop defn fully faithful} Let $\varphi \colon X \to Y$ be a closed embedding of $X$ into a toric variety that meets the dense torus $T$ and $\Sigma(\varphi)$ the associated skeleton. Denote by $X^{\circ} := \varphi^{-1}(T)$. Then the following are equivalent: \begin{enumerate} \item $\varphi_{\trop}$ maps $\Sigma(\varphi)$ homeomorphically onto its image and is an isometry when restricted to $\Sigma(\varphi) \cap X^{\circ, \an}$. \item The map $\varphi_{\trop}|_{\Sigma(\varphi)} \colon \Sigma(\varphi) \to \Trop_{\varphi}(X)$ is injective and all weights on $\Trop_{\varphi}(X)$ are $1$. \end{enumerate} \end{prop} \begin{proof} Assume that ii) holds. The map $\varphi_{\trop}|_{\Sigma(\varphi)}$ is surjective, thus bijective. Since it is a bijective map between compact Hausdorff spaces, it is a homeomorphism. Thus it remains to show that if $\varphi_{\trop}|_{\Sigma(\varphi)}$ is a homeomorphism, it is an isometry when restricted to $\Sigma(\varphi) \cap X^{\circ, \an}$ if and only if all weights on $\Trop_{\varphi}(X)$ are equal to one. This follows from the definition of the stretching factor in Definition \ref{weight}. \end{proof} \begin{defn} We say that $\varphi_{\trop}$ is \emph{fully faithful} if the equivalent conditions of Proposition \ref{prop defn fully faithful} hold. \end{defn} The notion of fully faithful tropicalization is stronger than the notion of faithful tropicalization introduced by Baker, Payne and Rabinoff \cite{BPR}. It is also slightly stronger than the notion of totally faithful tropicalization introduced by Cheung, Fantini, Park and Ulirsch \cite{CFPU} (see also \cite{CDMY}). The difference is that a totally faithful tropicalization only needs to be an isometry when restricted to $\Sigma(\varphi) \cap X^{\circ, \an}$. Note, however, that the authors of \cite{CFPU} mainly work in the situation of toric compactifications and in this case the notions of totally faithful and fully faithful agree. \subsection{Rational functions and divisors on metric graphs} \label{functions and divisors} Let $\Gamma$ be a finite $\Lambda$-metric graph. A point $x \in \Gamma$ is called \emph{$\Lambda$-rational} if its distance from some, or equivalently every, vertex is in $\Lambda$. A \emph{rational function} on $\Gamma$ is a piecewise linear function $F \colon \Gamma \to \R$ with integer slopes all of whose points of non-linearity are $\Lambda$-rational. A \emph{divisor} on $\Gamma$ is a finite formal linear combination of $\Lambda$-rational points. Its \emph{degree} is the sum of the coefficients. We denote by $\Div(\Gamma)$ the group of divisors. For a rational function $F$ its divisor is \begin{align*} \divisor(F) := \sum \lambda_i x_i \text{ where } \lambda_i := \sum_{e \colon x_i \prec e} d_eF(x_i) \end{align*} and $d_eF(x_i)$ is the outgoing slope of $F$ along the edge $e$ at $x_i$. We call $\divisor(F)$ a \emph{principal divisor} on $\Gamma$.
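To illustrate the definition (a toy example; here $\ell \in \Lambda_{>0}$ is arbitrary): let $\Gamma$ be a single edge of length $2\ell$ with endpoints $x$ and $y$ and midpoint $m$, and let $F$ be the rational function with slope $1$ on $[x,m]$ and slope $0$ on $[m,y]$. The outgoing slopes of $F$ sum to $1$ at $x$, to $-1+0=-1$ at $m$ and to $0$ at $y$, so $\divisor(F) = x - m$, a divisor of degree $0$; as for rational functions on curves, every principal divisor on a finite metric graph has degree $0$.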
We denote by $\Prin(\Gamma)$ the group of principal divisors on $\Gamma$. Let $X$ be a smooth projective curve and $\Gamma$ a finite skeleton with retraction $\tau$. Let $f$ be in $K(X)^*$. Then $F := \log \vert f \vert \big \vert_{\Gamma}$ is a rational function on $\Gamma$ and $\tau_* (\divisor(f)) = \divisor(F)$ \cite[Theorem 5.15]{BPR2} (see also \cite[Proposition 3.3.15]{Thuillier} for the same result phrased in a slightly different language). \begin{defn} We say that edges $e_1,\dots,e_g$ \emph{form the complement of a spanning tree of $\Gamma$} if there exists a graph model $G$ for $\Gamma$ with set of edges $E$ such that $e_i \in E$ and the subgraph of $G$ spanned by the edges $E \setminus \{e_1,\dots,e_g\}$ is connected, contractible and contains all vertices of $G$. \end{defn} Note that in this definition, $g$ is necessarily the first Betti number of $\Gamma$. The notion of break divisor was introduced by Mikhalkin and Zharkov \cite{MikZharII} and studied in detail by An, Baker, Kuperberg, and Shokrieh \cite{ABKS}. Break divisors will play an important role in Theorem \ref{lifting theorem}. \begin{defn} Let $\Gamma$ be a metric graph and $g = \dim_\R \HH^1(\Gamma, \R)$ its first Betti number. A \emph{break divisor} is a degree $g$ effective divisor $B = p_1 + \dots + p_g$ such that there exist edges $e_1,\dots,e_g$ that form the complement of a spanning tree of $\Gamma$ such that $p_i \in e_i$. \end{defn} \begin{satz} [Mikhalkin--Zharkov] Let $D$ be a degree $g$ divisor on $\Gamma$. Then there exists a unique break divisor $B$ on $\Gamma$ such that $D - B \in \Prin(\Gamma)$. \end{satz} We will later deal with break divisors that are supported on two-valent points of $\Gamma$. If $B$ is such a break divisor then $\Gamma \setminus \supp(B)$ is connected and contractible. \section{Lifting theorem} In this section $X$ is a smooth projective Mumford curve of genus $g$ over $K$. We fix a semistable vertex set $V$ with corresponding finite skeleton $\Gamma$ and retraction $\tau$. We set $J_0(X) := \{ [D] \in \Pic(X) \mid \tau_* D \in \Prin(\Gamma) \}$. \begin{prop} \label{prop BR lifting} Let $B = p_1 + \ldots + p_g$ be a break divisor on $\Gamma$ that is supported on two-valent points and write $R_i = \tau^{-1}(p_i)$. Then for all $E = (y_1,\dots,y_g) \in R_1 \times \dots \times R_g$ the map \begin{align*} \varphi_{E} \colon R_1 \times \ldots \times R_g &\to J_0(X) \\ (x_1,\dots,x_g) &\mapsto \sum_{i=1}^g \left[x_i-y_i \right] \end{align*} is a surjection. \end{prop} \begin{proof} We consider \cite[Proof of Theorem 1.1]{BRab}. Baker and Rabinoff work in the same setup, but for them $X$ is any curve, not necessarily a Mumford curve. Thus in their situation $E$ has two components, $E_1 \in R_1 \times \ldots \times R_b$ and $E_2 \in C^*$. Here $b$ is the first Betti number of the skeleton of $X$ and $C^* = \prod_{x \in \Xan; g(x) > 0} C_x(\Ktilde)^{g(x)}$. The map $\varphi_{(E_1,E_2)}$ is also defined on the product $R_1 \times \ldots \times R_b \times C^*$ and they show that $\varphi_{(E_1,E_2)}$ is surjective when $E_2$ is generic. If $X$ is a Mumford curve, then $b = g$ and $C^*$ is just a one-point set. Thus $E_2$ is automatically generic and our proposition follows. \end{proof} \begin{satz} \label{lifting theorem} Let $D \in \Div(X)$ be of degree $g$ and let $B = p_1 + \dots + p_g \in \Div(\Gamma)$ be a break divisor such that $\tau_* D - B$ is a principal divisor on $\Gamma$. Assume that $B$ is supported on two-valent points of $\Gamma$.
Then there exist $x_i \in X(K)$ such that $\tau_* x_i = p_i$ and such that $D - \sum_{i=1}^g x_i$ is a principal divisor on $X$. \end{satz} \begin{proof} Let $y_i \in X(K)$ such that $\tau_* y_i = p_i$. We have $\left[ D - \sum_{i=1}^g y_i \right] \in J_0(X)$. Thus by Proposition \ref{prop BR lifting} there exist $x_i \in \tau^{-1}(p_i)$ such that $\left[ D - \sum_{i=1}^g y_i \right] = \left[ \sum_{i=1}^g (x_i - y_i) \right]$. In other words $\left[ D - \sum_{i=1}^g x_i \right] = 0$ which means that $D - \sum_{i=1}^g x_i$ is a principal divisor on $X$. \end{proof} \begin{defn} Let $e$ be an edge of $\Gamma$. Four points $p_1,p_2,p_3,p_4 \in \mathring e$ are called \emph{pillar points in $e$} if they are $\Lambda$-rational, $d_e(p_1, p_2) = d_e(p_3, p_4)$ and for $i = 2, 3$ we have $[p_{i-1},p_{i}] \cap [p_{i}, p_{i+1}] = p_i$. (See Figure \ref{figure pillar points}.) \end{defn} \begin{figure} \begin{tikzpicture} \draw (-1,0.3) -- (0,0.3) -- (1,1.3) -- (3,1.3) -- (4,0.3) -- (5,0.3); \draw (-1,0) -- (0,0) -- (1,0) -- (3,0) -- (4,0) -- (5,0); \node [below] at (0,0) {$p_1$}; \fill (0,0) circle (2pt); \node[below] at (1,0) {$p_2$}; \fill (1,0) circle (2pt); \node[below] at (3,0) {$p_3$}; \fill (3,0) circle (2pt); \node[below] at (4,0) {$p_4$}; \fill (4,0) circle (2pt); \node[right] at (5,0) {$e$}; \end{tikzpicture} \caption{An edge $e$ with four pillar points $p_1,p_2,p_3$ and $p_4$ and a piecewise linear function with divisor $p_1 - p_2 - p_3 + p_4$.} \label{figure pillar points} \end{figure} \begin{kor} \label{lifting corollary} Let $D \in \Div^0(X)$ such that $\tau_* D$ is a principal divisor on $\Gamma$. Let $e_1,\dots,e_g$ be edges that form the complement of a spanning tree of $\Gamma$. Fixing pillar points $p_{i,1},p_{i,2},p_{i,3},p_{i,4}$ in $\mathring e_i$ there exist $x_{ij} \in X(K)$ such that $\tau(x_{ij}) = p_{ij}$ and $f \in K(X)^*$ such that $\divisor(f) = D + \sum_{i=1}^g (x_{i,1} + x_{i,4}) - \sum_{i=1}^g (x_{i,2} + x_{i,3})$. \end{kor} \begin{proof} The divisor $\sum_{i=1}^g (p_{i,1} + p_{i,4}) - \sum_{i=1}^g (p_{i,2} + p_{i,3})$ is principal on $\Gamma$, thus so is $\tau_*D + \sum_{i=1}^g (p_{i,1} + p_{i,4}) - \sum_{i=1}^g (p_{i,2} + p_{i,3})$. Thus, for $j = 1, 3,4$, fixing $x_{ij}$ such that $\tau_* x_{ij} = p_{ij}$ and writing $D' = D + \sum_{i=1}^g (x_{i,1} + x_{i,4}) - \sum_{i=1}^g x_{i,3}$ and $B = p_{1,2} + \dots + p_{g,2}$, we find that $\tau_* D' - B$ is a principal divisor on $\Gamma$. Since $B$ is a break divisor supported on two-valent points, applying Theorem \ref{lifting theorem} to $D'$ and $B$ we get the result. \end{proof} \begin{Ex} [Tate curves] \label{tate curves} Chan and Sturmfels use theta functions to produce nice tropicalizations of elliptic curves \cite{ChanSturmfels} (see also \cite[Theorem 6.2]{BPR}). In this example we show how Theorem \ref{lifting theorem} can be used to construct such nice tropicalizations combinatorially. Let $E$ be an elliptic curve with bad reduction. We will use Theorem \ref{lifting theorem} to construct a closed embedding $\varphi \colon E \to \Pbb^2$ whose tropicalization looks like the right hand side of Figure \ref{figure tate curve}, which Chan and Sturmfels call \emph{symmetric honeycomb form}. The minimal skeleton $\Gamma_{\min}$ is a circle. We pick three points $q_1, q_2, q_3 \in \Gamma_{\min}$ that are equidistant from each other. Our skeleton $\Gamma$ is obtained from $\Gamma_{\min}$ by adding edges of length $d(q_i, q_j)/2$ at each of the $q_i$, denoting their endpoints by $p_i$. 
We subdivide each edge $[q_i,q_j]$ at its midpoint and label our new vertices as on the left hand side of Figure \ref{figure tate curve}. The solid part of the figure is now our skeleton $\Gamma$. We pick points $x_{1,1} \neq x_{1,2}, x_{2,1} \neq x_{2,2}, x_{3,1} \neq x_{3,2}$ and $x_{6} \in E(K)$ such that $\tau(x_{i,j}) = p_i$ and $\tau(x_6) = p_6$. Let $D_1 = - x_{1,1} + x_{2,1} - x_{2,2} + x_{3,1} - x_6$. Then $\tau_* D_1 = -p_1 + p_3 - p_6$ and $\tau_* D_1 + p_4 = \divisor(F_1)$ for a rational function $F_1$ on $\Gamma$. Now applying Theorem \ref{lifting theorem} to $-D_1$ and $p_4$ we obtain a function $f_1 \in K(E)^*$ and $x_4 \in E(K)$ such that $\tau(x_4) = p_4$ and $\divisor(f_1) = D_1 + x_4$. We normalize $f_1$ such that $F_1 = \log \vert f_1 \vert \big\vert_\Gamma$. Similarly, let $D_2 = -x_{1,1} + x_{1,2} - x_{2,2} + x_{3,2} - x_6$; then $\tau_* D_2 = - p_2 + p_3 - p_6$ and $\tau_* D_2 + p_5 = \divisor(F_2)$ for a rational function $F_2$ on $\Gamma$. We obtain a function $f_2 \in K(E)^*$ and $x_5 \in E(K)$ such that $\tau(x_5) = p_5$ and $\divisor(f_2) = D_2 + x_5$. Let $\varphi$ be the morphism associated to the rational map $[f_1: f_2 : 1] \colon E \to \Pbb^2$. By construction, the graph on the left hand side of Figure \ref{figure tate curve}, including the dashed lines, which are infinite edges, is the associated completed skeleton $\Sigma(\varphi)$. We write $G_i = \log \vert f_i \vert \big \vert_{\Sigma(\varphi)}$. Note that $G_i|_{\Gamma} = F_i$. Further, $\varphi_{\trop}|_{\Sigma(\varphi)} = (G_1,G_2)$. Thus $\Trop_{\varphi}(E) = (G_1,G_2)(\Sigma(\varphi))$ is the tropical curve on the right hand side of Figure \ref{figure tate curve}. The functions $f_1, f_2, 1$ are linearly independent over $K$, since $f_1$ is not constant on the zeros of $f_2$. Thus by the Riemann-Roch theorem, they form a basis of $L(D)$ where $D = x_{1,1} + x_{2,2} + x_6$: indeed, $\deg D = 3$ and $E$ has genus one, so $\dim L(D) = 3$. Since $D$ is very ample by \cite[Corollary IV.3.2(b)]{Hartshorne}, this shows that $\varphi$ is a closed embedding.
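Note also how the infinite rays of $\Trop_{\varphi}(E)$ arise: for instance, $x_4$ is a zero of $f_1$ but neither a zero nor a pole of $f_2$, so along the infinite edge of $\Sigma(\varphi)$ ending at $x_4$ the function $G_1$ tends to $-\infty$ while $G_2$ stays constant, and this edge therefore maps to a ray of $\Trop_{\varphi}(E)$ in the direction $(-1,0)$. The remaining rays of the honeycomb arise in the same way from the other points of $\divisor(f_1)$ and $\divisor(f_2)$.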
\end{Ex} \begin{figure} \begin{tikzpicture} \draw (0,0) circle (1); \draw (1,0) -- ++(1,0); \draw (-0.5, 0.866) -- ++(-0.5, 0.866); \draw (-0.5, -0.866) -- ++(-0.5, -0.866); \draw [dashed] (-1,0) -- ++ (-1,0); \draw [dashed] (0.5, 0.866) -- ++(0.5, 0.866); \draw [dashed] (0.5, -0.866) -- ++(0.5, -0.866); \draw [dashed] (2,0) -- ++(0.5,0.5); \draw [dashed] (2,0) -- ++(0.5,-0.5); \draw [dashed] (-1, 1.732) -- ++(-0.8, 0); \draw [dashed] (-1,1.732) -- ++(0,0.8); \draw [dashed] (-1, -1.732) -- ++(-0.8, 0); \draw [dashed] (-1,-1.732) -- ++(0,-0.8); \node [left] at (1,0) {$q_1$}; \fill (1,0) circle (2pt); \node [below] at (-0.5, 0.866) {$q_2$}; \fill (-0.5, 0.866) circle (2pt); \node [above] at (-0.5, -0.866) {$q_3$}; \fill (-0.5, -0.866) circle (2pt); \node [right] at (2,0) {\;$p_1$}; \fill (2,0) circle (2pt); \node [right] at (-1, 1.732) {$p_2$}; \fill (-1, 1.732) circle (2pt); \node [right] at (-1, -1.732) {$p_3$}; \fill (-1, -1.732) circle (2pt); \node [right] at (-1,0) {$p_4$}; \fill (-1,0) circle (2pt); \node [above] at (0.5, -0.866) {$p_5$}; \fill (0.5, -0.866) circle (2pt); \node [below] at (0.5, 0.866) {$p_6$}; \fill (0.5, 0.866) circle (2pt); \node [right] at (2.5, 0.5) {$x_{1,1}$}; \fill (2.5,0.5) circle (2pt); \node [right] at (2.5, -0.5) {$x_{1,2}$}; \fill (2.5,-0.5) circle (2pt); \node [above] at (-1,2.532) {$x_{2,2}$}; \fill (-1,2.532) circle (2pt); \node [left] at (-1.8,1.732) {$x_{2,1}$}; \fill (-1.8,1.732) circle (2pt); \node [below] at (-1,-2.532) {$x_{3,2}$}; \fill (-1,-2.532) circle (2pt); \node [left] at (-1.8,-1.732) {$x_{3,1}$}; \fill (-1.8,-1.732) circle (2pt); \node [right] at (1, -1.732) {$x_5$}; \fill (1, -1.732) circle (2pt); \node [left] at (-2,0) {$x_4$}; \fill (-2,0) circle (2pt); \node [above] at (1, 1.732) {$x_6$}; \fill (1,1.732) circle (2pt); \draw (6,-1) -- ++(1,0) -- ++(1,1) -- ++(0,1)-- ++(-1,0) -- ++(-1,-1) -- ++(0,-1); \draw (8,1) -- ++ (1,1); \draw (7,1) -- ++ (0,1); \draw (7,2) -- ++(1,1); \draw (7,2) -- ++(-1,0); \draw (6,0) -- ++(-1,0); \draw (6,-1) -- ++(-1,-1); \draw (5,-2) -- ++(0,-1); \draw (5,-2) -- ++ (-1,0); \draw (7,-1) -- ++(0,-1); \draw (8,0) -- ++(1,0); \draw (9,0) -- ++(1,1); \draw (9,0) -- ++(0,-1); \end{tikzpicture} \caption{The skeleton and tropicalization of a Tate curve.} \label{figure tate curve} \end{figure} \begin{Ex} In the same example, we can also see that Theorem \ref{lifting theorem} does not hold if we do not require $B$ to be supported on two-valent points. Let $D = x$ for some $x \in E(K)$ with $\tau(x) = p_1$. Then the unique break divisor that is linearly equivalent to $\tau_* D = p_1$ is $B = q_1$, which is not supported on two-valent points, as $q_1$ is three-valent in $\Gamma$. However, we cannot find $y \in E(K)$ with $\tau(y) = q_1$ such that $x - y$ is principal: necessarily $y \neq x$, and no difference of two distinct points is principal on an elliptic curve. \end{Ex} \section{Fully faithful and smooth tropicalizations} \subsection{Describing tropicalizations using extended skeleta} Let $X$ be a smooth projective curve of genus $g>0$. Let $V$ be a minimal semistable vertex set of $X$ with associated finite skeleton $\Gamma$ and retraction $\tau$. \begin{defn} Let $\Sigma$ be a completed skeleton of $X$ with retraction $\tau_{\Sigma}$, $f \in K(X)^*$ and write $\divisor(f) = \sum \pm x_i$. Then $f$ is said to be \emph{faithful} with respect to $\Sigma$ if we have $\tau_{\Sigma}(x_i) \neq \tau_{\Sigma}(x_j)$ for all $i \neq j$. \end{defn} Note that this implies that $f$ has only simple poles and zeros, since a multiple zero or pole of $f$ would appear at least twice among the $x_i$.
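For instance, in Example \ref{tate curves} the function $f_1$ is faithful with respect to the completed skeleton $\Sigma(\varphi)$ constructed there: the points of $\divisor(f_1) = D_1 + x_4$ are pairwise distinct infinite vertices of $\Sigma(\varphi)$, and hence have pairwise distinct images under the retraction to $\Sigma(\varphi)$.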
\begin{Const} \label{higher tropicalization} Let $\varphi \colon X \to Y$ be a closed embedding of $X$ into a toric variety $Y$ that meets the dense torus. Let $\Sigma(\varphi)$ be the completed skeleton associated to $\varphi$. Let $f \in K(X)^*$ be faithful with respect to $\Sigma(\varphi)$. Consider the induced closed embedding $\varphi' = (\varphi, f) \colon X \to Y \times \Pbb^1$. We obtain the associated skeleton $\Sigma(\varphi')$ for $\varphi'$ from $\Sigma(\varphi)$ by adding infinite rays $[x_i, \tau_{\varphi}(x_i)]$ for all $x_i \in \supp(\divisor(f))$. We denote by $\tau_{\varphi'}$ the associated retraction. We have the following diagram \begin{align*} \begin{xy} \xymatrix{ \Xan \ar[rr]^{\tau_{\varphi'}} \ar[rrd]_{\tau_{\varphi}} &&\Sigma(\varphi') \ar[rr]^{\varphi'_{\trop}} \ar[d] && \Trop_{\varphi'}(X) \ar[d] \ar[r] & \Trop(Y) \times \Trop(\Pbb^1) \ar[d]^{\pi_1} \\ && \Sigma(\varphi) \ar[rr]^{\varphi_{\trop}} && \Trop_{\varphi}(X) \ar[r] &\Trop(Y). } \end{xy} \end{align*} The map on the left contracts the edges $[x_i, \tau_{\varphi}(x_i)]$ to $\tau_{\varphi}(x_i)$. The map $\pi_1$ on the right forgets the last coordinate. Thus we obtain $\Trop_{\varphi'}(X)$ from $\Trop_\varphi(X)$ in two steps: \begin{enumerate} \item Take the graph of $\log \vert f \vert$ restricted to $\Sigma(\varphi)$. \item Add the images of the edges $e_i = [x_i, \tau_{\varphi}(x_i)]$. These are infinite rays from $(\varphi_{\trop}(x_i), \log \vert f (\tau_{\varphi}(x_i)) \vert)$ to $(\varphi_{\trop}(x_i), \pm \infty)$ where the sign of $\infty$ is the opposite of the sign of $x_i$ in $\divisor(f)$. \end{enumerate} \end{Const} \begin{lem} \label{lem new edges} In the situation of Construction \ref{higher tropicalization}, every edge $e$ in $\Sigma(\varphi')$ that is not an edge of $\Sigma(\varphi)$ is infinite and satisfies $m(e) = 1$. \end{lem} \begin{proof} The edge $e$ has to be infinite since we only added infinite rays to $\Sigma(\varphi)$ in Construction \ref{higher tropicalization}. Since $f$ has only simple poles and zeros, the slope of $\log \vert f \vert$ along $e$ is equal to one, thus the corresponding stretching factor equals one. \end{proof} \subsection{Fully faithful tropicalization} \label{section ff trop} Throughout this section, $X$ is a Mumford curve and $\varphi \colon X \to Y$ a closed embedding of $X$ into a toric variety that meets the dense torus. In this section, we prove Theorem \ref{thmB} from the introduction, showing that $\varphi$ has a refinement that is fully faithful. We fix a minimal semistable vertex set $V$ and denote by $\Gamma$ the corresponding finite skeleton of $X$ with retraction $\tau$. For our completed skeleton $\Sigma(\varphi)$ associated to $\varphi$ we denote the retraction by $\tau_{\varphi}$ and the finite part by $\Sigma(\varphi)_{\fin}$. We will now construct for an edge $e$ a function $f_e \in K(X)^*$ such that the slope of $\log \vert f_e \vert$ is equal to $1$ along $e$ and such that $f_e$ is faithful with respect to $\Sigma(\varphi)$. \begin{Const} \label{construction finite edge} Let $e$ be a finite edge of $\Sigma(\varphi)$ that is not in $\Gamma$. We label the endpoints $v$ and $w$ in such a way that $w$ and $\Gamma$ lie in different connected components of $\Sigma(\varphi) \setminus v$ (see Figure \ref{figure finite edges}). Let $v', w' \in X(K)$ be such that $\tau_{\varphi}(v') = v$ and $\tau_{\varphi}(w') = w$. We fix edges $e_1,\dots,e_g$ that form the complement of a spanning tree of $\Sigma(\varphi)$ and pillar points $p_{ij}^e$ in $e_i$.
Applying Corollary \ref{lifting corollary} to $\Sigma(\varphi)_{\fin}$ and $D' = v' - w'$, we obtain $f_e \in K(X)^*$ such that $\divisor(f_e) = v' - w' + \sum \pm x_{ij}^e$. By construction $f_e$ is faithful with respect to $\Sigma(\varphi)$ and the slope of $\log \vert f_e \vert$ along $e$ is $1$. Replacing $f_e$ by $a^{-1} \cdot f_e$, where $a \in K$ is such that $\vert f_e(v) \vert = \vert a \vert$, we may assume $\log \vert f_e (v) \vert = 0$. \end{Const} \begin{figure} \begin{tikzpicture} \draw (0,0) -- ++(1,0) -- (2,1) -- (2,2) -- (1,2) -- (0,1) -- (0,0); \draw (-2, 1) -- (0,1); \node[above] at (0,1) {$v$}; \fill (0,1) circle (2pt); \node[above] at (-2,1) {$w$}; \fill (-2,1) circle (2pt); \draw[dashed] (-4,1) -- (-2,1); \draw[dashed] (-1,0) -- (0,1); \node[above] at (-4,1) {$w'$}; \fill (-4,1) circle (2pt); \node[below] at (-1,0) {$v'$}; \fill (-1,0) circle (2pt); \node at (1,1) {$\Gamma$}; \end{tikzpicture} \caption{Situation in Construction \ref{construction finite edge}. The dashed lines are infinite edges and solid lines are finite edges.} \label{figure finite edges} \end{figure} \begin{Const} \label{construction infinite edge} Let $e$ be an infinite edge of $\Sigma(\varphi)$ with finite vertex $v$ and infinite vertex $w'$. Let $v'$ be a point in $X(K)$ such that $\tau_{\varphi}(v') = v$ (see Figure \ref{figure infinite edge}). We fix edges $e_1,\dots,e_g$ that form the complement of a spanning tree of $\Sigma(\varphi)_{\fin}$ and pillar points $p_{ij}^e$ in $e_i$. Applying Corollary \ref{lifting corollary} to $\Sigma(\varphi)_{\fin}$ and $D = v' - w'$ we obtain $f_e \in K(X)^*$ that is faithful with respect to $\Sigma(\varphi)$ and such that $\log \vert f_e \vert$ has slope $1$ along $e$. We again normalize such that $\log \vert f_e(v) \vert = 0$. \end{Const} \begin{figure} \begin{tikzpicture} \draw (0,0) -- ++(1,0) -- ++(1,1) -- (2,2) -- (1,2) -- (0,1) -- (0,0); \node[above] at (0,1) {$v$}; \fill (0,1) circle (2pt); \draw[dashed] (-4,1) -- (0,1); \draw[dashed] (-1,0) -- (0,1); \node[above] at (-4,1) {$w'$}; \fill (-4,1) circle (2pt); \node[below] at (-1,0) {$v'$}; \fill (-1,0) circle (2pt); \node at (1,1) {$\Gamma$}; \end{tikzpicture} \caption{Situation in Construction \ref{construction infinite edge}. The dashed lines are infinite edges and solid lines are finite edges.} \label{figure infinite edge} \end{figure} \begin{satz} \label{prop edge} Let $\varphi \colon X \to Y$ be a closed embedding of $X$ into a toric variety that meets the dense torus. Then there exists a refinement $\varphi' \colon X \to Y'$ of $\varphi$ that is fully faithful. \end{satz} \begin{proof} Recall that we fixed a finite skeleton $\Gamma$ of $X$. By \cite[Theorem 1.1]{BPR} we may assume, after possibly replacing $\varphi$ by a refinement, that the map $\varphi_{\trop}\vert_\Gamma$ is an isometry onto its image. Let $E$ be the set of edges of $\Sigma(\varphi)$ that are not in $\Gamma$. For each $i = 1,\dots,g$, $j=1,\dots,4$ and $e \in E$ we pick $p_{ij}^e \in \Gamma$ such that \begin{enumerate} \item for all $e \in E$ there are edges $e^e_i$, $i = 1, \dots, g$, that form the complement of a spanning tree of $\Sigma(\varphi)$ and $p_{i1}^e,\dots,p_{i4}^e$ are pillar points in $e^e_{i}$; \item $\varphi_{\trop}([p_{i,1}^e, p_{i,4}^e]) \cap \varphi_{\trop}(e) = \emptyset$ for all $i = 1,\dots, g$; \item $[p_{i,1}^e, p_{i,4}^e] \cap [p_{i',1}^{e'}, p_{i',4}^{e'}] = \emptyset$ for $(e,i) \neq (e', i')$.
\end{enumerate} Note that a choice of $p_{ij}^e$ that satisfies $ii)$ is possible since $\varphi_{\trop}(e)$ is a line segment and thus cannot contain the image of a full cycle of $\Gamma$. Now for all finite (resp. infinite) edges $e \in E$ we apply Construction \ref{construction finite edge} (resp. Construction \ref{construction infinite edge}) and obtain functions $f_e \in K(X)^*$. We consider the closed embedding \begin{align*} \varphi' := (\varphi, (f_e)_{e \in E}) \colon X \to Y \times (\Pbb^1)^E. \end{align*} Following Construction \ref{higher tropicalization}, the completed skeleton $\Sigma(\varphi')$ associated to $\varphi'$ is obtained from $\Sigma(\varphi)$ by attaching an infinite edge $e_{ij}^e$ at each $p_{ij}^e$ and by attaching for each $e \in E$ infinite edges to its finite endpoints. If $e = [v, w]$, we denote these edges by $e_v^e$ and $e_w^e$ respectively. We claim that the map \begin{align*} \varphi'_{\trop} \colon \Sigma(\varphi') \to \Trop(Y) \times \Trop(\Pbb^1)^E \end{align*} is injective. We denote $F_e := \log \vert f_e \vert \big \vert_{\Sigma(\varphi')}$. By construction, $\varphi'_{\trop}$ is injective when restricted to an edge, since $\varphi_{\trop}|_\Gamma$ is injective and $F_e$ is injective when restricted to $e$ and $e_{ij}^e$ for $e \in E$. To show global injectivity, let us set up some notation. Recall that for each edge $e \in E$ we denote by $v_e$ the endpoint of $e$ such that $\Gamma$ and $\mathring e$ lie in different connected components of $\Sigma(\varphi') \setminus v_e$ and by $w_e$ the other endpoint. Furthermore, $f_e$ was normalized in such a way that $F_e (v_e) = 0$. Recall that $\Gamma$ is a deformation retract of $\Sigma(\varphi')$. Thus, we may define a partial order on $E$ by declaring $e \leq e'$ if ``$e$ is closer to $\Gamma$ than $e'$'', meaning that $\mathring e$ and $\mathring e'$ lie in the same connected component of $\Sigma(\varphi') \setminus v_e$. Now assume $\varphi'_{\trop}(z_1) = \varphi'_{\trop}(z_2)$ for $z_1,z_2 \in \Sigma(\varphi')$. This means that $\varphi_{\trop}(z_1) = \varphi_{\trop}(z_2)$ and $F_e(z_1) = F_e(z_2)$ for all $e \in E$. Note that we may assume $z_1 \notin \Gamma$: if both $z_1$ and $z_2$ are in $\Gamma$, then we are done, as $\varphi_{\trop}$ is already injective on $\Gamma$. Denote \begin{align*} E' := \{ e \in E \mid F_e(z_1) \neq 0 \} = \{ e \in E \mid F_e(z_2) \neq 0 \}. \end{align*} Since $F_e(v_e) = 0$ and $\divisor(F_e) = v_e - w_e + \sum_{i=1}^g (p_{i,1}^e - p_{i,2}^e - p_{i,3}^e + p_{i,4}^e)$, we have $F_e(w_e) > 0$, $F_e(p_{i,1}^e) = F_e(p_{i,4}^e) = 0$ and $F_e$ is constant on the connected components of $\Sigma(\varphi') \setminus (e \cup [p_{i,1}^e, p_{i,4}^e])$ (see Figure \ref{figure graph}). Thus \begin{align} \label{eq:fe} \supp(F_e) = \bigcup_{e' \geq e} e' \cup \bigcup_{i=1}^g [p^e_{i1}, p^e_{i4}]. \end{align} We deduce that $E'$ is closed under $\leq$ and non-empty since $z_1 \notin \Gamma$. If $\vert E' \vert = 1$, say $E' = \{e \}$, then \begin{align} \label{eq zi contained} z_1 \in e \cup \bigcup_{ij} e_{ij}^e \text{ and } z_2 \in e \cup \bigcup_{ij} e_{ij}^e \cup \bigcup_{i} [p_{i,1}^e, p_{i,4}^e]. \end{align} In the case $z_1 \in e$, we have $\varphi_{\trop}(z_2) = \varphi_{\trop}(z_1) \in \varphi_{\trop}(e)$ which forces $z_2 \in e$ by $ii)$ above and (\ref{eq zi contained}). Since $F_e |_{e}$ is injective, it follows that $z_1 = z_2$.
In the case $z_1 \in e_{ij}^e$, we have $\varphi_{\trop}(z_2) = \varphi_{\trop}(z_1) = p_{ij}^e$, thus $z_2 \in e_{ij}^e$ and, because $F_e|_{e_{ij}^e}$ is injective, we have $z_1 = z_2$. If $\vert E' \vert > 1$, then there exists $e \in E$ such that $E' = \{ e' \in E \mid e' \leq e\}$ by $iii)$ above and (\ref{eq:fe}). For the same reason $\vert E' \vert > 1$ implies $z_1, z_2 \in e$ and consequently $z_1 = z_2$. Thus $\varphi'_{\trop}|_{\Sigma(\varphi')}$ is injective. For edges $e$ of $\Gamma$ the stretching factor is $1$ since $\varphi_{\trop} |_{e}$ is an isometry onto its image. For all $e \in E$, the stretching factors are equal to $1$ since the slope of $\log \vert f_e \vert$ along $e$ is $1$. For all $e^e_{ij}$ the stretching factor is equal to $1$ by Lemma \ref{lem new edges}. Since $\varphi'_{\trop}|_{\Sigma(\varphi')}$ is injective, this means all weights are equal to $1$. Thus $\varphi'_{\trop}$ is fully faithful. \end{proof} \begin{figure} \begin{tikzpicture} \draw (-1,3) -- ++(1,0) -- ++(1,1) -- ++(2,0) -- ++(1,-1) -- ++(1,0); \draw[dashed] (0,3) -- ++(0,-1); \draw[dashed] (1,4) -- ++(0,1); \draw[dashed] (3,4) -- ++(0,1); \draw[dashed] (4,3) -- ++(0,-1); \node [above] at (0,3) {$p_{i,1}^e$}; \fill (0,3) circle (2pt); \node [below] at (1,4) {$p_{i,2}^e$}; \fill (1,4) circle (2pt); \node [below] at (3,4) {$p_{i,3}^e$}; \fill (3,4) circle (2pt); \node [above] at (4,3) {$p_{i,4}^e$}; \fill (4,3) circle (2pt); \node[right] at (0,2.5) {$e_{i,1}^e$}; \node[right] at (1,4.5) {$e_{i,2}^e$}; \node[right] at (3,4.5) {$e_{i,3}^e$}; \node[right] at (4,2.5) {$e_{i,4}^e$}; \end{tikzpicture} \caption{The graph of $\log \vert f_e \vert$ on $e_{i}^e$ and the adjacent edges $e_{ij}^e$ in $\Sigma(\varphi')$.} \label{figure graph} \end{figure} \begin{kor} \label{cor section} Let $\varphi \colon X \to Y$ be a closed embedding of $X$ into a toric variety $Y$ that meets the dense torus. Then there exists a refinement $\varphi'$ of $\varphi$ and a section $\psi_{\varphi'} \colon \Trop_{\varphi'}(X) \to \Xan$ of $\varphi'_{\trop}$. \end{kor} \begin{proof} By Theorem \ref{prop edge} we can choose $\varphi'$ such that $\varphi'_{\trop}$ is fully faithful. Thus $\varphi'_{\trop}\vert_{\Sigma(\varphi')}$ is a homeomorphism and we define $\psi_{\varphi'}$ as the composition of the inclusion of $\Sigma(\varphi')$ into $\Xan$ with $(\varphi'_{\trop}\vert_{\Sigma(\varphi')})^{-1}$. \end{proof} \subsection{Smooth tropicalization} \label{section smooth trop} Throughout this section, we will work in the following situation: $X$ is a Mumford curve over $K$ and $\varphi \colon X \to Y$ a closed embedding that meets the dense torus such that $\varphi_{\trop}$ is fully faithful. We denote by $\Sigma(\varphi)$ the associated completed skeleton and by $\tau_\varphi$ the retraction. \begin{lem} \label{less singular points} Let $f \in K(X)^*$ be faithful with respect to $\Sigma(\varphi)$. Then $\varphi' = (\varphi, f) \colon X \to Y \times \Pbb^1$ is fully faithful. Further, all vertices in $\Sigma(\varphi')$ that map to singular vertices in $\Trop_{\varphi'}(X)$ are contained in $\Sigma(\varphi)$ and map to singular vertices in $\Trop_{\varphi}(X)$. \end{lem} \begin{proof} All edges of $\Sigma(\varphi')$ that are not edges of $\Sigma(\varphi)$ have stretching factor equal to $1$ by Lemma \ref{lem new edges}. Since $f$ is faithful, all these edges have different images under $\tau_{\varphi}$. Since $\varphi_{\trop}$ is fully faithful, they have different images under $\varphi'_{\trop}$.
Consequently, $\varphi'_{\trop}|_{\Sigma(\varphi')}$ is injective. Thus $\varphi'_{\trop}$ is fully faithful. Let $v$ be a vertex of $\Sigma(\varphi')$. Then $v$ is a vertex in $\Sigma(\varphi)$ (after potential subdivision) or infinite. Since $\varphi'_{\trop}$ is fully faithful, the infinite vertices of $\Trop_{\varphi'}(X)$ have only one adjacent vertex and are thus smooth. Thus we have to show that if $\varphi_{\trop}(v)$ is a smooth finite vertex of $\Trop_{\varphi}(X)$, then $\varphi'_{\trop}(v)$ is a smooth vertex of $\Trop_{\varphi'}(X)$. Let $e_0,\dots,e_n$ be the adjacent edges of $v$ and write $w_i := w_{v, e_i}$ for the primitive integral vector pointing from $\varphi_{\trop}(v)$ into $\varphi_{\trop}(e_i)$. We denote $F = \log \vert f \vert \big \vert_{\Sigma(\varphi')}$ and $L(F) = F - F(v)$, and we write $L(F)(e_i)$ for the (integer) slope of $L(F)$ along $e_i$ at $v$. If $v$ is not in $\divisor(F)$, then $F$ is locally around $\varphi_{\trop}(v)$ the restriction to $\Trop_\varphi(X)$ of an affine function on $N_\R$. The vertex $v$ still has $n+1$ adjacent edges $e'_0,\dots,e'_n$ in $\Sigma(\varphi')$ and the primitive vectors are $w_0' = (w_0,L(F)(e_0)),\dots,w'_n = (w_n,L(F)(e_n))$. Since $F$ has integer slopes and is locally the restriction of an affine function, and since the $w_i$ span a saturated lattice of rank $n$, so do the $w'_i$, which shows that $\varphi'_{\trop}(v)$ is a smooth vertex of $\Trop_{\varphi'}(X)$. If $v \in \divisor(F)$, since $f$ is faithful with respect to $\Sigma(\varphi)$, $v$ has $n+2$ adjacent edges $e'_0,\dots,e'_n,e'_{n+1}$ in $\Sigma(\varphi')$ and the primitive vectors are \begin{align*} w_0' = (w_0,L(F)(e_0)),\dots,w'_n = (w_n,L(F)(e_n)), w'_{n+1} = (0, \pm 1). \end{align*} Since the $w_i$ span a saturated lattice of rank $n$, the $w'_i$ span a saturated lattice of rank $n+1$, which shows that $\varphi'_{\trop}(v)$ is a smooth vertex of $\Trop_{\varphi'}(X)$. \end{proof} For a vertex $v$ of $\Sigma(\varphi)$ and two adjacent edges $e_0$ and $e_1$, we now construct a function $f_{e_1}$ in $K(X)^*$ that we will use to construct a tropicalization that is smooth at $v$. This may be viewed as a generalization, to any ambient dimension and any vertex, of the constructions done for special vertices and ambient dimension $2$ by Cueto and Markwig \cite[Section 3]{CuetoMarkwig}. \begin{Const} \label{new construction vertex} Let $v$ be a vertex of $\Sigma(\varphi)$ and let $e_0$ and $e_1$ be adjacent edges. Let $F_{e_1}$ be a piecewise linear function such that $F_{e_1}(v) = 0$, $d_{e_1}F_{e_1}(v) = 1$, $d_{e_0}F_{e_1}(v) = -1$, $d_{e}F_{e_1} = 0$ for all other adjacent edges, $\supp(F_{e_1}) \subset e_1 \cup e_0$, and such that $\divisor(F_{e_1}) = \sum \pm p_{i}$ for distinct points $p_i$ (see Figure \ref{figure resolution}). For each $i$ fix $x_{i} \in X(K)$ such that $\tau_* x_{i} = p_i$ and write $D = \sum \pm x_{i}$. Fixing pillar points $p_{jk}$ outside of $\supp(F_{e_1})$ and applying Corollary \ref{lifting corollary} to $D$, we obtain a function $f_{e_1} \in K(X)^*$ that is faithful with respect to $\Sigma(\varphi)$ and such that the outgoing slope at $v$ of $\log \vert f_{e_1} \vert$ equals $1$ along $e_1$ and $-1$ along $e_0$.
\end{Const} \begin{figure} \begin{tikzpicture} \draw (0,0) -- (1,1) -- (3,1) -- (4,0) -- (5,0); \draw (0,0) -- (-1,-1) -- (-3,-1) -- (-4,0) -- (-5,0); \draw[dashed] (-4,0) -- (4,0); \draw (-5,-1.3) -- (5,-1.3); \node[below] at (-2.5,-1.3) {$e_0$}; \node[below] at (2.5,-1.3) {$e_1$}; \node [below] at (0,-1.3) {$v$}; \fill (0,-1.3) circle (2pt); \end{tikzpicture} \caption{The graph of $F_{e_1}$ along the edges $e_0$ and $e_1$ in Construction \ref{new construction vertex}, with the dashed line being the zero level.} \label{figure resolution} \end{figure} \begin{satz} \label{main theorem} Let $\varphi \colon X \to Y$ be a closed embedding of $X$ into a toric variety $Y$ that meets the dense torus. Then there exists a refinement $\varphi' \colon X \to Y'$ for a toric variety $Y'$ such that $\Trop_{\varphi'}(X)$ is a smooth tropical curve. \end{satz} \begin{proof} By Theorem \ref{prop edge}, after replacing $\varphi$ by a refinement, we may assume that $\varphi_{\trop}$ is fully faithful. Let $v$ be a vertex of $\Sigma(\varphi)$ such that $\varphi_{\trop}(v)$ is a singular vertex of $\Trop_{\varphi}(X)$. Let $e_0,\dots,e_n$ be the adjacent edges. For $k=1,\dots,n$ we pick functions $F_{e_k} \colon \Sigma(\varphi) \to \R$ as in Construction \ref{new construction vertex}. For each $k = 1,\dots,n$ and $i = 1,\dots,g$ we pick pillar points $p^k_{i,1}, p^k_{i,2}, p^k_{i,3},p^k_{i,4}$ in such a way that \begin{align*} [p^k_{i,1}, p^k_{i,4}] &\cap \supp (F_{e_k}) = \emptyset \text{ for all } i,k \text{ and } \\ [p^{k}_{i,1}, p^k_{i,4} ] &\cap [p^{k'}_{i',1}, p^{k'}_{i',4}] = \emptyset \text{ for }(k,i) \neq (k',i'). \end{align*} Applying now Construction \ref{new construction vertex}, we obtain functions $f_{e_k} \in K(X)^*$. We consider the closed embedding $\varphi' := (\varphi, (f_{e_k})_{k=1,\dots,n}) \colon X \to Y \times (\Pbb^{1})^n$ and its tropicalization \begin{align*} \varphi'_{\trop} \colon \Xan \to \Trop(Y) \times \Trop(\Pbb^1)^n. \end{align*} Applying Lemma \ref{less singular points} $n$ times, we see that $\varphi'_{\trop}$ is fully faithful. By construction $v$ still has $n+1$ adjacent edges $e'_0,\dots,e'_n$ in $\Sigma(\varphi')$ and $\log \vert f_{e_k} \vert$ has slope $1$ along $e'_k$, slope $-1$ along $e'_0$, and is constant on the other edges. This means that projecting a neighborhood of $\varphi'_{\trop}(v)$ in $\Trop_{\varphi'}(X) \subset \Trop(Y) \times \Trop(\Pbb^1)^n$ to the second factor, the image is isomorphic to the one-dimensional fan in $\R^n$ whose rays are spanned by the coordinate vectors $x_1,\dots,x_n$ and their negative sum $x_0 = - \sum_{i=1}^n x_i$. Further, the primitive vector $w_{v,e'_k}$ is mapped to $x_k$. Thus the $w_{v,e'_k}$ span a saturated lattice of rank $n$, which means that $v$ is smooth in $\Trop_{\varphi'}(X)$. Since $v$ is singular in $\Trop_{\varphi}(X)$ but not in $\Trop_{\varphi'}(X)$, by inductively applying Lemma \ref{less singular points}, we see that $\Trop_{\varphi'}(X)$ has fewer singular points than $\Trop_{\varphi}(X)$. Thus inductively we can construct $\varphi'$ such that $\Trop_{\varphi'}(X)$ is smooth. \end{proof} \begin{kor} \label{main theorem very affine} Let $X$ be a smooth curve such that all type II points in $\Xan$ have genus $0$. Then there exists a very affine open subset $U$ of $X$ and a closed embedding $\varphi \colon U \to T$ for a torus $T$ such that $\Trop_\varphi(U)$ is a smooth tropical curve in $\R^n$. \end{kor} \begin{proof} Replacing $X$ by its canonical smooth compactification, we may assume that $X$ is a Mumford curve.
Then by Theorem \ref{main theorem} there exists a closed embedding $\varphi \colon X \to Y$ for a toric variety $Y$ such that $\Trop_{\varphi}(X)$ is smooth. Denote by $T$ the dense torus of $Y$. Then we take $U := \varphi^{-1}(T)$ and the closed embedding $\varphi|_{U} \colon U \to T$. \end{proof} \section{Only Mumford curves admit smooth tropicalizations} \label{section smooth mumford} Let $X$ be a smooth projective curve. In this section we show that the existence of a closed embedding $\varphi \colon X \to Y$ such that $\Trop_{\varphi}(X)$ is smooth already implies that $X$ is a Mumford curve. Since we will not change the embedding in this section, we will identify $X$ with its image and simply treat $X$ as a closed subcurve of $Y$. We denote the completed skeleton associated to the inclusion of $X$ into $Y$ by $\Sigma$. We denote by $T$ the dense torus of $Y$, by $N$ its cocharacter lattice and $N_\Lambda = N \otimes \Lambda \subset N_\R$. We will use the notion of affinoid domains in $\Xan$ and their formal models. For an introduction to these notions we refer the reader to \cite[Section 3]{BPR}. \begin{defn} Let $w \in N_\Lambda \cap \Trop(X)$. Then $X^w := \trop^{-1}(w)$ is an affinoid domain in $\Xan$. The point $w$ determines a formal model $\XS^w$ for $X^w$. The \emph{initial degeneration} is the special fiber $\inn_w(X) := \XS^w_s := \XS^w \otimes_{K^\circ} \Ktilde$. \end{defn} \begin{bem} \label{lem initial smooth} Assume that $\inn_w(X)$ is reduced. By \cite[Proposition 3.13]{BPR} we have that $\XS^w$ is the canonical model of $X^w$. Then we have a canonical \emph{reduction map} $\red \colon X^w \to \inn_w(X)$ \cite[Section 2.4]{BerkovichSpectral}. Let $C$ be an irreducible component of $\inn_w(X)$ with generic point $\eta$. Then there is a unique point $x_w \in X^w$ such that $\red(x_w) = \eta$, and this point has the property that $C_{x_w}$ is birational to $C$ \cite[Proposition 2.4.4]{BerkovichSpectral}. If $z$ is a smooth closed point of $\inn_w(X)$ then $\red^{-1}(z)$ is isomorphic to an open disc \cite[Proposition 2.2]{BL}. In particular, if $\inn_w(X)$ is smooth and rational, then all type II points in $X^w$ have genus $0$. \end{bem} We will use the following proposition. Since we will apply it in the case of a trivially valued field, we allow the absolute value of the field to be trivial. \begin{prop} \label{prop translate contained} Let $T$ be an algebraic torus over a non-archimedean field, whose absolute value may be trivial. Let $T'$ be a subtorus and let $U$ be a closed subvariety of $T$. If $\Trop(U) \subset \Trop(T')$ then a translate of $U$ that has the same tropicalization as $U$ is contained in $T'$. \end{prop} \begin{proof} We consider the quotient torus $T / T'$. Denote by $\overline{U}$ the image of $U$ in the quotient torus $T / T'$. Then the tropicalization of $\overline{U}$ in $\Trop(T / T') = \Trop(T) / \Trop(T')$ is a point by construction. This shows that $\overline{U}$ is a point, which implies that $U$ is contained in a translate $t \cdot T'$ of $T'$, where all coordinates of $t$ have absolute value $1$. Thus $t^{-1} \cdot U$ is a translate of $U$ that is contained in $T'$ and has the same tropicalization as $U$. \end{proof} In the following, we view $\Ktilde$ as a non-archimedean field, carrying the trivial absolute value. \begin{satz} \label{duck theorem} Let $T$ be an algebraic torus over $\Ktilde$. Let $U \subset T$ be a closed curve. If $\Trop(U)$ is smooth, then $U$ is smooth and rational.
\end{satz} \begin{proof} The case where $\Trop(U)$ spans $\Trop(T)$ follows from \cite[Proposition 4.2]{KatzPayne}. We reduce to this case: let $V$ be the vector subspace of $\Trop(T)$ that is spanned by $\Trop(U)$. Since $V$ is a rational subspace, there exists a subtorus $T'$ of $T$ such that $\Trop(T') = V$. Now replacing $U$ by the translate from Proposition \ref{prop translate contained} and applying the result of Katz and Payne to $U$ and $T'$ proves the theorem. \end{proof} \begin{kor} \label{duck corollary} If $\Trop(X)$ is smooth, then $\inn_w(X)$ is a smooth rational curve for all $w \in \Trop(X) \cap N_\Lambda$. \end{kor} \begin{proof} Let $w \in \Trop(X) \cap N_{\Lambda}$. Then $\inn_w(X)$ is a closed subvariety of a torus $T_{\Ktilde}$ over $\Ktilde$. Denote by $\Trop(\inn_w(X))$ its tropicalization. Then the local cone at $w$ in $\Trop(X)$ equals $\Trop(\inn_w(X))$ by \cite[10.15]{Gubler2}. Thus $\inn_w(X)$ is a smooth rational curve by Theorem \ref{duck theorem}. \end{proof} \begin{satz} \label{main theorem II} If $\Trop(X)$ is smooth, then $X$ is a Mumford curve. \end{satz} \begin{proof} Let $w \in \Trop(X) \cap N_\Lambda$. By Corollary \ref{duck corollary}, $\inn_w(X)$ is smooth and rational. Thus all type II points in $X^w$ have genus $0$ by Remark \ref{lem initial smooth}. Since all type II points map to $N_\Lambda$ under the tropicalization map, all type II points in $\Xan$ have genus $0$, which shows that $X$ is a Mumford curve by Remark \ref{bem Mumford}. \end{proof} \begin{satz} \label{smooth implies ff} If $\Trop(X)$ is smooth, then the tropicalization map is fully faithful. \end{satz} \begin{proof} By Corollary \ref{duck corollary}, all initial degenerations are smooth and rational. For all $w \in N_{\Lambda} \cap \Trop(X)$, by Remark \ref{lem initial smooth}, there is a unique point $x_w \in X^w$ such that $\red(x_w)$ is the generic point of $\inn_w(X)$. Furthermore, every point in $X^w \setminus \{x_w\}$ has a neighborhood isomorphic to an open disc, and is thus not contained in $\Sigma$. We conclude that every point $w \in N_\Lambda \cap \Trop(X)$ has $x_w$ as its unique preimage under $\trop|_{\Sigma}$. Since $\trop|_{\Sigma}$ is continuous and linear on each edge, this implies that $\trop|_{\Sigma} \colon \Sigma \to \Trop(X)$ is bijective. Since all weights are $1$, this shows that the tropicalization map is fully faithful. \end{proof} Note that when $X$ comes by base change from a family of Riemann surfaces over the punctured disc, Theorems \ref{main theorem II} and \ref{smooth implies ff} are consequences of \cite[Corollary 2]{IKMZ}. Using this corollary one finds that the first Betti number of $\Trop(X)$ is equal to $g$, which, since $\Trop(X)$ is smooth, implies that $\trop|_{\Sigma}$ is injective, hence bijective. \bibliographystyle{alpha} \def\cprime{$'$}
\section{Introduction} Let $\gamma_n$ denote the standard Gaussian measure on $\mathbb{R}^n$, and let $K \subset \mathbb{R}^n$ denote a convex body, that is a convex compact set with non-empty interior. For simplicity, we assume that $K$ is origin-symmetric, $K = -K$, and denote by $\norm{\cdot}_K$ the associated norm whose unit-ball is $K$. The dual norm is denoted $\norm{\cdot}^*_K$. Given two compact sets $A,B \subset \mathbb{R}^n$, we denote by $M(A,B)$ the packing number of $B$ in $A$, i.e. the maximal integer $M$ so that there exist $\set{x_i}_{i=1,\ldots,M} \subset A$ with $x_i + B$ mutually disjoint (``$\set{x_i}$ are $B$-separated"). \medskip This paper is dedicated to the study of a conjectural generalized version of the classical Sudakov Minoration estimate \cite{SudakovMinoration} and its dual version (due to Pajor--Tomczak-Jaegermann \cite{PajorTomczakLowMStar}, see also \cite[Chapter 3.3]{LedouxTalagrand-Book}): \begin{thm*}[Sudakov and Dual Sudakov Minoration] \hfill \begin{enumerate} \item Sudakov Minoration for $\ell^*(K) := \int \norm{x}_K^* d\gamma_n(x)$: \[ M(K , t B_2^n) \leq \exp( C \ell^*(K)^2 / t^2) \;\;\; \forall t > 0 . \] \item Dual Sudakov Minoration for $\ell(K) := \int \norm{x}_K d\gamma_n(x)$: \[ M(B_2^n , t K) \leq \exp( C \ell(K)^2 / t^2) \;\;\; \forall t > 0 . \] \end{enumerate} \end{thm*} The term ``minoration" refers to the resulting lower bounds on $\ell^*(K)$ and $\ell(K)$ as a function of the packing numbers. Here and throughout this work, $C$,$C'$,$C''$,$c$, etc... denote positive universal constants, independent of any other parameter (and in particular the dimension $n$), whose value may change from one occurrence to the next. We use $A \simeq B$ to signify that $c \leq A/B \leq C$. The Euclidean unit-ball is denoted by $B_2^n$. \medskip Sudakov Minoration plays a key-role in the proof of M.~Talagrand's ``Majorizing Measures" theorem \cite{Talagrand-RegularityOfGaussianProcesses,Talagrand-GenericChaining,Talagrand-Book}, which gives two-sided bounds on the expected supremum of a Gaussian process in terms of a certain geometric parameter associated with the indexing set. Subsequently (see \cite{Talagrand-Book} and the references therein), Talagrand extended his characterization to more general processes sampled from i.i.d. Bernoulli random variables and measures of the form $\exp(-\sum_{i=1}^n \abs{x_i}^p) dx$, $p \in [1,\infty]$, and the case of general log-concave \emph{product} measures (with moderate tail decay) was obtained by R.~Lata{\l}a \cite{Latala-SudakovMinorationAndGenericChaining}. Recall that a probability measure $\mu$ on $\mathbb{R}^n$ is called log-concave if $\mu = \exp(-V(x)) dx$ with $V : \mathbb{R}^n \rightarrow \mathbb{R} \cup \set{+\infty}$ convex. Motivated by an attempt to extend Talagrand's characterization to more general log-concave measures, Lata{\l}a \cite{Latala-GeneralizedSudakovMinoration} and independently the authors (unpublished) conjectured the following \emph{Generalized Sudakov Minoration} bounds: \begin{conj*} For any origin-symmetric log-concave probability measure $\mu$ on $\mathbb{R}^n$ and origin-symmetric convex body $K\subset \mathbb{R}^n$: \begin{enumerate} \item Generalized Sudakov Minoration for $I^*_1(\mu,K) := \int \norm{x}_K^* d\mu(x)$: \[ M(K , C I_1^*(\mu,K) B_p(\mu)) \leq \exp(C p) \;\;\; \forall p \geq 1. \] \item Generalized Dual Sudakov Minoration for $I_1(\mu,K) := \int \norm{x}_K d\mu(x)$: \[ M(Z_p(\mu) , C I_1(\mu,K) K) \leq \exp(C p) \;\;\; \forall p \geq 1 . 
\] \end{enumerate} \end{conj*} Here $B_p(\mu)$ denotes the unit-ball of the norm given by: \[ \norm{x}_{B_p(\mu)} := \brac{\int \abs{\scalar{y,x}}^p d\mu(y)}^{1/p} , \] and $Z_p(\mu)$ denotes the polar-body $B_p(\mu)^\circ$, defined by $\norm{\cdot}_{Z_p(\mu)} = \norm{\cdot}^*_{B_p(\mu)}$. Up to normalization, the $Z_p(\mu)$ bodies coincide with the $L_p$ centroid-bodies introduced by E. Lutwak and G. Zhang \cite{LutwakZhang-IntroduceLqCentroidBodies}, and have played a pivotal role in the development of our understanding of the volumetric properties of log-concave measures in the last decade (e.g. \cite{GreekBook}). Note that when $\mu = \gamma_n$, since $Z_p(\gamma_n) \simeq \sqrt{p} B_2^n$, the above conjecture precisely coincides with the classical Sudakov Minoration and its dual version. Assuming a general positive answer to a conjecture of Pietsch on the duality of entropy-numbers \cite[p. 38]{Pietsch-Book} (cf. \cite{AMS-Duality-For-Ball,AMST-Duality-For-K-Convex,EMilman-Duality-of-Entropy}), the primal version (1) and dual version (2) of the above conjecture are in fact equivalent; for instance, the results of \cite{AMS-Duality-For-Ball} imply that the two versions are equivalent when $K$ is an ellipsoid. We refer to \cite{Talagrand-GenericChaining,Latala-SudakovMinorationAndGenericChaining,Latala-GeneralizedSudakovMinoration,LatalaTkocz-SudakovMinorationForRegularProduct} for further partial results confirming the Generalized Sudakov Minoration conjecture in particular cases and additional applications. Let us presently only mention that this conjecture has been confirmed in the following cases: \begin{itemize} \item For all log-concave \emph{product} measures \cite{Latala-SudakovMinorationAndGenericChaining}; in fact, the same holds for more general regular product measures \cite{LatalaTkocz-SudakovMinorationForRegularProduct}. \item For all log-concave measures when the extremal points of $K$ are antipodal pairs with disjoint supports \cite{Latala-GeneralizedSudakovMinoration}. \item For $\mu = \exp(-\varphi(\norm{x}_p)) dx$, $p \in [1,\infty]$, with $\varphi : [0,\infty) \rightarrow \mathbb{R} \cup \set{+\infty}$ non-decreasing and convex \cite{Latala-GeneralizedSudakovMinoration}. \item For all log-concave measures when $p \geq 2 n \log(e + n)$ \cite{Latala-GeneralizedSudakovMinoration}. \end{itemize} \subsection{Johnson--Lindenstrauss Lemma} In this work, we choose to concentrate on the conjectured Generalized \emph{Dual} Sudakov estimate, and propose a novel program for establishing it. The Program is based on a conjectural dimension reduction step, which may be thought of as a ``\emph{one-sided} Johnson--Lindenstrauss lemma". Recall that the classical lemma of W.~B.~Johnson and J.~Lindenstrauss \cite{JohnsonLindenstraussLemma} asserts that if $\set{x_i}_{i=1,\ldots,e^k}$ is a collection of (say distinct) points in Euclidean space $X = (\mathbb{R}^n,\abs{\cdot})$ and $\epsilon \in (0,1)$, then there exists a map $T : X \rightarrow Y$, $Y = (\mathbb{R}^m,\abs{\cdot})$ Euclidean, so that: \begin{equation} \label{eq:distance-preserve} 1-\epsilon \leq \frac{\norm{T x_i - T x_j}_{Y}}{\norm{x_i - x_j}_{X}} \leq 1+ \epsilon \;\;\; \forall i \neq j \end{equation} with $m \leq C k / \epsilon^2$. Moreover, $T$ may be chosen to be linear, and a random (appropriately rescaled) orthogonal projection does the job with very high-probability (see Lemma \ref{lem:JL}). \medskip For The Program, we will require an extension of this classical result to more general normed spaces. 
Such a question was studied by Johnson and A.~Naor \cite{JohnsonNaor-JL}, who showed that this is essentially impossible - even for a fixed $\epsilon \in (0,1)$, if the normed space $Y$ is an $m$-dimensional subspace of $X$ with $m \leq C_\epsilon k$, the distance-preservation property (\ref{eq:distance-preserve}) for a linear map $T$ implies that $X$ must be \emph{essentially} Hilbertian (see \cite{JohnsonNaor-JL} for the precise formulation, and also \cite[Section 4, Remark 7]{JohnsonNaor-JL} for the case that $Y$ is not assumed to be a subspace of $X$). It follows that when $X = (\mathbb{R}^n,\norm{\cdot}_K)$, we cannot in general hope for a \emph{two-sided} estimate (\ref{eq:distance-preserve}). \medskip However, for our purposes, we will only need to satisfy the \emph{left-hand-side} inequality in (\ref{eq:distance-preserve}): if the points $\set{x_i}$ are well-separated in $X$, so should their images $\set{T x_i}$ be in $Y$ (``Separation Dimension Reduction"). Of course, without some additional requirement, this is always possible, simply by scaling the norm of $Y$ or the map $T$ in the numerator above. The additional requirement which replaces the right-hand-side inequality in (\ref{eq:distance-preserve}) is that the unit-ball of $Y$ be ``massive enough", as measured with respect to $T_* \mu := \mu \circ T^{-1}$, the push-forward of the measure $\mu$ by $T$, thereby precluding trivial rescaling attempts. In a sense, this is an averaged variant (with respect to the given measure $\mu$) of the pointwise right-hand-side requirement in (\ref{eq:distance-preserve}). This conjectural ``one-sided Johnson--Lindenstrauss" separation dimension-reduction is in our opinion a fascinating question, which we plan to explore more in depth in the future; it constitutes Part 1 (or more precisely, Part 1') of our proposed program. We are now ready to describe it and the remaining parts of The Program in more detail. \subsection{The Program - Simplified Version} \label{subsec:TheProgram} Our proposed program consists of three parts; for simplicity, we describe here a simplified version, postponing a description of the full version to Section \ref{sec:full-program}. Part 1 is a conjectural dimension-reduction step, already alluded to above: if $Z_p(\mu)$ is separated by $e^k$ translates of $K$, then there should be a linear map $T : \mathbb{R}^n \rightarrow \mathbb{R}^m$ with $m\simeq k$ so that $T Z_p(\mu) = Z_p(T_* \mu)$ is separated by $e^k$ translates of another star-body $L \subset \mathbb{R}^m$, which we may choose at our discretion from a family of candidates $\L_m$, with the only requirement being that it should be massive enough with respect to $T_* \mu$. Part 2 consists of establishing a weak version of the Generalized Dual Sudakov estimate for the pair $Z_p(T_* \mu)$ and $L$, which is allowed to depend (in an appropriate manner) on the dimension $m$ of the ambient space. Part 3 consists of establishing the Generalized Dual Sudakov estimate for the latter pair when $p$ is larger than $m$. We will show in Theorem \ref{thm:program} below that the weak estimate of Part 2 may be amplified by means of a bootstrap argument employing Part 1, so as to fit the correct estimate of Part 3, thereby concluding that confirmation of all three parts would imply the Generalized Dual Sudakov Minoration Conjecture. We begin with describing the simplified version of The Program in greater detail. 
\smallskip A compact set $L \subset \mathbb{R}^n$ having the origin in its interior is called a star-body if $t L \subset L$ for all $t \in [0,1]$. Given an absolutely continuous probability measure $\mu$ on $\mathbb{R}^n$, denote $m_q(\mu,L) := \sup \set{ s > 0 ; \mu( s L) \leq e^{-q} }$ so that $\mu(m_q(\mu,L) L) = e^{-q}$. \smallskip Fix an origin-symmetric convex body $K \subset \mathbb{R}^n$, origin-symmetric log-concave measure $\mu$ on $\mathbb{R}^n$ and $p \geq 1$. It is known (see Lemma \ref{lem:Guedon}) that in such a case $I_1(\mu,K) \simeq m_1(\mu,K)$, and so up to universal constants we need not distinguish between these two parameters. For all $m = 1,\ldots, n$, set $\mathbb{M}_m := \set{ T_* \mu \; ; \; T : \mathbb{R}^n \rightarrow \mathbb{R}^m \text{ linear}}$, which is a family of log-concave measures on $\mathbb{R}^m$ by the Pr\'ekopa--Leindler theorem (e.g. \cite{GardnerSurveyInBAMS}). In addition, let $\L_m$ denote some family of origin-symmetric star-bodies in $\mathbb{R}^m$, so that $K \in \L_n$. The simplified version of The Program for establishing the Generalized Dual Sudakov estimate: \begin{equation} \label{eq:gen-dual-Sudakov} \mu(K) \geq \frac{1}{e} \;\; \; \Rightarrow \;\;\; M(Z_p(\mu) , R K) \leq \exp(C_{A,B,\varphi} p ) , \end{equation} consists of establishing the following three parts for some constants $R,A,B \geq 1$ and a certain function $\varphi$, described below; here $k$ is a positive real number. \begin{enumerate} \item \textbf{Part 1 (Massive Separation Dimension Reduction)}. \\ If $M(Z_p(\mu) , R K) = e^k$ with $\mu(K) \geq \frac{1}{e}$ and $2B \leq k \leq n/A$, show that there exists a linear map $T: \mathbb{R}^n \rightarrow \mathbb{R}^m$ and $L \in \L_m$, with $m \leq A k$, so that: \begin{enumerate} \item $M(T Z_p(\mu), L) \geq e^k$ (\textbf{``Separation Dimension Reduction"}). \item $T_* \mu(L) \geq \exp(-q_m)$, $1 \leq q_m \leq k/2$ (``\textbf{$L$ is sufficiently massive}"). \end{enumerate} \item \textbf{Part 2 (Weak Generalized Dual Sudakov)}. \\ For all $m=1,\ldots,n$, $L \in \L_m$ and $\nu \in \mathbb{M}_m$, show that: \[ 1 \leq p \leq m \;\; , \;\; \nu(L) \geq \exp(-q_m) \;\;\; \Rightarrow \;\;\; M(Z_p(\nu) , L) \leq \exp(q_m + m \varphi(p/m)) , \] where $\varphi : [0,1] \rightarrow \mathbb{R}_+$ is an increasing function with $\varphi(0) = 0$ and $x \mapsto \varphi(x) / x$ non-increasing (and is independent of all other parameters). \item \textbf{Part 3 (Large $p$)}. \\ For all $m=1,\ldots,n$, $L \in \L_m$ and $\nu \in \mathbb{M}_m$, show that: \[ p \geq m \;\; , \;\; \nu(L) \geq \exp(-q_m) \;\;\; \Rightarrow \;\;\; M(Z_p(\nu) , L) \leq \exp(q_m + B p) . \] \end{enumerate} \begin{rem} The following \emph{linear} version of Part 1 should be kept in mind: \begin{enumerate} \renewcommand\theenumi{(\arabic{enumi}')} \renewcommand\labelenumi{\theenumi} \item \textbf{Part 1' - Linear Version} \\ If $\set{x_i}_{i=1,\ldots,e^k} \subset \mathbb{R}^n$ is a collection of $K$-separated points with $\mu(K) \geq \frac{1}{e}$ and $2B \leq k \leq n/A$, show that there exist a linear map $T: \mathbb{R}^n \rightarrow \mathbb{R}^m$ and $L \in \L_m$, with $m \leq A k$, so that: \begin{enumerate} \item $\set{T(x_i)}_{i=1,\ldots,e^k} \subset \mathbb{R}^m$ are $\frac{1}{R} L$-separated (\textbf{``One-sided Johnson--Lindenstrauss"}). \item $T_* \mu(L) \geq \exp(-q_m)$, $1 \leq q_m \leq k/2$ (``\textbf{$L$ is sufficiently massive}"). 
\end{enumerate} \end{enumerate} By applying this linear version of Part 1 to the maximal collection of $K$-separated points $\set{x_i}$ in $\frac{1}{R} Z_p(\mu)$, it is evident that establishing Part 1' is sufficient for establishing Part 1 of The Program. However, this is not an equivalent reformulation, and we will also see in Section \ref{sec:part1-cubes} an example where a non-linear combinatorial argument is required for establishing Part 1. \end{rem} \begin{rem} \label{rem:part2} Note that using $\varphi(t) = t$ in Part 2 with $m=n$ precisely corresponds to establishing the Generalized Dual Sudakov Minoration conjecture. Part 2 provides the added flexibility of using a weaker function $\varphi$. For instance, using $\varphi(t) = t^q$ for some $q \in (0,1)$ corresponds to establishing $M(Z_p(\nu) , L) \leq \exp(q_m + m^{1-q} p^q)$, i.e. a weak dimension-dependent confirmation of the conjecture for $\nu \in \mathbb{M}_m$ and $L \in \L_m$. \end{rem} \begin{thm}[The Program Yields Generalized Dual Sudakov] \label{thm:program} Establishing (the simplified version of) The Program above yields the Generalized Dual Sudakov Estimate (\ref{eq:gen-dual-Sudakov}). \end{thm} \begin{proof} Assume that $\mu(K) \geq 1/e$. We will show that: \begin{equation} \label{eq:goal-again} e^k := M(Z_p(\mu) , R K) \leq \exp(C_{A,B,\varphi} p) ~,~ C_{A,B,\varphi} := \max\brac{2 B , \frac{1}{A \varphi^{-1}(1/(2A))}} . \end{equation} Since $p \geq 1$, we may assume that $k \geq C_{A,B,\varphi} \geq 2B$, otherwise there is nothing to prove. We now claim there exists a linear map $T : \mathbb{R}^n \rightarrow \mathbb{R}^m$ and $L \in \L_m$ for some $m \leq \min(n, A k)$, so that $M(T Z_p(\mu), L) \geq e^k$ and $T_* \mu( L) \geq \exp(-q_m)$, $1 \leq q_m \leq k/2$. Indeed, if $k < n/A$ this follows from Part 1, whereas if $k \geq n/A$ this is actually trivial by using $m=n$, $T = Id$, $L=K$ and $q_m=1$. Denoting $\nu = T_* \mu \in \mathbb{M}_m$, note that $T Z_p(\mu) = Z_p(\nu)$. Consequently, if $p \geq m$ then by Part 3: \[ \exp(k) \leq M(Z_p(\nu) , L) \leq \exp(q_m + B p) \leq \exp(k/2 + B p) , \] implying that $k \leq 2 B p$, as required. Alternatively, if $p \leq m$ then by Part 2 and the assumption that $x \mapsto \varphi(x)/x$ is non-increasing: \[ \exp(k) \leq M(Z_p(\nu) , L) \leq \exp(q_m + m \varphi(p/m)) \leq \exp(k/2 + A k \varphi(p/(Ak))) . \] It follows since $\varphi$ is increasing from $0$ that: \[ \frac{p}{Ak} \geq \varphi^{-1}\brac{ \frac{1}{2 A} } > 0 , \] implying that $k \leq C_{A,B,\varphi} p$, and concluding the proof. \end{proof} \subsection{Results} Besides introducing The Program, our main results in this work are as follows: \begin{enumerate} \item As a warm-up, we demonstrate in Section \ref{sec:Sudakov} the usefulness of The Program by running an analogous version which yields a new proof of the classical Sudakov Minoration (in fact, an improved version, known to experts). To take care of the Separation Dimension-Reduction step (Part 1), we simply employ the usual Johnson--Lindenstrauss Lemma, while for Parts 2 and 3 we invoke an elementary weak volumetric estimate based on Urysohn's inequality. \item In Section \ref{sec:part3}, we establish Part 3 of The Program in full generality, for all (origin-symmetric) log-concave measures $\nu$ and star-bodies $L$ in $\mathbb{R}^m$. 
In fact, we obtain the following \emph{regular version} thereof: \begin{equation} \label{eq:intro-part3} p \geq m \;\;\; \Rightarrow \;\; \; M(Z_p(\nu) , C t m_q(\nu,L) L) \leq \exp(1 + q+ \frac{p}{t}) \;\;\; \forall t ,q > 0 . \end{equation} \item In Section \ref{sec:full-program}, we formulate the full version of The Program, which extends the simplified one presented above in two aspects. First, in Part 1, we allow the packing number after dimension reduction to \emph{drop} by a $D$-th root, where $D\geq 1$ is an additional parameter we introduce; this additional flexibility will be crucial for applying The Program to the case of the cube $K = B_\infty^n$, analyzed in Section \ref{sec:part1-cubes}. Second, we also introduce a \emph{regularity} parameter $t > 0$, whose role is to scale the bodies $K$ and $L$. We prove an analogue of Theorem \ref{thm:program}, stating that establishing Parts 1 and 2 of The (full) Program, together with the regular version of Part 3 established in (\ref{eq:intro-part3}), yields a Generalized Regular Dual Sudakov upper bound on $M(Z_p(\mu) , t m_1(\mu,K) K)$ for all $t > 0$. This is important for obtaining a regular version of the Generalized Dual Sudakov estimate for ellipsoids in Section \ref{sec:part1-ellipsoids}, which is later used for establishing a Weak Generalized Dual Sudakov estimate (Part 2 of The Program) for more general convex bodies in Section \ref{sec:part2}. \item In Section \ref{sec:part2-ellipsoids}, we establish Part 2 of The Program for the case that $L$ is an (origin-symmetric) ellipsoid, by invoking a weak volumetric estimate involving all intrinsic volumes of $Z_p(\nu)$. \item In Section \ref{sec:part1-ellipsoids}, we establish the remaining Part 1 of The Program for the case that $K$ is an ellipsoid, by decoupling the separation dimension-reduction and massiveness requirements using a general probabilistic argument, and applying a small-ball one-sided variant of the Johnson--Lindenstrauss Lemma. Running The Program, we obtain the following estimate: \begin{equation} \label{eq:intro-ellipsoids} M(Z_p(\mu) , t m_1(\mu, \mathcal{E}) \mathcal{E}) \leq \exp \brac{C \brac{ \frac{p}{t^2} + \frac{p}{t} } } \;\;\; \forall t > 0 , \end{equation} for any (origin-symmetric) ellipsoid $\mathcal{E} \subset \mathbb{R}^n$. We verify in Section \ref{sec:conclude} that for general log-concave measures $\mu$ and ellipsoids $\mathcal{E}$, this estimate is best-possible (up to numeric constants) for all $p \in [1,n]$ and $t \geq \sqrt{p/n}$. When $\mu$ has identity covariance matrix (``$\mu$ is isotropic") and $\mathcal{E} = B_2^n$, we have $m_1(\mu,\mathcal{E}) \simeq \sqrt{n}$, and so the estimate (\ref{eq:intro-ellipsoids}) precisely coincides with the one obtained in \cite{GPV-ImprovedPsi2} for \emph{isotropic} log-concave measures and Euclidean balls. An alternative proof of this particular case was obtained in \cite{GPV-DistributionOfPsi2} using a very similar approach to the one we employ in this work, namely self-improving a weak Sudakov Minoration estimate via dimension-reduction. In the isotropic case, further improved packing estimates (for an appropriate range of $p,t$) were obtained in \cite[Subsection 3.3]{EMilman-IsotropicMeanWidth}. However, we do not know how to extend the approaches of \cite{GPV-ImprovedPsi2,EMilman-IsotropicMeanWidth} to the general non-isotropic case so that our sharp estimate (\ref{eq:intro-ellipsoids}) is recovered (see Subsection \ref{subsec:conclude-ell} for more details). 
\item In Section \ref{sec:pure}, we introduce the class of $h$-pure log-concave probability measures $\mu$, which includes several important sub-families, such as unconditional, sub-Gaussian and super-Gaussian log-concave measures. In particular, a log-concave measure is called $1$-pure if all of its lower-dimensional marginals have uniformly bounded isotropic constant (see Section \ref{sec:pure}). A regular packing estimate for $M(B_2^n, t Z_n(\mu))$ when $\mu$ is an isotropic $1$-pure log-concave probability measure was obtained by Giannopoulos--Milman in \cite{GiannopoulosEMilman-IsotropicM}, and we extend it here to general $h$-pure measures, as it plays an important role in the subsequent section. \item In Section \ref{sec:part2}, we use the previous results to establish Part 2 of The Program in a variety of scenarios, such as when the log-concave measure $\nu$ is assumed $h$-pure, or when $Z_m(\nu)$ or $L \in \L_m$ are assumed to have regular small-diameter, such as for type-2 convex bodies, sub-Gaussian convex bodies or unconditional convex bodies with small-diameter, and in particular for $\ell_q^m$ unit-balls, $q \in [2,\infty]$. In view of Remark \ref{rem:part2}, this confirms the Generalized Dual Sudakov Minoration conjecture for such $\nu$ and $L$ up to non-trivial, but unfortunately dimension-dependent, constants. In particular, assuming a positive answer to the Slicing Problem (see Section \ref{sec:pure}), we confirm the conjecture up to non-trivial dimension-dependent constants, a highly non-trivial challenge which constitutes one of the main results of this work. \item In Section \ref{sec:part1-cubes} we establish Part 1 of (the full version of) The Program for $K = B_\infty^n$, the $n$-dimensional cube, with additional logarithmic terms in the dimension. Running The Program, this yields for all $p \geq 1$ and $t > 0$: \begin{equation} \label{eq:intro-cubes} M(Z_p(\mu) , t C \log \log (e+n) m_1(\mu, B_\infty^n) B_\infty^n) \leq \exp \brac{C \log(e+n) \brac{ \frac{p}{t^2} + \frac{p}{t} } } . \end{equation} In Section \ref{sec:conclude}, we verify that for general log-concave measures $\mu$ and up to the above logarithmic terms, this estimate is best-possible (up to numeric constants) for all $p \in [1,n]$ and $t \geq \min(1,\sqrt{p/n^\alpha})$, for any fixed $\alpha \in (0,1)$. Removing these logarithmic terms would establish the Generalized Dual Sudakov conjecture in full generality, since any origin-symmetric convex-body $K \subset \mathbb{R}^n$ may be approximated by an $n$-dimensional section of $B_\infty^N$ as $N \rightarrow \infty$ (in fact, using $N = e^n$ would be enough). A similar argument verifies that (\ref{eq:intro-cubes}) also holds with $B_\infty^n$ replaced by any origin-symmetric polytope with $n^\beta$ facets, for any fixed $\beta \geq 1$ (see Corollary \ref{cor:RegularSudakovPolytopes}). So from an optimistic perspective, we are only $\log N$ far from establishing the conjecture, where $N$ is the dimension of the cube into which $K$ (isomorphically) embeds. \end{enumerate} In Section \ref{sec:conclude} we present some further concluding remarks. \medskip \noindent \textbf{Acknowledgements.} We thank the anonymous referee for the meticulous reading of the manuscript and for providing many useful comments. \section{Notation} \label{sec:prelim} We work in Euclidean space $(\mathbb{R}^n,\abs{\cdot})$, where $\abs{\cdot}$ denotes the standard Euclidean norm. The Euclidean unit-ball is denoted by $B_2^n$ and the Euclidean unit-sphere by $S^{n-1}$. 
We also use $\abs{A}$ to denote the volume (or Lebesgue measure) of a Borel set $A \subset \mathbb{R}^n$ in its $m$-dimensional affine hull (there will be no ambiguity with this standard double role of $\abs{\cdot}$); the volume-radius of $A$ is then defined as ${\rm vrad}(A) := (\abs{A} / \abs{B_2^m})^{1/m}$. It is well-known that $\abs{B_2^m}^{1/m} \simeq 1 / \sqrt{m}$. The Grassmannian of all $m$-dimensional linear subspaces of $\mathbb{R}^n$ is denoted by $G_{n,m}$, $m=1,\ldots,n$. All homogeneous spaces $G$ of the group of rotations $SO(n)$ are equipped with their Haar probability measures $\sigma_G$, and in particular $\sigma = \sigma_{S^{n-1}}$ denotes the corresponding Haar probability measure on $S^{n-1}$. Given $F \in G_{n,m}$, we denote by $P_F$ the orthogonal projection onto $F$, and set $B_2(F) := B_2^n \cap F$ and $S(F) := S^{n-1} \cap F$. Given a Borel measure $\mu$ on $\mathbb{R}^n$, its marginal $\pi_F \mu$ is defined as the push-forward $(P_F)_*(\mu) = \mu \circ P_F^{-1}$. A consequence of the Pr\'ekopa--Leindler celebrated extension of the Brunn--Minkowski inequality (e.g. \cite{GardnerSurveyInBAMS}), is that the marginal $\pi_F \mu$ of a log-concave measure $\mu$ is itself log-concave on $F$. The support function of a compact set $L$ is defined as $h_L(\theta) := \max \set{\scalar{x,\theta} ; x \in L}$, $\theta \in S^{n-1}$. Recall that a star-body $L \subset \mathbb{R}^n$ is a compact set containing the origin in its interior so that $t L \subset L$ for all $t \in [0,1]$. We denote $\norm{x}_L := \min \set{t > 0 ; x \in t L}$. The radial function $\rho_L(\theta)$ is defined as $1/\norm{\theta}_L$ for $\theta \in S^{n-1}$. When $K$ is an origin-symmetric convex body, $\norm{\cdot}_K$ is a genuine norm whose unit-ball is precisely $K$, and its support function coincides with the dual-norm $h_K(\theta) = \norm{\theta}_K^*$. The Minkowski sum of two compact sets $A,B \subset \mathbb{R}^n$ is defined as the compact set $A + B := \set{ a+b \; ; \; a \in A ~,~ b \in B}$, and satisfies $h_{A+B} = h_A + h_B$. We will write $L_1 \simeq L_2$ if $c L_2 \subset L_1 \subset C L_2$ for some universal constants $c,C >0$. \subsection{Quantiles of log-concave probability measures} Given an absolutely continuous probability measure $\mu$ on $\mathbb{R}^n$, a star-body $L\subset \mathbb{R}^n$ and $q > 0$, recall that: \[ m_q(\mu,L) := \sup \set{ s > 0 ; \mu( s L) \leq e^{-q} } , \] so that $\mu(m_q(\mu,L) L) = e^{-q}$. In addition, given $q > -1$, we define: \[ I_q(\mu,L) := \brac{\int \norm{x}_L^q d\mu(x)}^{1/q} . \] \begin{lem} \label{lem:Guedon} Let $K$ be an origin-symmetric convex body and let $\mu$ denote a log-concave probability measure on $\mathbb{R}^n$. Then for all $q \geq 1$: \[ c e^{-q} m_1(\mu,K) \leq m_q(\mu,K) \leq m_1(\mu,K) \simeq I_1(\mu,K) \leq I_q(\mu,K) \leq C q I_1(\mu,K) . \] \end{lem} \begin{proof} The first inequality follows by a Kahane--Khintchine-type inequality for negative moments due to O.~Gu\'edon \cite{Guedon-extension-to-negative-p}, which asserts that under our assumptions: \[ \mu(\epsilon \; m_1(\mu,K) K) \leq 2 \ln (\frac{e}{e-1}) \epsilon \;\;\; \forall \epsilon \in [0,1] . \] The second inequality is trivial. The inequality $m_1(\mu,K) \leq \frac{e}{e-1} I_1(\mu,K)$ follows directly by the Markov-Chebyshev inequality. 
The reverse inequality $I_1(\mu,K) \leq C m_1(\mu,K)$ follows again by Markov-Chebyshev in conjunction with the negative moment comparison $I_1(\mu,K) \leq C_q I_{q}(\mu,K)$ for all $q \in (-1,0]$ established in \cite{Guedon-extension-to-negative-p}. The inequality $I_1(\mu,K) \leq I_q(\mu,K)$ is immediate by Jensen's inequality. Finally, the Kahane--Khintchine-type inequality $I_q(\mu,K) \leq C q I_1(\mu,K)$ is a known consequence of Borell's lemma \cite{Borell-logconcave} (e.g. \cite{Milman-Pajor-LK}, \cite[Appendix III]{Milman-Schechtman-Book} or \cite[Theorem 2.4.6]{GreekBook}). \end{proof} \subsection{Centroid Bodies} Recall that the $L_p$ ($p\geq 1$) centroid-bodies $Z_p(\mu)$ associated to a log-concave probability measure $\mu$ on $\mathbb{R}^n$ are defined by: \[ h_{Z_p(\mu)}(\theta) = \brac{\int \abs{\scalar{x,\theta}}^p d\mu(x) }^{1/p} \;\; , \;\; \theta \in S^{n-1} . \] Note that $T(Z_p(\mu)) = Z_p(T_* \mu)$ for any linear mapping $T$, and in particular $P_F Z_p(\mu) = Z_p(\pi_F \mu)$ for all $F \in G_{n,m}$. It is well-known that: \begin{equation} \label{eq:Zpq} 1 \leq p \leq q \;\; \Rightarrow \;\; Z_p(\mu) \subset Z_q(\mu) \subset C \frac{q}{p} Z_p(\mu) , \end{equation} for some constant $C \geq 1$. The first inequality is simply Jensen's inequality, whereas the second one is due to Berwald \cite{BerwaldMomentComparison}, or may be deduced as in Lemma \ref{lem:Guedon} as a consequence of Borell's Lemma \cite{Borell-logconcave}. In fact, it was noted by Lata{\l}a and Wojtaszczyk \cite[Proposition 3.8]{LatalaJacobInfConvolution} that when $\mu$ is origin-symmetric, one may use $C=1$ above (note that the argument in \cite{LatalaJacobInfConvolution} applies to the entire range $1 \leq p \leq q$). \begin{lem} \label{lem:IpWish1} For any probability measure $\mu$, origin-symmetric convex body $K$ and $p \geq 1$: \[ Z_p(\mu) \subset I_p(\mu,K) K . \] \end{lem} \begin{proof} For all $\theta \in S^{n-1}$: \[ h^p_{Z_p(\mu)}(\theta) = \int \abs{\scalar{x,\theta}}^p d\mu(x) \leq \int \norm{x}_K^p d\mu(x) \; h_K^p(\theta) ~. \] \end{proof} \medskip We denote by ${\rm Cov}(\mu)$ the covariance matrix of $\mu$, defined as ${\rm Cov}(\mu) := \int x \otimes x \; d\mu(x) - \int x \; d\mu(x) \otimes \int x \; d\mu(x)$. We will say that $\mu$ is isotropic if its barycenter is at the origin and ${\rm Cov}(\mu)$ is the identity matrix $Id$. It is easy to see that by applying an affine transformation, any absolutely continuous probability measure may be brought to isotropic ``position", which is unique up to orthogonal transformations. We will always assume that $\mu$ has barycenter at the origin, so that $\mu$ is isotropic if and only if $Z_2(\mu) = B_2^n$; more generally, we always have $Z_2(\mu) = {\rm Cov}(\mu)^{1/2}(B_2^n)$, so that $\abs{Z_2(\mu)} = \abs{B_2^n} (\det \; {\rm Cov}(\mu))^{1/2}$ (where we identified between a matrix and its associated linear operator). \subsection{Packing and Covering Numbers} \label{subsec:pack-cover} Recall that given two compact sets $A,B \subset \mathbb{R}^n$, the packing number $M(A,B)$ of $B$ in $A$ is defined as the maximal integer $M$ so that there exist $\set{x_i}_{i=1,\ldots,M} \subset A$ with $x_i + B$ mutually disjoint (``$\set{x_i}$ are $B$-separated"); note that a more standard definition in the literature is to assume that $x_i - x_j \notin \tilde{B}$, which coincides with our definition if $\tilde{B} = B - B$, yielding a factor of $2$ in $B$ if the latter is origin-symmetric. 
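\medskip For concreteness, the following minimal Python sketch (an ad-hoc illustration of ours, not part of the formal development; the grid resolution, the radii and the helper \texttt{greedy\_packing} are arbitrary choices) greedily constructs a $B$-separated family in the above sense, with $A$ a discretization of the square $[-1,1]^2$ and $B = r B_2^2$; the translates $x_i + r B_2^2$ are mutually disjoint precisely when all pairwise distances exceed $2r$, so the cardinality of any such family is a lower bound on $M(A , r B_2^2)$, to be compared with the volumetric upper bound recorded in (\ref{eq:packing-vol}) below.
\begin{verbatim}
import math

def greedy_packing(points, r):
    # Greedily retain points at pairwise distance > 2r, so that the
    # discs x_i + r*B_2^2 are mutually disjoint (a B-separated family).
    chosen = []
    for p in points:
        if all(math.dist(p, q) > 2 * r for q in chosen):
            chosen.append(p)
    return chosen

grid = [(-1 + i / 50, -1 + j / 50) for i in range(101) for j in range(101)]
for r in (0.5, 0.25, 0.1):
    family = greedy_packing(grid, r)
    # volumetric upper bound M(A, rB) <= |A + rB| / |rB| for the square A
    upper = (2 + 2 * r) ** 2 / (math.pi * r ** 2)
    print(r, len(family), round(upper, 1))
\end{verbatim}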
The covering number $N(A,B)$ of $A$ by $B$ is defined as the minimal integer $N$ so that there exist $\set{x_i}_{i=1,\ldots,N}$ with $A \subset \bigcup_{i=1}^N (x_i + B)$. The following relation between packing-numbers and covering-numbers is well-known (see e.g. \cite[Chapter 4]{AGA-Book-I}): \[ N(A,B-B) \leq M(A,B) \leq N(A,-B) . \] When $B$ is an origin-symmetric convex body $K$, it follows that: \begin{equation} \label{eq:cover-pack} N(A,2K) \leq M(A,K) \leq N(A,K) , \end{equation} and so up to this immaterial factor of $2$, we need not distinguish between packing and covering numbers. Note that Lemma \ref{lem:IpWish1} implies that $N(Z_p(\mu) , I_p(\mu,K) K) = 1$. The Generalized Dual Sudakov Conjecture asserts that for log-concave measures, it is possible to replace $I_p(\mu,K)$ by $C I_1(\mu,K)$, and still cover $Z_p(\mu)$ with $\exp(C p)$ copies of $C I_1(\mu,K) K$. \medskip Clearly $M(A,B)$ and $N(A,B)$ are both invariant under simultaneously applying a non-singular linear transformation to both $A$ and $B$, and under translation of $A$ or $B$. The following triangle inequality for covering numbers is obvious for all compact $A,B,D$: \[ N(A,B) \leq N(A,D) N(D,B) . \] Note that the following variant also holds for packing numbers: \begin{lem} \[ M(A,B) \leq N(A,D) M(D,B) . \] \end{lem} \begin{proof} Let $Z$ denote a $B$-separated set in $A$ of cardinality $M$, and assume that $A \subset \bigcup_{i=1}^N (x_i + D)$. Clearly $(Z - x_i) \cap D$ is a $B$-separated subset of $D$, and hence: \[ M = \# Z \leq \sum_{i=1}^N \# (Z \cap (x_i + D)) \leq N M(D,B) . \] \end{proof} We will frequently use the following obvious volumetric estimates: \begin{equation} \label{eq:packing-vol} \frac{\abs{A}}{\abs{B}} \leq N(A,B) \;\;\; , \;\;\; M(A,B) \leq \frac{\abs{A+B}}{\abs{B}} . \end{equation} In particular, when $K \subset \mathbb{R}^n$ is an origin-symmetric convex body, we have the standard volumetric estimate: \begin{equation} \label{eq:volumetric} \brac{\frac{1}{t}}^n \leq N(K , t K) \leq M(K , (t/2) K) \leq \brac{\frac{1+ t/2}{t/2}}^n \leq \brac{1 + \frac{2}{t}}^n \;\;\; \forall t \in (0,1] . \end{equation} \begin{lem} \label{lem:extend} Assume that for some compact $A$ and convex body $K$ in $\mathbb{R}^n$: \[ N(A,t K) \leq \exp(n \varphi(t)) \;\;\; \forall t \geq t_0 , \] for some function $\varphi : [t_0,\infty) \rightarrow \mathbb{R}_+$. Then the same estimate holds for all $t > 0$ after defining: \[ \varphi(t) := \varphi(t_0) + \log(1 + (2t_0)/t) \;\; , \;\; t \in (0,t_0) . \] \end{lem} \begin{proof} \[ N(A,t K) \leq N(A, t_0 K) N(t_0 K , t K) \leq \exp(n \varphi(t_0)) (1+ (2t_0)/t)^n . \] \end{proof} \section{Warm Up - Sudakov Minoration via The Program} \label{sec:Sudakov} In this section, we demonstrate the usefulness of The Program by applying an analogous program which yields a new proof of the Sudakov Minoration Inequality. Given a compact set $K \subset \mathbb{R}^n$, denote (half) the mean-width of $K$ by: \[ M_{\mathbb{R}^n}^*(K) = M^*(K) := \int_{S^{n-1}} h_K(\theta) d\sigma(\theta) . \] \begin{thm}[Improved Sudakov Minoration] \label{thm:Sudakov} For any compact $K \subset \mathbb{R}^n$, one has: \[ M(K , C t M^*(K) B_2^n) \leq \exp\brac{ \frac{n}{\max(t ,t^2)} } \;\;\; \forall t > 0 , \] for some universal constant $C \geq 1$. \end{thm} \begin{rem} Note that $M^*(K)$ is invariant under taking the convex hull of $K$, and so we may as well assume that $K$ is a convex body above. In that case, it is well-known (e.g. \cite[p. 
203]{AGA-Book-I}) and easy to check by polar-integration that $I^*_1(\gamma_n,K) \simeq \sqrt{n} M^*(K)$. Translating the classical Sudakov Minoration stated in the Introduction using the present notation, it asserts that the left-hand-side is majorized by $\exp(n / t^2)$. The improved version above for $t \in (0,1)$ is known to experts, and follows from an elementary volumetric argument, reproduced below. \end{rem} The simplest text-book proof of Sudakov Minoration we are aware of is obtained by first establishing a dual version using a covering estimate of Talagrand, and then applying a duality argument due to Tomczak-Jaegermann (see \cite[Chapter 3.3]{LedouxTalagrand-Book},\cite[Chapter 4.2]{AGA-Book-I}). The proof we provide below is very different: we work with the primal version directly; we first establish the easy ``weak" covering estimate $\exp(n / t)$ from elementary volumetric considerations (taking care of the analogues of Part 2 and Part 3 of The Program); and finally self-improve this estimate when $t \geq 1$ by employing dimension reduction (Part 1 of The Program) via the usual Johnson--Lindenstrauss lemma. \begin{lem}[Weak Sudakov Inequality, folklore] \label{lem:weak-Sudakov} For any convex body $K \subset \mathbb{R}^n$: \[ M(K, t M^*(K) B_2^n) \leq \exp \brac{\frac{n}{t}} \;\;\; \forall t > 0 . \] \end{lem} \begin{proof} By linearity of the support functions we have $M^*(K + t L) = M^*(K) + t M^*(L)$ for all $t \geq 0$. We invoke Urysohn's inequality (e.g. \cite{GiannopoulosMilmanHandbook}, \cite[Chapter 6]{Schneider-Book}), which states that $\vrad(K) \leq M^*(K)$. Coupled with the standard volumetric covering estimate (\ref{eq:packing-vol}), we obtain: \[ M(K, t M^*(K) B_2^n) \leq \frac{\abs{K + t M^*(K) B_2^n}}{\abs{t M^*(K) B_2^n}} \leq \brac{\frac{M^*(K + t M^*(K) B_2^n)}{t M^*(K)}}^n = \brac{\frac{1+t}{t}}^n \leq \exp \brac{\frac{n}{t}} . \] \end{proof} \begin{lem}[Johnson--Lindenstrauss Lemma \cite{JohnsonLindenstraussLemma}] \label{lem:JL} Let $F \in G_{n,m}$ be a random $m$-dimensional subspace of Euclidean space $(\mathbb{R}^n,\abs{\cdot})$ distributed according to the Haar probability measure on $G_{n,m}$, $m=1,\ldots,n$, and let $P_F$ denote the orthogonal projection onto $F$. Then for all $x \in S^{n-1}$: \begin{enumerate} \item \[ \P\brac{ \abs{\sqrt{n/m}\abs{P_F x} - 1} \geq \epsilon } \leq C \exp(- c m \epsilon^2) \;\;\; \forall \epsilon > 0 . \] \item Let $\set{x_i}_{i=1,\ldots, M} \subset \mathbb{R}^n$ be a collection of (say distinct) points. Then: \[ \P \brac{ 1-\epsilon \leq \frac{\sqrt{n/m} \abs{P_F x_i - P_F x_j}}{\abs{x_i - x_j}} \leq 1+ \epsilon \;\;\; \forall i \neq j } \geq 1 - {M \choose 2} C \exp(- c m \epsilon^2) . \] \end{enumerate} \end{lem} \begin{proof}[Proof Sketch] The first assertion follows from concentration on the sphere and the fact that for a fixed $F_0 \in G_{n,m}$, $S^{n-1} \ni x \mapsto \abs{P_{F_0} x}$ is a $1$-Lipschitz function. The second part follows immediately from the first part, linearity, and the union-bound. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:Sudakov}] When $t \in (0,C_0]$, where $C_0 \geq 1$ is a large-enough constant to be determined, the assertion follows from Lemma \ref{lem:weak-Sudakov}. When $t \geq C_0$, we proceed as follows. Set: \[ e^k := M(K, t M^*(K) B_2^n) , \] Lemma \ref{lem:weak-Sudakov} ensures that $k \leq n/C_0$, and since the packing number is an integer, we may assume that $k \geq \log 2$ (otherwise $k=0$ and there is nothing to prove). 
Let $\set{x_i}_{i=1,\ldots,e^k}$ denote a maximal collection of points in $K$ which are $t M^*(K) B_2^n$-separated. By the Johnson--Lindenstrauss Lemma \ref{lem:JL}, we may choose $C_0$ large enough so that setting $m := \lceil C_0 k \rceil \in [C_0 \log 2 , n]$, an orthogonal projection $P_F$ onto a randomly selected $F \in G_{n,m}$ with respect to its Haar probability measure $\sigma_{G_{n,m}}$, will satisfy with probability at least $1 - {e^k \choose 2} C\exp(-c m (1/2)^2) > 1/2$ that: \begin{equation} \label{eq:Sudakov-union1} \text{$\set{P_F(x_i)}$ are $\frac{1}{2} t M^*(K) \sqrt{\frac{m}{n}} B_2(F)$-separated} . \end{equation} In addition, since $h_{P_F K} = h_K|_F$, note that: \begin{align*} M^*(K) = \int_{S^{n-1}} h_K(\theta) d\sigma_{S^{n-1}}(\theta) & = \int_{G_{n,m}} \int_{S(F)} h_K(\theta) d\sigma_{S(F)}(\theta) d\sigma_{G_{n,m}}(F) \\ & = \int_{G_{n,m}} M^*_F(P_F K) d\sigma_{G_{n,m}}(F) , \end{align*} and so by the Markov--Chebyshev inequality, \begin{equation} \label{eq:Sudakov-union2} M^*_F(P_F K) \leq 2 M^*(K) \end{equation} with probability at least $1/2$ (in fact, it follows by a result of Klartag--Vershynin \cite[Section 3]{Klartag-Vershynin} that this holds with much higher probability, but this is not required here). By the union bound, it follows that there exists a subspace $F \in G_{n,m}$ for which both (\ref{eq:Sudakov-union1}) and (\ref{eq:Sudakov-union2}) hold. Hence, applying Lemma \ref{lem:weak-Sudakov} to $P_F K$ in $F \in G_{n,m}$: \[ e^k \leq M\brac{P_F K , \frac{1}{4} t M^*_F(P_F K) \sqrt{\frac{m}{n}} B_2(F)} \leq \exp \brac{4 \frac{m}{t \sqrt{m/n}}} . \] Using that $m \leq C_0 k + 1 \leq (C_0 + 1/\log(2)) k$ and solving for $k$, we obtain: \[ k \leq C' \frac{n}{t^2} , \] and hence: \[ M(K, t M^*(K) B_2^n) = e^k \leq \exp\brac{C' \frac{n}{t^2}} \;\;\; \forall t \geq C_0 , \] concluding the proof. \end{proof} \section{Part 3 - the case $p \geq n$} \label{sec:part3} In this section we establish Part 3 of The Program. In fact, we will establish the following regular version thereof, in preparation for introducing the full version of The Program in the next section. \begin{thm} \label{thm:part3} Let $\mu$ denote an origin-symmetric log-concave probability measure on $\mathbb{R}^n$, and let $L \subset \mathbb{R}^n$ denote a star-body. Then for any $p \geq n$, we have: \[ M(Z_p(\mu) , C t m_q(\mu,L) L) \leq \exp(1 + q + \frac{p}{t}) \;\;\; \forall t , q > 0 . \] In particular, if $\mu(L) \geq e^{-p}$ with $p \geq n$ then $M(Z_p(\mu) , C L) \leq \exp(3 p)$. \end{thm} \medskip For the proof, we require a bit of preparation, emphasizing that $L$ need not be convex but only star-shaped, which might be useful for establishing Part 1 of The Program (as indicated by some preliminary attempts we do not describe here). We start with the following variation on Talagrand's proof of the dual Sudakov Minoration (e.g. \cite[Chapter 3.3]{LedouxTalagrand-Book},\cite[Chapter 4.2]{AGA-Book-I}), which was already used by Hartzoulaki in her PhD Thesis \cite{Hartzoulaki-PhD} and subsequently employed by other authors as well (cf. \cite{LitvakMilmanPajor-QuasiConvex,GPV-ImprovedPsi2}). \medskip Recall that $\lambda_K$ denotes the uniform probability measure on the convex body $K$. \begin{prop} \label{prop:Talagrand} Let $K$ denote a convex body and let $L$ denote a star-body in $\mathbb{R}^n$. Then: \[ M(K , 2 t m_q(\lambda_K , L) L) \leq \exp(1 + q + \frac{n}{t}) \;\;\; \forall q, t > 0 . 
\] \end{prop} For the proof, we will utilize the following auxiliary probability measure on $\mathbb{R}^n$, which may be associated to any star-body $K \subset \mathbb{R}^n$: \[ \mu_K := \frac{1}{n! \abs{K}} e^{-\norm{x}_K} dx . \] \begin{lem} With the same assumptions as in Proposition \ref{prop:Talagrand}: \[ M(K , t \; m_q(\mu_K,L) L) \leq \exp(q + \frac{1}{t}) \;\;\; \forall q, t > 0 . \] \end{lem} \begin{proof} Let us show the following equivalent formulation: \begin{equation} \label{eq:Talagrand0} M(K , r L) \leq \frac{\exp( s / r)}{\mu_K(s L)} \;\;\; \forall r , s > 0 . \end{equation} By definition, there exist $M := M(K,r L)$ points $z_1 , \ldots, z_M \in K$ so that the sets $\set{z_i + r L}$ are mutually disjoint. Hence, for all $s > 0$, the sets $\set{\frac{s}{r} z_i + s L}$ are also mutually disjoint. In addition, by convexity of $K$: \[ \mu_K\brac{\frac{s}{r} z_i + s L} = \frac{1}{n! \abs{K}} \int_{sL} e^{-\norm{\frac{s}{r} z_i + x}_K} dx \geq \frac{1}{n! \abs{K}} e^{-\frac{s}{r} \norm{z_i}_K} \int_{s L} e^{-\norm{x}_K} dx \geq e^{-\frac{s}{r}} \mu_K(s L) . \] Consequently: \[ 1 \geq \sum_{i=1}^M \mu_K\brac{\frac{s}{r} z_i + s L} \geq M e^{-\frac{s}{r}} \mu_K(s L) , \] establishing (\ref{eq:Talagrand0}), as required. \end{proof} \begin{lem} For all star-bodies $K,L \subset \mathbb{R}^n$ and $q > 1$, we have: \[ m_{q-1}(\lambda_K , L) \geq \frac{1}{2n} m_q(\mu_K, L) . \] \end{lem} \begin{proof} For all $s > 0$: \[ \mu_K(sL) = \frac{1}{n! \abs{K}} \int_{sL} e^{-\norm{x}_K} dx = \frac{1}{n! \abs{K}} \int_0^\infty \abs{t K \cap sL} e^{-t} dt = \frac{1}{n!} \int_0^\infty t^n e^{-t} \lambda_K(\frac{s}{t} L) dt . \] Applying this to $s := m_q(\mu_K,L)$, we obtain: \[ e^{-q} = \mu_K(s L) = \frac{1}{n!} \int_0^\infty t^n e^{-t} \lambda_K\brac{\frac{s}{t} L} dt \geq \lambda_K\brac{\frac{s}{2n} L} \frac{1}{n!} \int_0^{2n} t^n e^{-t} dt \geq \frac{1}{e} \lambda_K\brac{\frac{s}{2n} L} , \] where the very rough estimate $\int_0^{2n} t^n e^{-t} dt \geq \frac{1}{e} \int_0^\infty t^n e^{-t} dt$ is standard and may be easily verified by direct calculation (or e.g. by Markov's inequality when $n \geq 4$). It follows that $\frac{s}{2n} \leq m_{q-1}(\lambda_K , L)$, as asserted. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:Talagrand}] Applying the previous two lemmas, the proof is immediate: \[ M(K , 2 t \; m_q(\lambda_K , L) L) \leq M\brac{K , \frac{t}{n} m_{q+1}(\mu_K, L) L} \leq \exp(1 + q + \frac{n}{t}) . \] \end{proof} \medskip One final ingredient we require for the proof of Theorem \ref{thm:part3} involves the following star-body, introduced by K.~Ball \cite{Ball-kdim-sections} (cf. \cite[Chapter 10]{AGA-Book-I}). Given a probability measure $\mu$ on $\mathbb{R}^n$ with continuous and exponentially-decaying density $f_\mu$ with $f_\mu(0) > 0$ (``non-degenerate measure"), and $p \geq 1$, denote by $K_p(\mu) \subset \mathbb{R}^n$ the star-body with radial function: \[ \rho_{K_p(\mu)}(\theta) = \brac{ \frac{p}{\max f_\mu} \int_0^\infty r^{p-1} f_\mu(r \theta) dr}^{\frac{1}{p}} ~,~ \theta \in S^{n-1} . \] Note our slightly non-standard normalization involving $\max f_\mu$ instead of $f_\mu(0)$, which seems to be more convenient. Integration in polar coordinates immediately verifies that $\abs{K_n(\mu)} = \frac{1}{\max f_\mu}$, and that (cf. \cite{Paouris-IsotropicTail}): \begin{equation} \label{eq:ZpKp} Z_p(\lambda_{K_{n+p}(\mu)}) = \brac{\frac{\abs{K_n(\mu)}}{\abs{K_{n+p}(\mu)}}}^{1/p} Z_p(\mu) . 
\end{equation} \begin{lem} For any star-body $L \subset \mathbb{R}^n$: \[ \mu(L) \leq \lambda_{K_n(\mu)}(L) . \] In particular, for all $q > 0$: \[ m_q(\mu, L) \geq m_q(\lambda_{K_n(\mu)},L) . \] \end{lem} \begin{proof} Simply note that for all $\theta \in S^{n-1}$: \begin{align*} & \int_0^{\rho_L(\theta)} r^{n-1} f_\mu(r \theta) dr \leq \min\brac{(\max f_\mu) \frac{\rho_L^n(\theta)}{n} , \int_0^\infty r^{n-1} f_\mu( r\theta) dr} \\ & = \frac{\max f_\mu}{n} \min(\rho_{L}^n(\theta) , \rho_{K_n(\mu)}^n(\theta)) = \frac{\max f_\mu}{n} \rho^n_{K_n(\mu) \cap L}(\theta) . \end{align*} Integrating the above ray-wise inequality on $S^{n-1}$, we obtain: \begin{align*} \mu(L) & = \int_{S^{n-1}} \int_0^{\rho_L(\theta)} r^{n-1} f_\mu(r \theta) dr d\theta \leq \max f_\mu \int_{S^{n-1}} \int_0^{\rho_{K_n(\mu) \cap L}(\theta)} r^{n-1} dr d\theta \\ & = \max f_\mu \abs{K_n(\mu) \cap L} = \lambda_{K_n(\mu)}(L) , \end{align*} as required. \end{proof} Remarkably, it was observed by K.~Ball \cite{Ball-kdim-sections} that when $\mu$ is an origin-symmetric log-concave probability measure, $K_p(\mu)$ is in fact a convex body for all $p \geq 1$; this was extended in \cite{KlartagPerturbationsWithBoundedLK} to the non-symmetric case (assuming that $f_\mu(0) > 0$). Consequently, Proposition \ref{prop:Talagrand} immediately yields the following: \begin{cor} \label{cor:Talagrand} For any log-concave probability measure $\mu$ on $\mathbb{R}^n$ so that $f_\mu(0) > 0$, star-body $L \subset \mathbb{R}^n$, and $q , t > 0$: \[ M(K_n(\mu) , 2 t m_q(\mu , L) L ) \leq M(K_n(\mu) , 2 t m_q(\lambda_{K_n(\mu)}, L) L) \leq \exp(1 + q + \frac{n}{t}) . \] \end{cor} It remains to pass from $K_n(\mu)$ to $Z_n(\mu)$ in the packing estimate above. This is standard, but for completeness, and in order to prove an additional estimate we will require later on, we provide a proof. First, it is known \cite{Paouris-Small-Diameter} that: \begin{equation} \label{eq:ZnLambda} Z_n(\lambda_K) \simeq \text{conv}(K \cup -K) , \end{equation} for any convex body $K \subset \mathbb{R}^n$. It is also known (see \cite{BarlowMarshallProschan,Ball-kdim-sections,Milman-Pajor-LK} for the even case and \cite[Lemmas 2.5,2.6]{KlartagPerturbationsWithBoundedLK} or \cite[Lemma 3.2 and (3.12)]{PaourisSmallBall} for the general one, noting our non-standard normalization) that for any log-concave measure $\mu$ on $\mathbb{R}^n$ whose barycenter is at the origin, we have: \begin{equation} \label{eq:Kpq} 1 \leq p \leq q \;\; \Rightarrow \;\; K_p(\mu) \subset K_q(\mu) \subset \frac{\Gamma(q+1)^{1/q}}{\Gamma(p+1)^{1/p}} e^{n (\frac{1}{p} - \frac{1}{q})}K_p(\mu) . \end{equation} In particular, $K_n(\mu) \simeq K_{2n}(\mu)$. Combining this with (\ref{eq:ZpKp}) and (\ref{eq:ZnLambda}), we obtain for an \emph{origin-symmetric} log-concave measure $\mu$ (for which $f_\mu(0) = \max f_\mu > 0$): \begin{equation} \label{eq:ZnKn} Z_n(\mu) = \brac{\frac{\abs{K_{2n}(\mu)}}{\abs{K_{n}(\mu)}}}^{1/n} Z_n(\lambda_{K_{2n}(\mu)}) \simeq Z_n(\lambda_{K_{2n}(\mu)}) \simeq K_{2n}(\mu) \simeq K_n(\mu) . \end{equation} Note that the origin-symmetry of $\mu$ was crucially used to ensure $Z_n(\lambda_{K_{2n}(\mu)}) \simeq K_{2n}(\mu)$. It is possible to dispose of this restriction by employing the one-sided variants $Z_n^+(\mu)$ introduced in \cite{GuedonEMilmanInterpolating}, but we do not pursue this here. 
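\medskip As a concrete one-dimensional sanity check of the above relations (a numerical illustration of ours, used nowhere in the sequel), consider the two-sided exponential measure $d\mu = \frac{1}{2} e^{-\abs{x}} dx$ on $\mathbb{R}$, for which $\max f_\mu = f_\mu(0) = \frac{1}{2}$. A direct computation shows that $\rho_{K_p(\mu)}(1)$ and $h_{Z_p(\mu)}(1)$ both equal $\Gamma(p+1)^{1/p}$, so that in this example $K_n(\mu) = Z_n(\mu)$ exactly, in accordance with (\ref{eq:ZnKn}); the following Python snippet (whose function names are ours) verifies this numerically.
\begin{verbatim}
import math
from scipy.integrate import quad

f = lambda x: 0.5 * math.exp(-abs(x))   # density of mu; max f = f(0) = 1/2
max_f = 0.5

def rho_Kp(p):
    # radial function of K_p(mu) at theta = 1, via its defining integral
    integral, _ = quad(lambda r: r ** (p - 1) * f(r), 0, math.inf)
    return (p / max_f * integral) ** (1 / p)

def h_Zp(p):
    # support function of Z_p(mu) at theta = 1
    integral, _ = quad(lambda x: abs(x) ** p * f(x), -math.inf, math.inf)
    return integral ** (1 / p)

for p in (1, 2, 5, 10):
    print(p, rho_Kp(p), h_Zp(p), math.gamma(p + 1) ** (1 / p))
    # all three columns agree up to quadrature error
\end{verbatim}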
\medskip Summarizing, we deduce from Corollary \ref{cor:Talagrand} and (\ref{eq:ZnKn}) that for an appropriate constant $C > 0$, we have under the assumptions of Theorem \ref{thm:part3}: \[ M(Z_n(\mu) , C t m_q(\mu,L) L ) \leq M(K_n(\mu) , 2 t m_q(\mu , L) L ) \leq \exp(1 + q + \frac{n}{t}) , \] concluding the theorem for the case $p=n$. When $p \geq n$, simply use (\ref{eq:Zpq}): \[ Z_p(\mu) \subset \frac{p}{n} Z_n(\mu) , \] and conclude: \[ M(Z_p(\mu) , C t m_q(\mu, L) L) \leq M\brac{Z_n(\mu) , C t \frac{n}{p} m_q(\mu, L) L} \leq \exp( 1 + q + \frac{p}{t}) . \] The proof of Theorem \ref{thm:part3} is complete. \medskip Before concluding this section, we also record for future use the following well-known fact (cf. \cite{KlartagMilmanLogConcave,Klartag-Psi2,KlartagCLPpolynomial,PaourisSmallBall}); as we did not find a precise reference, we provide a proof for completeness. \begin{lem} \label{lem:ZnHuge} For any log-concave probability measure $\mu$ on $\mathbb{R}^n$ with barycenter at the origin we have: \[ I_1(\mu,Z_n(\mu)) \leq I_n(\mu,Z_n(\mu)) \leq C . \] \end{lem} \begin{proof} As in the proof of (\ref{eq:ZpKp}), it is immediate to verify by polar-integration that for any non-degenerate measure $\nu$ and star-body $L$ in $\mathbb{R}^n$: \[ I_p(\nu,L) = \brac{\frac{\abs{K_{n+p}(\nu)}}{\abs{K_{n}(\nu)}}}^{1/p} I_p(\lambda_{K_{n+p}(\nu)} , L) . \] Applying this to $\nu = \mu$, $L = Z_n(\mu)$ and $p=n$, and using that $K_{2n}(\mu) \simeq K_n(\mu)$ by (\ref{eq:Kpq}), we obtain: \[ I_n(\mu,Z_n(\mu)) \leq C' I_n(\lambda_{K_{2n}(\mu)}, Z_n(\mu)) . \] It remains to use as in (\ref{eq:ZnKn}) that (without any symmetry assumptions): \[ Z_n(\mu) = \brac{\frac{\abs{K_{2n}(\mu)}}{\abs{K_{n}(\mu)}}}^{1/n} Z_n(\lambda_{K_{2n}(\mu)}) \supset Z_n(\lambda_{K_{2n}(\mu)}) \supset c \; \text{Conv}(K_{2n}(\mu) \cup - K_{2n}(\mu)) \supset c K_{2n}(\mu) . \] Consequently: \[ I_n(\mu,Z_n(\mu)) \leq \frac{C'}{c} I_n(\lambda_{K_{2n}(\mu)}, K_{2n}(\mu)) \leq \frac{C'}{c} , \] concluding the proof. \end{proof} \section{The Program - Full Version} \label{sec:full-program} We are now ready to state the full version of The Program; the full version extends the simplified one presented in the Introduction by allowing the packing number after dimension reduction to \emph{drop} by a $D$-th root and by introducing an additional scaling parameter $t > 0$. As usual, we fix an origin-symmetric convex body $K \subset \mathbb{R}^n$, origin-symmetric log-concave measure $\mu$ on $\mathbb{R}^n$ and $p \geq 1$. For all $m = 1,\ldots, n$, set as usual $\mathbb{M}_m := \set{ T_* \mu \; ; \; T : \mathbb{R}^n \rightarrow \mathbb{R}^m \text{ linear}}$, which is a family of log-concave measures on $\mathbb{R}^m$ by the Pr\'ekopa--Leindler theorem. In addition, let $\L_m$ denote some family of origin-symmetric star-bodies in $\mathbb{R}^m$, so that $K \in \L_n$. The Program for establishing the Generalized Regular Dual Sudakov estimate: \begin{equation} \label{eq:gen-regular-dual-Sudakov} \mu(K) \geq \frac{1}{e} \;\; \; \Rightarrow \;\;\; M(Z_p(\mu) , t R K) \leq \exp(C_{A,B,D,\varphi_t,t} p ) , \end{equation} consists of establishing the first 2 parts below for some constants $R,A,B,D \geq 1$ and a certain function $\varphi_t$, described below. \begin{enumerate} \item \textbf{Part 1 (Massive Partial Separation Dimension Reduction)}. 
\\ If $M(Z_p(\mu) , t R K) = e^k$ with $\mu(K) \geq \frac{1}{e}$ and $4 B D \leq k \leq n/A$, show that there exist $l \in [k / D , k]$, a linear map $T: \mathbb{R}^n \rightarrow \mathbb{R}^m$, and $L \in \L_m$ with $m \leq A l$, so that: \begin{enumerate} \item $M(T Z_p(\mu), t L) \geq e^l$ (\textbf{``Partial Separation Dimension Reduction"}). \item $T_* \mu(L) \geq \exp(-q_m)$, $1 \leq q_m \leq l/2$ (``\textbf{$L$ is sufficiently massive}"). \end{enumerate} \item \textbf{Part 2 (Weak Generalized Regular Dual Sudakov)}. \\ For all $m=1,\ldots,n$, $L \in \L_m$ and $\nu \in \mathbb{M}_m$, show that: \[ 1 \leq p \leq m \;\; , \;\; \nu(L) \geq \exp(-q_m) \;\;\; \Rightarrow \;\;\; M(Z_p(\nu) , t L) \leq \exp(B + q_m + m \varphi_t(p/m)) , \] where $\varphi_t : [0,1] \rightarrow \mathbb{R}_+$ is an increasing function with $\varphi_t(0) = 0$ and $x \mapsto \varphi_t(x) / x$ non-increasing (depending only on $t$ and independent of all other parameters). \item \textbf{Part 3 (Large $p$)}. \\ For all $m=1,\ldots,n$, $L \in \L_m$ and $\nu \in \mathbb{M}_m$, Theorem \ref{thm:part3} verifies that: \[ p \geq m \;\; , \;\; \nu(L) \geq \exp(-q_m) \;\;\; \Rightarrow \;\;\; M(Z_p(\nu) , t L) \leq \exp\brac{1 + q_m + C \frac{p}{t}} , \] for some universal constant $C \geq 1$. \end{enumerate} \begin{rem} As in the Introduction, we state the following \emph{linear} version of Part 1: \begin{enumerate} \renewcommand\theenumi{(\arabic{enumi}')} \renewcommand\labelenumi{\theenumi} \item \textbf{Part 1' - Linear Version} \\ If $\set{x_i}_{i=1,\ldots,e^k} \subset \mathbb{R}^n$ is a collection of $K$-separated points with $\mu(K) \geq \frac{1}{e}$ and $4BD \leq k \leq n/A$, show that there exist $l \in [k / D , k]$, a linear map $T: \mathbb{R}^n \rightarrow \mathbb{R}^m$, and $L \in \L_m$ with $m \leq A l$, so that: \begin{enumerate} \item There exists $I \subset \set{1,\ldots,e^k}$ with $\# I \geq e^l$ so that $\set{T(x_i)}_{i \in I} \subset \mathbb{R}^m$ are $\frac{1}{R} L$-separated (\textbf{``Partial One-sided Johnson--Lindenstrauss"}). \item $T_* \mu(L) \geq \exp(-q_m)$, $1 \leq q_m \leq l/2$ (``\textbf{$L$ is sufficiently massive}"). \end{enumerate} \end{enumerate} By applying this linear version of Part 1 to a maximal collection of $K$-separated points $\set{x_i}$ in $\frac{1}{t R} Z_p(\mu)$, it is evident that establishing Part 1' is sufficient (but not necessary) for establishing Part 1 of The (full) Program. \end{rem} \begin{thm} \label{thm:full-program} Establishing The (full) Program above yields the Generalized Regular Dual Sudakov Estimate (\ref{eq:gen-regular-dual-Sudakov}) with: \begin{equation} \label{eq:C-def} C_{A,B,D,\varphi_t,t} := D \max \brac{\frac{4 \max(C, C' \frac{B}{R})}{t}, \frac{1}{A \varphi_t^{-1}(\frac{1}{4A})}} . \end{equation} \end{thm} \begin{proof} We assume that $\mu(K) \geq 1/e$. We will first show that: \begin{equation} \label{eq:goal-again} e^k := M(Z_p(\mu) , t R K) \leq \exp \brac{ D \max \brac{\frac{4C}{t} , \frac{1}{A \varphi_t^{-1}(\frac{1}{4A})}} p } , \end{equation} under the assumption that $k \geq 4 BD$. Under this assumption, there exist $l \in [k / D,k]$, a linear map $T : \mathbb{R}^n \rightarrow \mathbb{R}^m$, and $L \in \L_m$ for some $m \leq \min(n, A l)$, so that $M(T Z_p(\mu), t L) \geq e^l$ and $T_* \mu( L) \geq \exp(-q_m)$, $1 \leq q_m \leq l/2$. Indeed, if $k < n/A$ this follows from Part 1, whereas if $k \geq n/A$ this is actually trivial by using $m=n$, $l = k$, $T = Id$, $L=K$ and $q_m=1$. 
Denoting $\nu = T_* \mu \in \mathbb{M}_m$, note that $T Z_p(\mu) = Z_p(\nu)$. Also note that $l/4 \geq B \geq 1$. Consequently, if $p \geq m$ then by Part 3: \[ \exp(l) \leq M(Z_p(\nu) , t L) \leq \exp(1 + q_m + C \frac{p}{t} ) \leq \exp(1 + l/2 + C \frac{p}{t} ) \leq \exp( l/4 + l/2 + C \frac{p}{t} ) , \] implying that $k \leq D l \leq 4 D C \frac{p}{t}$, as required. Alternatively, if $p \leq m$ then by Part 2 and the assumption that $x \mapsto \varphi_t(x)/x$ is non-increasing: \[ \exp(l) \leq M(Z_p(\nu) , t L) \leq \exp(B + q_m + m \varphi_t(p/m)) \leq \exp(l/4 + l/2 + A l \varphi_t(p/(A l))) . \] It follows, since $\varphi_t$ is increasing with $\varphi_t(0) = 0$, that: \[ \frac{p}{A l} \geq \varphi_t^{-1}\brac{ \frac{1}{4 A} } > 0 , \] implying that $k \leq D l \leq D \frac{1}{A \varphi_t^{-1}(\frac{1}{4A})} p$, and establishing (\ref{eq:goal-again}) under the assumption that $k \geq 4 BD$. To complete the proof, recall that $Z_p(\mu) \subset I_p(\mu,K) K$ by Lemma \ref{lem:IpWish1}. Since $I_p(\mu,K) \leq C' p \; m_1(\mu,K) \leq C' p$ by Lemma \ref{lem:Guedon}, it follows that the left-hand-side of (\ref{eq:goal-again}) is actually $1$ (equivalently, $k=0$) for $t \geq C' p / R$, in which case there is nothing to prove. On the other hand, in the non-trivial range $t \in (0, C' p / R)$, we have $4 D C' \frac{B}{R} \frac{1}{t} p \geq 4 BD$, which leads to the definition of $C_{A,B,D,\varphi_t,t}$ in (\ref{eq:C-def}) and confirms (\ref{eq:gen-regular-dual-Sudakov}) for all $t > 0$. \end{proof} \section{Part 2 - Weak Generalized Dual Sudakov: Ellipsoids} \label{sec:part2-ellipsoids} Recall that: \[ I_q(\mu,K) := \brac{\int_{\mathbb{R}^n} \norm{x}^q_K d\mu(x)}^{1/q} . \] When $K = B_2^n$, we simply denote $I_q(\mu) = I_q(\mu,B_2^n)$. \begin{thm} \label{thm:part2-ellipsoids} Let $\mu$ denote an origin-symmetric log-concave probability measure on $\mathbb{R}^n$ and let $\mathcal{E} \subset \mathbb{R}^n$ denote an (origin-symmetric) ellipsoid. Then for any $p \in [1,n]$: \[ M(Z_p(\mu) , t I_1(\mu,\mathcal{E}) \mathcal{E}) \leq \exp \brac{ C \frac{p^{2/3} n^{1/3}}{t^{2/3}} + C \frac{\sqrt{p} \sqrt{n}}{t} } \;\;\; \forall t > 0 . \] \end{thm} Since $M(Z_p(\mu) , t I_1(\mu,\mathcal{E}) \mathcal{E})$ is invariant under simultaneously applying a linear transformation to $\mu$ and $\mathcal{E}$, we may and will reduce to the case $\mathcal{E} = B_2^n$. For the proof, our strategy will be to invoke the standard volumetric estimate on the packing numbers (see Subsection \ref{subsec:pack-cover}): \[ M(Z_p(\mu) , t B_2^n) \leq \frac{\abs{Z_p(\mu) + t B_2^n} }{ \abs{t B_2^n} } \;\;\; \forall t > 0 . \] To handle the numerator, we use Steiner's classical formula \cite[Chapter 4]{Schneider-Book}, stating that for any convex body $K \subset \mathbb{R}^n$: \[ \abs{K + t B_2^n} = \sum_{k=0}^n {n \choose k} W_k(K) t^{n-k} , \] where $W_k(K)$ denotes the $k$-th quermassintegral (or mixed-volume) of $K$; the latter is often denoted as $W_{n-k}(K)$ in the literature, but we prefer our convention which keeps track of the homogeneity in $K$. Recall that by Kubota's formula (e.g. \cite[Chapter 5]{Schneider-Book}), we have: \begin{equation} \label{eq:Kubota} W_k(K) = \frac{\abs{B_2^n}}{\abs{B_2^k}} \int_{G_{n,k}} \abs{P_F K} d\sigma_{G_{n,k}}(F) = \abs{B_2^n} \int_{G_{n,k}} \vrad(P_F K)^k d\sigma_{G_{n,k}}(F) , \end{equation} where $\sigma_{G_{n,k}}$ denotes the Haar probability measure on $G_{n,k}$ (with the interpretation when $k=0$ that $W_0(K) = \abs{B_2^n}$). 
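\medskip Before bounding the quermassintegrals of $Z_p(\mu)$, let us illustrate Steiner's formula itself with a minimal Monte-Carlo sketch (for exposition only; the body, the scale $t$ and the sample size are ad-hoc choices of ours). For the square $K = [-1,1]^2$ in the plane, the formula reads $\abs{K + t B_2^2} = 4 + 8 t + \pi t^2$, i.e. in the indexing used above $W_2(K) = \abs{K} = 4$, $2 W_1(K) = 8$ (the perimeter of $K$) and $W_0(K) = \abs{B_2^2} = \pi$:
\begin{verbatim}
import math, random

def in_tube(x, y, t):
    # compare the squared distance from (x, y) to [-1,1]^2 with t^2
    dx = max(abs(x) - 1.0, 0.0)
    dy = max(abs(y) - 1.0, 0.0)
    return dx * dx + dy * dy <= t * t

random.seed(0)
t, N = 0.5, 200000
side = 2.0 + 2.0 * t                  # bounding box [-1-t, 1+t]^2
hits = sum(in_tube(random.uniform(-1 - t, 1 + t),
                   random.uniform(-1 - t, 1 + t), t)
           for _ in range(N))
print(hits / N * side ** 2)           # Monte-Carlo estimate of |K + t B_2^2|
print(4 + 8 * t + math.pi * t ** 2)   # Steiner polynomial, = 8.785...
\end{verbatim}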
\medskip To bound $W_k(Z_p(\nu))$, we will need the following averaged version of \cite[Theorem 2.4]{GiannopoulosPajorPaourisPsi2},\cite[Proposition 3.1]{EMilman-IsotropicMeanWidth}: \begin{prop} \label{prop:Zp-mixed} Let $\mu$ denote an origin-symmetric log-concave probability measure on $\mathbb{R}^n$. Then for all $p \geq 1$ and $k=1,\ldots,n$: \[ W_k(Z_p(\mu))^{\frac{1}{k}} \leq C \max(\sqrt{p} , p / \sqrt{k}) W_k(Z_2(\mu))^{\frac{1}{k}} . \] \end{prop} \begin{proof} It was shown in \cite[Theorem 6.2]{Paouris-IsotropicTail} (see also \cite[Corollary 2.2]{EMilman-IsotropicMeanWidth}) that for any (say origin-symmetric) log-concave probability measure $\eta$ on $\mathbb{R}^k$, one has: \[ {\rm vrad}(Z_p(\eta)) \leq C \sqrt{p} \; \det {\rm Cov}(\eta)^{\frac{1}{2k}} \;\;\; \forall 1 \leq p \leq k . \] When $p \geq k$, since $Z_p(\eta) \subset \frac{p}{k} Z_k(\eta)$, it follows that: \[ {\rm vrad}(Z_p(\eta)) \leq \frac{p}{k}{\rm vrad}(Z_k(\eta)) \leq C \frac{p}{\sqrt{k}} \; \det {\rm Cov}(\eta)^{\frac{1}{2k}} \;\;\; \forall p \geq k . \] Finally, noting that $\det {\rm Cov}(\eta)^{\frac{1}{2k}} = \vrad(Z_2(\eta))$ as $Z_2(\eta)$ is an ellipsoid, we obtain: \[ {\rm vrad}(Z_p(\eta)) \leq C \max(\sqrt{p} , p / \sqrt{k}) {\rm vrad}(Z_2(\eta)) \;\;\; \forall p \geq 1. \] Applying the above to $\eta = \pi_F \mu$, using that $Z_q(\pi_F \mu) = P_F Z_q(\mu)$, integrating over $F \in G_{n,k}$ and applying Kubota's formula (\ref{eq:Kubota}), the assertion readily follows. \end{proof} The quermassintegrals of the ellipsoid $Z_2(\mu)$ are particularly easy to compute using elementary linear algebra, and one may show that $(W_k(Z_2(\mu))/\abs{B_2^n})^{2/k}$ is closely related to the $k$-th root of the $k$-th symmetric elementary polynomial in the eigenvalues of ${\rm Cov}(\mu)$ (appropriately normalized). However, we will only require: \begin{lem} \label{lem:Z2-mixed} For all log-concave probability measures $\mu$ on $\mathbb{R}^n$: \[ \det \; {\rm Cov}(\mu)^{\frac{1}{2n}} \leq \brac{\frac{W_k(Z_2(\mu))}{\abs{B_2^n}}}^{\frac{1}{k}} \leq \brac{\frac{1}{n} {\rm tr} \; {\rm Cov}(\mu)}^{\frac{1}{2}} \;\;\; \forall k=1,\ldots, n . \] \end{lem} \begin{proof} By the Alexandrov inequalities for the quermassintegrals \cite[Chapter 6]{Schneider-Book}, we have: \[ \brac{\frac{W_n(Z_2(\mu))}{\abs{B_2^n}}}^{\frac{1}{n}} \leq \brac{\frac{W_k(Z_2(\mu))}{\abs{B_2^n}}}^{\frac{1}{k}} \leq \frac{W_1(Z_2(\mu))}{\abs{B_2^n}} , \] so it is enough to calculate the expressions on either side. Indeed: \begin{align*} \frac{W_1(Z_2(\mu))}{\abs{B_2^n}} & = \int_{S^{n-1}} h_{Z_2(\mu)}(\theta) d\sigma(\theta) \leq \brac{\int_{S^{n-1}} h^2_{Z_2(\mu)}(\theta) d\sigma(\theta)}^{\frac{1}{2}} \\ &= \brac{\int_{S^{n-1}} \scalar{{\rm Cov}(\mu) \theta,\theta} d\sigma(\theta)}^{\frac{1}{2}} = \brac{\frac{1}{n} {\rm tr} \; {\rm Cov}(\mu)}^{\frac{1}{2}} , \end{align*} while: \[ \brac{\frac{W_n(Z_2(\mu))}{\abs{B_2^n}}}^{\frac{1}{n}} = \brac{\frac{\abs{Z_2(\mu)}}{\abs{B_2^n}}}^{\frac{1}{n}} = \det \; {\rm Cov}(\mu)^{\frac{1}{2n}} . \] \end{proof} Finally, it is useful to note that by Lemma \ref{lem:Guedon}: \[ \brac{ {\rm tr} \; {\rm Cov}(\mu)}^{\frac{1}{2}} = \brac{\int_{\mathbb{R}^n} \abs{x}^2 d\mu(x)}^{\frac{1}{2}} = I_2(\mu) \simeq I_1(\mu) . \] We are now ready to prove Theorem \ref{thm:part2-ellipsoids}. 
\begin{proof}[Proof of Theorem \ref{thm:part2-ellipsoids}] \[ M(Z_p(\mu) , t I_1(\mu) B_2^n) \leq \frac{\abs{Z_p(\mu) + t I_1(\mu) B_2^n} }{ \abs{t I_1(\mu) B_2^n} } = \sum_{k=0}^n {n \choose k} \frac{W_k(Z_p(\mu))}{(t I_1(\mu))^k\abs{B_2^n}} . \] Employing Proposition \ref{prop:Zp-mixed} and Lemma \ref{lem:Z2-mixed}, we know that: \[ \brac{\frac{W_k(Z_p(\mu))}{\abs{B_2^n}}}^{1/k} \leq C \max(\sqrt{p} , p / \sqrt{k}) \frac{I_1(\mu)}{\sqrt{n}} \;\;\; \forall k=1,\ldots,n . \] Using the standard estimate ${n \choose k} \leq \brac{\frac{e n}{k}}^k$, we obtain: \begin{align*} M(Z_p(\mu) , t I_1(\mu) B_2^n) & \leq 1 + \sum_{k=1}^n \brac{C \frac{e \sqrt{n}}{t k} \max(\sqrt{p},p / \sqrt{k}) }^k \\ & = 1 + \sum_{k=1}^{\floor{p}} \brac{C \frac{e p \sqrt{n}}{t k^{3/2}} }^k + \sum_{k=\floor{p}+1}^{n} \brac{C \frac{e \sqrt{p} \sqrt{n}}{t k} }^k \\ & = 1 + \sum_{k=1}^{\floor{p}} \brac{C' \frac{p^{2/3} n^{1/3}}{t^{2/3} k} }^{\frac{3}{2} k} + \sum_{k=\floor{p}+1}^{n} \brac{C \frac{e \sqrt{p} \sqrt{n}}{t k} }^k \\ & \leq \sum_{m=0}^{\infty} \frac{1}{m!} \brac{C'' \frac{p^{2/3} n^{1/3}}{t^{2/3}} }^{m} + \sum_{k=0}^{\infty} \frac{1}{k!} \brac{C'' \frac{\sqrt{p} \sqrt{n}}{t} }^k \\ & = \exp \brac{ C'' \frac{p^{2/3} n^{1/3}}{t^{2/3}} + C'' \frac{\sqrt{p} \sqrt{n}}{t} } . \end{align*} The assertion is thus established for $\mathcal{E} = B_2^n$, and hence for arbitrary ellipsoids, as explained above. The proof is complete. \end{proof} \section{Part 1' - Separation Dimension Reduction: Ellipsoids} \label{sec:part1-ellipsoids} \subsection{The Probabilistic Approach} The following proposition decouples the question of separation dimension reduction and the massiveness requirement of Part 1' of The Program. Not surprisingly, this is achieved by introducing some randomness. \begin{prop} \label{prop:part1-prob} Let $K \subset \mathbb{R}^n$ denote a star-body, and assume that $\set{x_i}_{i=1,\ldots, M} \subset \mathbb{R}^n$ is a collection of $K$-separated points. Let $T : \mathbb{R}^n \rightarrow \mathbb{R}^m$ denote a random linear map and $L,S \subset \mathbb{R}^m$ denote two random star-bodies defined on a common probability space, so that $L = L_T$ and $S = S_T$ are measurable functions of $T$ (when equipping the family of star-bodies with the Hausdorff metric). Assume that: \begin{enumerate} \item If $x \notin K$ then $\P( T x \in L_T) \leq p_{\text{out}}$. \item If $x \in K$ then $\P(T x \in S_T) \geq p_{\text{in}}$. \end{enumerate} Then for any Borel probability measure $\mu$ on $\mathbb{R}^n$, if: \begin{equation} \label{eq:prob-condition} M^2 p_{\text{out}} \leq \mu(K) p_{\text{in}} , \end{equation} then there exist a linear map $T^0 : \mathbb{R}^n \rightarrow \mathbb{R}^m$ and star-bodies $L^0,S^0 \subset \mathbb{R}^m$ so that $\set{T^0(x_i)}_{i=1,\ldots,M}$ are $L^0$-separated and $T^0_*(\mu)(S^0) \geq \frac{1}{2} \mu(K) p_{\text{in}}$. \end{prop} \begin{proof} We may assume that $p_{\text{in}}, p_{\text{out}},\mu(K) > 0$. By linearity and the union-bound, the random set $\set{T(x_i)}_{i=1,\ldots,M}$ is clearly $L_T$-separated with probability at least: \[ 1 - {M \choose 2} p_{\text{out}} > 1 - \frac{M^2}{2} p_{\text{out}} , \] so it remains to verify the second requirement. Denoting $G_T := T_*(\mu)(S_T)$, note that: \[ \mathbb{E}(G_T) = \mathbb{E} \brac{ \int_{\mathbb{R}^n} 1_{\set{T x \in S_T}} d\mu(x) } = \int_{\mathbb{R}^n} \P(T x \in S_T) d\mu(x) \geq \int_{K} \P(T x \in S_T) d\mu(x) \geq \mu(K) p_{\text{in}} =: q . 
\] Since $0 \leq G_T \leq 1$ and $\mathbb{E}(G_T) \geq q$, it follows that $\P( G_T \geq q/2) \geq q/2$. Consequently, the assumption (\ref{eq:prob-condition}) guarantees that the event that $\set{T(x_i)}_{i=1,\ldots,M}$ are $L_T$-separated and the one that $G_T \geq q/2$ have non-empty intersection, yielding the claim. \end{proof} \subsection{Part 1' For Ellipsoids} In view of Proposition \ref{prop:part1-prob}, Part 1' of The Program will follow from the following one-sided variant of the Johnson--Lindenstrauss lemma, which pertains to small-ball probability, see e.g. \cite[Fact 3.2]{MilmanSzarek-GeometricLemma} or \cite[Lemma 8.1.15]{GreekBook}: \begin{lem} \label{lem:small-ball} Let $T : \mathbb{R}^n \rightarrow \mathbb{R}^m$ denote a random orthogonal projection, that is $T = P \circ U$ where $U$ is uniformly distributed on $SO(n)$ and $P$ is the canonical projection on the first $m$ coordinates, $m=1,\ldots,n$. Then for all $x \in S^{n-1}$: \[ \P\brac{ \sqrt{n/m}\abs{T x}\leq \epsilon } \leq (C' \epsilon)^m \;\;\; \forall \epsilon \in [0,1] . \] \end{lem} \begin{cor}[Part 1' for Euclidean Ball] \label{cor:part1-ellipsoids} Let $\set{x_i}_{i=1,\ldots, e^k} \subset \mathbb{R}^n$ be a collection of $B_2^n$-separated points, $k \in [1,n]$, and let $\mu$ denote a Borel probability measure with $\mu(B_2^n) \geq e^{-q} \geq e^{-k}$. Set $m = \lceil k \rceil$, and denote $L := \sqrt{2} \sqrt{m/n} B_2^m$. Then there exists an orthogonal projection $T : \mathbb{R}^n \rightarrow \mathbb{R}^m$ (as above) so that: \begin{enumerate} \item $\set{T(x_i)}_{i=1,\ldots, e^k} \subset \mathbb{R}^m$ are $L/C$-separated. \item $T_*(\mu)(L) \geq \frac{1}{4} e^{-q}$. \end{enumerate} \end{cor} \begin{proof} By appropriately choosing $c>0$, we may ensure that: \begin{enumerate} \item $\P(\abs{T x} \leq c \sqrt{m/n}) \leq p_{\text{out}} := \frac{1}{2} e^{-3m}$ for any $x \notin B_2^n$. \item $\P(\abs{T x} \leq \sqrt{2} \sqrt{m/n}) \geq p_{\text{in}} := \frac{1}{2}$ for any $x \in B_2^n$. \end{enumerate} Indeed, the first estimate is ensured by Lemma \ref{lem:small-ball}, while the second one follows by simply noting that $\mathbb{E} \abs{T x}^2 = \frac{m}{n} \abs{x}^2$ and applying the Markov--Chebyshev inequality (or by invoking the Johnson--Lindenstrauss Lemma \ref{lem:JL}, but this is actually unnecessary). The assertion then follows by Proposition \ref{prop:part1-prob} with $C = \sqrt{2} / c$, $L_T \equiv L/C$ and $S_T \equiv L$. \end{proof} \subsection{Running The Program for Ellipsoids} Running (the regular version of) The Program of Section \ref{sec:full-program}, we can finally obtain: \begin{thm}[Generalized Regular Dual Sudakov For Ellipsoids] \label{thm:RegularSudakovEllipsoids} For any origin-symmetric log-concave measure $\mu$ on $\mathbb{R}^n$ and any (origin-symmetric) ellipsoid $\mathcal{E} \subset \mathbb{R}^n$, we have: \[ M(Z_p(\mu) , t m_1(\mu, \mathcal{E}) \mathcal{E}) \leq \exp \brac{C \brac{ \frac{p}{t^2} + \frac{p}{t} } } \;\;\; \forall p \geq 1 \;\; \forall t > 0 . \] \end{thm} \begin{proof} Since the expression on the left-hand-side is invariant under simultaneously applying a linear transformation to $\mu$ and $\mathcal{E}$, it is enough to establish it for the case that $\mathcal{E} = B_2^n$. Since this expression is also invariant under scaling $\mu$, we may assume that $m_1(\mu,B_2^n) = 1$. Given $p \geq 1$ and $t > 0$, we run The Program for $K = B_2^n$, with $\L_m$ consisting of (centered) Euclidean balls in $\mathbb{R}^m$. 
Corollary \ref{cor:part1-ellipsoids} applied with $q=1$ verifies Part 1' of The Program regarding Massive Separation Dimension Reduction (with say $D=1$, $B=1$, $q_m=3$, $A = 5/4$ and $R \leq C'$). Part 2 of The Program regarding Weak Generalized Regular Dual Sudakov, with parameters $q_m = 3$ and $\varphi_t(x) = C'' \max((x/t)^{2/3}, \sqrt{x} / t)$, is established by Theorem \ref{thm:part2-ellipsoids} in conjunction with Lemma \ref{lem:Guedon} (which implies that $m_3(\nu,L) \simeq I_1(\nu,L)$ for all $\nu \in \mathbb{M}_m$ and $L \in \L_m$). Since $\varphi_t^{-1}(y) \simeq \min(y^{3/2} t, y^2 t^2)$, we have $\varphi_t^{-1}(1/(4A)) \simeq \min(t,t^2)$, and Theorem \ref{thm:full-program} yields the asserted estimate. \end{proof} \section{Pure Measures} \label{sec:pure} In this section, we introduce the class of pure log-concave probability measures, and study their properties. \subsection{Definitions} Recall that the isotropic constant of a probability measure $\mu$ on $\mathbb{R}^n$ having log-concave density $f_\mu$ is defined as the following affine-invariant quantity: \begin{equation} \label{eq:Lmu} L_\mu := (\max f_\mu)^\frac{1}{n} (\det \; {\rm Cov}(\mu))^{\frac{1}{2n}} ~. \end{equation} It is well-known (e.g. \cite{Milman-Pajor-LK,Klartag-Psi2}) that $L_\mu \geq c$ for some universal constant $c > 0$. See Bourgain \cite{BourgainMaximalFunctionsOnConvexBodies,Bourgain-LK}, Milman--Pajor \cite{Milman-Pajor-LK}, Ball \cite{Ball-PhD} and Brazitikos--Giannopoulos--Valettas--Vritsiou \cite{GreekBook} for background on the yet unresolved Slicing Problem, which is concerned with obtaining a dimension-independent upper bound on $L_\mu$. The current best-known estimate $L_\mu \leq C n^{1/4}$ is due to B. Klartag \cite{KlartagPerturbationsWithBoundedLK}, who improved the previous estimate $L_\mu \leq C n^{1/4} \log(1+n)$ of J. Bourgain \cite{Bourgain-LK} (proven when $\mu$ is the uniform measure on an origin-symmetric convex body, but valid for general log-concave probability measures, see \cite{Ball-PhD, KlartagPerturbationsWithBoundedLK}); see also Klartag--Milman \cite{KlartagEMilmanLowerBoundsOnZp} and Vritsiou \cite{Vritsiou-ExtendingKM} for subsequent refinements. The following key estimate, which plays a fundamental role in previous groundbreaking works of Paouris \cite{Paouris-IsotropicTail,PaourisSmallBall} and Klartag \cite{Klartag-Psi2}, relates $\abs{Z_n(\mu)}$ and $L_\mu$ (see e.g. the proof of \cite[Theorem 2.1]{EMilman-IsotropicMeanWidth}): \begin{thm}[Paouris, Klartag] \label{thm:Zn} Let $\mu$ denote a log-concave probability measure on $\mathbb{R}^n$ with barycenter at the origin. Then: \[ \vrad(Z_n(\mu)) \simeq \frac{\sqrt{n}}{L_\mu} {\rm vrad}(Z_2(\mu)) = \sqrt{n} \; \frac{\det \; {\rm Cov}(\mu)^{\frac{1}{2n}}}{L_\mu} = \frac{\sqrt{n}}{(\max f_\mu)^{1/n}} ~. \] In other words, if $\mu$ is isotropic then $\abs{Z_n(\mu)}^{1/n} \simeq \frac{1}{L_\mu}$. \end{thm} \begin{defn}[$h$-pure measure] Let $\mu$ denote a probability measure on $\mathbb{R}^n$ with barycenter at the origin. We will say that $\mu$ is $h$-pure ($h=1,\ldots,n$), with constants $(A,B)$, if the following two conditions hold: \begin{enumerate} \item $Z_n(\mu) \supset \frac{1}{A} \sqrt{h} Z_2(\mu)$. \item For all $E \in G_{n,m}$ with $m = h,\ldots,n$, we have that $L_{\pi_E \mu} \leq B$. \end{enumerate} When $\mu$ is $h$-pure with some universally bounded constants $A,B \leq C < \infty$, we will simply say that $\mu$ is $h$-pure (with implicitly bounded constants). 
\end{defn} Note that in the second condition, only the marginals of $\mu$ of dimension not smaller than $h$ are taken into account. For example, if $\mu$ is isotropic log-concave and $Z_n(\mu) \supset \frac{1}{A} \sqrt{n} B_2^n$, it follows from Theorem \ref{thm:Zn} that $L_\mu \leq C A$, and hence $\mu$ is $n$-pure (with constants $(A,CA)$). On the other hand, if all marginals of $\mu$ (of arbitrary dimension) have isotropic constant bounded by a universal constant $C>0$, since $Z_n(\mu) \supset Z_2(\mu)$ (if $n \geq 2$, and up to a constant otherwise), we see that $\mu$ is $1$-pure (with constants $(1,C)$). The Slicing Problem may be equivalently restated as asking whether all log-concave probability measures on $\mathbb{R}^n$ ($n \geq 2$) are $1$-pure with constants $(1,C)$, for some universal constant $C>0$ independent of $n$. \medskip The following is immediate from the definition: \begin{lem} If $\mu$ is both $h_1$-pure and $h_2$-pure with $1 \leq h_1 < h_2 \leq n$, then it is also $h$-pure for all $h=h_1,\ldots,h_2$. \end{lem} \subsection{Families of Pure Measures} We now provide several useful examples of families of log-concave measures which are pure. For simplicity, we restrict our attention to origin-symmetric measures. Recall that a measure is called unconditional if it is invariant under reflections with respect to all coordinate hyperplanes. A probability measure $\mu$ on $\mathbb{R}^n$ is called $\Psi_2$ or sub-Gaussian if $Z_p(\mu) \subset C \sqrt{p} Z_2(\mu)$ for some universal constant $C \geq 1$ and all $p \geq 2$. It is called super-Gaussian if $Z_p(\mu) \supset c \sqrt{p} Z_2(\mu)$ for some universal constant $c > 0$ and all $p \in [2,n]$. It is immediate to verify (see e.g. \cite{GuedonEMilmanInterpolating}) that if $\tilde{\mu}$ is an isotropic origin-symmetric log-concave probability measure and $\gamma_n$ is the standard Gaussian measure, then the convolved measure $\mu = \tilde{\mu} \ast \gamma_n$ is log-concave and super-Gaussian. Finally, a convex body $K$ is called $2$-convex (with constant $\alpha > 0$) if $1 - \norm{\frac{x+y}{2}}_K \geq \alpha \epsilon^2$ for all $\norm{x}_K,\norm{y}_K \leq 1$ with $\norm{x-y}_K \geq \epsilon > 0$. \begin{prop} The following families of log-concave measures are $n$-pure: \begin{enumerate} \item Super-Gaussian measures. \item Uniform measures on $2$-convex bodies (with fixed $2$-convexity constant $\alpha>0$). \end{enumerate} \end{prop} \begin{proof} If $\mu$ is super-Gaussian we have by definition that $Z_n(\mu) \supset c \sqrt{n} Z_2(\mu)$. If $\mu$ is the uniform measure on a $2$-convex body $K$, it was shown in \cite{KlartagEMilman-2-Convex} that $Z_n(\mu) \supset c_\alpha \sqrt{n} Z_2(\mu)$ for some constant $c_\alpha > 0$ (depending only on $\alpha$, which we assume fixed). In either case, we deduce by Theorem \ref{thm:Zn} that $L_\mu \leq C$, establishing both properties of an $n$-pure measure. \end{proof} \begin{prop} \label{prop:pure-1} The following families of log-concave measures are $1$-pure: \begin{enumerate} \item Super-Gaussian measures. \item Sub-Gaussian ($\Psi_2$) measures. \item Unconditional measures. \end{enumerate} Furthermore, the class of $1$-pure measures is closed under taking marginals. \end{prop} \begin{proof} We may assume that all measures in question are isotropic. 
\begin{enumerate} \item If $\mu$ is super-Gaussian, we have for all $E \in G_{n,m}$: \[ Z_m(\pi_E \mu) = P_E Z_m(\mu) \supset c \sqrt{m} B_2(E) , \] so that $\frac{1}{L_{\pi_E \mu}} \simeq \abs{Z_m(\pi_E \mu)}^{1/m} \geq c' > 0$, as asserted. \item Similarly, if $\mu$ is $\Psi_2$ then for all $E \in G_{n,m}$ and $p \geq 2$: \[ Z_p(\pi_E \mu) = P_E Z_p(\mu) \subset C \sqrt{p} B_2(E) , \] confirming that $\pi_E \mu$ is also $\Psi_2$ (with the same universal bound $C$ on its $\Psi_2$ constant). It is well-known \cite{Bourgain-Psi-2-Bodies,KlartagEMilmanLowerBoundsOnZp} that a $\Psi_2$ measure has bounded isotropic constant, confirming the assertion in this case. \item If $\mu$ is an isotropic unconditional measure and $\chi$ is the uniform measure on the cube $[-1,1]^n$, it was noted in \cite[p. 2829]{DafnisGiannopoulosTsolomitis-RandomPolytopes} following Lata{\l}a that $Z_p(\mu) \supset c Z_p(\chi)$ for all $p \geq 1$. It follows as before that if $E \in G_{n,m}$ then: \[ Z_m(\pi_E \mu) = P_E Z_m(\mu) \supset c P_E Z_m(\chi) = c Z_m(\pi_E \chi) . \] Taking volumes, we deduce: \[ \frac{1}{L_{\pi_E \mu}} \simeq \abs{Z_m(\pi_E \mu)}^{1/m} \geq c \abs{Z_m(\pi_E \chi)}^{1/m} \simeq \frac{1}{L_{\pi_E \chi}} . \] But since $\chi$ is a $\Psi_2$ measure, we know that $L_{\pi_E \chi}$ is universally bounded above, establishing (3). \end{enumerate} The closure under marginals is immediate from the definition and the fact that $Z_m(\mu) \supset c Z_2(\mu)$ for all $m \geq 1$ by (\ref{eq:Zpq}). \end{proof} \begin{rem} It was shown by Paouris in \cite{Paouris-MarginalsOfProducts} that product measures (having arbitrarily many factors) of sub-Gaussian or super-Gaussian log-concave measures are $1$-pure. \end{rem} \begin{rem} A well-known argument due to V.~Milman involving the M-position \cite{AGA-Book-I,Pisier-Book}, in combination with K.~Ball's observation that the isotropic position is an M-position if the isotropic constant is bounded \cite{Ball-PhD}, shows that if $\mu$ is an origin-symmetric isotropic log-concave probability measure with $L_\mu \leq C$, then with high-probability, a random marginal $\pi_F \mu$ with $F \in G_{n,n/2}$ is $n/2$-pure with universal constants $(A,B)$ depending solely on $C$. Moreover, with high-probability, a random marginal $\pi_F \mu$ with $F \in G_{n,n/4}$ is super-Gaussian, and therefore $h$-pure for all $h=1,\ldots,n/4$ with universal constants $(A,B)$ depending solely on $C$; we briefly sketch the proof. Let $ {\bar{sg}}(\nu)$ denote the supergaussian constant of a probability measure $\nu$ in $\mathbb{R}^m$, i.e. the minimum $t>0$ such that $ \frac{1}{ t} \sqrt{p} Z_{2}(\nu) \subseteq Z_{p}(\nu)$ for all $2\leq p \leq m$. It is easy to check that if $\mu_{s}$ denotes the conditioning of $\mu$ onto $s \sqrt{n} B_2^n$ for a suitably chosen constant $s \simeq 1$, then ${\bar{sg}}(\pi_{F}(\mu))\leq C {\bar{sg}}(\pi_{F}(\mu_{s}))$ for all subspaces $F$, since $ Z_{p} (\mu_{s}) \subset e \; Z_{p} (\mu)$ for all $2\leq p \leq n$. Moreover, one may check (e.g. by using \cite[Proposition 5.5]{PaourisSmallBall}) that $M^*(Z_{p}(\mu_{s}))\leq C' \sqrt{p}$ and (since $L_{\mu_{s}} \simeq L_{\mu} \leq C$) ${\rm vrad}(Z_{p}(\mu_{s})) \geq c \sqrt{p}$, for all $2\leq p \leq n$. 
Applying \cite[Proposition 3.1]{Klartag-Vershynin} and the Bourgain--Milman reverse Santal\'o inequality \cite[Theorem 8.2.2]{AGA-Book-I}, it follows that for $k\leq n/4$, with probability at least $1 - e^{-c n}$ over $F\in G_{n,k}$ (with respect to the corresponding Haar probability measure), one has that the inradius of $P_{F}Z_{p}(\mu_s)$ is at least $c^{\prime} \sqrt{p}$. Using this fact for all $p=2^{m}$ with $m=1, \cdots ,[\log_{2}{n}]$, and invoking the identity $P_{F} Z_{p}(\mu_{s}) = Z_{p} ( \pi_{F} (\mu_{s}))$, an immediate application of the union bound yields that with high probability over $F\in G_{n,k}$: \[ c\sqrt{p} Z_{2} (\pi_{F} (\mu_{s})) \subset Z_{p} (\pi_{F} (\mu_{s})) \;\;\; \forall p=2^{m} ~,~ m=1, \cdots , [\log_{2} {n}]. \] Since general $p \in [2,n]$ are handled by the dyadic values via the monotonicity in (\ref{eq:Zpq}), at the expense of a universal constant, this shows that ${\bar{sg}}(\pi_{F}(\mu)) \leq C_1 {\bar{sg}}(\pi_{F}(\mu_{s}))\leq C_2$ with probability at least $1- e^{-c n}$ over $F \in G_{n,k}$, completing the proof. \end{rem} \subsection{Properties of Pure Measures} Given a convex body $K \subset \mathbb{R}^n$ and $m=1,\ldots,n$, we use the following notation: \begin{align*} v_m^-(K) & := \inf \set{ {\rm vrad}(P_E (K)) ; E \in G_{n,m} }, \\ e_m(B_2^n,K) & := \inf \set{t > 0 \; ;\; N(B_2^n, tK) \leq 2^m} . \end{align*} We will need the following crucial estimate on the regularity of dual covering numbers of pure isotropic log-concave measures, essentially established by Giannopoulos and Milman in \cite{GiannopoulosEMilman-IsotropicM}: \begin{thm} \label{thm:pure-regular} Let $\mu$ denote a $h$-pure isotropic log-concave probability measure (with constants $(A,B)$). Then for all $k=1,\ldots,n$: \begin{equation} \label{eq:pure-prop1} v_k^-(Z_n(\mu)) \geq \max \brac{\frac{1}{A} \sqrt{h} , \frac{c}{B} \sqrt{k}} . \end{equation} Furthermore, for all $k=1,\ldots,n$ we have: \begin{equation} \label{eq:pure-prop2} e_k(B_2^n,Z_n(\mu)) \leq \min \brac{ \frac{A}{\sqrt{h}} , C_{A,B} \frac{1}{\sqrt{k}} \frac{n}{k} \log( e + \frac{n}{k}) } , \end{equation} or equivalently, we have for all $t > 0$: \begin{equation} \label{eq:pure-prop3} N(\sqrt{n} B_2^n , t Z_n(\mu)) \leq \begin{cases} \exp \brac{C'_{A,B} \; n \brac{\frac{\log(e+t)}{t}}^{\frac{2}{3}}} & t \leq A \sqrt{n/h} \\ 1 & \text{otherwise} \end{cases} . \end{equation} \end{thm} \begin{proof}[Proof Sketch] Since $\mu$ is assumed isotropic we have $Z_2(\mu) = B_2^n$. The case $h=1$ appears explicitly in \cite[Lemma 12 and Theorem 16]{GiannopoulosEMilman-IsotropicM}. For the general case, an inspection of the proof of \cite[Theorem 16]{GiannopoulosEMilman-IsotropicM} reveals that the only ingredient required to obtain an estimate on $e_k(B_2^n,Z_n(\mu))$ is a lower bound on $v_m^-(Z_n(\mu))$ for $m=1,\ldots, k$. When $m \leq h$, we may simply use $Z_n(\mu) \supset \frac{1}{A} \sqrt{h} B_2^n$ and conclude that: \[ v_m^-(Z_n(\mu)) \geq \frac{1}{A} \sqrt{h} . \] When $m = h,\ldots,n$, \cite[Lemma 12]{GiannopoulosEMilman-IsotropicM} ensures that: \[ v_m^-(Z_n(\mu)) \geq \frac{c}{\sup \set{ L_{\pi_E \mu} ; E \in G_{n,m} }} \sqrt{m} , \] for some universal constant $c > 0$, and so we see that one only needs to control the isotropic constants of marginals of $\mu$ of dimension not smaller than $h$. Combining these two estimates, (\ref{eq:pure-prop1}) follows for a $h$-pure isotropic measure with constants $(A,B)$. 
Now, according to \cite[Corollary 9 and Remark 6]{GiannopoulosEMilman-IsotropicM}, one has for any $\alpha > 0$: \[ e_k(B_2^n,K) \leq C_\alpha \sup_{m=1,\ldots,k} \brac{\frac{m}{k}}^{\alpha} \frac{n}{m} \log\Big(e + \frac{n}{m}\Big) \frac{1}{v_{m}^-(K)}. \] Applying this to $K = Z_n(\mu)$ (with, say, $\alpha = 2$), and plugging the estimate (\ref{eq:pure-prop1}) on $v_{m}^-(K)$, we obtain: \[ e_k(B_2^n, Z_n(\mu)) \leq C_{A,B} \frac{1}{\sqrt{\max(h,k)}} \frac{n}{k} \log( e + \frac{n}{k}) \] Combining this with the trivial estimate: \[ e_k(B_2^n,Z_n(\mu)) \leq \frac{A}{\sqrt{h}} \] (since $Z_n(\mu) \supset \frac{1}{A} \sqrt{h} B_2^n$), the asserted (\ref{eq:pure-prop2}) follows (note that we replaced $\max(h,k)$ by the looser $k$ since we do not care here about the dependence of $C_{A,B}$ on $(A,B)$). The equivalent (\ref{eq:pure-prop3}) is obtained in the range $t \geq 1$ by direct inspection of (\ref{eq:pure-prop2}), and extended to all $t > 0$ by Lemma \ref{lem:extend} after adjusting the constant $C_{A,B}'$. \end{proof} \section{Part 2 - Weak Generalized Dual Sudakov: Pure Measures and Regular Small-Diameter Bodies} \label{sec:part2} It is naturally of interest to establish the Weak Generalized Dual Sudakov estimate for general (say origin-symmetric) log-concave measures $\mu$ and convex bodies $K$. Unfortunately, we have not been able to accomplish this in that generality. In this section, we establish a Weak Generalized Dual Sudakov estimate when the log-concave measure $\mu$ is assumed to be $h$-pure, or when $K$ is assumed to have $\alpha$-regular small-diameter (defined below). \subsection{Part 2 for Pure Measures} \begin{thm} \label{thm:part2-pure} Let $\mu$ be a $h$-pure log-concave measure on $\mathbb{R}^n$ (with constants $(A,B)$), and let $p \in [1,n]$. Then for any $q > 0$ and star-body $L \subset \mathbb{R}^n$: \[ M\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} m_q(\mu,L) L} \leq \exp \brac{1 + q + C_{A,B} n \brac{\frac{\log(e+t)}{t}}^{\frac{1}{3}} } \;\;\; \forall t > 0. \] In particular, if $\mu(L) \geq \exp(-p)$ with $p \in [1,n]$ then: \[ M(Z_p(\mu) , L) \leq \exp( C'_{A,B} n^{5/6} p^{1/6} \log^{1/3}(1+n/p)) . \] \end{thm} This will follow from Part 3 of The Program together with the following: \begin{thm} \label{thm:Zpn} Let $\mu$ be a $h$-pure log-concave measure (with constants $(A,B)$) and let $p \in [1,n]$. Then: \[ N\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} Z_n(\mu)} \leq \exp \brac{ C_{A,B} n \brac{\frac{\log(e+t)}{t}}^{\frac{1}{2}}} \;\;\; \forall t > 0 . \] \end{thm} \begin{proof} Since the statement is invariant under linear transformations, we may assume that $\mu$ is isotropic. By the triangle inequality for covering numbers, we have for all $s > 0$: \[ N\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} Z_n(\mu)} \leq N\brac{Z_p(\mu) , \frac{t}{s} \sqrt{p} B_2^n} N\brac{\sqrt{n} B_2^n , s Z_n(\mu)} . \] Since $I_1(\mu,B_2^n) \simeq \sqrt{n}$ for isotropic $\mu$, the following estimate is a particular case of the Generalized Dual Sudakov estimate for ellipsoids we obtained in Theorem \ref{thm:RegularSudakovEllipsoids} for general (not necessarily isotropic) origin-symmetric log-concave measures: \[ N(Z_p(\mu) , r \sqrt{p} B_2^n) \leq \exp\brac{ C_2 \frac{n}{r^2} + C_3 \frac{\sqrt{n} \sqrt{p}}{r} } \;\;\; \forall r > 0. \] For isotropic log-concave measures, this estimate was first established in \cite[Proposition 5.1]{GPV-ImprovedPsi2} -- see Subsection \ref{subsec:conclude-ell} for more details. 
Next, by Theorem \ref{thm:pure-regular} on the regularity of dual covering numbers for pure measures, we have: \[ N(\sqrt{n} B_2^n , s Z_n(\mu)) \leq \exp \brac{ C_{A,B} n \brac{\frac{\log(e+s)}{s}}^{\frac{2}{3}} } \;\;\; \forall s > 0 . \] Setting $r = t/s$ above and combining both estimates, we obtain for all $s,t > 0$: \[ N(Z_p(\mu) , t \sqrt{p/n} Z_n(\mu)) \leq \exp \brac{ C_2 \frac{n}{t^2} s^2 + C_3 \frac{\sqrt{n}\sqrt{p}}{t} s + C_{A,B} n \brac{\frac{\log(e+s)}{s}}^{\frac{2}{3}} } . \] Optimizing on $s$, we set $s = t^{3/4} \log^{1/4}(e+t)$, yielding: \[ N(Z_p(\mu) , t \sqrt{p/n} Z_n(\mu)) \leq \exp \brac{ C'_{A,B} n \brac{\frac{\log(e+t)}{t}}^{1/2} + C_3 \sqrt{n} \sqrt{p} \brac{\frac{\log(e+t)}{t}}^{1/4}} \;\;\; \forall t > 0 . \] However, note that since $Z_p(\mu) \subset Z_n(\mu)$, the left-hand-side is exactly $1$ for all $t \geq \sqrt{n/p}$, and that in the non-trivial range $t \in (0 , \sqrt{n/p}]$, the first term on the right-hand-side always dominates the second one. Adjusting constants, the assertion is thus established. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:part2-pure}] By the triangle inequality for packing numbers, we have for every $s > 0$: \[ M\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} m_q(\mu,L) L} \leq N\brac{Z_p(\mu) , s \sqrt{\frac{p}{n}} Z_n(\mu)} M\brac{Z_n(\mu) , \frac{t}{s} m_q(\mu,L) L} . \] Invoking Theorems \ref{thm:Zpn} and \ref{thm:part3} to estimate the terms on the right-hand-side, we obtain: \[ M\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} m_q(\mu,L) L} \leq \exp \brac{ C_{A,B} n \brac{\frac{\log(e+s)}{s}}^{1/2} + 1 + q + C' n \frac{s}{t}} . \] Optimizing on $s > 0$, we set $s := t^{2/3} \log^{1/3} (e+t)$. Adjusting constants, the assertion is established. \end{proof} \subsection{Part 2 for Regular Small-Diameter Bodies} The only property we will require for the ensuing proof is encapsulated in the following: \begin{dfn*}[Regular Small-Diameter] An origin-symmetric convex body $K \subset \mathbb{R}^n$ is called $\alpha$-regular ($\alpha \in (0,2]$) small-diameter (with constant $R\geq 1$) if there exists $T \in GL_n$ so that, denoting $K_0 = T(K)$: \begin{enumerate} \item $K_0 \subset R B_2^n$ (``small-diameter''). \item $N(B_2^n,t K_0) \leq \exp( n / t^\alpha)$ for all $t > 0$ (``$\alpha$-regular''). \end{enumerate} \end{dfn*} Note that if $K$ is $\alpha$-regular small-diameter (with constant $R$), then it is also $\beta$-regular small-diameter (with constant depending on $R$ and $\beta$) for all $\beta \in (0,\alpha]$ (as follows for instance from Lemma \ref{lem:extend}). Also note that simple examples such as the unit-cube show that one cannot expect a general origin-symmetric convex body $K \subset \mathbb{R}^n$ to be $\alpha$-regular small-diameter with $R \leq C$ and $\alpha > 2$ (independently of $n$). \subsubsection{Examples of Regular Small-Diameter Bodies} \begin{prop} \label{prop:psi2} Assume that $K \subset \mathbb{R}^n$ is an origin-symmetric convex body so that $\lambda_K$, the uniform measure on $K$, is $h$-pure (with constants $(A,B)$). Assume in addition that $K \subset D \sqrt{n} Z_2(\lambda_K)$. Then $K$ is $\alpha$-regular small-diameter with constant $R_{A,B,\alpha} D$ for all $\alpha \in (0,2/3)$. In particular, the following families are $\frac{1}{2}$-regular small-diameter: \begin{enumerate} \item Sub-Gaussian ($\Psi_2$) Convex-Bodies are $\frac{1}{2}$-regular small-diameter with constant $R \leq C$.
\item Unconditional Convex Bodies $K$ satisfying $K \subset D \sqrt{n} Z_2(\lambda_K)$ are $\frac{1}{2}$-regular small-diameter with constant $R \leq C D$. \end{enumerate} \end{prop} \begin{proof} We may assume that $\lambda_K$ is isotropic, so that $Z_2(\lambda_K) = B_2^n$. Since $Z_n(\lambda_K) \simeq K$ by origin-symmetry, if we define $K_1 = K / \sqrt{n}$, Theorem \ref{thm:pure-regular} ensures that: \[ N(B_2^n, t K_1) \leq \exp(C_{A,B,\alpha} n / t^\alpha) \;\;\; \forall t > 0 , \] for any $\alpha \in (0,2/3)$, while we are given that $K_1 \subset D B_2^n$. It follows that $K$ is $\alpha$-regular with constant $R = C_{A,B,\alpha}^{1/\alpha} D$. \\ In particular, if $K$ is sub-Gaussian, Proposition \ref{prop:pure-1} ensures that $\lambda_K$ is $1$-pure, while it is also well-known (e.g. \cite{PaourisPsi2Behaviour}) that $K \simeq Z_n(\lambda_K) \subset D \sqrt{n} Z_2(\lambda_K)$ where $D$ is the $\Psi_2$ constant of $K$, which is assumed to be bounded by a universal constant. In addition, Proposition \ref{prop:pure-1} ensures that $\lambda_K$ is $1$-pure if $K$ is unconditional, and so if in addition $K \subset D \sqrt{n} Z_2(\lambda_K)$, then it is $\frac{1}{2}$-regular with constant $R = C D$. \end{proof} To describe another important class of regular small-diameter bodies, recall that the (Gaussian) type-2 constant of a normed space $(X,\norm{\cdot})$ over $\mathbb{R}$, denoted $T_2(X)$, is the minimal $T>0$ for which: \[ \brac{\mathbb{E} \snorm{\sum_{i=1}^m G_i x_i}^2}^{\frac{1}{2}} \leq T \brac{\sum_{i=1}^m \norm{x_i}^2}^{\frac{1}{2}} \] for any $m \geq 1$ and any $x_1,\ldots,x_m \in X$, where $G_1,\ldots,G_m$ denote independent real-valued standard Gaussian random variables. We will often identify between a normed space and its unit-ball, and given an origin-symmetric convex body $K \subset \mathbb{R}^n$, refer to the type-2 constant $T_2(X_K)$ of the normed space $X_K$ whose unit-ball is $K$. We will not distinguish between the Gaussian and the Rademacher type-2 constants, since it is well known that the former constant is always majorized by the latter one (e.g. \cite{Milman-Schechtman-Book}), and all our results will involve upper bounds in terms of the Gaussian type-2 constant. Note that a Hilbert-space has type-2 constant exactly $1$. It is also well-known (e.g. \cite{Milman-Schechtman-Book}) that subspaces of $L_p$ for $p \geq 2$ have type-2 constant of the order of $\sqrt{p}$. In a finite dimensional setting, it is clear by John's theorem that $T_2(X_K) \leq \sqrt{n}$ for all origin-symmetric $K \subset \mathbb{R}^n$. Since $\ell_\infty^n$ is isomorphic to a subspace of $L_{\log n}$, it similarly follows that $T_2(\ell_\infty^n) \leq C \sqrt{\log n}$, and in fact this is the correct order. \begin{prop} \label{prop:type-2} Every origin-symmetric convex body $K \subset \mathbb{R}^n$ is $2$-regular small-diameter with constant $C T_2(X_K)$, for some universal constant $C \geq 1$. \end{prop} \begin{proof} It was shown by W.~J.~ Davis, V.~Milman and N.~Tomczak-Jaegermann \cite{Davis-etal-Lemma} using operator-theoretic notation, and in \cite[Corollary 3.5]{EMilman-DualMixedVolumes} using a geometric argument, that when $B_2^n$ is the minimal volume ellipsoid containing $K$ (the Lowner position), then: \[ M(K) \leq M_2(K) := \brac{\int_{S^{n-1}} \norm{\theta}_K^2 d\sigma(\theta)}^{1/2} \leq T_2(X_K) . 
\] Setting $K_0 := R K$ with $R = C T_2(X_K)$ for an appropriate constant $C \geq 1$, we have that $M(K_0) \leq \frac{1}{C}$, and hence by the Dual Sudakov Minoration: \[ N(B_2^n, t K_0) \leq \exp(n / t^2) \;\;\; \forall t > 0 . \] Since $K_0 \subset R B_2^n$, the assertion is established. \end{proof} \begin{rem} \label{rem:lq} Applying Proposition \ref{prop:psi2}, we may conclude that the unit-balls $B_q^n$ of $\ell_q^n$, which for all $q \in [2,\infty]$ are both $\Psi_2$ (see e.g. \cite{BGMN}) and small-diameter unconditional, are $1/2$-regular with uniformly bounded universal constant $C > 0$. Note that by the previous remarks, we cannot get a uniform estimate for the type-2 constant of $\ell_q^n$ in the range $q \in [2,\infty]$, and the precise covering estimates due to C.~Sch\"utt \cite[Theorem 1]{Schutt-Entropy-numbers} imply that $\ell_\infty^n$ is not 2-regular small-diameter with dimension-independent constant. However, Sch\"utt's estimates yield that for all $q \in [2,\infty]$ and $\epsilon > 0$, $B_q^n$ is in fact $(2-\epsilon)$-regular small-diameter with constant $C_\epsilon>0$ depending only on $\epsilon > 0$. For simplicity, as this is not crucial for any of our ensuing estimates, we will only use below that they are all $1$-regular small-diameter with a uniformly bounded universal constant $C>0$ for all $q \in [2,\infty]$. \end{rem} \subsubsection{Weak Generalized Regular Dual Sudakov} \begin{thm} \label{thm:part2-smalldiam} Let $\mu$ denote an origin-symmetric log-concave probability measure on $\mathbb{R}^n$, and let $K \subset \mathbb{R}^n$ denote an $\alpha$-regular small-diameter convex body (with constants $\alpha \in (0,2]$ and $R\geq 1$). Then for any $p \in [1,n]$: \begin{equation} \label{eq:type-2-gen} N(Z_p(\mu) , t m_1(\mu, K) K) \leq \exp \brac{ C (n (R / t)^\alpha)^{\frac{2}{2+\alpha}} p^{\frac{\alpha}{2+\alpha}} + C (n (R / t)^\alpha)^{\frac{1}{1+\alpha}} p^{\frac{\alpha}{1+\alpha}}} \;\;\; \forall t > 0 . \end{equation} In particular: \[ N(Z_p(\mu) , T_2(X_K) m_1(\mu, K) K) \leq \exp \brac{ C' \sqrt{n} \sqrt{p}} , \] and for all $q \in [2,\infty]$: \[ N(Z_p(\mu) , t m_1(\mu,B_q^n) B_q^n) \leq \exp \brac{ C'' \frac{p^{1/3} n^{2/3}}{t^{2/3}} + C'' \frac{\sqrt{p} \sqrt{n}}{\sqrt{t}} } \;\;\; \forall t > 0 . \] \end{thm} \begin{proof} Since the statement is clearly invariant under applying (non-degenerate) linear transformations on both $\mu$ and $K$, we may and will assume that $K \subset R B_2^n$ and: \[ N(B_2^n, t K) \leq \exp(n / t^\alpha) \;\;\; \forall t > 0 . \] The small-diameter property ensures that: \[ I_1(\mu) = \int \abs{x} d\mu(x) \leq R \int \norm{x}_K d\mu(x) = R I_1(\mu,K) . \] Together with the Generalized Regular Dual Sudakov estimate for ellipsoids (Theorem \ref{thm:RegularSudakovEllipsoids}), we obtain for any $s > 0$: \begin{align*} N(Z_p(\mu) , t I_1(\mu,K) K) & \leq N(Z_p(\mu) , s I_1(\mu) B_2^n) N(I_1(\mu) B_2^n , (t/s) I_1(\mu,K) K ) \\ & \leq N(Z_p(\mu) , s I_1(\mu) B_2^n) N(B_2^n , t/(R s) K ) \\ & \leq \exp\brac{ C \brac{\frac{p}{s^2} + \frac{p}{s}} + n \frac{R^\alpha s^{\alpha}}{t^{\alpha}} } . \end{align*} Optimizing on $s > 0$, we set $s = C' \max(s_{1+\alpha}, s_{2+\alpha})$ where $s_\beta := \brac{\frac{p}{n} \brac{\frac{t}{R}}^\alpha}^{1/\beta}$. Recalling that $I_1(\mu,K) \simeq m_1(\mu,K)$ by Lemma \ref{lem:Guedon}, we obtain the first assertion. Applying the first part with $\alpha=2$, $t = C'' T_2(X_K)$ and invoking Proposition \ref{prop:type-2}, the second assertion follows after an adjustment of constants. 
The last assertion follows in view of Remark \ref{rem:lq}. \end{proof} Note that by Lemma \ref{lem:ZnHuge} we always have: \[ m_1(\mu,Z_n(\mu)) \simeq I_1(\mu,Z_n(\mu)) \leq C , \] and in addition $N(Z_p(\mu) , t Z_n(\mu)) = 1$ for all $p \in [1,n]$ and $t \geq 1$. Consequently, when $K = Z_n(\mu)$, only the first term on the right-hand-side of (\ref{eq:type-2-gen}) is relevant, and we obtain: \begin{cor} \label{cor:type-2} Let $\mu$ denote an origin-symmetric log-concave probability measure on $\mathbb{R}^n$, so that $Z_n(\mu)$ is $\alpha$-regular small-diameter (with constants $\alpha \in (0,2]$ and $R\geq 1$). Then for $p \in [1,n]$: \[ N\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} Z_n(\mu)} \leq \exp \brac{ C n \frac{R^{\frac{2\alpha}{2+\alpha}}}{t^{\frac{2\alpha}{2+\alpha}}} } \;\;\; \forall t > 0 . \] In particular: \[ N\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} Z_n(\mu)} \leq \exp \brac{ C n \frac{T_2(X_{Z_n(\mu)})}{t} } \;\;\; \forall t > 0 . \] \end{cor} \medskip In analogy with the previous subsection, we deduce: \begin{thm} Let $\mu$ denote an origin-symmetric log-concave probability measure on $\mathbb{R}^n$, so that $Z_n(\mu)$ is $\alpha$-regular small-diameter (with constants $\alpha \in (0,2]$ and $R\geq 1$). Then for $p \in [1,n]$, $q > 0$ and star-body $L \subset \mathbb{R}^n$: \[ M\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} m_q(\mu,L) L} \leq \exp \brac{1 + q + C n \brac{\frac{R}{t}}^{\frac{2\alpha}{3\alpha+2}} } \;\;\; \forall t > 0. \] In particular: \[ M(Z_p(\mu) , m_p(\mu, L) L) \leq \exp \brac{ C' \sqrt{T_2(X_{Z_n(\mu)})} \sqrt{n} \sqrt{p}} . \] \end{thm} \begin{proof} By the triangle inequality for packing numbers, we have for every $s > 0$: \[ M\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} m_q(\mu,L) L} \leq N\brac{Z_p(\mu) , s \sqrt{\frac{p}{n}} Z_n(\mu)} M\brac{Z_n(\mu) , \frac{t}{s} m_q(\mu,L) L} . \] Invoking Corollary \ref{cor:type-2} and Theorem \ref{thm:part3} to estimate the terms on the right-hand-side, we obtain: \[ M\brac{Z_p(\mu) , t \sqrt{\frac{p}{n}} m_q(\mu,L) L} \leq \exp \brac{ C n \frac{R^{\frac{2\alpha}{2+\alpha}}}{s^{\frac{2\alpha}{2+\alpha}}} + 1 + q + C' n \frac{s}{t}} . \] Optimizing on $s>0$, we set $s := t^{\frac{\alpha+2}{3\alpha+2}} R^{\frac{2\alpha}{3\alpha+2}}$, establishing the assertion after adjustment of constants. The last part follows by Proposition \ref{prop:type-2}. \end{proof} \section{Part 1 - Combinatorial Dimension Reduction: Cube} \label{sec:part1-cubes} In this section, we establish Part 1 of The (full) Program for the case that $K = B_\infty^n$, the $n$-dimensional cube, albeit with $D = C \log(e+n)$ and $R = C \log \log (e+n)$. Contrary to the linear ``One-Sided Johnson--Lindenstrauss'' approach that worked well for $K = B_2^n$, we employ a non-linear combinatorial dimension reduction, based on the fundamental work of M.~Rudelson and R.~Vershynin \cite{RudelsonVershynin-CombDim} on the combinatorial dimension, extending the work of Mendelson and Vershynin from \cite{MendelsonVershynin-CombDim}. \subsection{Part 1 via Cell Content and Combinatorial Dimension} Denote by $G_{\text{crd}}$ the collection of all $2^n$ coordinate subspaces of $\mathbb{R}^n$ (of arbitrary dimension $m=0,1,\ldots,n$). Given a convex body $K \subset \mathbb{R}^n$, its cell content $\Sigma(K)$ is defined as: \[ \Sigma(K) := \sum_{F \in G_{\text{crd}}} \text{number of integer cells contained in $P_F K$} , \] where an integer cell is defined as a unit-cube with integer coordinates, i.e. $x + [0,1]^m$ with $x \in \mathbb{Z}^m$.
When $F = \set{0}$, the number of integer cells contained in $P_F K$ is defined to be $1$. The combinatorial dimension $v(K)$ is defined to be: \[ v(K) := \max \set{ \text{dim}(F) \; ; \; F \in G_{\text{crd}} \text{ and $P_F K$ contains at least one integer cell} }. \] \medskip Recall that $B_\infty^n := [-1,1]^n$. The combinatorial information we will require is summarized in the following theorem, which is a particular case of \cite[Theorem 4.2]{RudelsonVershynin-CombDim}: \begin{thm}[Rudelson--Vershynin] \label{thm:RV} Let $K \subset \mathbb{R}^n$ denote a convex body so that $N(K , B_\infty^n) \geq \exp(a n)$, $a > 0$. Then for all $\epsilon > 0$: \[ N(K,B_\infty^n) \leq \brac{\Sigma \brac{\frac{C}{\epsilon} K }}^{M_\epsilon} ~,~ M_\epsilon := 4 \log^{\epsilon}(e + 1/a) . \] \end{thm} We will also require an additional standard combinatorial lemma (see \cite[Lemma 4.6]{RudelsonVershynin-CombDim}), which may be seen as an integer-valued extension of the Sauer--Shelah lemma: \begin{lem} \label{lem:SS} If $K \subset a B_\infty^n$ then: \[ \Sigma(K) \leq \brac{\frac{C a n}{v(K)}}^{v(K)} . \] \end{lem} We can now state: \begin{thm}[Part 1 for $K = B_\infty^n$ with logarithmic factors] \label{thm:part1-cubes} Let $\mu$ be an origin-symmetric log-concave measure on $\mathbb{R}^n$, let $p \in [1,n]$ and $t \geq 1/n$. Set: \[ D := C_1 \log(e+n) ~,~ R:= C_2 \log \log (e+n) , \] for appropriate universal constants $C_1,C_2 \geq 1$. Assume that $M(Z_p(\mu) , t R B_\infty^n) = e^k$ with $\mu(B_\infty^n) \geq \frac{1}{e}$ and $1 \leq k \leq n$. Then there exists $F \in G_{\text{crd}}$ of $\text{dim}(F) = m \in [k/D,k]$, so that: \begin{enumerate} \item $M(P_F Z_p(\mu), t P_F B_\infty^n) \geq e^m$ (\textbf{``Partial Separation Dimension Reduction''}). \item $\pi_F \mu(P_F B_\infty^n) \geq \mu(B_\infty^n) \geq \frac{1}{e}$ (``\textbf{$P_F B_\infty^n$ is sufficiently massive}''). \end{enumerate} \end{thm} \begin{proof} We know that: \[ N\brac{\frac{1}{t R} Z_p(\mu) , B_\infty^n} \geq M(Z_p(\mu) , t R B_\infty^n) = e^k , \] and so by Theorem \ref{thm:RV}, we have for any $\epsilon > 0$: \begin{equation} \label{eq:inter1} k \leq 4 \log^{\epsilon}\brac{e + \frac{n}{k}} \log \Sigma\brac{ \frac{C}{t R \epsilon} Z_p(\mu) } . \end{equation} By Lemmas \ref{lem:IpWish1} and \ref{lem:Guedon}, $\mu(B_\infty^n) \geq 1/e$ implies that: \[ Z_p(\mu) \subset I_p(\mu,B_\infty^n) B_\infty^n \subset C' p \; m_1(\mu,B_\infty^n) B_\infty^n \subset C' p B_\infty^n , \] and so $\frac{C}{t R \epsilon} Z_p(\mu) \subset \frac{C'' p}{t R \epsilon} B_\infty^n$. Applying Lemma \ref{lem:SS}, we deduce that: \begin{equation} \label{eq:inter2} \log \Sigma\brac{ \frac{C}{t R \epsilon} Z_p(\mu) } \leq m_\epsilon \log \brac{\frac{C_3 p n}{t R \epsilon m_\epsilon} } ~,~ m_\epsilon := v \brac{\frac{C}{t R \epsilon} Z_p(\mu)} . \end{equation} Setting $\epsilon = 1 / \log \log (e+n)$ and $C_2 = 8 C$, we ensure by (\ref{eq:inter1}) and (\ref{eq:inter2}) that $m := v (\frac{1}{8t} Z_p(\mu))$ satisfies: \[ k \leq 4 e m \log\brac{\frac{C_3 p n}{t C_2 m} } . \] Since $p \in [1,n]$ and $t \geq 1/n$, by appropriately selecting $C_1$ we may ensure that: \[ m \geq k / D . \] This means that there exists $F \in G_{\text{crd}}$ of $\text{dim}(F) = m \geq k/D$ so that $\frac{1}{8t} P_F Z_p(\mu)$ contains an integer cell. In particular (as $M([0,1] , [0,1/4]) \geq e$): \[ M(P_F Z_p(\mu) , t P_F B_\infty^n) = M( \frac{1}{8t} P_F Z_p(\mu) , \frac{1}{8} P_F B_\infty^n ) \geq M(\frac{1}{2} P_F B_\infty^n , \frac{1}{8} P_F B_\infty^n) \geq e^m .
\] Of course, by decreasing $m$ if necessary, we may also always ensure that $m \leq k$. This concludes the proof of the first assertion. The second assertion is obvious since $\pi_F\mu( P_F B_\infty^n ) = \mu(P_F^{-1} P_F B_\infty^n) \geq \mu(B_\infty^n)$. \end{proof} \subsection{Running The Program For Cubes} Running The (full) Program, we finally obtain: \begin{thm}[Generalized Regular Dual Sudakov For Cubes with Logarithmic Terms] \label{thm:RegularSudakovCubes} For any origin-symmetric log-concave measure $\mu$ on $\mathbb{R}^n$, we have: \[ M(Z_p(\mu) , t C \log \log (e+n) m_1(\mu, B_\infty^n) B_\infty^n) \leq \exp \brac{C \log(e+n) \brac{ \frac{p}{t^2} + \frac{p}{t} } } \;\;\; \forall p \geq 1 \;\; \forall t > 0 . \] \end{thm} \begin{proof} Since the expression on the left-hand-side is invariant under scaling of $\mu$, we may assume that $m_1(\mu,B_\infty^n) = 1$. Given $p \geq 1$ and $t \geq 1/n$, we run The Program for $K = B_\infty^n$, with $\L_m = \set{B_\infty^m}$. Theorem \ref{thm:part1-cubes} verifies Part 1 of The Program with $D=C_1 \log(e+n)$, $R = C_2 \log \log(e+n)$, $A=1$, $B=1$ and $q_m=1$. Part 2 of The Program regarding Weak Generalized Regular Dual Sudakov, with parameters $q_m = 1$ and $\varphi_t(x) = C' \max(x^{1/3}/t^{2/3}, \sqrt{x} / \sqrt{t})$, is established in Theorem \ref{thm:part2-smalldiam} (recalling (\ref{eq:cover-pack})). Since $\varphi_t^{-1}(y) \simeq \min(y^{3} t^2, y^2 t)$, we have $\varphi_t^{-1}(1/(4A)) \simeq \min(t,t^2)$, and Theorem \ref{thm:full-program} yields the asserted estimate in the range $t \geq 1/n$. The estimate remains valid after adjustment of constants (and in fact can be significantly improved) in the remaining uninteresting range $t \in (0,1/n)$ by Lemma \ref{lem:extend} and (\ref{eq:cover-pack}). This concludes the proof. \end{proof} \begin{cor}[Generalized Regular Dual Sudakov For Polytopes with Few Facets and Logarithmic Terms] \label{cor:RegularSudakovPolytopes} For any origin-symmetric log-concave measure $\mu$ on $\mathbb{R}^n$, and any origin-symmetric polytope $K \subset \mathbb{R}^n$ with $2N$ facets, we have: \[ M(Z_p(\mu) , t C \log \log (e+N) m_1(\mu, K) K) \leq \exp \brac{C \log(e+N) \brac{ \frac{p}{t^2} + \frac{p}{t} } } \;\;\; \forall p \geq 1 \;\; \forall t > 0 . \] \end{cor} \begin{proof} Any $K \subset \mathbb{R}^n$ as in the assertion is the unit-ball of an $n$-dimensional subspace $E$ (which we identify with $\mathbb{R}^n$) of $\ell_\infty^N$ (in an appropriate basis), so that $K = B_\infty^N \cap E$. Let $\nu$ denote a compactly supported origin-symmetric log-concave probability measure on $E^\perp$, and let $\set{\nu_k}$ denote rescaled copies of $\nu$ which weakly converge to the delta-measure at the origin of $E^\perp$. Let $\mu$ be an origin-symmetric log-concave measure on $E$, which we may assume by approximation is compactly supported as well. Denote the product measure $\mu_k := \mu \otimes \nu_k$, which clearly has an even log-concave density on $\mathbb{R}^N$. By Theorem \ref{thm:RegularSudakovCubes} and Lemma \ref{lem:Guedon} applied to $\mu_k$ on $\mathbb{R}^N$, we have for fixed $p \geq 1$ and $t > 0$: \begin{equation} \label{eq:polytope-proof} M(Z_p(\mu_k) , t C \log \log (e+N) I_1(\mu_k, B_\infty^N) B_\infty^N) \leq \exp \brac{C \log(e+N) \brac{ \frac{p}{t^2} + \frac{p}{t} } } . \end{equation} Note that by integrating against a bounded continuous function on $\mathbb{R}^N$ and applying the Fubini and Lebesgue Dominated Convergence theorems, it follows that $\mu_k$ weakly converge to $\mu$.
As all measures are uniformly compactly supported, it follows that $I_1(\mu_k,B_\infty^N)$ converges to $I_1(\mu,B_\infty^N) = I_1(\mu,K)$. In addition, since $Z_p(\mu) \times \set{0} \subset Z_p(\mu_k)$, it follows by definition and monotonicity of the packing numbers that for any $s > 0$: \[ M(Z_p(\mu), s K) = M(Z_p(\mu) \times \set{0} , s B_\infty^N \cap E) = M(Z_p(\mu) \times \set{0} , s B_\infty^N) \leq M(Z_p(\mu_k) , s B_\infty^N) . \] Combining the above observations, we obtain for large enough $k$: \begin{align*} & M(Z_p(\mu) , 2 t C \log \log (e+N) I_1(\mu, K) K) \leq M(Z_p(\mu) , t C \log \log (e+N) I_1(\mu_k,B_\infty^N) K) \\ & \leq M(Z_p(\mu_k) , t C \log \log (e+N) I_1(\mu_k,B_\infty^N) B_\infty^N) . \end{align*} Together with (\ref{eq:polytope-proof}) and another application of Lemma \ref{lem:Guedon}, the assertion follows after a possible readjustment of constants. \end{proof} \section{Concluding Remarks} \label{sec:conclude} \subsection{Generalized Regular Sudakov Minoration: Ellipsoids} \label{subsec:conclude-ell} Recall that the following estimate was established in Theorem \ref{thm:RegularSudakovEllipsoids}: \begin{equation} \label{eq:conclude-ellipsoids} M(Z_p(\mu) , t m_1(\mu, \mathcal{E}) \mathcal{E}) \leq \exp \brac{C \brac{ \frac{p}{t^2} + \frac{p}{t} } } \;\;\; \forall p \geq 1 \;\; \forall t > 0 , \end{equation} for any origin-symmetric log-concave measure $\mu$ on $\mathbb{R}^n$ and any (origin-symmetric) ellipsoid $\mathcal{E} \subset \mathbb{R}^n$. Let us expand on some of the comments regarding this estimate given in the Introduction. \medskip In terms of sharpness, first recall that $Z_p(\mu) \subset p I_1(\mu,\mathcal{E}) \mathcal{E}$, and so the left-hand-side is $1$ for $t \geq C' p$ and the estimate is of the correct order (up to the value of $C > 0$) in that range. Moreover, our estimate yields the correct worst-case behavior in the range $t \in [1,C' p]$ as well. This is easily seen by degenerating $\mu$ to a one-dimensional (two-sided) exponential measure, in which case $Z_p(\mu)$ approximates an interval of length of order $p$, and $m_1(\mu,B_2^n)$ is of the order of $1$. \smallskip We now claim that (\ref{eq:conclude-ellipsoids}) yields the correct worst-case behavior for all $t \in [\sqrt{p/n} , 1]$. To see this, set $\mu$ to be the standard Gaussian measure $\gamma_n$ so that $Z_p(\gamma_n) \simeq \sqrt{p} B_2^n$, and let $\mathcal{E}$ denote the cylinder $\sqrt{k} B_2^k \times \mathbb{R}^{n-k}$ (which we think of as a degenerate ellipsoid, as it can obviously be approximated by proper ones). Clearly $m_1(\gamma_n,\mathcal{E}) \simeq I_2(\gamma_n,\mathcal{E}) = 1$, and we have by the volumetric estimate (\ref{eq:volumetric}): \[ M(\sqrt{p} B_2^n , t_0 \mathcal{E}) = M(\sqrt{p} B_2^k , t_0 \sqrt{k} B_2^k) \geq e^k \] for $t_0 = \frac{1}{2 e} \sqrt{p/k}$. Consequently, we confirm that for an appropriate constant $c >0$: \[ M(Z_p(\gamma_n) , c \sqrt{p/k} \; m_1(\gamma_n, \mathcal{E}) \mathcal{E}) \geq e^k , \] and letting $k$ range from $\lceil p \rceil$ to $n$, the sharpness of (\ref{eq:conclude-ellipsoids}) for all $t \in [\sqrt{p/n} , 1]$ is established. When $t \in (0,\sqrt{p/n})$ the estimate is definitely loose, as simply seen by a volumetric argument (as explained in Lemma \ref{lem:extend}); however, we do not try to improve the estimate in this uninteresting range. 
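To make this matching explicit, note that plugging $t_0 \simeq \sqrt{p/k}$ into the right-hand-side of (\ref{eq:conclude-ellipsoids}) yields (as $p \leq k$): \[ \exp \brac{C \brac{ \frac{p}{t_0^2} + \frac{p}{t_0} } } = \exp \brac{ C' \brac{ k + \sqrt{p k} } } \leq \exp \brac{C'' k} , \] which is indeed of the same exponential order as the $e^k$ lower bound exhibited above.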
\medskip As already mentioned in the Introduction, when $\mu$ is isotropic and $\mathcal{E} = B_2^n$ (and hence $m_1(\mu,B_2^n) \simeq \sqrt{n}$), the estimate (\ref{eq:conclude-ellipsoids}) was already obtained by Giannopoulos--Paouris--Valettas in \cite[Proposition 5.1]{GPV-ImprovedPsi2} (for $t \geq \sqrt{p/n}$, but the same estimate remains valid for all $t >0$ by Lemma \ref{lem:extend}) using a delicate refinement of Talagrand's approach for proving the (dual) Sudakov Minoration. It is possible to extend the latter approach to the non-isotropic setting, yielding the following estimate (we refrain from providing the details): \begin{equation} \label{eq:conclude-ellipsoids-paper1} M(Z_p(\mu), t m_p(\mu, \mathcal{E}) \mathcal{E}) \leq \exp \brac{C \frac{p}{t^2}} \;\;\; \forall t \in (0,1] . \end{equation} While (\ref{eq:conclude-ellipsoids}) improves upon (\ref{eq:conclude-ellipsoids-paper1}) in the range $t \geq 1$, note that (\ref{eq:conclude-ellipsoids-paper1}) involves the smaller quantile $m_p(\mu,\mathcal{E}) \leq m_1(\mu,\mathcal{E})$, so these two estimates are ultimately incomparable. An alternative proof of (\ref{eq:conclude-ellipsoids}) in the isotropic case was obtained in \cite{GPV-DistributionOfPsi2} using a very similar approach to the one we employ in this work. For $p \geq \sqrt{n} \log^2(1+n)$, improved covering estimates in the range $t \in [\log^2(1+n) , p/\sqrt{n}]$ have been obtained for the isotropic case in \cite[Subsection 3.3]{EMilman-IsotropicMeanWidth}. In the non-isotropic case, a general formula in terms of the eigenvalues $\set{\lambda_i^2}_{i=1}^n$ of ${\rm Cov}(\mu)$ may also be obtained by employing Theorem \ref{thm:Sudakov} (improved Sudakov Minoration) and the estimate on $M^*(Z_p(\mu))$ from \cite[Theorem 1.3]{EMilman-IsotropicMeanWidth}; as $m_1(\mu,B_2^n) \simeq I_2(\mu,B_2^n) = \brac{\sum_{i=1}^n \lambda_i^2}^{1/2}$, this results in possible improvements over (\ref{eq:conclude-ellipsoids}) in a certain range of the parameters $\set{\lambda_i} , p , t$; we leave the details to the interested reader. \subsection{Generalized Regular Sudakov Minoration: Cubes} We now turn to the estimate established in Theorem \ref{thm:RegularSudakovCubes}: \begin{equation} \label{eq:conclude-cubes} M(Z_p(\mu) , t C \log \log (e+n) m_1(\mu, B_\infty^n) B_\infty^n) \leq \exp \brac{C \log(e+n) \brac{ \frac{p}{t^2} + \frac{p}{t} } } \;\;\; \forall p \geq 1 \;\; \forall t > 0 \end{equation} for any origin-symmetric log-concave measure $\mu$ on $\mathbb{R}^n$. Up to the logarithmic terms above, this estimate is again seen to be sharp in the range $t \geq 1$, exactly as in the preceding analysis for ellipsoids. \smallskip In the range $t \in [\sqrt{p/n} , 1]$, the estimate (\ref{eq:conclude-cubes}) remains sharp up to logarithmic terms in the dimension. To see this, set again $\mu$ to be the Gaussian measure $\gamma_n$, for which it is well-known that $m_1(\gamma_n , B_\infty^n) \simeq \sqrt{\log(1+n)}$. Applying the precise covering estimates of Sch\"utt \cite[Theorem 1]{Schutt-Entropy-numbers}, we have: \[ M\brac{Z_p(\gamma_n) , C \frac{\sqrt{\log (1+n/k)}}{\sqrt{\log (1+n)}} \frac{\sqrt{p}}{\sqrt{k}} m_1(\gamma_n , B_\infty^n) B_\infty^n} \geq M\brac{ B_2^n , C' \frac{\sqrt{\log (1+n/k)}}{\sqrt{k}} B_\infty^n} \geq e^k , \] for all $\log(1+n) \leq k \leq n$.
This confirms the sharpness of (\ref{eq:conclude-cubes}) up to the logarithmic terms there for all $t \in [\sqrt{p/n^\alpha},1]$ for any fixed $\alpha \in (0,1)$, and up to an additional $\log(e+n)$ term in the range $t \in [\sqrt{p/n},\sqrt{p/n^\alpha}]$. Curiously, in the latter range, this additional term yields a packing estimate for $M(Z_p(\gamma_n) , t m_1(\gamma_n, B_\infty^n) B_\infty^n)$ which is even better than the expected $\exp(C \frac{p}{t^2})$, and we do not know whether this is indeed the worst-possible expected behaviour for a general $\mu$. As in the case of ellipsoids, the estimate is definitely loose in the range $t \in (0,\sqrt{p/n})$ by a simple volumetric estimate. \subsection{Completing The Program} The results we obtain in this work completely resolve Part 3 of The Program, and almost entirely Part 2 as well. For instance, if the initial log-concave probability measure $\mu$ is assumed $1$-pure (e.g. super-Gaussian, sub-Gaussian or unconditional), then by Proposition \ref{prop:pure-1}, so will be all of its marginals $\nu \in \mathbb{M}_m$, for which we have a Weak Sudakov Minoration result by Theorem \ref{thm:part2-pure}. In particular, up to the Slicing Problem, Part 2 is completely established. \smallskip Consequently, it is clear that the main remaining challenge in completing The Program lies in establishing Part 1 of The Program. This is a significant challenge even for some specific convex bodies $K$ besides ellipsoids, such as for $K = B_1^n$. To carry out this Separation Dimension-Reduction step, it seems that we would need to employ other measures on the Grassmannian $G_{n,m}$ besides the uniform Haar measure, upon which most of the (Euclidean) Asymptotic Geometric Analysis theory is built. In our opinion, this is a fascinating challenge, which we plan to explore in a future work. \setlength{\bibspacing}{2pt} \bibliographystyle{plain}
\section{Introduction} Network coding is an exciting new technique promising to improve the limits of transferring information in wireless networks. The basic idea is to combine packets that travel along similar paths in order to achieve the multicast capacity of networks. The network coding scheme called \textit{local network coding} was one of the first practical implementations able to showcase throughput benefits, see COPE in \cite{cope}. The idea of local network coding is to encode packets belonging to different flows whenever it is possible for these packets to be decoded at the next hop. The simplicity of the idea gave hopes for its efficient application in a real-world wireless router. In the simple Alice--relay--Bob scenario, the relay XORs outgoing packets while Alice and Bob use their own packets as keys for decoding. The whole procedure offers a throughput improvement of 4/3 by eliminating one unnecessary transmission. Local network coding has been enhanced with the functionality of \textit{opportunistic listening}. The wireless terminals are exposed to information traversing the channel, and \cite{cope} proposed a smart way to make the best of this inherent broadcast property of the wireless channel. In particular, each terminal operates in always-on mode, constantly overhearing the channel and storing all overheard packets. The reception of these packets is explicitly announced to an intermediate node, called the relay, which makes the encoding decisions. Finally, the relay can arbitrarily combine packets of different flows as long as the recipients have the necessary keys for decoding. Using the idea of opportunistic listening, an infinite wheel topology, where everyone listens to everyone except the intended receiver, can gain a factor of 2 in aggregate throughput by reducing the downlink to a single transmission, see \cite{proutiere}. The wheel is a particular symmetric topology that is expected to appear rarely in real settings. Also, the above calculations assume that all links have the same transmission rates, so that it takes the same amount of time to deliver a native (non-coded) packet or an encoded one. In addition, all possible flows are conveniently assumed to exist. This, however, is not expected to be a frequent setting in a real-world network. A natural question reads: what is the expected throughput gain in an arbitrary wireless ad hoc network? The maximum gain does not come at no cost either. Deciding which packets to group together in an encoded packet is not a trivial matter, as explained in \cite{proutiere}, in ER \cite{er} and in CLONE \cite{clone}. In the latter case, the medium is assumed to be lossy, and the goal is to find the optimal pattern of retransmissions in order to maximize throughput. In the first case, a queue-length based algorithm is proposed for end-to-end encoding of symmetric flows (i.e., flows such that one's source is the other's destination and vice versa). All these decision-making problems are formulated as follows. Denote by $N(f_i)$ the set of nodes in need of a packet belonging to flow $f_i$ and by $H(f_i)$ the set of nodes having it. Then the encoded combination of two packets belonging to flows $f_i$ and $f_j$ can be decoded successfully if and only if $N(f_i)\subseteq H(f_j)$ and $N(f_j)\subseteq H(f_i)$. If this condition holds, we draw an edge on the \textit{coding graph}, whose vertices are all the possible packets.
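As a concrete illustration of this condition, consider the Alice--relay--Bob scenario above with the two flows $f_1$ (Alice to Bob) and $f_2$ (Bob to Alice), restricting attention to the two end nodes. Since each source owns its packet and keeps it as a decoding key, \[ H(f_1)=\{\text{Alice}\}, \quad H(f_2)=\{\text{Bob}\}, \quad N(f_1)=\{\text{Bob}\}, \quad N(f_2)=\{\text{Alice}\} , \] so that $N(f_1)\subseteq H(f_2)$ and $N(f_2)\subseteq H(f_1)$; the corresponding edge is drawn in the coding graph and the relay may XOR the two packets into a single downlink transmission.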
Then finding the optimal encoding scheme is reduced to finding a minimum clique partition of the coding graph, a well-known NP-hard problem \cite{er}. Moreover, the same complexity appears when the relay node makes scheduling decisions, i.e., selecting which packets to serve and with what combinations. Work related to index coding has shown that this problem can be reduced to the Boolean satisfiability problem (SAT) \cite{chaundry:08}. Thus a second question arises: what is the loss in throughput gain if, instead of searching over all possible encoded packet combinations, we restrict our search to combinations of size at most $m$? In this paper we are interested in showing that, for a real ad hoc wireless network, opportunities for large encoding combinations rarely appear. To show this, we consider regular topologies like grids as well as random ones. We calculate the maximum encoding number in these scenarios in the mean sense, and we consider small as well as large networks. To capture the behaviour of large (or dense) networks, we examine the scaling laws of the maximum encoding number. Scaling laws are of great interest to the networking community in general. Although they hold asymptotically, they provide valuable insights to system designers. In this direction, the authors in \cite{WG09} study the scaling of wireless network capacity in the Gupta--Kumar framework, taking into account complex-field NC. \cite{JGT07} also examines the use of NC for the scaling capacity of wireless networks, finding that NC cannot improve the order of throughput, i.e., the $O\left(\frac{1}{\sqrt{n}}\right)$ law prevails. \cite{AEMO07} discusses the issue of scaling NC gain in terms of delays, while \cite{HR08} identifies the energy benefits of NC both for a single multicast session and for multiple unicast sessions. In \cite{GK08}, NC is used instead of power control and the benefits are characterized. In a similar spirit, \cite{KV09} investigates the use of rate adaptation for wireless networks with intersession NC. Utilizing rate adaptation, it is possible to change the connectivity and increase or decrease the number of neighbors per node. They identify domains of throughput benefits for this case. The most relevant work in the field is \cite{howmany}. The authors analyze the maximum coding number, i.e., the maximum number of packets that can be encoded together such that the corresponding receivers are able to decode. They show that this number scales with the ratio $\frac{R}{\delta}$, where $\delta$ is the width of a region lying outside the communication region and inside the interference region. Note, however, that this work does not yield any geometric property for the frequency of large combinations, since it relies only on specific protocol characteristics. In networks with small $\delta$, e.g., whenever a hard decoding rule is applied, there is no bound for the maximum coding number. In this paper we study the problem from a totally different point of view, showing that there exist inherent geometric properties bounding the maximum coding number by a quantity that depends on the population or density of nodes. Moreover, we apply the Boolean connectivity model, for which $\delta=0$, and thus the previous result does not provide any bound at all. We show that the upper bound on the maximum coding number stems from a convexity property that any valid combination must satisfy.
We start by considering a network with a fixed minimum separation distance, like a square grid, and show that in such networks the maximum coding number is $O(\sqrt{N})$ and $\Omega(\sqrt[4]{N})$,\footnote{The symbol $O()$ denotes that the function is bounded above by a constant multiple of the expression in the brackets, whereas $\Omega()$ denotes that it is bounded below by such a multiple.} where $N$ is the number of nodes in transmission range of the relay. This implies that, even in networks with canonically placed nodes, the maximum coding combination is line-shaped even though the set of all nodes lives in two dimensions. Next we study a random network where the locations of the nodes follow a Poisson point process on the plane. In this case, the maximum encoding number is found to be bounded in probability by $O(\lambda^{\frac{1}{2}+\epsilon})$, where $\lambda$ is the node density and $\epsilon>0$ is arbitrary. Finally, we consider the case where the encoder searches for combinations of size at most $m<N$. We show that the throughput efficiency loss in this case depends on the size of the network, and for small networks the loss can be negligible. This way we motivate heuristic algorithms that avoid the high complexity arising in encoding selection. Through extensive simulations we show that all the derived results hold in general, even for small networks. \begin{figure}[tb] \centering \includegraphics[width=2.5in]{fig1} \caption{The set of valid nodes $\mathcal{V}$ is selected inside the disk of radius $R$ centered at the location of $v_0$.} \label{fig:example} \end{figure} The paper is organized as follows. In Section \ref{sec:model}, the model is described and some basic properties are given. In Section \ref{sec:deterministic}, the main results for the case of grid-like networks are derived. Then in Section \ref{sec:stochastic} the case of randomly positioned networks is considered. A rate analysis is provided in Section \ref{sec:efficiency} and simulation results are shown in Section \ref{sec:numerical}. The paper is concluded in Section \ref{sec:conclusion}. \section{Communication model} \label{sec:model} We assume a set of nodes $\mathcal{V}$, positioned on the plane. Communications between these nodes are established via the Boolean interference model (see, e.g., \cite{franceschetti}). In this model, a link between two nodes $\{v_i,v_j\}$ is realized if and only if $\left|\mathbf{X}(v_i)-\mathbf{X}(v_j)\right|\leq R$. In this case, we say that $v_i$ is connected with $v_j$ and vice versa. Note that the Boolean interference model induces an undirected graph, in the sense that only bidirectional links appear. \subsection{Information flow} Each node $v_i$ with degree $\deg\left(v_i\right)>1$, apart from transmitting and receiving, also relays information. In this process, it is possible to avoid unnecessary transmissions by employing local network coding. To simplify the analysis, we will consider only one cell, i.e., we will focus on a given node $v_0$ and all its neighbors, and calculate the network coding gain on the downlink of this node. A similar result then holds for any such node serving as a relay. Thus we restrict $\mathcal{V}$ to contain all neighbors of $v_0$, with $\mathcal{V}=\{v_1,v_2,\dots,v_N\}$ and $N\doteq \left|\mathcal{V}\right|$ the number of nodes under consideration. For a network determined by a Poisson point process with density $\lambda$, we correspondingly use the mean number of points, given by $\mean{N}=\lambda \pi R^2$.
The main objective of this paper is to find how the \emph{maximum coding number} and the \emph{maximum network coding gain} scale with the number of nodes. We will also provide bounds for the scaling constants, which are useful for determining the behaviour in small networks. Apart from the number of neighbors, the gain analysis also depends on the activated flows. In the simple Alice--relay--Bob topology, it is possible that only the flow going from Alice to Bob is activated, in which case the gain is zero. In this paper we are interested in determining an upper bound for the efficiency loss when the relay is constrained to combinations of size $m<N$ (e.g., if $m=2$ the system is constrained to pairwise XORing). For this reason, we consider the maximum gain scenario. For each node designated as a relay, we assume that all possible two-hop flows traversing this relay are activated. This means that each node designated as a relay has all possible different packets from which to select an XOR combination to send to the neighbors. Since not all of those combinations are valid, finding the maximum valid combination, which corresponds to the maximum coding number, is a non-trivial task and will be the goal of this paper. The resulting bound will help characterize the efficiency loss due to resorting to $m$-wise encoding. In real systems, some flows might not be active, in which case the resulting efficiency loss from $m$-wise encoding will be even smaller. To make this more precise, similarly to \cite{er}, we define source--destination pairs designating two-hop flows that cross the relay. Each flow $f\in\mathcal{F}$ has a source $S(f)$, a destination $D(f)$, a set of nodes having it $H(f)\subset \mathcal{V}$ (either by overhearing or ownership) and a set of nodes needing it $N(f)\subset \mathcal{V}$. We write $\subset$ because at least one node, the destination $D(f)$ or the source $S(f)$, does not belong to $H(f)$ or $N(f)$, respectively. Two flows $f_1, f_2$ are called symmetric when they satisfy the property $S(f_1)=D(f_2)$ and $D(f_1)=S(f_2)$. \subsection{Constraints} Here we summarize the previous subsection in the form of constraints. We will focus on network coding opportunities appearing in the aforementioned arbitrary network around the relay $v_0$. \begin{definition} \emph{(valid node):} A node $v_i\in \mathcal{V}$ is a \textit{valid node} if $\left|\mathbf{X}(v_i)-\mathbf{X}(v_0)\right|\leq R$. \end{definition} \begin{definition} \emph{(valid flow):} A flow $f\in \mathcal{F}$ is a \textit{valid flow} if $S(f)$ and $D(f)$ are valid nodes that are not neighbors of each other, i.e., $\left|\mathbf{X}(S(f))-\mathbf{X}(D(f))\right|> R$. \end{definition} \begin{definition} \emph{(valid combination):} A subset of flows $\mathcal{C}\subseteq \mathcal{F}$ with $\mathcal{C}=\{f_1,f_2, \dots, f_C \}$, where $C=\left|\mathcal{C}\right|$, is a \textit{valid combination} if \begin{itemize} \item each flow $f_i\in\mathcal{C}$ is a valid flow, \item every pair of flows $f_i,f_j\in \mathcal{C}$, $f_i \neq f_j$, satisfies $N(f_i)\subseteq H(f_j)$ and $N(f_j)\subseteq H(f_i)$ or, equivalently, $S(f_i)$ is connected with $D(f_j)$ while $S(f_j)$ is connected with $D(f_i)$. \end{itemize} \end{definition} We define the maximum coding number $C_{\max}$ as the greatest cardinality among all valid sets $\mathcal{C}$.
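As an illustration, place four nodes at the vertices of a square centered at $v_0$, with circumradius $\rho\in\left(\frac{R}{2},\frac{R}{\sqrt{2}}\right]$. Every node is within distance $\rho\leq R$ of the relay, each diagonal has length $2\rho>R$, and each side has length $\sqrt{2}\rho\leq R$. The two diagonals thus define two symmetric pairs of valid flows, and every destination is connected to the sources of all other flows, so these four flows form a valid combination of size $C=4$; in agreement with Lemma \ref{lemma:convex} below, the four locations are the vertices of a convex polygon.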
If the positions of the nodes are random, $C_{\max}$ is evidently a random variable. Note that we could impose additional constraints. For example, if a flow can be routed more efficiently by a node other than $v_0$, then this flow should be excluded from the set of valid flows. This would further restrict the set of valid combinations; thus, by omitting this constraint, we derive an upper bound for $C_{\max}$. Next, we state some fundamental properties of the valid combinations. For each flow $f$ belonging to a valid combination $\mathcal{C}$ we have \begin{itemize} \item $D(f)\in N(f)$, \item $D(f) \in H(f')$ for all $f'\in \mathcal{C}\setminus \{f\}$, \end{itemize} which leads us to the following properties. \begin{remark} The destination node of $f_i\in \mathcal{C}$ is different from the destination node of any other flow $f_j\in \mathcal{C}\setminus \{f_i\}$. \end{remark} \begin{remark} The source node of $f_i\in \mathcal{C}$ is different from the source node of any other flow $f_j\in \mathcal{C}\setminus \{f_i\}$. \end{remark} Next, we provide a result on the topology of a valid combination. Let $\mathcal{X}_{\mathcal{C}}$ represent the set of locations of all nodes being the source or destination of a flow belonging to a combination $\mathcal{C}$. \begin{lemma} Any valid combination of size 3 or larger corresponds to a convex polygon (the polygon is formed using the set $\mathcal{X}_{\mathcal{C}}$ as vertices). \label{lemma:convex} \end{lemma} \begin{proof} Consider a valid combination defined by flows \[ \mathcal{C}=\set{f_i, i=1,\ldots, C}, \] where $C\geq 3$. Consider also the set of nodes that are sources and/or destinations in $\mathcal{C}$ \[ \mathcal{V}_{\mathcal{C}}=\bigcup_{i} \{S(f_i)\} \cup \bigcup_{i} \{D(f_i)\} \] and the induced set of locations $\mathcal{X}_{\mathcal{C}}$, so that each element $v_j\in \mathcal{V}_{\mathcal{C}}$ is mapped bijectively to a location $\mathbf{X}(v_j) \in \mathcal{X}_{\mathcal{C}}$. Assume that there is a node $v_j\in \mathcal{V}_{\mathcal{C}}$ whose location is an interior point of the convex hull\footnote{The convex hull of points $\mathcal{X}$ is the minimal convex set containing $\mathcal{X}$.} of $\mathcal{X}_{\mathcal{C}}$. Thus its location $X_j=\mathbf{X}(v_j)$ can be written as $X_j=\sum_{i\not=j} \alpha_i X_i$ where $\sum_{i\not=j} \alpha_i=1$ and $\alpha_i\geq 0$ for all $i$. On the other hand, there is a $v_{j^*}\in \mathcal{V}_{\mathcal{C}}$, the communicating pair (source or destination) of $v_j$ in at least one flow, so that \begin{equation} |X_j-X_{j^*}|>R. \label{eq:proximity} \end{equation} All the other nodes (destinations or sources) in $\mathcal{V}_{\mathcal{C}}$ should be able to reach the node $v_{j^*}$ directly. Thus, \[ |X_j-X_{j^*}| \leq \sum_{i\not=j}\alpha_i |X_i-X_{j^*}| \leq \sum_{i\not=j}\alpha_i R \leq R, \] which is a contradiction to (\ref{eq:proximity}). Consequently, the node $v_j$, as well as every other node of the combination, necessarily lies on the boundary of the convex hull. Thus, the nodes of a valid combination are the vertices of a convex polygon. \end{proof} When the set of sources is identical to the set of destinations, the combination consists of symmetric flows only and $C=2(\left|\mathcal{V'}\right|-1)$, $\mathcal{V'}\subseteq \mathcal{V}$.
In order to calculate an upper bound on the network coding combination size, it is enough to resort to the case of symmetric flows. \begin{lemma} For any valid combination there exists at least one combination of the same or larger size that contains only symmetric flows. \end{lemma} \begin{proof} We will show that for any flow we can add the symmetric one without invalidating the combination, as long as it is not already counted. In a bipartite graph with all the nodes $\mathcal{V}$ on one side and the destinations of $\mathcal{C}$ on the other, consider a directed link $\ell_f$, between the source of flow $f$ and its destination, for each $f\in \mathcal{C}$. Note now that the nodes having out-degree one, i.e., the active sources in $\mathcal{C}$, may or may not be identical to one of the destination nodes. We can partition the set of active sources by assigning those with the above property to the set $\mathcal{T}_{\textsl{sym}}$ and the rest to the complementary set $\overline{\mathcal{T}_{\textsl{sym}}}$. If $\overline{\mathcal{T}_{\textsl{sym}}}=\emptyset$, then the Lemma is proved, since $\mathcal{C}$ is a valid combination with symmetric flows only. If not, then we can create a new combination $\mathcal{C}'$ which has more flows than the original one using the following process. For each transmitter in $\overline{\mathcal{T}_{\textsl{sym}}}$, say $S(f_i)$, the transmitter of flow $f_i$, add one extra flow $f_i'$ with $S(f_i')=D(f_i)$ and $D(f_i')=S(f_i)$. This flow does not belong to $\mathcal{C}$ (because $S(f_i)\in \overline{\mathcal{T}_{\textsl{sym}}}$) and it does not invalidate the combination, due to the bidirectional properties of the model. Note that $f_i'$ is a valid flow because $S(f_i')$ cannot be connected to $D(f_i')$, due to the validity of $f_i$. Note also that $S(f_i')$ is connected to $D(f)$ for all $f\in \mathcal{C}$, since this is again required for the decoding of the original flows. Thus, for any flow we can add the symmetric one without invalidating the combination. \end{proof} \begin{remark} If a valid combination consists of symmetric flows only, its size must be even. \end{remark} In graph theory terms, a valid combination with symmetric flows can be thought of as a graph created by a clique of $C+1$ nodes, minus a matching with $\frac{C}{2}$ edges, with all symmetric flows defined by this matching activated. This graph is called the \textit{wheel topology} in \cite{cope}. \section{Analysis in grid-like topologies} \label{sec:deterministic} In this section we focus on positioning the nodes on a grid. Grid topologies often offer an insightful first step towards understanding the behaviour under random positioning. Also, the investigation of grids answers the question of whether it is possible to achieve high NC gain by arranging the locations of the nodes. We therefore assume a network with the additional property $\left|\mathbf{X}(v_i)-\mathbf{X}(v_j)\right|\geq d$, for any pair of nodes $v_i, v_j \in \mathcal{V}$. This condition pertains to regular grids such as the square, the triangular and the hexagonal grid, as well as other grids with non-uniform geometry. Nevertheless, we impose the property that the node density is the same over all cells, so that the geometry is roughly homogeneous. The number of nodes inside a disk of radius $R$ will be $N=O\left((\frac{R}{d})^2\right)$ for these networks, and the corresponding node density is $\lambda^{\mathrm{grid}}=O\left((\frac{1}{d})^2\right)$.
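The first estimate follows from a standard packing argument: the disks of radius $\frac{d}{2}$ centered at the nodes are pairwise disjoint and all contained in a disk of radius $R+\frac{d}{2}$, so that \[ N \pi \left(\frac{d}{2}\right)^2 \leq \pi \left(R+\frac{d}{2}\right)^2 \quad \Rightarrow \quad N \leq \left(\frac{2R}{d}+1\right)^2 = O\left(\left(\frac{R}{d}\right)^2\right). \]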
\begin{theorem} \emph{(Upper bound)} The maximum coding number in fixed-separation networks is $O\left(\sqrt{N}\right)$, where $N$ is the number of nodes, or equivalently $O\left(\sqrt{\lambda^{\mathrm{grid}}}\right)$. \end{theorem} \begin{proof} From Lemma \ref{lemma:convex} we know that the nodes belonging to the maximum combination form a convex polygon. Any such polygon fitting inside the disk of radius $R$ must have perimeter smaller than $2\pi R$. Since the nodes on the perimeter should be at least $d$ away from each other, we conclude that the maximum coding number is \[ C_{\max}<\frac{2\pi R}{d}. \] This combined with $N=O\left(\left(\frac{R}{d}\right)^2\right)$ or, respectively, $\lambda^{\mathrm{grid}}=O\left((\frac{1}{d})^2\right)$ yields the result. \end{proof} A particular case of the above bound is the square grid. The number of nodes inside the disk is $N=\pi \left(\frac{R}{d}\right)^2 +e\left(\frac{R}{d}\right)$ where $e\left(\frac{R}{d}\right)\leq 2\sqrt{2}\pi \frac{R}{d}$ is an error term growing at most linearly in $\frac{R}{d}$. Thus we obtain the upper bound \[ C_{\max}^{\textsl{square}}<\sqrt{4\pi N}. \] So far we have shown that any network with fixed separation distance $d$ and uniform density will have maximum coding number $O(\sqrt{N})$, where $N$ is the number of nodes connected to the relay. In particular, the constant can be determined for any given grid; for the square grid it is $2\sqrt{\pi}$. The simulations show that the actual maximum encoding number is approximately half of that calculated above. The reason is essentially that the valid polygon is always smaller than the disk of radius $R$, and often close to the size of a disk of radius $\frac{R}{2}$. It is interesting to bound the achievable maximum coding number from below as well. To obtain intuition about this bound we start with a non-homogeneous topology, the \textit{cyclic grid}. We construct concentric cyclic groups of radius $R_i=id$, $i=0,1,\dots,\lfloor \frac{R}{d} \rfloor$, that fall inside the disk of radius $R$. Each cyclic group has as many nodes as possible such that the fixed separation distance condition is not violated. Such a topology exhibits different behavior depending on the selected origin (it is not homogeneous); nevertheless, it helps identify a particular behavior of the achievable maximum coding number. The cyclic group at $R_i$ has $\left\lfloor \frac{2\pi}{\arccos\left(1-\frac{1}{2i^2}\right)}\right\rfloor$ nodes. Thus the grid of radius $R$ will have \[ N=1+\sum_{i=1}^{\left\lfloor \frac{R}{d}\right\rfloor} \left\lfloor \frac{2\pi}{\arccos\left(1-\frac{1}{2i^2}\right)}\right\rfloor. \] A very good approximation is \begin{equation} \begin{split} N&\approx 1+6\sum_{i=1}^{\left\lfloor \frac{R}{d}\right\rfloor}i\\ &=1+3\left\lfloor \frac{R}{d}\right\rfloor+3\left\lfloor \frac{R}{d}\right\rfloor^2\\ &\approx 3\frac{R^2}{d^2}. \end{split} \end{equation} \begin{theorem} \emph{(Lower bound)} \label{th:lower} In networks with nodes $d$ away from each other and cell radius $R$, an achievable maximum coding number is \begin{enumerate} \item $C_{\max}^{\textsl{cyclic}}=\Omega(\sqrt{N})$ for the cyclic grid with $\mathrm{rem}\left( R,2d\right) \rightarrow 0$, \item $C_{\max}^{\textsl{cyclic}}=\Omega(\sqrt[4]{N})$ for the cyclic grid with $\mathrm{rem}\left( R,2d\right) \rightarrow d$, and \item $C_{\max}^{\textsl{square}}=\Omega(\sqrt[4]{N})$ for the square grid with separation $d$, \end{enumerate} where $\mathrm{rem}(x,y)$ is the remainder of the division $x/y$.
\end{theorem} \begin{proof} \underline{For the cyclic grid when $\mathrm{rem}\left( R,2d\right) \rightarrow 0$:} By focusing on the cyclic group with index $\inf\{i:R_i>\frac{R}{2}\}$, note that each node is $\frac{R}{2}+\epsilon$ away from the center and thus the desired connectivity properties are satisfied for all nodes on the cyclic group. In this case, we can calculate the number of nodes in the group as \[ C_{\max}^{\textsl{cyclic}}=\left\lfloor \frac{2\pi}{\arccos\left(1-\frac{2d^2}{R^2}\right)}\right\rfloor, \] which for large $N$ is bounded from below by some linear function of $\sqrt{N}$. \underline{For the cyclic grid when $\mathrm{rem}\left(R,2d\right) \rightarrow d$:} Now each node is $\frac{R}{2}+d-\epsilon$ away from the center and thus we need to select those nodes satisfying the property of a valid combination. For this it is enough that we leave an empty angle $\phi$ such that if $AOB$ is a diameter and $AOC$ this angle, then $CB\leq R$. By solving this for the maximum number of points satisfying this property we get \[ C_{\max}^{\textsl{cyclic}}=\left\lfloor \frac{2\pi}{\arccos\left(\frac{R^2}{2\left(\frac{R}{2}+d\right)^2}-1\right)}\right\rfloor, \] which for large $N$ is bounded from below by some linear function of $\sqrt[4]{N}$. \begin{figure}[tb] \centering \includegraphics[width=0.6\columnwidth]{fig2} \caption{Sketch for the proof of Theorem \ref{th:lower}.} \label{fig:squarecalc} \end{figure} \underline{For the square grid:} we construct a ring around the circle of radius $\frac{R}{2}$. The width $\delta$ of the ring is chosen wide enough to fit a whole square of dimensions $d \times d$. Such a square is bound to contain exactly one node of the grid. Using Figure \ref{fig:squarecalc}, and the triangles relative to the small square, we calculate $\delta$ as \[ \delta=\frac{\sqrt{R^2+d(5d+4R)}-R}{2}. \] Thus we can show that $d\leq\delta\leq \frac{3d}{2}$. If we use the largest possible value that guarantees that the ring contains one node at each step, namely $\delta=1.5d$, we can compute the angle that contains at least one node, which is of the order of $d$: \[ \phi(\delta)=\arcsin\left(\frac{\sqrt{2}d}{\sqrt{d^2+R^2}}\right). \] Finally, we compute the angle which should be left empty in the valid combination such that any node outside this angle is reachable by the most distant node (the one at the bottom): \[ \omega(\delta)=\arcsin\left(\frac{R}{\sqrt{2}(R+1.5d)}\right). \] This angle is of the order of $\sqrt{d}$. Finally, an achievable combination is obtained if we alternate $\phi$ and $\omega$ until we fill the circle: \[ C_{\max}^{\textsl{square}}=\left\lfloor \frac{2\pi}{\phi(\delta)+\omega(\delta)}\right\rfloor, \] which is bounded from below by a linear function of $\sqrt[4]{N}$. Note that the sparseness of the combination is due to $\omega(\delta)$; a possible explanation is that the bound is constructed to cover all cases, including the case where an unfavourable positioning of the nodes matches the second case of the cyclic grid above. \end{proof} In \cite{alon}, related results on convex polygons in constrained sets guarantee the existence of convex polygons of size $\Omega(\sqrt[4]{N})$ when the $N$ thrown nodes are kept separated by some distance. \section{Stochastic analysis} \label{sec:stochastic} Assume that the locations of the nodes are determined by a Poisson point process with density $\lambda$. The connectivity properties of this model are well studied in the literature (see e.g.~\cite{meester,DousseThiranHassler:02}).
In our work, we assume that the network is percolating, i.e. the nodes are dense enough to ensure multihop communications. As in the deterministic case, we assume that a relay is located at the origin. For a Poisson point process this assumption does not change the distribution of the other points. The main result is an upper bound in probability for the maximum coding number. \begin{theorem} In a random network determined by a Poisson point process with density $\lambda$, the maximum coding number corresponding to combinations having the relay at the origin satisfies \[ \lim_{\lambda\to\infty}\pr{ C_{\max} (\lambda)=O(\lambda^{1/2+\epsilon})}=1, \] for any $\epsilon>0$. \end{theorem} \begin{proof} Cover the disk of radius $R$ around the origin by disjoint boxes of size $\frac{1}{\sqrt{\lambda}}\times \frac{1}{\sqrt{\lambda}}$. The number of nodes inside the boxes is denoted by $N_i$, $i=1,\ldots,n(\lambda)$. The $N_i$ are independently and identically $\mathrm{Poisson}(1)$ distributed and thus there is a sequence $I_n$ such that \[ \lim_{n\to\infty}\pr{\max_{1\leq i \leq n} N_i = \mbox{$I_n$ or $I_n+1$}} =1, \] where $I_n = O\left( \frac{\log n}{\log\log n}\right)$ (see \cite{Anderson:70,Kimber:83}). Since $n(\lambda)=O(\lambda)$, \begin{equation} \lim_{\lambda\to\infty}\pr{\max_{i=1,\ldots,n(\lambda)} N_i=O\left(\frac{\log \lambda}{\log\log \lambda}\right)}=1. \label{eq:Nlimit} \end{equation} Next consider a valid combination. By Lemma \ref{lemma:convex}, the nodes of the combination form a convex polygon. The perimeter of any such convex polygon is at most $2 \pi R$ because it is located inside the disk of radius $R$. Since at most $O(\sqrt{\lambda})$ boxes of size $\frac{1}{\sqrt{\lambda}}\times \frac{1}{\sqrt{\lambda}}$ are needed to cover the perimeter of any convex polygon, \[ C_{\max}(\lambda) \leq \max_{i=1,\ldots,n(\lambda)}N_i \cdot O(\sqrt{\lambda}) \quad \mbox{a.s.} \] This implies that \[ \pr{C_{\max}(\lambda) \leq O(\sqrt{\lambda} g(\lambda))} \geq \pr{ \max_{i=1,\ldots,n(\lambda)} N_i \leq O\left(g(\lambda)\right)}. \] Setting $g(\lambda)=\log \lambda / \log\log \lambda$, applying equation (\ref{eq:Nlimit}) and finally noticing that $\log \lambda /\log\log \lambda = O\left(\lambda^\epsilon\right)$ for any $\epsilon>0$ completes the proof. \end{proof} \section{Rate efficiency of a network coding combination} \label{sec:efficiency} We will focus on the downlink of a valid combination of size $C$. Without loss of generality, assume that the rate vector $\mathbf{r}=\{r_i\}_{i=1,2,\dots,C}$ is ordered, i.e., $r_1<r_2<\dots<r_C$, and that the flow set is permuted accordingly so that over the link $\left(v_0,D(f_i)\right)$ packets are transferred at rate $r_i$. The data rate is computed as the number of packets of size $P$ transmitted in a virtual frame over the time needed for these transmissions. Since an encoded packet is always transmitted at the lowest rate decodable by all receivers, and assuming max-min fair allocation\footnote{This condition of fairness provides the best network coding opportunities and it is usually the balance point where the network coding gain is computed in multiclass networks.}, we can deduce the maximum throughput rate with network coding as \[ r_{\textsl{NC}}(C)=\frac{CP}{\frac{P}{\min\{\mathbf{r}\}}}=Cr_1. \] The rate without NC would be \[ r_{\textsl{w}}(C)=\frac{CP}{\frac{P}{r_1}+\frac{P}{r_2}+\dots+ \frac{P}{r_C}}=C\left(\sum_{i=1}^{C}\frac{1}{r_i}\right)^{-1}=r_h, \] where $r_h$ is the harmonic mean of $\mathbf{r}$.
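As a quick numerical check of these two expressions, the sketch below (with a purely hypothetical rate vector, used only for illustration) computes $r_{\textsl{NC}}(C)=Cr_1$ and the harmonic-mean rate $r_{\textsl{w}}(C)=r_h$:
\begin{verbatim}
rates = [1.0, 2.0, 6.0, 12.0]          # hypothetical ordered rates r_1 < ... < r_C
C = len(rates)

r_nc = C * min(rates)                  # every encoded packet sent at the lowest rate
r_w = C / sum(1.0 / r for r in rates)  # harmonic mean r_h of the rate vector

print(r_nc, r_w)                       # 4.0 and about 2.29 for this example
\end{verbatim}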
Choosing any $m\leq C$ and allowing for combinations of size $m$ at most, it is easy to see that if the criteria for valid combinations are fulfilled for a combination of size $C$ then they are fulfilled for all subsets. The corresponding achievable rate is \[ \begin{split} r_m(C)&=\frac{CP} {\sum_{i=1}^{\left\lfloor \frac{C}{m} \right\rfloor}\frac{P}{r_{m(i-1)+1}}+\indic{\mathrm{rem}\left(C,m\right)>0}\frac{P}{r_{C-\mathrm{rem}(C,m)+1}}}\\ &=C\left(\sum_{i=1}^{\left\lfloor \frac{C}{m} \right\rfloor+\indic{\mathrm{rem}\left(C,m\right)>0}}\frac{1}{r_{m(i-1)+1}}\right)^{-1}. \end{split} \] Next, we derive the network coding gain for the maximum combination ($C$) and for the constrained group ($m\leq C$): \begin{equation*} g(C) \doteq \frac{r_{\textsl{NC}}(C)}{r_{\textsl{w}}(C)} = C \frac{r_1}{r_h}. \end{equation*} Note that the gain is a linear function of $C$ and depends on the particularities of the rate vector. Also, \begin{equation*} \begin{split} g_m(C) & \doteq \frac{r_m(C)}{r_{\textsl{w}}(C)} = \\ &=\frac{C}{r_h} \left(\sum_{i=1}^{\left\lfloor \frac{C}{m} \right\rfloor+\indic{\mathrm{rem}\left(C,m\right)>0}} \frac{1}{r_{m(i-1)+1}}\right)^{-1}\\ & \geq \frac{C}{r_h} \left(\frac{\left\lceil \frac{C}{m}\right\rceil}{r_1}\right)^{-1} \geq (m-1) \frac{r_1}{r_h}, \end{split} \end{equation*} where in the first inequality we have used that $r_1$ is the minimum rate, and in the second we have used $m-1 \leq \frac{C}{\left\lceil \frac{C}{m} \right\rceil} \leq m$. If we choose equal rates, then we readily get $g(C)=C$ and $\max\left\{g_m(C)\right\}=m$ as the maximum gain for both. Finally, we note that $g(C)=\Theta(C)$ and moreover that the difference satisfies $g(C)-g_m(C)=O(C-m)$. Therefore, the efficiency loss is of the order of $\sqrt{N}-m$, which means that for a carefully chosen $m$ the loss can be kept small. \begin{figure}[t] \centering \includegraphics[width=0.49\columnwidth]{fig3} \includegraphics[width=0.49\columnwidth]{fig4} \caption{Coding combination examples for $C_{\max}=6$ and $C_{\max}=8$ in a square grid with $N=81$.} \label{fig:grid_examples} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{fig5} \caption{Simulation of the maximum coding number of a square grid inside a disk with $N$ nodes. $C_{\max}$ is the upper bound and $C_{\min}$ is the achievable lower bound provided in the theoretical analysis.} \label{fig:grid_maxcomb} \end{figure} \section{Numerical results} \label{sec:numerical} In this section we present some simulation results that provide further evidence and insight for our work. For simulation purposes we consider a disk of radius $R=1$ and a node $v_0$ serving as a relay situated at the center of the disk. Initially we consider a square grid of nodes over this disk and we investigate the maximum coding number, i.e., a set of nodes that satisfies the constraints of section \ref{sec:model}. Then, the scenario of uniformly thrown nodes is considered.
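Before turning to the experiments, a short sketch may help make the gain formulas of the previous section concrete (the rate vector is again a hypothetical example); grouping the ordered rates in blocks of $m$ reproduces both limit cases, $r_1(C)=r_{\textsl{w}}(C)$ and $r_C(C)=r_{\textsl{NC}}(C)$:
\begin{verbatim}
def r_m(rates, m):
    # throughput when at most m packets are combined; rates sorted ascending
    C = len(rates)
    # one encoded transmission per block of m flows, at the block's lowest rate
    return C / sum(1.0 / rates[i] for i in range(0, C, m))

rates = [1.0, 2.0, 6.0, 12.0]
C = len(rates)
r_w = r_m(rates, 1)                                # no coding
for m in range(1, C + 1):
    print(m, r_m(rates, m), r_m(rates, m) / r_w)   # g_m(C); m = C gives g(C)
\end{verbatim}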
\subsection{Experiments with square grids} \begin{figure}[tb] \centering \includegraphics[width=0.46\columnwidth]{fig6} \includegraphics[width=0.46\columnwidth]{fig7} \caption{Maximum coding combination examples for $C_{\max}=6$ and $C_{\max}=8$ in a uniformly thrown network with $N=21$ and $N=70$ respectively.} \label{fig:random_examples} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.95\columnwidth]{fig8} \caption{Mean maximum coding number in a network of $N$ uniformly thrown nodes.} \label{fig:random_maxcomb} \end{figure} Figure \ref{fig:grid_examples} showcases examples of combination sizes $C=6$ and $C=8$ (the maximum is $8$ in this case). During these experiments we identified an interesting property: the maximum coding number depends only on the number of nodes inside the disk and not on the actual $d$ used. In particular, for all $\{d: N \text{ is fixed}\}$, $C_{\max}$ is constant. In Figure \ref{fig:grid_maxcomb} we present the results for the square grid. The actual values seem to be closer to the lower bound than to the upper, suggesting an order closer to $\sqrt[4]{N}$ (notice the logarithmic scale of the figure). Nevertheless, the oscillating effect due to the interplay between the radius and the number of nodes is evident. \subsection{Experiments with randomly positioned nodes} Next, we throw $N$ nodes uniformly inside the disk of radius $R=1$. Examples of maximum coding combinations are showcased in Figure \ref{fig:random_examples}. It is noted from these examples that large combinations tend to appear in a $\delta$-ring form, where the inner side of the ring is a disk of radius $\frac{R}{2}$ and the outer side is a disk of radius $\frac{R}{2}+\delta$. In Figure \ref{fig:random_maxcomb}, we present the mean maximum coding number for different numbers of nodes $N$. In each sample, the maximum coding number is calculated and the mean is obtained by averaging over $1000$ random samples. The $O(\sqrt{N})$ behavior is depicted in this figure. Figure \ref{fig:random_prob} shows the probability of existence of at least one coding combination of size $C$ in a network of $N$ uniformly thrown nodes. For example, the maximum combination size for $20\leq N\leq 50$ is either 4 or 6 in the majority of cases. The simulation results show that in real networks of moderate size the usual combination size is quite small. The multiplicative constant seems to be close to 1. In this context, the focus should be on developing efficient algorithms that opportunistically exploit local network coding over a wide span of topologies using small XOR combinations, rather than on attempting to solve complex combinatorial problems in order to find the best combinations available. \begin{figure}[tb] \centering \includegraphics[width=0.95\columnwidth]{fig9} \caption{Probability of existence of at least one coding combination of size $C$ in a network of $N$ uniformly thrown nodes.} \label{fig:random_prob} \end{figure} \subsection{A realistic scenario} Next, we set up a realistic experiment to be run in a simulation environment. A relay node is positioned at the origin, willing to forward any traffic required and to apply NC if beneficial. Then, we throw $\frac{N}{2}$ pairs of nodes randomly inside the disk defined by the relay and the communication distance $R$; note that all nodes are valid nodes. Each pair constitutes a symmetric flow. Each flow may be valid or invalid depending on the distance between the two nodes; see the definition in section \ref{sec:model}.
Whenever the flow is invalid, the nodes communicate directly by exchanging two packets over two slots (one for each direction). If the flow is valid, the relay is utilized to form a 2-hop linear network. Again, 2 packets are uploaded towards the relay using two slots, while the downlink part is left for the end of the frame. Finally, the relay has collected a number of packets which may be combined in several ways using NC. To identify the minimum number of slots required to transmit those packets to the intended receivers, we solve the problem of minimum clique partition with the constraint of using cliques of size up to $\frac{m}{2}$ (equivalently, combining up to $m$ packets together). In the above we have assumed that all links have equal transmission rates and that the arrival rates of the flows are all equal (symmetric fair point of operation). The network coding gain is calculated by dividing the number of slots used without NC by the number of slots used with NC. Figure \ref{fig:realistic} depicts results from simulated random experiments. Evidently, it is enough to combine up to two packets at a time in order to enjoy approximately the maximum NC gain. This example supports the intuition that in practice the network coding gain from large combinations is expected to be negligible. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{fig10} \caption{A realistic scenario. Pairs of nodes forming flows are randomly thrown inside the disk $(0,R)$. The relay node (situated at the origin) assists the flows that cannot communicate directly. We restrict NC combinations to size $m$, where $m=2,4,\infty$.} \label{fig:realistic} \end{figure} \section{Conclusion} \label{sec:conclusion} By considering the Boolean connectivity model and applying the basic properties required for correct decoding, we showed that for local network coding there are certain geometric constraints bounding the maximum number of packets that can be encoded together. In particular, due to the convexity of any valid combination, the sizes of combinations are at most of the order of $\sqrt{N}$ for all studied network topologies. The fact that the number of packets is limited gives rise to approximate algorithms for local coding. Instead of attempting to solve the hard problem of calculating all possible coding combinations, we showed that an algorithm considering smaller combinations does not lose too much.
\section{Introduction} \begin{figure*}[!t] \centering \includegraphics[width=0.9\linewidth]{Figure1.pdf} \caption{The coarse-grained annotation is generated to achieve salient instance results by the proposed framework. (b) shows the combination of bounding box and salient region annotations. (c) exhibits the coarse-grained labels for inexact supervised learning. The final result predicted by CGCNet is shown in (d).} \label{pipelinemethod} \vspace{-0.1in} \end{figure*} \IEEEPARstart{S}{alient} object detection (SOD) is known as a classic research field for highlighting the most sensitive and informative regions in a scene \cite{han2015background, liu2010learning, zhao2019egnet}. Originating from the cognitive and psychology research communities, salient object detection is applied to various areas, such as video surveillance \cite{shao2020surveillance}, video summarization \cite{paul2019spatial} and content-aware image editing \cite{GaoDatabase}. With the rapid development of current image acquisition equipment and 5G communication technology, the traditional binary mask of salient object detection is inadequate to meet the needs of high-resolution image segmentation. Albeit salient object detection methods separate salient regions from the background, they do not explore instance-level cues within the salient regions. The next generation of salient object detection methods needs to provide more detailed parsing and identify individual instances in salient regions \cite{li2017instance}. In addition, instance-level salient information is more consistent with human perception and offers better image understanding \cite{hsu2019deepco3}. In this paper, we concentrate on the new and challenging task of salient instance segmentation (SIS) for improving the intelligence level of monitoring systems. Visual saliency has gained significant progress owing to the rapid development of deep convolutional neural networks (CNNs) \cite{tu2021edge, li2015visual, guo2020motion}. Driven by their strong capability of multi-level feature extraction, CNN models are widely used in the computer vision area \cite{tian2019fcos, pinheiro2016learning}, especially for estimating the bounding boxes of salient instances \cite{zhang2016unconstrained}. Different from salient object detection, salient instance segmentation provides more detailed information by labeling each instance with a precise pixel-wise mask and promotes the saliency maps from region-level to instance-level for more detailed analysis. In contrast to instance segmentation, salient instance segmentation only predicts salient instances based on the salient regions. Moreover, segmenting salient instances is class-agnostic compared to the class-specific instance segmentation task. However, CNN-based saliency models usually require pixel-level fully-supervised training data \cite{zhu2020aggregate, wang2021deep}. Up to now, the existing SIS datasets are seriously inadequate and the amount of pixel-wise ground-truths in a single dataset is insufficient. The quality and quantity of pixel-level annotations are the bottleneck because the labeling task is strenuous and time-consuming. To alleviate the effect of lacking fully-supervised data, weakly supervised learning is viewed as an alternative training strategy that is attracting increasing attention. This strategy not only avoids user-intensive labeling, but also allows the models to receive enough training samples.
Inspired by this consideration, in this paper, we aim to integrate the bounding boxes and binary salient regions for training the SIS frameworks. The bounding box annotation contains location information for each salient instance. Meanwhile, salient regions provide salient region information, which is a ready-made source generated from the existing SOD datasets. Both box-level and region-level annotations are inexact for salient instance segmentation \cite{zhou2018brief}. As shown in \figref{pipelinemethod}, the bounding boxes determine the location and number of salient instances, which have been labeled in the DUT-OMRON dataset \cite{yang2013saliency}. We use the bounding box and salient region to assign salient regions to each bounding box of a salient instance. It is essential to combine these two supervision sources because the bounding box annotation lacks the pixel-level labels and the salient region cannot distinguish different salient instances in the coarse-grained labels. To ensure that one instance corresponds to one bounding box and to hold the consistency of salient instances and regions, we also exploit some priors to prevent different object regions from being trapped in the same box. In this case, the network can utilize more training samples with the lowest labeling cost. We will elaborate on the generation steps of the coarse-grained annotations in Section \ref{weaksource}. For segmenting salient instances, we design a cyclic global context SIS network (CGCNet) supervised by the above coarse-grained labels. \figref{figure2} shows the overview of our CGCNet. The proposed model is an end-to-end two-stage SIS framework, which first detects salient proposals and then predicts the pixel-level salient instance masks. When extracting features for salient mask prediction, the performance of a convolutional layer depends heavily on global context. To obtain stronger feature representations, we extend the scope of feature extraction from the local proposal to the global features. Inspired by center-surround contrast derived from the saliency detection mechanism \cite{klein2011center, xia2016bottom, perazzi2012saliency}, a global feature refining module (GFR) is designed to make full use of background features and suppress disturbance from other salient instance features \cite{long2015fully}. Different from the ROIAlign layer that limits the receptive field in Mask R-CNN \cite{he2017mask}, the proposed GFR module is sensitive to global contrast in order to capture more detailed edge information. Moreover, the CGCNet is designed to iteratively update the coarse-grained annotations by using the forward prediction masks combined with a conditional random field (CRF) \cite{krahenbuhl2011efficient}. It is beneficial to refine the coarse-grained annotations sequentially. The input training samples and the corresponding results are shown in \figref{pipelinemethod}. We evaluate the results on the test set of Dataset1K \cite{li2017instance} and show that our method compares favourably against even some fully supervised methods. In summary, the main contributions of this paper are as follows: \begin{itemize} \item We propose a novel inexact supervision salient instance segmentation framework called cyclic global context network (CGCNet), which is supervised by the combination of the region-level bounding boxes and salient regions.
\item We design a global feature refining (GFR) layer that extends the receptive field of each instance to the global context and suppresses the features of other salient instances simultaneously. \item We embed an update scheme in CGCNet that can optimize the coarse-grained labels continuously to improve the accuracy. \end{itemize} The remainder of this paper is organized as follows. Section \uppercase\expandafter{\romannumeral2} presents the related works. Section \uppercase\expandafter{\romannumeral3} describes the architecture and the details of the proposed framework. Section \uppercase\expandafter{\romannumeral4} discusses the experimental settings and comparisons with the {state-of-the-art~} methods. Finally, Section \uppercase\expandafter{\romannumeral5} concludes the paper. \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{Figure2.pdf} \caption{An overview of the proposed framework. The detail of the GFR module is shown in the upper right corner. The coarse-grained annotation updating criterion is illustrated in Section \ref{SISbranch}. At training time, the salient instance results are fed back to update the coarse-grained annotations in the next iteration.} \label{figure2} \vspace{-0.1in} \end{figure*} \section{RELATED WORK} \subsection{Salient Object Detection} Thanks to the fast development of deep learning techniques, salient object detection has gone through a transformation from traditional machine learning to deep CNNs \cite{feng2019attentive}. Driven by the multi-level features extracted by convolutional networks, the performance of SOD models has boosted significantly. Fortunately, rich pixel-level salient datasets can be fed into various CNN models to detect salient regions \cite{feng2019attentive, bai2018sod}. Li {\em et al.~} \cite{li2016deep} proposed a multi-scale deep contrast network to overcome the limitations of overlap and redundancy. Hou {\em et al.~} \cite{hou2017deeply} designed short connections to the skip-layer structures based on the VGGNet for better supervision. Qin {\em et al.~} \cite{qin2019basnet} produced a predict-refine SOD network which is composed of a densely supervised encoder-decoder network and a residual refinement module. Although these SOD methods achieved outstanding performance, the saliency map is viewed as a region-level binary mask, which cannot accomplish instance-level salient object segmentation. \subsection{Salient Instance Segmentation} Proceeding from SOD, salient instance segmentation propels the problem into an instance-level phase. Unlike instance segmentation \cite{lee2020centermask, xie2020polarmask, chen2020blendmask}, salient instance segmentation is category-independent and concentrates on salient regions. Therefore, the frameworks and datasets of instance segmentation are incompatible with segmenting salient instances. Zhang {\em et al.~} \cite{zhang2016unconstrained} generated salient region-level proposals by CNNs and optimized the bounding boxes based on the Maximum a Posteriori principle. The method is the first to raise saliency detection from the region level to the instance level. Subsequently, Li {\em et al.~} \cite{li2017instance} formally proposed the instance-level salient object detection task. They drove the prediction results from proposals to the pixel level, and produced the first SIS dataset containing 1,000 samples. Pei {\em et al.~} \cite{pei2020salient} proposed a multi-task model to predict salient regions and subitizing, and then applied a spectral clustering algorithm to segment salient instances.
Recently, Fan {\em et al.~} \cite{fan2019s4net} proposed an end-to-end single-shot salient instance segmentation framework to segment salient instances. The proposed ROIMasking layer allows more detailed information to be detected accurately, and meanwhile retains the context information around the regions of interest. For this new and challenging task, however, the lack of fully-supervised labels is the main problem limiting the performance of deep learning models. To avoid the high cost of pixel-level annotations, we take advantage of inexact supervision to train our model. \subsection{Weakly Supervised Learning} Most neural networks require full supervision in the form of handcrafted pixel-level masks, which limits their application on large-scale datasets with weaker forms of labeling \cite{zhu2019learning}. To reduce the cost of hand-labelling, weakly supervised learning has attracted a great deal of attention in recent years \cite{bilen2016weakly, cinbis2016weakly, diba2017weakly}. Many weakly supervised principles have been introduced in the machine vision area, including object detection, instance segmentation and saliency detection \cite{tang2018weakly, oh2017exploiting}. Weakly supervised learning reveals that a network intended for one supervision source can resort to another source or to incomplete labels. Li {\em et al.~} \cite{li2018weakly} utilized a coarse activation map from the classification network and saliency maps generated from unsupervised methods as pixel-level annotations to detect salient objects. Zheng {\em et al.~} \cite{zheng2021weakly} took advantage of salient subitizing as the weak supervision to generate the initial saliency maps, and then proposed a saliency updating module (SUM) to refine the saliency maps iteratively. Moreover, Zeng {\em et al.~} \cite{zeng2019multi} incorporated diverse supervision sources to train saliency detection models. They designed three networks that learn from category labels, captions and noisy labels, respectively. Inspired by the above contributions, we build an inexact label which embraces the existing binary salient regions and bounding boxes for better training the SIS network. \section{The CGCNet Architecture} \subsection{Motivation} The motivation of the proposed method is to segment class-agnostic salient instances in the absence of fully-supervised annotations. We aim to utilize sufficient training samples with the lowest labeling cost. Therefore, in this paper, the coarse-grained label is proposed, which is composed of bounding boxes and binary salient regions. On one hand, the salient proposals provide positional information for each salient instance. On the other hand, binary salient regions can provide approximate salient area information for salient instances. Additionally, they can be easily obtained from existing SOD datasets. For training with the coarse-grained labels, we design a cyclic global context neural network (CGCNet) to predict salient instances and update the coarse-grained labels recurrently. \subsection{Overall Framework} As shown in \figref{figure2}, the framework of our proposed CGCNet consists of three main components. First, the RPN head is viewed as a salient proposal detector that detects the bounding boxes of salient instances to capture the location and number of salient instances. Then, the GFR module provides the global feature representation to predict salient masks.
Moreover, the resulting salient instances update the coarse-grained ground-truth, refined by the fully connected CRF operation, for the next iteration. We combine pre-trained ResNet-101 \cite{he2016deep} with FPN \cite{lin2017feature} as the backbone. According to the order of downsampling in ResNet-101, we extract the 4-th stage feature map followed by a 1$\times$1 convolutional layer with the lateral connections in multi-level FPN prediction \cite{he2017mask}. Following FPN, we utilize five levels of feature maps to detect objects of different sizes on different levels to maximize the gains in accuracy. The feature maps produced by the backbone are extracted from the entire input image. Both the salient proposal detector and the salient instance segmentation branch are fed with the 256-channel feature maps. Similar to Faster R-CNN \cite{ren2015faster}, the RPN head is merged into CGCNet for predicting the bounding boxes of each instance in one image. Considering the category-independent characteristic, each ROI feature is assigned to one of two classes, denoted as $B_c(c\in \left\{ 0,1\right\})$. The two classes correspond to the background and the salient object in the foreground. RPN works on the input features and predicts a set of salient proposals. Followed by ROIAlign \cite{he2017mask} and two 1024-D fully connected (FC) layers, the resulting coordinates of salient proposals are generated together with a confidence score of saliency degree. Then, non-maximum suppression (NMS) \cite{neubeck2006efficient} is applied to suppress the negative proposals whose saliency score falls behind the threshold of $0.7$, refining the bounding box of each instance. The output salient proposals are mapped back onto the feature maps produced by the backbone as input to our GFR module. In this phase, the GFR module extends the ROI feature to the global feature. In addition, this layer retains the feature of the current instance while suppressing the other ROI features. The features processed by the GFR module are injected into a pixel-to-pixel fully convolutional block. The fully convolutional fashion preserves the spatial consistency of each pixel involved in the corresponding salient instances. Moreover, taking the resulting salient instances predicted by the SIS branch, the updating scheme is produced to update the coarse-grained ground-truth recurrently in the training phase. In the following subsections, we will describe the SIS branch and the GFR module in detail. \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{Figure3.pdf} \caption{Visualization of the GFR module in the segmentation branch and comparison of our local feature refining module (LFR module) and Mask R-CNN \cite{he2017mask}.} \label{figure3} \end{figure*} \subsection{Inexact Supervision Sources}\label{weaksource} We implement the coarse-grained annotations to handle the problem of lacking sufficient labels for the SIS task. Considering the characteristics of salient instances, it is essential to embrace both the salient region and the number of salient instances. Inspired by the salient object detection and instance segmentation tasks, the coarse-grained labels are composed of salient regions and the bounding boxes of salient instances. To train the proposed CGCNet model, we utilize the SOD dataset with the largest number of labels, DUT-OMRON \cite{yang2013saliency}, which contains about 5,000 salient object labels and the corresponding bounding boxes. We select 4,500 images from the training set of the DUT-OMRON SOD dataset.
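As a rough illustration of how the two supervision sources can be merged, the following NumPy sketch (with hypothetical array shapes and box format; the two refinement priors described next are omitted here) assigns the pixels of a binary salient map to each instance bounding box:
\begin{verbatim}
import numpy as np

def coarse_labels(salient_map, boxes):
    # salient_map: (H, W) binary SOD annotation S
    # boxes: list of (x0, y0, x1, y1) salient-instance bounding boxes W_i
    masks = []
    for (x0, y0, x1, y1) in boxes:
        m = np.zeros_like(salient_map)
        # keep the salient pixels falling inside window W_i
        m[y0:y1, x0:x1] = salient_map[y0:y1, x0:x1]
        masks.append(m)
    return masks
\end{verbatim}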
Despite combining salient regions and bounding boxes, the coarse-grained labels still have some general issues. First, salient regions from different bounding boxes have shared patches. Second, some small instances are enclosed in the bounding boxes of larger instances. To reduce the negative influence of these obstacles, we provide two priors to deal with ambiguous samples. On one hand, we restrict each bounding box to contain only one enclosed salient region. On the other hand, if there are multiple closed areas in one bounding box, we only keep the maximal area as its regression target. Given a binary salient map $S$, the bounding box corresponding to each salient instance is denoted as $W_{i}\ (i=1,2,\dots,n)$. In addition, we denote the patches discarded by the priors in each window as $\varphi_{i}$. The final coarse-grained label $I$ is defined by: \begin{equation} I=\sum_{i=1}^{n}[S(x,y)\cap W_{i}-\varphi_{i} (\hat{x},\hat{y})], \end{equation} where $(x,y)$ denotes the salient region pixels in the image $S$ and $(\hat{x},\hat{y})$ denotes the set of pixels excluded by our priors in each window. $n$ is the number of salient instances in an image. A final example is shown in \figref{pipelinemethod}. \subsection{The Salient Instance Segmentation Branch}\label{SISbranch} The salient instance segmentation branch aims to segment each salient instance by virtue of global cues. By obtaining the ROI features from the RPN head, we can determine the location and number of salient instances. However, the features of each region contain only local spatial information, which is insufficient to segment explicit pixel-level labels. This barrier drives us to explore broader features for segmentation. Inspired by center-surround contrast from the SOD task, we seek to extend the ROI feature to the global feature map. To increase the receptive field while ensuring the resolution of instances, we utilize global features extracted from the backbone instead of the ROI feature. Meanwhile, each feature map produced by the GFR module only contains the feature of the current salient instance proposal and the background, while suppressing the features of other salient instance proposals. \textbf{The GFR module.} The goal of the proposed global feature refining module (GFR) is to obtain global context information and limit the disturbance of other instance features. The ROIAlign module only pays attention to the ROI feature and resizes the original resolution of the ROI \cite{he2017mask}. In S4Net \cite{fan2019s4net}, ROIMasking extends the receptive field and uses the information around the ROI to contrast with the ROI features. Different from ROIMasking, our GFR module expands each ROI directly to the global feature map and maximizes the center-surround contrast for segmenting salient instances. The internal process of the GFR module is shown in the top right corner of \figref{figure2}. Given the feature maps produced by FPN, the GFR module transfers the coordinates of all proposals from different scales of features to the aspect ratio of the original feature map. Taking $F^{(H\times W \times C)}$ as the input feature map, we assume that the number of proposals is {\em n}. To explain the module more conveniently, the number of proposals is set to 3. Let $R_{i}^{(H\times W \times C)}\ (i=1,2,\dots,n)$ be the feature map that includes the features of the $i$-th proposal. To maintain the consistency of resolution between {\em F} and $R_{i}$, global average pooling is used to fill in the background area.
The output of the GFR module, $G_{i}\ (i=1,2,\dots,n)$, is defined by: \begin{equation} G_{i}= F-\sum_{j=1}^{n}R_{j}+R_{i},\,\,\,\,i=1,2,\dots,n \end{equation} Each feature map $G_{i}$ contains the feature of the corresponding proposal and the feature of the background. To suppress the features of the other proposals, our GFR module first digs out all regions of salient proposals in the input feature map and then pastes the corresponding ROI feature back onto {\em F} according to the coordinates of the proposal. This operation also avoids missing the pixels shared by different proposals and preserves the occluded parts. \figref{figure3} visualizes the process of the GFR module and compares it with other similar modules. We also introduce the local feature refining module (LFR). Compared to the GFR module, the LFR module extends the receptive field based on the ROI feature while limiting other salient proposal features, rather than covering the global features. Assuming that the size of a salient proposal is ($H_{r}$, $W_{r}$), the size of the extended bounding box is set to ($H_{r}+h$, $W_{r}+w$), where $h$ and $w$ are $H_{r}/5$ and $W_{r}/5$, respectively. The other settings of the LFR module are the same as those of the GFR module. Additionally, the corresponding process in Mask R-CNN \cite{he2017mask} is exhibited in the top branch of \figref{figure3}. The experimental results demonstrate that the GFR module outperforms the other two modules for the SIS task, which is discussed in detail in Section \ref{Ablation}. After adding our GFR module, each target instance not only contains the features inside the proposal but also takes advantage of the global context information to highlight the instance region. The mask head effectively uses the contrast of foreground and background features to segment salient instances. For each output feature map from the GFR layer, the SIS branch stacks four consecutive convolutional layers followed by a dilated convolutional layer with stride 2 and a ReLU function \cite{nair2010rectified}. All the convolutional layers have a kernel size of 3$\times$3 and stride 1. \textbf{Coarse-grained Annotations Updating Scheme.} Considering that the initial training samples are coarse-grained annotations, we produce an updating scheme to optimize the coarse-grained annotations continuously. The fundamental flaw of the coarse-grained labels is that the boundary information of each instance is not detailed enough, and different instances in one image may overlap and occlude each other. If training only on the original samples, the predicted salient instances would contain some small redundant patches that belong to the background or other instances. To further improve the performance of CGCNet, we insert the fully connected conditional random field (CRF) \cite{krahenbuhl2011efficient} after the salient instance maps in the SIS branch, because the CRF operation is effective at refining the edges of objects. The fully connected CRF model employs the following energy function: \begin{equation} E(M)=-\sum_{i}\log P(m_i )+\sum_{i,j}\varphi_{p}(m_i,m_j), \end{equation} where $M$ presents a binary mask assignment for all pixels, and $P(m_i)$ is the label assignment probability of pixel $i$ belonging to the salient instance.
For each binary salient instance mask, the pairwise potential $\varphi_p(m_i,m_j)$ for two labels $m_i$ and $m_j$ is defined by: \begin{equation}\label{crfcost} \begin{aligned} \varphi _{p}\left( m_{i},m_{j}\right) =\omega _{1}\exp \left( -\dfrac {\left| p_{i}-p_{j}\right| ^{2}}{2\theta ^{2}_{\alpha}}-\dfrac {\left| I_{i}-I_{j}\right| ^{2}}{2\theta ^{2}_{\beta }}\right) +\\ \omega_{2}\exp \left( -\dfrac {\left| p_{i}-p_{j}\right|^{2} }{2\theta ^{2}_{\gamma}}\right), \end{aligned} \end{equation} where the first kernel depends on pixel positions $p$ and pixel intensities $I$. This kernel encourages nearby pixels with similar features to take consistent salient instance labels \cite{li2016deep}. The second kernel is a smoothness kernel which only depends on pixel positions and removes small isolated regions \cite{ShottonTextonBoost}. $\omega_{1}$ and $\omega_{2}$ indicate the weighted values that balance the two parts. The hyper-parameters $\theta_{\alpha}$, $\theta_{\beta}$ and $\theta_{\gamma}$ control the degree of the Gaussian kernels. In this paper, we adopt the publicly available implementation of \cite{krahenbuhl2011efficient} to optimize these parameters. Specifically, we cross-validate the hyper-parameters $\omega_{1}$, $\omega_{2}$, $\theta_{\alpha}$, $\theta_{\beta}$ and $\theta_{\gamma}$ for the best performance of the CRF. The coarse-to-fine scheme is applied on a subset of the validation set (about 100 images) of the DUT-OMRON dataset. The default values of $\omega_{2}$ and $\theta_{\gamma}$ are set to 3 and 1, and the initial search ranges of the parameters are $\omega_{1}\in[1$:$1$:$10]$, $\theta_{\alpha}\in[50$:$5$:$100]$ and $\theta_{\beta}\in[5$:$1$:$15]$. These parameters are fixed through 10 iterations of mean field inference to achieve the best value. In our experiments, the values of $\omega_{1}$, $\omega_{2}$, $\theta_{\alpha}$, $\theta_{\beta}$ and $\theta_{\gamma}$ are set to 4, 3, 70, 13, 1, respectively. We denote the salient instance map as {\em R} and the map processed by the CRF as $R_{f}$. The coarse-grained annotation is labeled as {\em C}. According to Algorithm 1, we propose a strategy based on the KL-Divergence \cite{cornia2018predicting} to update {\em C} for the next iteration. KL-Divergence is defined as a dissimilarity metric, and a lower value indicates a better approximation between the predicted salient instance maps and the ground-truth. Since the ground-truth of CGCNet is noisy, an updating prediction map is expected to have more dissimilar patches with respect to the coarse-grained annotation, i.e., a larger value of KL-Divergence from it. Our strategy compares the prediction maps {\em R} and $R_{f}$ to the coarse-grained annotation {\em C}, which is designed as: \begin{equation}\label{equk1} K_{1}(R,C)=\frac{1}{H\times W}\sum_{i=1}^{H\times W}C_i\log(\frac{C_i}{R_i+\sigma }+\sigma ) \end{equation} \begin{equation}\label{equk2} K_{2}(R_f,C)=\frac{1}{H\times W}\sum_{i=1}^{H\times W}C_i\log(\frac{C_i}{{R_f}_i+\sigma }+\sigma ), \end{equation} where $K_{1}$ and $K_{2}$ denote the mean KL-Divergence values of {\em R} and $R_{f}$ with respect to {\em C}. The index {\em i} refers to the {\em i}-th pixel and $\sigma$ is a regularization constant. In Algorithm 1, $C_{n}$ represents the ground-truth to be used for the next iteration. We set $\varphi$ as the threshold that determines whether to keep the existing coarse-grained annotation $C$. The value of $\varphi$ is set to 0.05.
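A minimal sketch of this decision rule (assuming {\em R}, $R_f$ and {\em C} are arrays of the same shape; the value of $\sigma$ here is chosen only for illustration) reads:
\begin{verbatim}
import numpy as np

def kl_term(pred, coarse, sigma=1e-4):
    # mean KL-style divergence of a prediction map from the coarse annotation
    return np.mean(coarse * np.log(coarse / (pred + sigma) + sigma))

def update_annotation(R, R_f, C, phi=0.05):
    # Algorithm 1: keep C if the refined map drifts too far from it
    if kl_term(R_f, C) - kl_term(R, C) >= phi:
        return C        # too dissimilar: keep the current annotation
    return R_f          # otherwise adopt the CRF-refined prediction
\end{verbatim}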
This strategy can eliminate redundant replacements and alleviate the impact of excessive erosion by the CRF on the prediction map. By applying the updating scheme to inexact supervised learning, the network achieves more accurate results in the training phase. \begin{algorithm} \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \caption{Coarse-grained annotations updating} \label{alg:1} \begin{algorithmic}[1] \REQUIRE Coarse-grained annotation {\em C}, salient instance map {\em R} and salient instance map with CRF $R_{f}$. \ENSURE The updated coarse-grained annotation $C_{n}$ \STATE \textbf{if} $K_{2}(R_f,C)-K_{1}(R,C)\geq \varphi$ \STATE \textbf{then} $C_{n}=C$ \STATE \textbf{else} $C_{n}=R_{f}$ \STATE \textbf{end if} \end{algorithmic} \end{algorithm} \textbf{Loss Function.} The proposed CGCNet needs to train the salient proposal branch and the SIS branch simultaneously. Therefore, we use ground-truth proposals to supervise the RPN head and the pixel-level coarse-grained labels to train the SIS branch. The loss function of CGCNet is defined in a two-stage fashion: \begin{equation} L= L_{bb}+L_{seg}+L_{upd} \end{equation} where the $L_{bb}$ function includes a classification loss, which is the log loss over the two classes (salient or background), and a bounding box loss, which is similar to $L_{loc}$ in Fast R-CNN \cite{girshick2015fast}. The SIS branch loss $L_{seg}$ is defined by the cross-entropy loss, which is given by: \begin{equation} L_{seg}=-\frac{1}{N}\sum_{i=1}^{N}(g_{i}\log p_{i}+(1-g_{i})\log(1-p_{i})) \end{equation} where $p_{i}$ denotes the probability of pixel $i$ belonging to class $c\in\{0,1\}$, and $g_{i}$ indicates the ground truth label for pixel $i$. Inspired by the updating criterion from \equref{equk1} and \equref{equk2}, the loss function $L_{upd}$ for updating the SIS branch for pixel-level salient instance prediction is: \begin{equation} L_{upd}=K_{2}(R_f,C)-K_{1}(R,C) \end{equation} In the training phase, the weights of the backbone are frozen. The entire procedure is repeated iteratively for training. \section{Experimental Results} In this section, we elaborate on the results of the proposed CGCNet framework for the SIS task in detail. We perform ablation experiments on various components of our approach. Besides, we use different metrics to compare with the experimental results of other {state-of-the-art~} methods. Since the proposed method accomplishes the SIS task by inexact supervised learning, we maintain maximum fairness in the comparisons. \subsection{Implementation Details} As described in the section above, the end-to-end CGCNet is trained with our inexact labels built from 4,500 images selected from the DUT-OMRON dataset \cite{yang2013saliency}, excluding ambiguous samples. During training, the salient bounding box ground-truths are used to supervise the salient proposal detector and are combined with SOD annotations to train the mask branch. Meanwhile, we utilize 500 images of the same form as the training data for validation. For training salient proposals, a bounding box is considered a positive sample if its IoU is above 0.7 and a negative sample if it is below 0.3. In addition, the NMS threshold used on the proposal detector is set to 0.7. At inference time, we only use the 300 images from the testing set of the dataset proposed in \cite{li2017instance}, due to the shortage of datasets. We feed the top 80 scoring proposals from the proposal prediction branch, after applying NMS, into the GFR module.
Additionally, at inference the SIS branch directly outputs the resulting images without the updating scheme. Our proposed framework is implemented in the PyTorch framework on 2 NVIDIA GeForce GTX 1080Ti GPUs with 22 GB of memory. To speed up training convergence, we initialize the CGCNet with a model pre-trained on the ImageNet dataset \cite{deng2009imagenet} from Mask R-CNN \cite{he2017mask}. The CGCNet is fine-tuned by flipping the training sets horizontally with a probability of 0.5. In our experiments, we train our network with a learning rate of 0.0025, which is decreased by a factor of 10 at the 8K-th iteration. The training process iterates 16K times in total with a batch size of 4. The weight decay is empirically set to 0.0001 and the momentum is 0.9. \begin{table}[!t] \centering \scriptsize \renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{6mm} \caption{Comparison of different backbones used in the CGCNet on the DUT-OMRON validation set. In this experiment, we keep the rest of the framework unchanged.}\label{Table1} \begin{tabular}{cccc} \toprule Backbone & AP & AP$^{r}$0.5 & AP$^{r}$0.7 \\ \hline VGG16 \cite{simonyan2014very} & 50.79 & 79.28 & 60.38 \\ ResNet-50 \cite{he2016deep} & 57.13 & 85.6 & 71.02 \\ ResNet-101 \cite{he2016deep} & 57.69 & 86.04 & 71.72 \\ ResNeXt-101 \cite{xie2017aggregated} & \textbf{58.28} & \textbf{86.91} & \textbf{72.69} \\ \bottomrule \end{tabular} \end{table} \subsection{Evaluation Metrics} As a brand new task, salient instance segmentation has few evaluation metrics to measure its performance quantitatively. Different from SOD and instance segmentation, the SIS task distinguishes pixel-level instances based on salient regions without classification. Therefore, we adopt the $AP$ metric to calculate the average of the maximum precision values at IoU scores of 0.5 and 0.7 instead of the mAP metric \cite{hariharan2014simultaneous}. The precision value of one image is computed as the number of correctly predicted salient instances (IoU $>$0.5 or 0.7) divided by the real number of salient instances in the image. The $AP^{r}$ metric is then defined as the sum of the precision values divided by the number of images in the testing set, which is formulated as: \begin{equation} AP^{r}\alpha =\dfrac {1}{N}\sum _{j}\dfrac {1}{n}\sum _{i}precision,\,\,\,\,IoU(i)\geq \alpha \end{equation} \begin{equation} precision=\begin{cases}1,\,\,\,\,if\,\,IoU(i)\geq \alpha \\ 0,\,\,\,\,if\,\,IoU(i)< \alpha \end{cases}, \end{equation} where $\alpha$ is the threshold of IoU, $N$ is the number of images in the testing set and $n$ is the number of salient instances in one image. Moreover, the $AP$ metric is used to measure the effectiveness of salient instance segmentation according to the $AP^{r}$ metric. This metric averages the $AP^{r}$ values with the threshold of IoU ranging from 0.5 to 0.95 in steps of 0.05, which is calculated by: \begin{equation} AP=\frac{1}{10}\sum _{\alpha }AP^{r}\alpha ,\,\,\,\,\alpha=0.5,0.55,\dots,0.95 \end{equation} Compared with the $AP^{r}$ metric, the $AP$ value is adopted to measure the overall performance of SIS methods. In this section, the experimental results are evaluated mainly based on the above-mentioned two metrics. \begin{table}[!t] \centering \scriptsize \renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{2mm} \caption{Ablation study for different modules in the SIS branch.
The experiment is evaluated on the DUT-OMRON validation set.}\label{Table2} \begin{tabular}{ccccc} \toprule Modules & LFR module & GFR module & ROIAlign \cite{he2017mask} & ROIMasking \cite{fan2019s4net} \\ \hline AP$^{r}$0.5 & 85.45 & \textbf{86.04} & 85.25 & 85.73 \\ AP$^{r}$0.7 & 70.2 & \textbf{71.72} & 70.28 & 70.46 \\ \bottomrule \end{tabular} \end{table} \subsection{Ablation Studies}\label{Ablation} We analyze the effectiveness of the proposed CGCNet on the DUT-OMRON validation set \cite{yang2013saliency}. The ablation studies contain four parts: the performance of four different backbones, the performance of the GFR module versus three related structures, the hyper-parameter of the updating scheme and the contributions of each component of our framework. \textbf{Backbone:} To ensure fairness and to isolate the effects of the different backbones on the experimental results, we verify various backbones working on CGCNet under the same settings. \tabref{Table1} shows the effectiveness of these base models working on the framework. It demonstrates that the ResNeXt-101 backbone achieves the best performance on both the $AP$ and $AP^{r}$ metrics \cite{xie2017aggregated}. The widely used ResNet-101 also achieves good results, slightly behind ResNeXt-101. Due to the insufficient depth of the network, VGGNet obtains relatively low accuracy, but is slightly faster than ResNet \cite{simonyan2014very}. \textbf{The GFR Module:} The proposed GFR module is viewed as the core layer in the SIS branch to refine features. In this section, we evaluate feature refining layers containing local and global cues, respectively. \tabref{Table2} lists the performance of the LFR module and the GFR module. Meanwhile, we also compare similar methods embedded in the segmentation branch based on CGCNet, including ROIAlign in Mask R-CNN \cite{he2017mask} and ROIMasking in S4Net \cite{fan2019s4net}. As shown in \tabref{Table2}, the experimental results based on the GFR module outperform those of the other modules. ROIAlign only concentrates on the ROI features. Albeit the LFR module extends the scale of features around the ROI, it still lags slightly behind ROIMasking by reason of the latter's ternary masking. This indicates that the treatment of refining features plays an important role in segmenting salient instances. Finally, we adopt the GFR module embedded in our framework. \textbf{Hyper-parameter in the updating scheme:} The threshold $\varphi$ of the updating scheme is essential for the quality of the inexact supervised annotations used to train our framework. In our experiment, we search for the appropriate threshold to ensure efficiency at training time. According to the formulation of the KL-Divergence \cite{krahenbuhl2011efficient}, we empirically provide several default values for determining its influence in this experiment, as shown in \tabref{Table3}. The performance is relatively stable across the different values of $\varphi$. The best result is obtained when $\varphi$ is set to 0.05, which balances the quantity and quality of replacements. \begin{figure*}[!t] \centering \includegraphics[width=0.8\linewidth]{Figure4.pdf} \caption{Qualitative analysis of experimental results by the proposed method and S4Net \cite{fan2019s4net}.} \label{figure4} \end{figure*} \begin{table}[!t] \centering \scriptsize \renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{4mm} \caption{Performance of CGCNet with different thresholds $\varphi$ of the updating scheme.
The highest scores in each row are labeled in bold.}\label{Table3} \begin{tabular}{cccccc} \toprule $\varphi$ & 0.01 & 0.05 & 0.1 & 0.15 & 0.2 \\ \hline AP$^{r}$0.5 & 85.89 & \textbf{86.04} & 84.85 & 84.66 & 84.13 \\ AP$^{r}$0.7 & 71.34 & \textbf{71.72} & 71.16 & 70.68 & 70.19 \\ \bottomrule \end{tabular} \end{table} \textbf{The components in CGCNet:} We conducted extensive experiments to discover the contributions of each innovative module under the same settings. These parts of CGCNet include the prior criteria (standardized coarse-grained labels), the updating scheme and the GFR module. As shown in \tabref{Table4}, the various parts of our framework contribute to different degrees to segmenting salient instances. In particular, the updating scheme contributes the most, improving the $AP$ metric by about 2 percent compared to omitting it. This can be attributed to the insertion of the CRF and the revision of the coarse-grained annotations at training time. With the help of the prior criteria, the performance significantly improves in terms of the $AP^{r}$0.5 and $AP^{r}$0.7 metrics. Overall, each module makes an indispensable contribution to the entire framework. \begin{table}[!t] \centering \scriptsize \renewcommand{\arraystretch}{1.3} \renewcommand{\tabcolsep}{3mm} \caption{Ablation analysis of the effects of various components of our model on the SIS task. PC, GFR and US denote the prior criteria, the GFR module and the updating scheme, respectively. The experiment is evaluated on the DUT-OMRON validation set.}\label{Table4} \begin{tabular}{llll} \toprule Models & AP & AP$^{r}$0.5 & AP$^{r}$0.7 \\ \hline The basic model & 53.93 & 83.44 & 66.86 \\ The basic model + PC & 54.67 & 85.81 & 68.43 \\ The basic model + PC + GFR & 55.84 & 85.15 & 70.54 \\ The basic model + PC + GFR + US & \textbf{57.69} & \textbf{86.04} & \textbf{71.72} \\ \bottomrule \end{tabular} \end{table} \begin{table}[!t] \centering \scriptsize \renewcommand{\arraystretch}{2.0} \renewcommand{\tabcolsep}{2.9mm} \caption{Quantitative comparisons with existing methods trained on our inexact labels and on Dataset1K \cite{li2017instance}, respectively. The results are evaluated on the test set of Dataset1K \cite{li2017instance}. For a fair comparison, both our method and S4Net \cite{fan2019s4net} use ResNet-50 as the backbone. We keep the rest of the framework in line. '-' indicates an unavailable value.}\label{Table5} \begin{tabular}{c|c|c|c|c} \hline Method & Training Set & AP & AP$^{r}$0.5 & AP$^{r}$0.7 \\ \hline S4Net \cite{fan2019s4net} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}DUT-OMRON\\ (Inexact labels)\end{tabular}} & 50.9 & 84.9 & 60.8 \\ \cline{1-1} \cline{3-5} CGCNet (Ours) & & \textbf{58.3} & \textbf{88.4} & \textbf{71.0} \\ \hline MSRNet \cite{li2017instance} & \multirow{4}{*}{Dataset1K \cite{li2017instance}} & - & 65.3 & 52.3 \\ \cline{1-1} \cline{3-5} SCNet \cite{pei2020salient} & & 56.8 & 84.6 & 67.4 \\ \cline{1-1} \cline{3-5} S4Net \cite{fan2019s4net} & & 52.3 & \textbf{86.7} & 63.6 \\ \cline{1-1} \cline{3-5} CGCNet (Ours) & & \textbf{57.1} & 85.8 & \textbf{69.0} \\ \hline \end{tabular} \end{table} \begin{figure*}[!t] \centering \includegraphics[width=\linewidth]{Figure5.pdf} \caption{The attribute-based performance of the CGCNet on the instance-level SOC test set. The left histogram shows the accuracy in terms of the $AP$ metric.
The histograms in the middle and on the right show the accuracy of the $AP^{r}$0.5 and $AP^{r}$0.7 metrics under nine attributes.} \label{figure5} \vspace{-0.1in} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=0.8\linewidth]{Figure6.pdf} \caption{Representative experimental results for each attribute produced by S4Net and the proposed method. Both frameworks are fine-tuned on the Dataset1K training set \cite{li2017instance} and tested on the SOC test set \cite{fan2018SOC}. We select the most representative sample in each attribute-based test subset. Each row displays one attribute. We keep the settings of the two frameworks identical.} \label{figure6} \end{figure*} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{Figure7.pdf} \caption{Examples of failure modes generated by our method. Samples are selected from the Dataset1K test set \cite{li2017instance}.} \label{failureexamples} \vspace{-0.1in} \end{figure} \subsection{Comparison with State-of-the-art Methods} There are three existing methods related to the salient instance segmentation task: MSRNet \cite{li2017instance}, S4Net \cite{fan2019s4net} and SCNet \cite{pei2020salient}. In contrast to these previous works, we are the first to employ inexact supervised learning for this new and challenging task. All methods are evaluated on the test set of Dataset1K \cite{li2017instance} and on the SOC dataset \cite{fan2018SOC}, respectively. For a fair comparison, we compare the existing salient instance segmentation methods qualitatively and quantitatively on these two datasets, the only ones available for the task. \textbf{Evaluation on Dataset1K:} Dataset1K \cite{li2017instance} is the first salient instance dataset; it contains 500 images for training, 200 images for validation and 300 images for testing. Considering that all existing methods are fully supervised while our method is supervised by inexact labels, we train all methods on the training set of Dataset1K and on our coarse-grained annotations of the DUT-OMRON dataset, respectively. Then, we evaluate these models on the test set of Dataset1K \cite{li2017instance}. Since our inexact labels are not applicable to MSRNet and SCNet, we compare only with S4Net when training on inexact labels. The proposed CGCNet uses ResNet-50 as the backbone to match S4Net, and the other settings are likewise kept consistent for fairness. \tabref{Table5} lists the values of the $AP$, $AP^{r}$0.5 and $AP^{r}$0.7 metrics achieved with the different training sets. Because the code of \cite{li2017instance} is not available, we cannot obtain its complete results. When training on the inexact labels, our method achieves the best results among all the compared methods. As an inexact supervised method, CGCNet raises the $AP$ metric to the highest value of 58.3\%. Additionally, we also report the results of our framework and of other fully supervised methods trained on the training set of Dataset1K \cite{li2017instance}. As shown in the bottom part of \tabref{Table5}, the $AP$ value achieved by our CGCNet also outperforms SCNet and S4Net. While its $AP^{r}$0.5 value is slightly lower than that of S4Net, our framework demonstrates its robustness whether trained on inexact labels or not. We also qualitatively analyze the experimental results produced by CGCNet and S4Net. \figref{figure4} displays some results from the test set of Dataset1K \cite{li2017instance}. It shows that our method produces high-quality results that are very close to the ground truth.
The first two input images contain two instances each, which have similar internal features and relatively simple backgrounds. Our method can easily segment the salient instances from the background. The middle images in \figref{figure4} contain multiple instances that lie close together. Our model can still predict the number of instances accurately and segment them effectively. The last two samples have chaotic backgrounds, and the internal features of the salient instances are also very messy. Even in this complex case, CGCNet distinguishes the occluded instances satisfactorily. In comparison, S4Net determines the number of salient instances inaccurately in some cases. The antepenultimate sample demonstrates that S4Net is insensitive to smaller salient instances. In addition, our method is better than S4Net at smoothing the edges of salient instances. This indicates that the lack of fully supervised data limits the performance of S4Net. By and large, the proposed framework segments salient instances with high accuracy and robustness. \textbf{Evaluation on the SOC}: Recently, Fan {\em et al.~} \cite{fan2018SOC} introduced the Salient Object in Clutter (SOC) dataset, which contains both binary masks and instance-level salient ground truth. Since the dataset labels salient instances in clutter, its images are relatively difficult, and the experimental results are therefore lower than on other datasets. In this experiment, we analyze the proposed CGCNet in terms of image attributes on the test set of the SOC dataset. The instance-level test set is divided into nine attributes: Appearance Change (AC), Big Object (BO), Clutter (CL), Heterogeneous Object (HO), Motion Blur (MB), Occlusion (OC), Out-of-View (OV), Shape Complexity (SC) and Small Object (SO) \cite{fan2018SOC}. We compare the experimental results with S4Net attribute by attribute. For a fair comparison, both methods are trained on the Dataset1K training set \cite{li2017instance} and then tested directly on the SOC test set. The histograms in \figref{figure5} show the performance of CGCNet and S4Net on the different attribute test subsets. Although the two methods achieve similar scores in terms of the $AP^{r}$0.5 metric, CGCNet performs significantly better in $AP$. This can be attributed to the better suppression of complex backgrounds by the GFR module. The right histogram demonstrates that the proposed method generalizes better to images with different attributes. Moreover, our framework excels at images containing heterogeneous objects (HO) compared to the other attributes. Thanks to the global features of the GFR module, CGCNet also processes images with the AC attribute effectively. The $AP$ value for the OC attribute is the lowest because the occluded part of an object is difficult to detect. Overall, our method is robust when processing images with different attributes. \figref{figure6} exhibits some typical results generated by S4Net and our framework for the different attributes. Compared to Dataset1K, the SOC test set contains a greater variety of images with more complex backgrounds. Our method also performs well on the SOC dataset compared with S4Net. For example, the sample in the first row exhibits an obvious illumination change in the salient instance area combined with a messy background; the proposed method can still easily extract the salient instances from the background.
The Clutter-based (CL) image has several small salient instances, and the foreground and background regions around the instances have similar colors. The proposed CGCNet can still accurately locate each instance and segment it out. Referring to the last two rows of \figref{figure6}, salient instances in images with the SC and SO attributes have complex boundaries and are relatively small. Although it is not easy to split the slender legs of the giraffe, the overall result is satisfying. \textbf{Limitations}: \figref{failureexamples} displays some typical failure cases. As the first row shows, our method is insensitive to tenuous local features. Owing to the two-stage framework, the number of proposals is not suppressed effectively in the second row; this tends to produce more predicted salient instances than the ground truth contains. The third row shows that boundary details deteriorate when two salient instances overlap. This is because the inexact annotations consist of bounding boxes and salient regions, which can cause the edge of a salient instance to collapse onto the edge of its box. The bottom two cases demonstrate that our approach can fail to predict the salient regions; this problem is very common in saliency detection tasks. In general, however, training on coarse-grained labels with the proposed CGCNet remains beneficial. \section{Conclusion} In this paper, we propose an end-to-end cyclic global context neural network (CGCNet) for salient instance segmentation. Due to the lack of a dataset for this new and challenging task, we use inexact supervised learning to train our framework. More importantly, equipped with the GFR module and the updating scheme, CGCNet shows excellent performance for salient instance segmentation and compares favorably even against some fully supervised methods. Because of its dependence on NMS post-processing, the framework sometimes predicts the number of salient instances inaccurately. In future work, we will attempt to exploit a one-stage single network and further improve the effectiveness of the framework for application to video surveillance. \section*{Acknowledgments} This research was supported by the National Natural Science Foundation of China Grant 61902139. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} We study certain numerical approximations of the eigenspace associated to a cluster of eigenvalues of a reaction-diffusion operator, namely the unbounded operator $A=-\Delta -\nu$ in $L^2(\varOmega)$ with Dirichlet boundary conditions, whose form domain is $H^1_0(\varOmega)$. Here $\nu\in L^\infty(\varOmega)$ and $\varOmega\subset \mathbb{R}^n$ is an open bounded set with Lipschitz boundary. The eigenvalue cluster of interest is assumed to be contained inside a finite contour $\Gamma$ in the complex plane $\mathbb{C}$. The \revv{computational} technique is the FEAST algorithm~\cite{Poliz09}, which is now well known as a subspace iteration, applied to an approximation of an operator-valued contour integral over $\Gamma$. This technique requires one to approximate the resolvent function $z\mapsto R(z)=(z-A)^{-1}$ at a few points along the contour. The specific focus of this paper is the discretization error in the final spectral approximations when the discontinuous Petrov Galerkin (DPG) method~\cite{DemkoGopal11} is used to approximate the resolvent. Contour integral methods \cite{Beyn2012, Poliz09,GuttePolizTang15, SakurSugiu03}, such as FEAST, have been gaining popularity in numerical linear algebra. When used as an algorithm for matrix eigenvalues, discretization errors are irrelevant, which explains the dearth of studies on discretization errors within such algorithms. However, in this paper, like in~\cite{GopalGrubiOvall18, HorniTowns19}, we are interested in the eigenvalues of a partial differential operator on an infinite-dimensional space. In these cases, practical computations can proceed only after discretizing the resolvent of the partial differential operator by some numerical strategy, such as the finite element method. We specifically focus on the DPG method, a least-squares type of finite element method. One of our motivations for considering the DPG discretization is that it allows us to approximate $R(z)$ by solving a sparse Hermitian positive definite system (even when $z - A$ is indefinite) using efficient iterative solvers. Another practical reason is that it offers a built-in (a posteriori) error estimator in the resolvent approximation \revv{(see~\cite{CarstDemkoGopal14})}, thus immediately suggesting a straightforward algorithmic avenue for eigenspace error control. The exploitation of these advantages, including the design of preconditioners and adaptive algorithms, is postponed to future work. The focus of this paper is limited to obtaining {\it a priori} error bounds and convergence rates for the computed eigenspace and accompanying Ritz values. According to~\cite{GopalGrubiOvall18}, bounds on spectral errors can be obtained from bounds on the approximation of the resolvent $z \mapsto R(z)$. This function maps complex numbers to bounded operators. In~\cite{GopalGrubiOvall18}, certain finite-rank computable approximations to $R(z)$, denoted by $R_h(z)$, were considered and certain abstract sufficient conditions were laid out for bounding the resulting spectral errors. (Here $h$ represents some discretization parameter like the mesh size.) This framework is summarized in Section~\ref{AF}. Our approach to the analysis in this paper is to verify the conditions of this abstract framework when $R_h(z)$ is obtained using the DPG discretization. One of our applications of interest is the fast and accurate computation of the guided modes of optical fibers.
In \revv{the} design and optimization of new optical fibers, such as the emerging microstructured fibers, one often needs to compute such modes many hundreds of times for varying parameters. FEAST appears to offer a well-suited method for this purpose. The Helmholtz operator arising from the fiber eigenproblem is of the above-mentioned type (wherein $\nu$ is related to the fiber's refractive index). In Section~\ref{fib}, we will show the efficacy of the FEAST algorithm, combined with the DPG resolvent discretization, by computing the modes of a commercially marketed step-index fiber. The outline of the paper is as follows. In Section~\ref{AF} we present the abstract theory from~\cite{GopalGrubiOvall18} pertaining to FEAST iterations using discretized resolvents of unbounded operators. In Section~\ref{DPG} we derive new estimates for discretizations of a resolvent by the DPG method. In Section~\ref{NS} we present benchmark results on problems with well-known solutions, which serve as a validation of the method. Finally, in Section~\ref{fib} we apply the method to compute the modes of a ytterbium-doped optical fiber. \section{The abstract framework}\label{AF} In this section, we summarize the abstract framework of \cite{GopalGrubiOvall18} for analyzing spectral discretization errors of the FEAST algorithm when applied to general selfadjoint operators. Accordingly, in this section, $A$ is not restricted to the reaction-diffusion operator mentioned in Section~\ref{sec:introduction}. Here we let $A$ be a linear, closed, selfadjoint (possibly unbounded) operator $A: \mathop{\mathrm{dom}}(A) \subseteq {\mathcal H} \to{\mathcal H}$ in a complex Hilbert space ${\mathcal H}$, whose real spectrum is denoted by $\Sigma(A)$. We are interested in approximating a subset $\Lambda\subset \Sigma(A)$ that consists of a finite collection of eigenvalues of finite multiplicity, as well as its associated eigenspace $E$ (the span of all the eigenvectors associated with elements of $\Lambda$). The FEAST iteration uses a rational function \begin{align}\label{ContourQuadrature} r_N(\xi)=w_N + \sum_{k=0}^{N-1} w_k(z_k-\xi)^{-1}~. \end{align} Here the choices of $w_k, z_k \in \mathbb{C}$ are typically motivated by quadrature approximations of the Dunford-Taylor integral \begin{equation} \label{eq:DunfodTaylor} \revv{S = \frac{1}{2 \pi \mathrm{i}}\oint_\Gamma R(z)\,dz,} \end{equation} where $R(z) = (z - A)^{-1}$ denotes the resolvent of $A$ at $z$. Above, $\Gamma$ is a positively oriented, simple, closed contour that encloses $\Lambda$ and excludes $\Sigma(A)\setminus\Lambda$, so that $S$ is the exact spectral projector onto $E$. Define \[ S_N = r_N(A) = w_N + \sum_{k=0}^{N-1}w_k R(z_k). \] More details on examples of $r_N$ and their properties can be found in~\cite{GuttePolizTang15, Poliz09}. We are particularly interested in a further approximation of $S_N$ given by \begin{align}\label{eq:SNh} S^h_N&=w_N + \sum_{k=0}^{N-1}w_k R_h(z_k). \end{align} Here $R_h(z):{\mathcal H} \to{\mathcal V}_h$ is a finite-rank \revv{approximation of} the resolvent $R(z)$, ${\mathcal V}_h$ is a finite-dimensional subspace of a \revv{complex Hilbert} space~${\mathcal V}$ embedded in ${\mathcal H}$, and $h$ is a parameter inversely related to $\dim({\mathcal V}_h)$ such as a mesh size parameter. Note that there is no requirement that these resolvent approximations are such that $S_N^h$ is selfadjoint.
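To make the rational filter concrete, the following minimal Python sketch (our own illustration, not an excerpt from the implementation of~\cite{Gopal17}; the parameter values are merely illustrative, although the same circular-contour construction reappears in Section~\ref{NS}) builds trapezoidal nodes and weights on a circular contour and evaluates $|r_N|$ on the real line:
\begin{verbatim}
import numpy as np

# Nodes z_k and weights w_k of the rational filter obtained by applying
# the trapezoidal rule to the Dunford-Taylor integral on a circular
# contour of radius gamma centered at y (here with w_N = 0).
def circular_filter(y, gamma, N=8):
    theta = 2*np.pi*np.arange(N)/N + np.pi/N
    z = y + gamma*np.exp(1j*theta)    # quadrature nodes on Gamma
    w = gamma*np.exp(1j*theta)/N      # quadrature weights
    return z, w

def r_N(x, z, w):
    """Evaluate r_N(x) = sum_k w_k/(z_k - x) at real points x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return (w / (z - x[:, None])).sum(axis=1)

z, w = circular_filter(y=20.0, gamma=45.0)
print(np.abs(r_N([0.0, 20.0, 100.0, 200.0], z, w)))
# close to 1 well inside the contour, small far outside it
\end{verbatim}
The rapid decay of $|r_N|$ on the real line away from the enclosed interval is what drives the subspace iteration, and it is quantified by the spectral separation assumption formulated below.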
As we shall see later (see Remark~\ref{rem:nonselfadj}), the $S_N^h$ generated by the DPG approximation of the resolvent is indeed not generally selfadjoint. We consider a version of the FEAST iterations that use the above approximations. Namely, starting with a subspace $\Eh 0 \subseteq {\mathcal V}_h$, compute \begin{equation} \label{eq:1} \Eh\ell = S^h_N \Eh {\ell -1}, \qquad \text{ for } \ell=1,2,\ldots. \end{equation} If $A$ is a selfadjoint operator on a finite-dimensional space ${\mathcal H}$ (such as the one given by a Hermitian matrix), then one may directly use $S_N$ instead of $S_N^h$ in~\eqref{eq:1}. This case is the well-studied FEAST iteration for Hermitian matrices, which can approximate spectral clusters of $A$ that are strictly separated from the remainder of the spectrum. In our abstract framework for discretization error analysis, we place a similar separation assumption on the exact undiscretized spectral parts $\Lambda $ and $\Sigma(A) \setminus \Lambda$. Consider the strictly separated sets $I_\gamma^y = \{ x \in \mathbb{R}: | x- y| \le \gamma\}$ and $O_{\delta,\gamma}^y = \{ x \in \mathbb{R}: |x - y| \ge (1+\delta)\gamma\}$, for some $y\in\mathbb{R}$, $\delta>0$ and $\gamma>0$. Using these sets and the quantities \begin{align}\label{ContractionFactor2} W=\sum_{k=0}^{N}|w_k|,\quad\quad \hat\kappa= \frac{ \displaystyle\sup_{x \in O_{\delta,\gamma}^y} |r_N(x)|} { \displaystyle \inf_{x \in I_\gamma^y}|r_N(x)|}, \end{align} we formulate \revv{a spectral separation assumption below.} \begin{assumption} \label{asm:rN} There are $y\in\mathbb{R}$, $\delta>0$ and $\gamma>0$ such that \begin{align}\label{SpectralGap} \Lambda\subset I_\gamma^y, \qquad \Sigma(A)\setminus\Lambda \subset O_{\delta,\gamma}^y, \end{align} and that $r_N$ is a rational function of the form~\eqref{ContourQuadrature} with the following properties: \[ z_k \notin\overline{\Sigma(A)},\quad W<\infty, \quad \revv{\text{and}} \quad \hat \kappa <1. \] \end{assumption} \begin{assumption} \label{asm:Vshort} The Hilbert space ${\mathcal V}$ is such that $E \subseteq {\mathcal V} \subseteq {\mathcal H}$, there is a $C_{\mathcal V}>0 $ such that for all $u \in {\mathcal V}$, $ \| u \|_{\mathcal H} \le C_{\mathcal V} \| u \|_{\mathcal V}$, and ${\mathcal V}$ is an invariant subspace of $R(z)$ \revv{for all $z$ in the resolvent set of $A$, i.e., $R(z) {\mathcal V} \subseteq {\mathcal V}$.} (We allow ${\mathcal V} = {\mathcal H}$, \revv{and further examples where ${\mathcal V} \ne {\mathcal H}$ can be found in \cite[\S2]{GopalGrubiOvall18}.)} \end{assumption} \begin{assumption} \label{asm:Rlim} The operators $R_h(z_k)$ and $R(z_k)$ are bounded in ${\mathcal V}$ and satisfy \begin{equation} \label{eq:Rh-R} \lim_{h \to 0}\; \revv{\max_{k=0,\ldots, {N-1}}} \| R_h(z_k) - R(z_k) \|_{{\mathcal V}} = 0. \end{equation} \end{assumption} \begin{assumption} \label{asm:Vh_in_dom(a)} Assume that ${\mathcal V}_h$ is contained in $\mathop{\mathrm{dom}}(a)$, where $a(\cdot, \cdot)$ denotes the symmetric (possibly unbounded) sesquilinear form associated to the operator $A$ (as described in, say, \cite[\S10.2]{Schmu12} or \cite[\S5]{GopalGrubiOvall18}). \end{assumption} Various examples of situations where one or more of these assumptions hold can be found in \cite{GopalGrubiOvall18}. Next, we proceed to describe the main consequences of these assumptions that are of interest here. Let $\Lambda = \{ \lambda_1, \ldots, \lambda_m\}$, counting multiplicities, so that $m = \dim(E)$.
By the strict separation of Assumption~\ref{asm:rN}, we can find a curve $\Theta$ that encloses $\mu_i = r_N(\lambda_i)$ and no other eigenvalues of $S_N$. By Assumption~\ref{asm:Rlim}, $S_N^h$ converges to $S_N$ in norm, so for sufficiently small $h$, the integral \[ P_h = \frac{1}{2 \pi \mathrm{i}} \oint_\Theta (z - S_N^h)^{-1} \;dz \] is well defined and equals the spectral projector of $S_N^h$ associated with the contour $\Theta$. Let $E_h$ denote the range of $P_h$. Now, let us turn to the \revv{iteration~\eqref{eq:1}. We shall} tacitly assume throughout this paper that $\Eh 0 \subseteq {\mathcal V}_h$ is chosen so that $\dim \Eh 0 = \dim (P_h \Eh 0) = m$. In practice, this is not restrictive: we usually start with a larger than necessary $\Eh 0$ and truncate it to dimension $m$ as the iteration progresses. In order to describe convergence of spaces, we need to measure the distance between two linear subspaces $M$ and $L$ of ${\mathcal V}$. For this, we use the standard notion of gap \cite{Kato95} defined by \begin{equation} \label{eq:gapV} \gap_{\mathcal V}( M, L ) = \max \left[ \sup_{m \in U_M^{\mathcal V}} \dist{{\mathcal V}} ( m, L), \; \sup_{l \in U_L^{\mathcal V}} \dist{{\mathcal V}} ( l, M) \right], \end{equation} where $\dist{{\mathcal V}}(x, S) = \inf_{s \in S} \| x - s\|_{\mathcal V}$ and $U_M^{\mathcal V}$ denotes the unit ball $ \{ w \in M: \; \| w\|_{\mathcal V} = 1\}$ of $M$. The set of approximations to $\Lambda$ is defined by \[ \Lambda_h = \{ \lambda_h \in \mathbb{R}: \;\exists 0 \ne u_h \in E_h \text{ satisfying } a(u_h, v_h) = \lambda_h (u_h, v_h) \text{ for all } v_h \in E_h\}. \] In other words, $\Lambda_h$ is the set of Ritz values of the compression of $A$ to $E_h$. The sets $\Lambda$ and $\Lambda_h$ are compared using \revv{the} Hausdorff distance. We recall that the Hausdorff distance between two subsets $\Upsilon_1, \Upsilon_2 \subset \mathbb{R}$ is defined by \[ \revv{ \mathop{\mathrm{dist}}( \Upsilon_1, \Upsilon_2) = \max \left[ \sup_{\mu_1 \in \Upsilon_1} \mathop{\mathrm{dist}}( \mu_1, \Upsilon_2), \sup_{\mu_2 \in \Upsilon_2} \mathop{\mathrm{dist}}( \mu_2, \Upsilon_1) \right], } \] where $\mathop{\mathrm{dist}}(\mu, \Upsilon) = \inf_{\nu \in \Upsilon} | \mu - \nu|$ for any $\Upsilon \subset \mathbb{R}.$ Finally, let $C_E$ denote any finite positive constant satisfying $a(e_1, e_2) \le C_E \| e_1 \|_{\mathcal H} \| e_2 \|_{{\mathcal H}}$ for all $e_1, e_2 \in E$. We are now ready to state collectively the following results proved in~\cite{GopalGrubiOvall18}. \begin{theorem} \label{thm:abstract} Suppose Assumptions~\ref{asm:rN}--\ref{asm:Rlim} hold. Then there are constants $C_N, h_0>0$ such that, for all $h < h_0$, \begin{gather} \label{eq:Eh_ell_to_Eh} \lim_{\ell \to \infty} \gap_{\mathcal V}( \Eh \ell, E_h) = 0, \\ \label{eq:E_to_Eh} \lim_{h\to 0} \gap_{\mathcal V} (E, E_h) = 0, \\ \label{eq:gap_E_Eh} \gap_{\mathcal V}( E, E_h) \le C_N W \max_{k=0,\ldots, {N-1}} \left\| \big[ R(z_k) - R_h(z_k) \big] \raisebox{-0.1em}{\ensuremath{\big|_E}} \right\|_{\mathcal V}.
\end{gather} If, in addition, Assumption~\ref{asm:Vh_in_dom(a)} holds and $\| u \|_{\mathcal V} = \| |A|^{1/2} u \|_{\mathcal H}$, then there are $C_1, h_1>0$ such that for all $h < h_1,$ \begin{gather} \label{eq:dist_L_Lh} \mathop{\mathrm{dist}}( \Lambda, \Lambda_h) \le (\Lambda_h^{\mathrm{max}})^2 \gap_{\mathcal V}(E, E_h)^2 + C_1 C_E \, \gap_{\mathcal H}(E, E_h)^2, \end{gather} where $\Lambda_h^{\mathrm{max}} = \sup_{e_h \in E_h} \| |A|^{1/2} e_h\|_{{\mathcal H}} / \| e_h \|_{\mathcal H}$ satisfies $ (\Lambda_h^{\mathrm{max}})^2\le \left[ 1-\gap_{\mathcal V}(E, E_h) \right]^{-2} C_E. $ \end{theorem} \section{Application to a DPG discretization}\label{DPG} In this section, we apply the abstract framework of the previous section to obtain convergence rates for eigenvalues and eigenspaces when the DPG discretization is used to approximate the resolvent of a model operator. \subsection{The Dirichlet operator} Throughout this section, we set ${\mathcal H}, {\mathcal V},$ and $A$ by \begin{equation} \label{eq:VHA_Dirichlet} { {\mathcal H} = L^2(\varOmega), \quad A = -\Delta, \quad \mathop{\mathrm{dom}}(A) = \{ \psi\in H_0^1(\varOmega):~\Delta\psi\in L^2(\varOmega)\} ,\quad {\mathcal V} = H_0^1(\varOmega),} \end{equation} where $\varOmega$ $\subset \mathbb{R}^n$ ($n\ge 2$) is a bounded polyhedral domain with Lipschitz boundary. We shall use standard notations for norms ($ \| \cdot \|_X$) and seminorms ($|\cdot|_X$) on Sobolev spaces~($X$). It is easy to see \cite{GopalGrubiOvall18} that Assumption~\ref{asm:Vshort} holds with these settings. Note that the operator $A$ in~\eqref{eq:VHA_Dirichlet} is the operator associated to the form \[ a(u,v) = \int_\varOmega \grad u \cdot \grad \overline{v} \; dx, \quad u, v \in \mathop{\mathrm{dom}}(a) = {\mathcal V} = H_0^1(\varOmega) \] and that the norm $\| u \|_{\mathcal V},$ due to the \Poincare\ inequality, is equivalent to $\| |A|^{1/2} u \|_{\mathcal H} $ $ = \| A^{1/2} u \|_{\mathcal H}$ $= \| \grad u \|_{L^2(\varOmega)} $ $= |u|_{H^1(\varOmega)}$. The \revv{solution of} the operator equation $(z - A ) u = v$ yields the application of the resolvent $u = R(z) v$. \revv{The weak form of this equation} may be stated as the problem of finding $u \in H_0^1(\varOmega)$ satisfying \begin{equation} \label{eq:Rzv-weak} b(u,w) = (v,w)_{\mathcal H} \qquad \text{ for all } w\in H_0^1(\varOmega), \end{equation} where \[ \revv{b(w_1,w_2) = z(w_1,w_2)_{\mathcal H} - a(w_1,w_2)} \] for any $w_1, w_2 \in H_0^1(\varOmega)$. As a first step in the analysis, we obtain an inf-sup estimate and a continuity estimate for $b$. In the ensuing lemmas $z$ is tacitly assumed to be in the resolvent set of $A$. \begin{lemma} \label{lem:dirichlet} For all $v \in H_0^1(\varOmega)$, \begin{align*} \revv{ \sup_{y \in H_0^1(\varOmega)} \frac{| b(v,y) |}{ \quad| y |_{H^1(\varOmega)}}\geq \beta(z)^{-1} | v |_{H^1(\varOmega)}, } \end{align*} where $\beta(z)=\sup\{|\lambda|/|\lambda-z|:\;\lambda\in\Sigma(A)\}$. \end{lemma} \begin{proof} Let $v \in H_0^1(\varOmega)$ be non-zero, and let $w = \overline{z}R(\overline{z})v$. Then \[ b(s,w)=z(s,v)_{\mathcal H}, \qquad \text{ for all } s\in H_0^1(\varOmega). \] Choosing $s=v$, it follows immediately that \begin{equation} \label{eq:2} b(v, v-w) = b(v,v) - z \| v \|^2_{L^2(\varOmega)} = -|v |_{H^1(\varOmega)}^2. \end{equation} Moreover, $v-w=(I-\overline{z}R(\overline{z}))v=-A\, R(\overline{z}) v$. Recall that the identity $\| A R(z) \|_{\mathcal H} = \beta(z)$ holds~\cite[p.~273, Equation (3.17)]{Kato95} for any $z$ in the resolvent set of $A$. 
Since $|s|_{H^1(\varOmega)} = \| A^{1/2}s\|_{\mathcal H}$ for all $s \in H_0^1(\varOmega)=\mathop{\mathrm{dom}}(a)=\mathop{\mathrm{dom}}(A^{1/2})$, and since $A^{1/2}$ commutes with $A R(z)$, we conclude that \begin{equation} \label{eq:3} |v - w|_{H^1(\varOmega)} = |A R(\overline{z}) v|_{H^1(\varOmega)} = \| A R(\overline{z}) A^{1/2} v \|_{\mathcal H} \le \beta(\overline{z}) \| A^{1/2} v \|_{\mathcal H} = \beta({z}) |v|_{H^1(\varOmega)}~, \end{equation} where $\beta(\overline{z})=\beta(z)$ because the spectrum is real. It follows from~\eqref{eq:2} and~\eqref{eq:3} that \begin{align*} \sup_{y \in H_0^1(\varOmega)} \frac{| b(v, y) |}{ \quad| y |_{H^1(\varOmega)}} & \ge \frac{| b(v,v-w) |}{ \quad| v-w |_{H^1(\varOmega)}} \ge \frac{ |v|_{H^1(\varOmega)}^2 }{ \beta(z) | v |_{H^1(\varOmega)}}~, \end{align*} as claimed. \end{proof} \subsection{The DPG resolvent discretization} \label{ssec:dpg} We now assume that $\varOmega$ is partitioned by a conforming simplicial finite element mesh $\varOmega_h$. As is usual in finite element theory, while the mesh need not be regular, the shape regularity of the mesh is reflected in the estimates. To describe the DPG discretization of $z-A$, we begin by introducing the nonstandard variational formulation on which it is based. We will be brief as the method is described in detail in previous works~\cite{DemkoGopal11,DemkoGopal13a}. Define \[ H^1(\varOmega_h) = \prod_{K \in \varOmega_h} H^1(K), \qquad Q = H(\div, \varOmega) / \prod_{K \in \varOmega_h} H_0(\div, K), \] normed respectively by \[ \| v \|_{H^1(\varOmega_h)} = \left( \sum_{K \in \varOmega_h} \| v \|_{H^1(K)}^2 \right)^{1/2}, \qquad \| q \|_Q = \inf\left\{ \| q - q_0\|_{H(\div, \varOmega)}: \; q_0 \in \displaystyle{ \prod_{K \in \varOmega_h}} H_0(\div, K)\right\}. \] On every mesh element $K$ in $\varOmega_h$, the trace $q\cdot n|_{\partial K}$ is in $H^{-1/2}(\partial K)$ for any $q $ in $H(\div,K)$. \revv{Above, $H_0(\div, K) = \{ q \in H(\div, K) : \left. q \cdot n \right|_{\partial K} = 0 \}$.} We denote by $\ip{q\cdot n, v}_{\partial K}$ the action of this functional on the trace $v|_{{\partial} K}$ for any $v$ in $H^1(K)$. Next, for any $u \in H_0^1(\varOmega)$, $q \in Q$ and $v \in H^1(\varOmega_h)$, set \[ b_h( (u,q), v) = \sum_{K \in \varOmega_h} \left[ \ip{ q \cdot n, \bar v}_{\partial K} + \int_K \left( z u \bar v - \grad u \cdot \grad \bar v \right)\, \rev{dx} \right]. \] This sesquilinear form gives rise to a well-posed Petrov-Galerkin formulation, as will be clear from the discussion below. For the DPG discretization, we use the following finite element subspaces. Let $L_h$ denote the Lagrange finite element subspace of $H_0^1(\varOmega)$ consisting of continuous functions, which when restricted to any $K$ in $\varOmega_h,$ are in $P_{p}(K)$ for some $p\ge 1$. Here and throughout, $P_\ell(K)$ denotes the set of polynomials of total degree at most~$\ell$ restricted to $K$. Note that when applying the earlier abstract framework to the DPG discretization, in addition to~\eqref{eq:VHA_Dirichlet}, we also set \begin{equation} \label{eq:Vh_DPG} {\mathcal V}_h = L_h. \end{equation} Let $RT_h \subset H(\div, \varOmega)$ denote the well-known Raviart-Thomas finite element subspace consisting of functions whose restriction to any $K \in \varOmega_h$ is a polynomial in \revv{$P_{p-1}(K)^n + x P_{p-1}(K)$}, where $x$ is the coordinate vector. Then we set $Q_h= \{ q_h \in Q: \; q_h |_K \in \revv{P_{p-1}(K)^n + x P_{p-1}(K)} + H_0(\div, K)\}$. 
Finally, let $Y_h \subset H^1(\varOmega_h)$ consist of functions \revv{which, when restricted to any $K\in \varOmega_h$,} lie in $P_{p+n+1}(K)$. We now define the approximation of the resolvent action $u = R(z)f$ by the DPG method, denoted by $u_h = R_h(z) f$, for any $f \in L^2(\varOmega)$. The function $u_h$ is in $L_h$. Together with $\varepsilon_h \in Y_h$ and $q_h \in Q_h$, it satisfies \begin{subequations} \label{eq:dpg-eq} \begin{align} ( \varepsilon_h, \eta_h)_{H^1(\varOmega_h)} \, + b_h (( u_h, q_h), \eta_h) & = \int_\varOmega f \, \bar\eta_h \, \rev{dx}, && \text{ for all } \eta_h \in Y_h, \\ b_h( ( w_h, r_h), \varepsilon_h) & = 0, && \text{ for all } w_h \in L_h, \; r_h \in Q_h, \end{align} \end{subequations} where \[ \revv{ ( \varepsilon_h, \eta_h)_{H^1(\varOmega_h)} = \sum_{K \in \varOmega_h} \int_K ( \varepsilon_h \bar{\eta}_h + \grad \varepsilon_h \cdot \grad \bar{\eta}_h ) \,\rev{dx} .} \] The distance between $u$ and $u_h$ is bounded in the next result. There and in similar results in the remainder of this section, we tacitly understand $z$ to vary in some bounded subset $D$ of the resolvent set of $A$ in the complex plane (containing the contour $\Gamma$) and write $t_1 \lesssim t_2$ whenever there is a positive constant $C$ satisfying $t_1 \le C t_2$ and $C$ is independent of \[ h = \max_{K \in \varOmega_h} \mathop{\mathrm{diam}}( K) \] but dependent on the diameter of $D$ and the shape regularity of the mesh $\varOmega_h$. The deterioration of the estimates as $z$ gets close to the spectrum of $A$ is identified using $\beta(z)$ \revv{of Lemma~\ref{lem:dirichlet}}. \begin{lemma} \label{lem:dpg-resolvent} For all $f \in L^2(\varOmega)$, \[ \revv{ \| R(z) f - R_h(z) f \|_{\mathcal V} \lesssim \beta(z) \left[ \inf_{w_h \in L_h} \| u - w_h \|_{H^1(\varOmega)} + \inf_{q_h \in RT_h} \| q - q_h \|_{H(\div,\varOmega)} \right], } \] where $u= R(z) f$ and $q = \grad u$. \end{lemma} \begin{proof} The proof proceeds by verifying the sufficient conditions for convergence of DPG methods known in the existing literature. The result of~\cite[Theorem~2.1]{GopalQiu14} immediately gives the stated result, provided we verify its three conditions, reproduced below in a form convenient for us. The first two conditions there, taken together, are equivalent to the bijectivity of the operator generated by $b_h(\cdot,\cdot)$. Hence we shall state them in the following alternate form (dual to the form stated in~\cite{GopalQiu14}). The first is the uniqueness condition \begin{subequations} \label{eq:A} \begin{gather} \label{eq:A1} \{ \eta \in H^1(\varOmega_h) : \; b_h((w,r), \eta) = 0,\; \text{ for all } (w,r) \in H_0^1(\varOmega) \times Q \} = \{ 0 \}. \end{gather} The second condition is that there are $C_1, C_2 >0$ such that \begin{equation} \label{eq:A2} C_1 \big[ |w|_{H^1(\varOmega)}^2 + \| r \|_Q^2\big]^{1/2} \le \sup_{ \eta \in H^1(\varOmega_h)} \frac{ |b_h ( (w,r), \eta) | }{ \| \eta\|_{H^1(\varOmega_h)} } \le C_2 \big[ |w|_{H^1(\varOmega)}^2 + \| r \|_Q^2\big]^{1/2} \end{equation} for all $w \in H_0^1(\varOmega)$ and $r \in Q.$ Finally, the third condition is the existence of a bounded linear operator $\varPi_h : H^1(\varOmega_h) \to Y_h$ such that \begin{equation} \label{eq:A3} b_h( (w_h, r_h), \eta - \varPi_h \eta) =0.
\end{equation} \end{subequations} Once these conditions are verified, \cite[Theorem~2.1]{GopalQiu14} implies \begin{equation} \label{eq:4} |u - u_h |_{H^1(\varOmega)} \revv{\le} \frac{C_2 \| \varPi\| }{C_1} \left[ \inf_{w_h \in L_h} | u - w_h |_{H^1(\varOmega)} + \inf_{q_h \in RT_h} \| q - q_h \|_{H(\div,\varOmega)} \right] \end{equation} with $u=R(z)f$ and $u_h = R_h(z) f$. It is possible to verify conditions~\eqref{eq:A1} and~\eqref{eq:A2} on $b_h(\cdot,\cdot)$ using the properties of $b(\cdot,\cdot)$. First note that~\cite[Theorem~2.3]{CarstDemkoGopal16} shows that \[ \sup_{v \in H^1(\varOmega_h)} \frac{|\sum_{K \in \varOmega_h}\ip{r \cdot n, v}_{\partial K}|}{\| v \|_{H^1(\varOmega_h)}} = \| r \|_Q. \] This, together with~\cite[Theorem~3.3]{CarstDemkoGopal16}, shows that the inf-sup condition for $b$ proved in Lemma~\ref{lem:dirichlet} yields an inf-sup condition for~$b_h$, namely that the lower inequality of~\eqref{eq:A2} holds with \[ \revv{ \frac{1}{C_1^2} = \beta(z)^2 + \left[ \beta(z)(1+|z|) +1 \right]^2 . } \] \revv{By combining this with} the continuity estimate of $b_h$, which holds with $C_2 = 1+|z|$, we obtain that $C_2/C_1$ is $O(\beta(z))$. Finally, Condition~\eqref{eq:A3} follows from the Fortin operator constructed in~\cite[Lemma~3.2]{GopalQiu14}, whose norm is a constant bounded independently of~$z$. Hence the lemma follows from~\eqref{eq:4}. \end{proof} \begin{remark} \label{rem:Pih} Note that the degree of functions in $Y_h$ was chosen to be $p+n+1$ in order to satisfy the moment condition \[ \int_K ( \eta - \varPi_h \eta ) w_p \, \rev{dx}= 0 \] for all $w_p \in P_p(K)$ and $\eta \in H^1(K)$ on all mesh simplices $K$ (see~\cite{GopalQiu14}). This moment condition was used while verifying~\eqref{eq:A3}. Other recent ideas, such as those in \cite{BoumaGopalHarb14, CarstHellw18}, may be used to reduce $Y_h$ without reducing convergence rates, and thus improve Lemma~\ref{lem:dpg-resolvent} for specific meshes and degrees. \end{remark} \begin{remark} \label{rem:nonselfadj} The DPG approximation of $u = R(z) f$, given by $u_h = R_h(z) f$, satisfies~\eqref{eq:dpg-eq}. We may rewrite~\eqref{eq:dpg-eq} using $x_h = (u_h, q_h)$, \begin{align*} M_h \varepsilon_h + B_h x_h & = f_h\revv{,}\\ \revv{B_h^* \varepsilon_h} & \revv{= 0.} \end{align*} We omit the obvious definitions of the operators $B_h: L_h \times Q_h \to Y_h,$ $M_h : Y_h \to Y_h$, and that of $f_h$ (an appropriate projection of $f$). Eliminating $\varepsilon_h$, we find that $u_h = R_h(z) f$ is a component of \revv{$x_h = (B_h^* M_h^{-1} B_h)^{-1} B_h^* M_h^{-1}f_h$}. Thus, the operator $R_h(z)$ produced by the DPG discretization need not be selfadjoint even when $z$ is on the real line. For the same reason, the filtered operator $S_N^h$ produced by the DPG discretization is {\em not generally selfadjoint} even when $\{z_k: k=0, \ldots, N-1\}$ has symmetry about the real line. \rev{Note that selfadjointness of $S_N^h$ is not needed in Theorem~\ref{thm:abstract} to conclude the convergence of the eigenvalue cluster at double the convergence rate of the eigenspace.} \end{remark} \subsection{FEAST iterations with the DPG discretization}\label{sec:FEASTDPG} To approximate \revv{$E \subseteq {\mathcal V}$}, we apply the filtered subspace iteration~\eqref{eq:1}. In this subsection, we complete the analysis of the approximation of $E$ by $E_h$ and of the accompanying eigenvalue approximation errors. The analysis is an application of the abstract results in Theorem~\ref{thm:abstract}.
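Before carrying out the analysis, it may help to see the iteration~\eqref{eq:1} spelled out at the matrix level. The sketch below is only illustrative (it is not the code of~\cite{Gopal17}): for brevity it substitutes a plain Galerkin resolvent for the DPG resolvent $R_h(z)$ of Subsection~\ref{ssec:dpg}, assembles everything densely, and performs the orthogonalization through a small Rayleigh--Ritz eigenproblem whose eigenvalues approximate the Ritz values in $\Lambda_h$.
\begin{verbatim}
import numpy as np

def feast(A, M, z, w, Y, niter=20):
    """Filtered subspace iteration E^l = S_N^h E^{l-1} (a sketch).
    A, M: stiffness and mass matrices of a generic Galerkin
    discretization, standing in for the DPG resolvent; (z, w): filter
    nodes and weights; Y: basis of the initial subspace E^0."""
    Y = Y.astype(complex)
    for _ in range(niter):
        Z = np.zeros_like(Y)
        for zk, wk in zip(z, w):           # N discrete resolvent solves
            Z += wk * np.linalg.solve(zk*M - A, M @ Y)
        Y, _ = np.linalg.qr(Z)             # orthonormal basis of E^l
        Ah = Y.conj().T @ (A @ Y)          # compression of the form a
        Mh = Y.conj().T @ (M @ Y)
        lam, V = np.linalg.eig(np.linalg.solve(Mh, Ah))
        Y = Y @ V                          # Ritz vectors spanning E^l
    return np.sort(lam.real), Y
\end{verbatim}
In the remainder of this subsection we quantify how the subspace computed by such an iteration approaches $E$.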
To verify the conditions of Theorem~\ref{thm:abstract}, we need some elliptic regularity, formalized in the next assumption. \begin{assumption} \label{asm:reg} Suppose there are positive constants $C_{\mathrm{reg}}$ and $s$ such that the solution $u^f \in {\mathcal V}$ of the Dirichlet problem $-\Delta u^f = f$ admits the regularity estimate \begin{equation} \label{eq:reg} \| u^f \|_{H^{1+s} (\varOmega)} \le C_{\mathrm{reg}} \| f \|_{\mathcal H} \quad\text{ for any } f \in {\mathcal V}. \end{equation} \revv{Also suppose} that \begin{equation} \label{eq:reg-eig} \| u^f \|_{H^{1+{s_E}} (\varOmega)} \le C_{\mathrm{reg}} \| f \|_{\mathcal H} \quad\text{ for any } f \in E. \end{equation} \revv{(Since $E \subseteq {\mathcal V},$ \eqref{eq:reg} implies~\eqref{eq:reg-eig} with $s$ in place of $s_E$, but in many cases~\eqref{eq:reg-eig} holds with $s_E$ larger than $s$. This is the reason for additionally assuming~\eqref{eq:reg-eig}.)} \end{assumption} It is well known that if $\varOmega$ is convex, Assumption~\ref{asm:reg} holds with $s=1$ in~\eqref{eq:reg}. If $\varOmega\subset\mathbb{R}^2$ is non-convex, with its largest interior angle at a corner being $\pi/\alpha$ for some $1/2<\alpha<1$, Assumption~\ref{asm:reg} holds with any positive $s<\alpha$. These results can be found in~\cite{Grisv85}, for example. \begin{lemma} \label{lem:rates} Suppose Assumption~\ref{asm:reg} holds. Then, \begin{align} \label{eq:R-R_h_on_V} \| R(z) f - R_h(z) f \|_{\mathcal V} & \lesssim {\beta(z)^2} h^{\min(p, s, 1)} \| f \|_{\mathcal V}, && \text{ for all } f \in {\mathcal V}, \\ \label{eq:R-R_h_on_E} \| R(z) f - R_h(z) f \|_{\mathcal V} & \lesssim {\beta(z)^2} h^{\min(p, s_E)} \| f \|_{\mathcal V}, && \text{ for all } f \in E. \end{align} \end{lemma} \begin{proof} By Lemma~\ref{lem:dpg-resolvent}, the distance between $u = R(z) f$ and $u_h = R_h(z) f$ can be bounded using standard finite element approximation estimates for the Lagrange and Raviart-Thomas spaces, to get \begin{equation} \label{eq:approx1} \| u - u_h \|_{H^1(\varOmega)} \lesssim \beta(z) \bigg[ h^r |u|_{H^{1+r}(\varOmega)} + h^r |q|_{H^r(\varOmega)} + h^r |\div \,q |_{H^r(\varOmega)} \bigg], \qquad \text{ for } r \le p, \end{equation} where $q=\grad u$. Note that since $u$ satisfies $b(u, v) = (f, v)_{\mathcal H}$ for all $v \in H_0^1(\varOmega)$, by Lemma~\ref{lem:dirichlet}, \begin{equation} \label{eq:5} \beta(z)^{-1} |u|_{H^1(\varOmega)} \le \sup_{y \in H_0^1(\varOmega)} \frac{| b( u, y)|}{ |y|_{H_0^1(\varOmega)}} = \sup_{y \in H_0^1(\varOmega)} \frac{| (f, y)_{\mathcal H}|}{ |y|_{H_0^1(\varOmega)}} = \| f \|_{H^{-1}(\varOmega)}, \end{equation} which implies, by the \Poincare\ inequality, \begin{equation} \label{eq:6} \| u \|_{\mathcal H} \lesssim |u |_{\mathcal V} \lesssim \beta(z) \| f \|_{H^{-1}(\varOmega)} \lesssim \beta(z) \| f \|_{\mathcal H}.
\end{equation} Applying elliptic regularity to $ \Delta u = f - zu$, for all $r \le s$ and $r\le 1,$ \begin{align} \nonumber | u |_{H^{1+r}(\varOmega)} & \le C_{\mathrm{reg}} ( \| f \|_{\mathcal H} + |z| \| u \|_{\mathcal H}) && \text{by~\eqref{eq:reg}}, \\ \label{eq:u-bd} & \lesssim \beta(z) \| f \|_{\mathcal H} && \text{by~\eqref{eq:6}}, \\ \label{eq:q-bd} | q |_{H^{r}(\varOmega)} & = | \grad u |_{H^{r}(\varOmega)} \lesssim \beta(z) \| f \|_{\mathcal H}, && \text{by~\eqref{eq:u-bd}}, \\ \nonumber |\div\, q|_{H^r(\varOmega)} & = | f - zu |_{H^r(\varOmega)} \\ \nonumber &\lesssim |f|_{H^r(\varOmega)} + |z|\beta(z) \| f\|_{\mathcal H} && \text{by~\eqref{eq:u-bd}}, \\ \label{eq:divq-bd} & \lesssim \beta(z) \| f \|_{\mathcal V} && \text{since $r \le 1$.} \end{align} Thus for all $0 \le r \le \min(p, s, 1)$, using the estimates~\eqref{eq:u-bd},~\eqref{eq:q-bd} and \eqref{eq:divq-bd} in~\eqref{eq:approx1}, we have proven~\eqref{eq:R-R_h_on_V}. The proof of~\eqref{eq:R-R_h_on_E} starts off as above using an $f \in E$. But now, due to the potentially higher regularity, we are able to obtain~\eqref{eq:u-bd} and~\eqref{eq:q-bd} for $r \le s_E$. Moreover, as in the proof of~\eqref{eq:divq-bd} above, we find that $|\div\, q |_{H^r(\varOmega)} \lesssim \beta(z) \| f \|_{H^r(\varOmega)}$. The argument to bound $ \| f \|_{H^r(\varOmega)}$ by $\| f \|_{\mathcal V}$ now requires a slight modification: since $-\Delta f \in E$, the regularity estimate~\eqref{eq:reg-eig} implies $\| f \|_{H^{1+r}(\varOmega)} \lesssim \| f \|_{{\mathcal H}}$. Thus \begin{align*} |\div\, q|_{H^r(\varOmega)} & \lesssim \beta(z) \| f \|_{\mathcal V} && \text{ for $r \le s_E$,} \end{align*} i.e., whenever $f \in E$, the estimates~\eqref{eq:u-bd},~\eqref{eq:q-bd} and \eqref{eq:divq-bd} hold for all $0\le r \le s_E$. Using them in~\eqref{eq:approx1}, the proof of~\eqref{eq:R-R_h_on_E} is complete. \end{proof} \begin{theorem} \label{thm:total} Suppose Assumption~\ref{asm:rN} (on spectral separation) and Assumption~\ref{asm:reg} (on elliptic regularity) hold. Then, there are positive constants $C_0$ and $ h_0$ such that for all $h<h_0$, the FEAST iterates $\Eh \ell$ obtained using the DPG approximation of the resolvent converge to $E_h$ and \begin{align} \label{eq:gapEEhDPG} \gap_{\mathcal V} (E, E_h) & \le \revv{C_0\, h^{\min(p,s_E)},} \\ \label{eq:LLhDPG} \mathop{\mathrm{dist}}( \Lambda, \Lambda_h) & \le C_0\, h^{2\min(p,s_E)}. \end{align} Here $C_0$ is independent of $h$ (but may depend on $\beta(z_k)^{2}$, $W,$ $C_N$, $p$, $\Lambda,$ $C_{\mathrm{reg}}$, and the shape regularity of the mesh). \end{theorem} \begin{proof} We apply Theorem~\ref{thm:abstract}. As we have already noted, Assumption~\ref{asm:Vshort} holds for the model Dirichlet problem with the settings in~\eqref{eq:VHA_Dirichlet}. Estimate~\eqref{eq:R-R_h_on_V} of Lemma~\ref{lem:rates} verifies Assumption~\ref{asm:Rlim}. Thus, since Assumptions~\ref{asm:rN}--\ref{asm:Rlim} hold, we may now apply~\eqref{eq:Eh_ell_to_Eh} of Theorem~\ref{thm:abstract} to conclude that $\gap_{\mathcal V}(\Eh\ell, E_h) \to 0$. Moreover, the inequality \eqref{eq:gap_E_Eh} of Theorem~\ref{thm:abstract}, when combined with the rate estimate~\eqref{eq:R-R_h_on_E} of Lemma~\ref{lem:rates} at each $z_k$, proves~\eqref{eq:gapEEhDPG}. 
Finally, to prove~\eqref{eq:LLhDPG}, noting that the ${\mathcal V}_h$ set in~\eqref{eq:Vh_DPG} satisfies Assumption~\ref{asm:Vh_in_dom(a)}, we appeal to \eqref{eq:dist_L_Lh} of Theorem~\ref{thm:abstract} to obtain \begin{equation} \label{eq:DPG_L_Lh_V_H} \mathop{\mathrm{dist}}( \Lambda, \Lambda_h) \lesssim \gap_{\mathcal V}(E, E_h)^2 + \gap_{\mathcal H}(E, E_h)^2. \end{equation} To control the last term, first note that $\| e \|_{\mathcal V}^2 = a(e,e) \le C_E \| e \|^2_{\mathcal H} $ for all $e \in E$. Moreover, by Assumption~\ref{asm:Vshort}, $ \dist{{\mathcal H}} (e, E_h) \le C_{\mathcal V}\dist{{\mathcal V}} (e, E_h).$ Hence \begin{equation} \label{eq:deltaHh} \delta^{\mathcal H}_h:= \sup_{0 \ne e \in E} \frac{\dist{\mathcal H}( e, E_h) }{ \| e \|_{{\mathcal H}}} \lesssim {\sup_{0 \ne e \in E}}\frac{ \dist{\mathcal V}( e, E_h) }{ \| e \|_{{\mathcal V}}} \le \gap_{\mathcal V}(E,E_h). \end{equation} Note that \[ \gap_{\mathcal H}(E, E_h) = \max \bigg[ \delta_h^{\mathcal H}, \sup_{m \in U_{E_h}^{\mathcal H}} \dist{{\mathcal H}} ( m, E) \bigg]. \] Now, by the already proved estimate~\eqref{eq:gapEEhDPG}, we know that $\gap_{\mathcal V}( E, E_h) \to 0$. Hence, when $h$ is sufficiently small, $\gap_{\mathcal V}( E, E_h)<1$, so $\dim(E_h) = \dim(E) = m$. Taking $h$ even smaller if necessary, $\delta^{\mathcal H}_h <1$ by~\eqref{eq:deltaHh}, so by~\cite[Theorem~I.6.34]{Kato95}, there is a closed subspace $\tilde{E}_h \subseteq E_h$ such that $\gap_{\mathcal H}(E, \tilde{E}_h) = \delta_h^{\mathcal H} < 1.$ But this means that $\dim(\tilde{E}_h)= \dim(E) = \dim(E_h) $, so $\tilde{E}_h=E_h$. Summarizing, for sufficiently small $h$, we have \[ \gap_{\mathcal H}(E, E_h) = \delta_h^{\mathcal H} \lesssim \gap_{\mathcal V}(E, E_h). \] Returning to~\eqref{eq:DPG_L_Lh_V_H}, we conclude that \[ \mathop{\mathrm{dist}}(\Lambda, \Lambda_h) \lesssim \gap_{\mathcal V}(E, E_h)^2, \] and the proof is finished using~\eqref{eq:gapEEhDPG}. \end{proof} \subsection{A generalization to additive perturbations} \label{ssec:gener} In this short subsection, we generalize the above theory to the case of the Dirichlet operator perturbed additively by a real-valued $L^\infty(\varOmega)$ reaction term. Let $\nu: \varOmega \to \mathbb{R}$ be a function in $L^\infty(\varOmega)$ and let \begin{equation} \label{eq:a-perturbed} a(u, v) = \int_\varOmega \big[ \grad u \cdot \grad \bar v - \nu u \bar v \big] \; dx \end{equation} for any $ u, v \in \mathop{\mathrm{dom}}(a) = {\mathcal V} = H_0^1(\varOmega)$. The operator under consideration in this subsection is the unbounded selfadjoint operator $A$ on ${\mathcal H} = L^2(\varOmega)$ generated by the form $a$, per a standard representation theorem~\cite[Theorem~10.7]{Schmu12}. The starting point for our theory in the previous subsections was an inf-sup condition (see Lemma~\ref{lem:dirichlet}) for the resolvent form $ b(u, v) = z(u, v)_{\mathcal H} - a(u,v). $ We claim that Lemma~\ref{lem:dirichlet} can be extended to the new $a(\cdot,\cdot)$. To prove the claim, given any $v \in H_0^1(\varOmega),$ we construct a $w \in H_0^1(\varOmega)$ slightly differently from the proof of Lemma~\ref{lem:dirichlet}, namely \[ \revv{w = R(\bar z) \, ( \bar z v + \nu v),} \] \rev{which solves $b(s,w) = z(s,v)_{\mathcal{H}} + (\nu s, v)_{\mathcal{H}}$ for all $s \in H_0^1(\Omega)$.} Then we continue to obtain the identity \begin{equation} \label{eq:8} b(v, v-w) = -|v|_{H^1(\varOmega)}^2.
\end{equation} Next, for any $\mu > \| \nu\|_{L^\infty(\varOmega)}$, the form domain $\mathop{\mathrm{dom}}(a) = H_0^1(\varOmega)$ equals $\mathop{\mathrm{dom}}\big((A+ \mu)^{1/2}\big)$ by~\cite[Proposition~10.5]{Schmu12}. The same result also gives \[ a(u,v) = ( (A+\mu)^{1/2} u, (A+\mu)^{1/2} v)_{\mathcal H} - \mu (u,v)_{\mathcal H} \qquad \revv{\text{for all}} \: u, v \in H_0^1(\varOmega). \] Hence \begin{equation} \label{eq:9} |w|_{H^1(\varOmega)}^2 = a(w,w) + (\nu w, w)_{\mathcal H} \le a(w,w) + \mu \| w \|_{\mathcal H}^2 = \|(A + \mu)^{1/2}w\|_{\mathcal H}^2. \end{equation} To proceed, recall that for any $z$ in the resolvent set, functional calculus \cite[Theorem~6.4.1]{BuhleSalam18} shows that the spectrum of the normal operator $(A+\mu)^{1/2} R(z)$ consists of $\{(\lambda+\mu)^{1/2} / (z - \lambda): \lambda \in \Sigma(A)\}$. Thus $(A+\mu)^{1/2} R(z)$ is a bounded operator of norm $ c_z = \sup\{ |\lambda+\mu|^{1/2} / |z - \lambda|: \;\lambda \in \Sigma(A)\} < \infty.$ Hence~\eqref{eq:9} implies $ |w|_{H^1(\varOmega)} \le \| (A+\mu)^{1/2} R(\bar z) \, ( \bar z v + \nu v)\|_{\mathcal H} \le c_z \| \bar z v + \nu v\|_{{\mathcal H}}. $ Using the \Poincare\ inequality $c_P\| v \|_{\mathcal H} \le |v|_{H^1(\varOmega)}$, this implies $|w|_{H^1(\varOmega)} \le (|z| + \mu) (c_z/c_P) |v|_{H^1(\varOmega)}, $ so \begin{equation} \label{eq:7} \revv{ |v - w|_{H^1(\varOmega)} \le d(z) |v|_{H^1(\varOmega)}, } \end{equation} where $ d(z) = 1 + (|z|+ \mu) c_z/c_P. $ Combining~\eqref{eq:8} and \eqref{eq:7}, we have \begin{align*} \sup_{y \in H_0^1(\varOmega)} \frac{| b(v,y) |}{ \quad| y |_{H^1(\varOmega)}}\geq \frac{| b(v, v-w) |}{ \quad| v-w |_{H^1(\varOmega)}} \ge \frac{ |v|_{H^1(\varOmega)}^2}{ d(z)| v|_{H^1(\varOmega)}}, \end{align*} so the inf-sup condition follows, extending Lemma~\ref{lem:dirichlet} as claimed. \begin{lemma}[Generalization of Lemma~\ref{lem:dirichlet}] Suppose $a$ is as in~\eqref{eq:a-perturbed}, $ b(u, v) = z(u, v)_{\mathcal H} - a(u,v)$, $z$ is in the resolvent set of $A$, and $d(z)$ is as defined above. Then for all $v \in H_0^1(\varOmega)$, \begin{align*} \sup_{y \in H_0^1(\varOmega)} \frac{| b(v,y) |}{ \quad| y |_{H^1(\varOmega)}}\geq d(z)^{-1} \,| v |_{H^1(\varOmega)}. \end{align*} \end{lemma} Using this lemma in place of Lemma~\ref{lem:dirichlet}, the remainder of the analysis proceeds with minimal changes, provided we also assume that $\nu$ is piecewise constant. More precisely, assume that $\nu$ is constant on each element of the mesh $\varOmega_h$. Then the same Fortin operator used in the proof of Lemma~\ref{lem:dpg-resolvent} applies. Hence the final result of Theorem~\ref{thm:total} holds with a possibly different constant $C_0$ (still independent of $h$) whenever Assumption~\ref{asm:reg} holds. \section{Numerical convergence studies}\label{NS} In this section, we report on our numerical convergence studies using the FEAST algorithm with the DPG discretization for the model Dirichlet eigenproblem. This spectral approximation technique is exactly the one described in Section~\ref{ssec:dpg}. An implementation of this technique was built using~\cite{Gopal17}, which contains a hierarchy of Python classes representing approximations of spectral projectors. The DPG discretization is implemented using a Python interface to an existing well-known C++ finite element library called NGSolve~\cite{Schob17}.
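Before turning to the experiments, we indicate how an application of $R_h(z)$ reduces to a Hermitian positive definite solve, as claimed in the introduction. The dense sketch below follows Remark~\ref{rem:nonselfadj} (the variable names are ours; the actual implementation works with sparse NGSolve matrices):
\begin{verbatim}
import numpy as np

def dpg_resolvent_apply(B, M, f):
    """One application x_h = R_h(z) f at the matrix level (a sketch).
    B: matrix of b_h(.,.) at the given z (test-by-trial); M: Hermitian
    positive definite Gram matrix of (.,.)_{H^1} on the broken test
    space Y_h; f: vector of moments of the load against Y_h.
    Eliminating the error representation eps_h leaves the Hermitian
    positive definite normal equations (B* M^{-1} B) x = B* M^{-1} f."""
    MinvB = np.linalg.solve(M, B)
    Minvf = np.linalg.solve(M, f)
    x = np.linalg.solve(B.conj().T @ MinvB, B.conj().T @ Minvf)
    return x   # stacked coefficients of (u_h, q_h)
\end{verbatim}
Since the Gram matrix of $(\cdot,\cdot)_{H^1(\varOmega_h)}$ couples no degrees of freedom across element interfaces, $M$ is block diagonal and cheaply invertible, and $B^* M^{-1} B$ is Hermitian positive definite whenever $B$ has full column rank.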
We omit the implementation details of the FEAST algorithm as they can be found either in our public code~\cite{Gopal17} or previous works like \cite[Algorithm~1.1]{Saad16} and \cite{GuttePolizTang15}. We note that our implementation performs an implicit orthogonalization through a small Rayleigh-Ritz eigenproblem at each iteration. For all experiments reported below, we set $r_N$ to the rational function corresponding to the Butterworth filter obtained by setting $w_N=0$ and \begin{align}\label{CircleQuad} z_k=\gamma e^{\mathrm{i} (\theta_k+\phi)}+y,\quad\quad w_k=\gamma e^{\mathrm{i} (\theta_k+\phi)}/N, \qquad k=0, \ldots, N-1, \end{align} where $\theta_k=2\pi k/N$ and $ \phi=\pm\pi/N.$ This corresponds to an approximation of the contour integral in~\eqref{eq:DunfodTaylor}, with a circular contour $\Gamma$ of radius $\gamma$ centered at $y$, using the trapezoidal rule with $N$ equally spaced quadrature points. In all experiments reported below, we set $N=8$. \subsection{Discretization \revv{errors on the unit square}} \begin{figure}[b] \begin{subfigure}[t]{0.49\textwidth} \begin{center} \begin{tikzpicture} \begin{loglogaxis}[ footnotesize, width=0.95\textwidth, height=\textwidth, xlabel=$h$, ylabel={$d_h$}, legend pos = south east, max space between ticks=30pt, ] \addplot coordinates { (0.2500000000, 6.747273e+00) (0.1250000000, 1.975461e+00) (0.0625000000, 7.534260e-01) (0.0312500000, 3.509711e-01) (0.0156250000, 1.723048e-01) (0.0078125000, 8.576663e-02) }; \addplot coordinates { (0.2500000000, 9.049084e-01) (0.1250000000, 3.790549e-01) (0.0625000000, 8.318753e-02) (0.0312500000, 1.879642e-02) (0.0156250000, 4.437640e-03) (0.0078125000, 1.077704e-03) }; \addplot coordinates { (0.2500000000, 3.705158e-01) (0.1250000000, 7.932601e-02) (0.0625000000, 9.720224e-03) (0.0312500000, 1.209351e-03) (0.0156250000, 1.508522e-04) (0.0078125000, 1.886285e-05) }; \logLogSlopeTriangle{0.59}{0.15}{0.33}{3}{black}; \logLogSlopeTriangle{0.59}{0.15}{0.515}{2}{black}; \logLogSlopeTriangle{0.59}{0.15}{0.705}{1}{black}; \legend{$p=1$, $p=2$, $p=3$} \end{loglogaxis} \end{tikzpicture} \caption{Convergence rates for eigenfunctions} \label{fig:efsqr} \end{center} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \begin{center} \begin{tikzpicture} \begin{loglogaxis}[ footnotesize, width=0.95\textwidth, height=\textwidth, xlabel=$h$, ylabel={$\mathop{\mathrm{dist}}( {\Lambda}, {\Lambda_h})$}, legend pos = south east, max space between ticks=30pt, ] \addplot coordinates { (0.2500000000, 1.455193e+01) (0.1250000000, 4.124450e+00) (0.0625000000, 9.859321e-01) (0.0312500000, 2.436991e-01) (0.0156250000, 6.066035e-02) (0.0078125000, 1.513589e-02) }; \addplot coordinates { (0.2500000000, 5.419321e-01) (0.1250000000, 5.954395e-02) (0.0625000000, 4.126409e-03) (0.0312500000, 2.647773e-04) (0.0156250000, 1.668255e-05) (0.0078125000, 1.045518e-06) }; \addplot coordinates { (0.2500000000, 1.472728e-02) (0.1250000000, 5.240445e-04) (0.0625000000, 7.863915e-06) (0.0312500000, 1.218536e-07) (0.0156250000, 1.896943e-09) (0.0078125000, 3.102940e-11) }; \logLogSlopeTriangle{0.59}{0.15}{0.31}{6}{black}; \logLogSlopeTriangle{0.59}{0.15}{0.55}{4}{black}; \logLogSlopeTriangle{0.59}{0.15}{0.76}{2}{black}; \legend{$p=1$, $p=2$, $p=3$} \end{loglogaxis} \end{tikzpicture} \caption{Convergence rates for eigenvalues} \label{fig:ewsqr} \end{center} \end{subfigure} \caption{Results for the unit square} \label{fig:ewf} \end{figure} Let $\Omega = (0,1) \times (0,1)$ and consider the Dirichlet eigenvalues enclosed within the circular contour 
$\Gamma$ of radius $\gamma = 45$ and center $y = 20$. The exact set of eigenvalues for this example is known to be $\Lambda = \{ 2\pi^2, 5 \pi^2\}$. The first eigenvalue $ 2\pi^2= \lambda_1$ is of multiplicity 1, while the second $5\pi^2 = \lambda_2 = \lambda_3$ is of multiplicity~2. The corresponding eigenfunctions are well-known analytic functions. To perform the numerical studies, we begin by solving our problem on a coarse mesh of mesh size $h = 2^{-2}$ and refine until we reach a mesh size of $h = 2^{-7}$. Each mesh refinement halves the mesh size by either bisecting or quadrisecting the triangular elements of the mesh. For each mesh size value of $h$, we perform this experiment for polynomial degrees $p = 1, 2, $ and $3$. After each experiment we collect the approximate eigenvalues, ordered so that $\lambda_{1, h} \le \lambda_{2, h} \le \lambda_{3, h}$, and their corresponding eigenfunctions $e_{i, h}$. One way to measure the convergence of eigenfunctions is through \begin{align*} \delta_i^{(1)} =& \min_{0 \ne e \in E} |e_{i,h} - e|_{H^1(\varOmega)} = \dist{H_0^1(\varOmega)}( e_{i,h}, E), \\ \delta_i^{(2)} =& \min_{0 \ne e_h \in E_h} |e_i - e_h|_{H^1(\varOmega)} = \dist{H_0^1(\varOmega)}( e_i, E_h). \end{align*} Note that both $\delta_i^{(1)}$ and $\delta_i^{(2)}$ are bounded by $\gap_{H_0^1(\varOmega)} (E_h, E)$. Since computing $\delta_i^{(1)}$ and $\delta_i^{(2)}$ requires exact integration of quantities involving the exact eigenspace, we instead compute \[ \delta_{i, h}^{(1)} = \dist{H_0^1(\varOmega)} (e_{i, h}, I_h E) \quad \revv{\text{and}} \quad \delta_{i, h}^{(2)} = \dist{H_0^1(\varOmega)} (I_h e_i, E_h), \] where $I_h$ is a standard interpolant into the finite element space ${\mathcal V}_h$. For brevity, instead of plotting the behavior of each $\delta_{i, h}^{(j)}$ for all $i, j$, we plot the behavior of their sum \[ d_h = \sum_{i=1}^3 \sum_{j=1}^2 \delta_{i, h}^{(j)} \] for decreasing mesh sizes $h$ and increasing polynomial degrees $p$ in Figure~\ref{fig:ewf}. In the same figure panel, we also display the observed errors in the computed eigenvalues in $\Lambda_h$ by plotting the Hausdorff distance $\mathop{\mathrm{dist}}(\Lambda, \Lambda_h)$ for various values of $h$ and~$p$. Since $\delta_i^{(j)}$ should go to zero at the same rate as $\gap_{H_0^1(\varOmega)} (E_h, E)$ and since the interpolation errors are of the same order as the gap, we expect $d_h$ to go to zero as $h\to 0$ at the same rate as $\gap_{H_0^1(\varOmega)} (E_h, E).$ From Figure~\ref{fig:efsqr}, we observe that $d_h$ appears to converge to 0 at the rate $O(h^{p})$ for $p=1, 2,$ and 3. Since the eigenfunctions on the unit square are analytic, Assumption~\ref{asm:reg} holds for this example with {\em any} $s_E > 0$. Therefore, our observation on the rate of convergence of $d_h$ is in agreement with the gap estimate~\eqref{eq:gapEEhDPG} of Theorem~\ref{thm:total}. Figure~\ref{fig:ewsqr} shows that as $h$ decreases, $\mathop{\mathrm{dist}}(\Lambda, \Lambda_h)$ decreases to 0 at the rate $O(h^{2p})$ for $p=1, 2,$ and 3. This is also in good agreement with the eigenvalue error estimate~\eqref{eq:LLhDPG} of Theorem~\ref{thm:total}. The results presented above using the DPG discretization are comparable to those found in~\cite{GopalGrubiOvall18} using the FEAST algorithm with standard finite element discretizations of comparable orders.
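The reported error measures require only elementary post-processing of the computed spectral data. The following minimal sketch (function names and the sample values below are ours, purely for illustration) evaluates the Hausdorff distance $\mathop{\mathrm{dist}}(\Lambda,\Lambda_h)$ between finite sets and the observed convergence order on a sequence of halved mesh sizes:
\begin{verbatim}
import numpy as np

def hausdorff(S1, S2):
    """Hausdorff distance between two finite sets of reals."""
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    D = np.abs(S1[:, None] - S2[None, :])
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def observed_order(err):
    """Convergence orders log2(ERR(2h)/ERR(h)) for halved mesh sizes."""
    err = np.asarray(err, float)
    return np.log2(err[:-1] / err[1:])

exact  = np.array([2*np.pi**2, 5*np.pi**2, 5*np.pi**2])
approx = np.array([19.74, 49.35, 49.40])   # illustrative values only
print(hausdorff(exact, approx))
\end{verbatim}
The same post-processing yields the numerical orders of convergence tabulated in the next subsection.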
\begin{remark} \label{rem:other_experim} In other unreported experiments, we found that setting $Y_h$ to \[ \tilde{Y}_h=\{ y \in H^1(\varOmega_h): y|_K \in P_{p+1}(K)\} \] also gave the same convergence rates. This indicates that the space dictated by the theory, namely $Y_h = \{ y \in H^1(\varOmega_h): y|_K \in P_{p+3}(K)\}$, might be overly conservative. We already noted one approach to improve the estimates in Remark~\ref{rem:Pih}. Another approach might be through a perturbation argument, as the theory in~\cite{DemkoGopal13a} proves the error estimate of Lemma~\ref{lem:dpg-resolvent} at $z=0$ even when $Y_h$ is replaced by $\tilde{Y}_h$. \end{remark} \subsection{Convergence rates on an L-shaped domain} \begin{table} \centering \begin{tabular}{|c|cc|cc|cc|} \hline & $\lambda_1$ & & $\lambda_2$ & & $\lambda_3$ & \\ $h$ & ERR & NOC & ERR & NOC & ERR & NOC \\ \hline $2^{-2}$ & 6.29e-02 & --- & 3.29e-02 & --- & 5.95e-02 & --- \\ $2^{-3}$ & 2.41e-02 & 1.39 & 2.65e-03 & 3.63 & 4.05e-03 & 3.88 \\ $2^{-4}$ & 9.48e-03 & 1.34 & 2.55e-04 & 3.38 & 2.59e-04 & 3.97 \\ $2^{-5}$ & 3.75e-03 & 1.34 & 2.99e-05 & 3.09 & 1.63e-05 & 3.99 \\ $2^{-6}$ & 1.49e-03 & 1.34 & 4.03e-06 & 2.89 & 1.02e-06 & 4.00 \\ \hline \end{tabular} \caption{Eigenvalue errors (ERR) and numerical orders of convergence (NOC) for the smallest three eigenvalues on the L-shaped domain.} \label{tab:L} \end{table} In this example, we consider the Dirichlet eigenvalues of the L-shaped domain $\varOmega = (0,2) \times (0,2) \setminus [1,2] \times [1,2]$ enclosed within a circular contour of radius $\gamma = 8$ centered at $y = 15$. The first three Dirichlet eigenvalues are enclosed in this contour, and we are interested in determining the eigenvalue errors and numerical orders of convergence for these. We use the results reported in \cite{TrefeBetcke06} as our reference eigenvalues, namely $\lambda_1 \approx 9.6397238$, \revv{$\lambda_2 \approx 15.197252$}, and $\lambda_3 = 2\pi^2$. With the above values of $\lambda_i$ (displayed up to the digits the authors of \cite{TrefeBetcke06} claimed confidence in), we define $\text{ERR}(h) = |\lambda_{i,h} - \lambda_i|$ for each $i$, where $\lambda_{1,h}\le \lambda_{2,h}\le \lambda_{3,h}$ are the approximate eigenvalues obtained by FEAST. Then we define the numerical order of convergence (NOC) as $\text{NOC}(h) = \log(\text{ERR}(2h)/\text{ERR}(h)) / \log(2)$. We perform our convergence study, as in the unit square case, using a sequence of uniformly refined meshes, starting from a mesh size of $h = 2^{-2}$ and ending with a mesh size of $h = 2^{-6}$. In this example we confine the scope of our convergence study to polynomial degree $p = 2$. Further mesh refinements or higher degrees are not studied because the exact eigenvalues are only available to limited precision, and errors below this precision cannot be used to surmise convergence rates accurately. The observations are compiled in Table~\ref{tab:L}. From the first column of Table~\ref{tab:L}, we find that the first eigenvalue is observed to converge at a rate of approximately $4/3$. For polygonal domains, it is well known that Assumption~\ref{asm:reg} holds with any positive $s$ less than $\pi/\alpha$, where $\alpha$ is the largest of the interior angles at the vertices of the polygon. Clearly $\alpha = 3 \pi/2$ for our L-shaped $\varOmega$. The eigenfunction corresponding to the first eigenvalue is known to be limited by this regularity, so $s_E$ may be chosen to be any positive number less than $2/3$.
The observed convergence rate of $4/3$ for the first eigenvalue is therefore in agreement with the rate of $2\min(p,s_E)$ established in Theorem~\ref{thm:total}. Although Theorem~\ref{thm:total} does not yield improved convergence rates for the other eigenvalues, whose eigenfunctions have higher regularity, the remaining columns of Table~\ref{tab:L} show that higher order convergence rates do occur in practice. For example, the eigenfunction corresponding to $\lambda_3 = 2\pi^2$ is analytic, and we observed that the corresponding eigenvalue converges at a rate $O(h^{2p})$ that is not limited by $s_E$. \section{Application to optical fibers}\label{fib} Double-clad step-index optical fibers have resulted in numerous technological innovations. Although such fibers were originally intended to carry energy in a single mode, large mode area (LMA) fibers, which usually have multiple guided modes, are now being sold extensively for increased power operation. In this section, we show how to use the method we developed in the previous sections to compute such modes. We begin by showing that the problem of computing the fiber modes can be viewed as a problem of computing an eigenvalue cluster of an operator of the form discussed in Subsection~\ref{ssec:gener}. These optical fibers have a cylindrical core of radius $r_{\text{core}}$ and a cylindrical cladding region enveloping the core, extending to radius $r_{\text{clad}}$. We set up our axes so that the longitudinal direction of the fiber is the $z$-axis. The transverse coordinates will be denoted by $x, y$ in Cartesian coordinates, and the eigenvalue problem will be posed in these coordinates. Thus the space dimension (previously denoted by $n$) is fixed at $2$ in this section, so denoting the refractive index of the fiber by $n$ causes no confusion. We have in mind fibers whose refractive index $n(x,y)$ is a piecewise constant function, equalling $n_{\text{core}}$ in the core and $n_{\text{clad}}$ in the cladding region $(n_{\text{clad}} < n_{\text{core}})$. The guided modes, also called the transverse core modes, decay exponentially in the cladding region. These {\em modes} of the fiber, which we denote by $\varphi_l(x,y),$ are non-trivial functions that, together with their accompanying (positive) {\em propagation constants} $\beta_l,$ solve \begin{subequations} \label{eq:optic-ewp} \begin{equation} \label{eq:optic-ewp-pde} (\Delta + k^2 n^2) \varphi_l = \beta_l^2 \varphi_l, \qquad r < r_{\text{clad}}, \end{equation} where $k$ is a given wave number of the signal light, and $\Delta = \partial_{xx} + \partial_{yy}$ denotes the Laplacian in the transverse coordinates $x, y$. Since the guided modes decay exponentially in the cladding, and since the cladding radius is typically many times larger than the core, we supplement~\eqref{eq:optic-ewp-pde} with zero Dirichlet boundary conditions at the end of the cladding: \begin{equation} \label{eq:optic-ewp-bc} \varphi_l = 0, \qquad r = r_{\text{clad}}. \end{equation} \end{subequations} Since the spectrum of the Dirichlet operator $\Delta$ lies in the negative real axis and has an accumulation point at $-\infty$, we expect to find only finitely many $ \lambda_l \equiv \beta_l^2>0$ satisfying~\eqref{eq:optic-ewp}. This finite collection of eigenvalues $\lambda_l$ forms our eigenvalue cluster $\Lambda$ in this application, and the corresponding eigenspace $E$ is the span of the modes $\varphi_l$.
From the standard theory of step-index fibers \cite{Reide16}, it follows that the propagation constants $\beta_l$ of guided modes satisfy \[ n_{\text{clad}}^2 k^2 < \beta_l^2 < n_{\text{core}}^2 k^2. \] Thus we have a pre-defined search interval, so the computation of the eigenpairs $(\lambda_l, \varphi_l)$ offers an example very well suited for applying the FEAST algorithm. Moreover, since separation of variables can be employed to calculate the exact solution in terms of Bessel functions, we are able to perform convergence studies as well. Below, we apply the algorithm to a realistic fiber, using the previously described DPG discretization of the resolvent of the Helmholtz operator $\Delta + k^2 n^2$ with Dirichlet boundary conditions. The fiber we consider is the commercially available ytterbium-doped Nufern\texttrademark \;(nufern.com) fiber, whose typical parameters are \begin{equation} \label{eq:Nufern} n_{\text{core}}=1.45097, \;\; n_{\text{clad}} = 1.44973,\;\; r_{\text{core}} = 0.0125~\mathrm{mm},\;\; r_{\text{clad}}=16r_{\text{core}}. \end{equation} The typical operating wavelength for signals input to this fiber is $1064$ nanometers, so we set the wavenumber to $k = (2 \pi/1.064)\times 10^{6}~\mathrm{m}^{-1}$. Due to the small fiber radius, we compute after scaling the eigenproblem~\eqref{eq:optic-ewp} to the unit disc $\hat \varOmega = \{ r<1\}$, i.e., we compute modes $\hat\varphi_l: \hat \varOmega \to \mathbb{C}$ satisfying $ (\Delta + k^2 n^2 r_{\text{clad}}^2) \hat\varphi_l = r_{\text{clad}}^2\beta_l^2 \hat\varphi_l$ in $\hat\varOmega$ and $\hat\varphi_l = 0$ on $\partial\hat\varOmega$. As in the previous section, all results here are generated using our code~\cite{Gopal17} built atop NGSolve~\cite{Schob17}. Note that all experiments in this section are performed using the reduced space $\tilde{Y}_h$ mentioned in Remark~\ref{rem:other_experim}. Results from the computation are given in Figures~\ref{fig:ybmesh} and~\ref{fig:ybeigenfuncs}. Note that the elements whose boundary intersects the core or cladding boundary are isoparametrically curved to minimize boundary representation errors -- see Figures~\ref{fig:ybmesh-far} and~\ref{fig:ybmesh-near}. The modes are localized near the core region, so the mesh is designed to be finer there. A six-dimensional eigenspace was found. The computed basis for this six-dimensional space of modes, obtained using polynomial degree $p=6$, is shown (zoomed in near the core region) in the plots of Figure~\ref{fig:ybeigenfuncs}. The mode $e_6$ shown in Figure~\ref{fig:mode6} is considered the ``fundamental mode'' for this fiber, also called the LP01 mode in the optics literature~\cite{Reide16}.
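As a sanity check (not part of our solver), the parameters in~\eqref{eq:Nufern} and the bounds on $\beta_l^2$ above determine the interval in which the scaled eigenvalues $r_{\text{clad}}^2\beta_l^2$ must lie. A short Python sketch; the final contour choice here is merely illustrative, not the one used in our computations:
\begin{verbatim}
import math

n_core, n_clad = 1.45097, 1.44973
r_clad = 16 * 0.0125e-3           # cladding radius in meters
k = (2 * math.pi / 1.064) * 1e6   # wavenumber in 1/m for 1064 nm light

# scaled eigenvalues r_clad^2 * beta^2 must lie strictly between:
lo = (r_clad * k * n_clad) ** 2   # ~2.9313e6
hi = (r_clad * k * n_core) ** 2   # ~2.9363e6

# one possible circular contour enclosing this search interval:
center, radius = (lo + hi) / 2, 0.6 * (hi - lo)
print(lo, hi, center, radius)
\end{verbatim}
The six eigenvalues reported in the convergence study below indeed fall inside this interval.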
\begin{figure} \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{mesh_afar_ngs.eps} \subcaption{The mesh with curved elements adjacent to the core and cladding boundaries.} \label{fig:ybmesh-far} \end{subfigure} \quad \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{mesh_closeup_ngs.eps} \subcaption{Zoomed-in view of the mesh in Figure~\ref{fig:ybmesh-far} near the core.} \label{fig:ybmesh-near} \end{subfigure} \caption{The mesh used for computing modes of the ytterbium-doped fiber.} \label{fig:ybmesh} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=0.875\textwidth]{ef0_interp_ngs.eps}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw[black, opacity=.5, thick, dashed] (0.5,0.5) circle (0.19); \end{scope} \end{tikzpicture} \subcaption{$\varphi^h_1$} \end{subfigure} ~ \begin{subfigure}[t]{0.3\textwidth} \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=0.875\textwidth]{ef1_interp_ngs.eps}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw[black, opacity=.5, thick, dashed] (0.5,0.5) circle (0.19); \end{scope} \end{tikzpicture} \subcaption{$\varphi^h_2$} \end{subfigure} ~ \begin{subfigure}[t]{0.3\textwidth} \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=0.875\textwidth]{ef2_interp_ngs.eps}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw[black, opacity=.5, thick, dashed] (0.5,0.5) circle (0.19); \end{scope} \end{tikzpicture} \subcaption{$\varphi^h_3$} \end{subfigure} \newline \newline \noindent \begin{subfigure}[t]{0.3\textwidth} \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=0.875\textwidth]{ef3_interp_ngs.eps}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw[black, opacity=.5, thick, dashed] (0.5,0.5) circle (0.19); \end{scope} \end{tikzpicture} \subcaption{$\varphi^h_4$} \label{fig:mode4} \end{subfigure} ~ \begin{subfigure}[t]{0.3\textwidth} \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=0.875\textwidth]{ef4_interp_ngs.eps}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw[black, opacity=.5, thick, dashed] (0.5,0.5) circle (0.19); \end{scope} \end{tikzpicture} \subcaption{$\varphi^h_5$} \label{fig:mode5} \end{subfigure} ~ \begin{subfigure}[t]{0.3\textwidth} \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=0.875\textwidth]{ef5_interp_ngs.eps}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw[black, opacity=.5, thick, dashed] (0.5,0.5) circle (0.19); \end{scope} \end{tikzpicture} \subcaption{$\varphi^h_6$} \label{fig:mode6} \end{subfigure} \caption{A close view of the approximate eigenfunctions $\varphi_j^h$ computed by FEAST for the ytterbium-doped fiber. The boundary of the fiber core region is marked by dashed black circles.} \label{fig:ybeigenfuncs} \end{figure} We also conducted a convergence study. We began with a mesh whose approximate mesh size in the core region is $h_c= 1/16$. We performed three uniform mesh refinements, where each refinement halved the mesh size.
After each refinement, the elements intersecting the core or cladding boundary were curved again using the geometry information. Using the DPG discretization and $N = 16$ quadrature points for the contour integral, we computed the 6 eigenvalues, denoted by $\hat{\lambda}_l^h$, and compared them with the exact eigenvalues on the scaled domain, denoted by $\hat{\lambda}_l = r_{\text{clad}}^2\beta_l^2$. For the parameter values set in~\eqref{eq:Nufern}, there are six such $\hat{\lambda}_l$ (counting multiplicities), whose approximate values are $ \hat{\lambda}_1= 2932065.0334243, \; \hat{\lambda}_2 = \hat{\lambda}_3 = 2932475.1036310,\; \hat{\lambda}_4= \hat{\lambda}_5=2934248.1978369, \; \hat{\lambda}_6=2935689.8561775. $ Fixing $p=3$, we report the relative eigenvalue errors \[ e_l =\frac{ | \hat{\lambda}_l - \hat{\lambda}_l^h| }{ \hat{\lambda}_l^h} \] in Table~\ref{tab:fiber_yb_rates} for each $l$ (columns) and each refinement level (rows). A column next to each $e_l$-column indicates the numerical order of convergence (computed as described in Section~\ref{NS}). The observed convergence rates are somewhat near the order of 6 expected from the previous theory. The match in the rates is not as close as in the results from the ``textbook'' benchmark examples of Section~\ref{NS}, presumably because mesh curving may have an influence on the pre-asymptotic behavior. Since the relative error values have quickly approached machine precision, further refinements were not performed. \begin{table} \setlength\tabcolsep{4pt} \centering \begin{footnotesize} \begin{tabular}{|c|cc|cc|cc|cc|cc|cc|} \hline core $h$ & $e_1$ & {\tiny{NOC}} & $e_2$ & {\tiny{NOC}} & $e_3$ & {\tiny{NOC}} & $e_4$ & {\tiny{NOC}} & $e_5$ & {\tiny{NOC}} & $e_6$ & {\tiny{NOC}} \\ \hline $h_c $ & 1.26e-07 & -- & 2.01e-07 & -- & 1.81e-07 & -- & 4.99e-08 & --&4.37e-08&-- &1.72e-08 &-- \\ $h_c/2$ & 9.42e-09 & 3.7 & 1.63e-08 & 3.6 & 1.32e-08 & 3.8 & 6.46e-09 &3.0&4.84e-09&3.2&3.38e-09 &2.4 \\ $h_c/4$ & 1.17e-10 & 6.3 & 2.13e-10 & 6.3 & 1.80e-10 & 6.2 & 7.03e-11 &6.5&4.84e-11&6.6&3.64e-11 &6.5 \\ $h_c/8$ & 9.16e-14 & 10.3 & 1.33e-12 & 7.3 & 3.06e-13 & 9.2 & 3.75e-13 &7.6&6.87e-13&6.1&6.69e-14 &9.1 \\ \hline \end{tabular} \end{footnotesize} \caption{Convergence rates of the fiber eigenvalues.} \label{tab:fiber_yb_rates} \end{table} \input{references.tex} \end{document}
\section{Introduction} Face recognition is an extensively studied topic in computer vision. Among the existing technologies of human biometrics, face recognition is the most widely used in real-world applications, such as authentication and surveillance systems. According to the modality of the data, face recognition can be divided into 2D image based methods and 3D scan based methods, which are quite different in development and application. Moreover, with the great advance of deep convolutional neural networks (DCNNs), deep learning based methods have achieved significant performance improvements on various computer vision tasks, including face recognition. In this survey, we focus on 2D image based end-to-end deep face recognition, which takes natural images or video frames as input, and extracts the deep features of each face as output. We provide a comprehensive review of the recent advances in the elements of end-to-end deep face recognition. Specifically, an end-to-end deep face recognition system is composed of three key elements: face detection, face preprocessing, and face representation. In the following, we give a brief introduction of each element. \begin{figure}[t] \centering \includegraphics[height=4.5cm]{./figures/statistics.png} \caption{The number of publications on each element of end-to-end deep face recognition from 2013 to July 2020.} \label{publications} \end{figure} Face detection is the first step of end-to-end face recognition. It aims to locate the face regions in natural images or video frames. Before the deep learning era, one of the pioneering works for face detection is the Viola-Jones~\cite{viola2001rapid} face detector, which utilizes AdaBoost classifiers with Haar features to build a cascaded structure. Later on, subsequent approaches explored effective hand-crafted features~\cite{Ojala2002Multiresolution,Mita2005Joint,Yang2014acf} and various classifiers~\cite{Li2002Statistical,Pham2007Fast,Brubaker2008On} to improve the detection performance. Besides, some methods~\cite{Felzenszwalb2010Object,YAN2014790} employ Deformable Part Models (DPM) for face detection. One can refer to~\cite{Stefanos2015A} for a thorough survey of traditional face detection methods. Recently, with the great progress of DCNNs, deep learning based face detection has been extensively studied. By learning from large-scale data with DCNNs, face detectors become more robust to various conditions, such as large facial poses and occlusions. Next, face preprocessing refers to calibrating the natural face to a canonical view and cropping it to a normalized pixel size, in order to facilitate the subsequent task of face representation computation. It is an essential intermediate procedure for a face recognition system. In this survey, we introduce two major practices for face preprocessing, i.e., face alignment and face frontalization. Generally, face alignment utilizes spatial transformations to warp faces to a canonical location with facial landmarks as the reference, so facial landmark localization is necessary for face alignment. Most traditional works on facial landmark localization focused on either generative methods~\cite{Cootes1992ActiveSM,Cootes2000ViewbasedA} or discriminative methods~\cite{Zhou2007ShapeRM,Martnez2013LocalEA}, and there are several exhaustive surveys about them~\cite{eliktutan2013ACS,Jin2017FaceAI,Wang2018FacialFP}. Instead of utilizing facial landmarks, some approaches directly generate the aligned face from the input one.
In addition, face frontalization aims to synthesize frontal faces from non-frontal inputs, which is commonly used to handle large pose face recognition. In the face representation stage, discriminative features are extracted from the preprocessed face images for recognition. This is the final and core step of face recognition. In early studies, many approaches calculated the face representation by projecting face images into a low-dimensional subspace, such as Eigenfaces~\cite{1991Eigenfaces} and Fisherfaces~\cite{Belhumeur1997Eigenfaces}. Later on, methods based on handcrafted local descriptors~\cite{Liu2002GaborFB,Ahonen2004Face} prevailed in face representation. For a detailed review of these traditional methods, one can refer to~\cite{W2003Face,2006fr,2009fr}. Recently, face representation has benefited from the development of DCNNs and witnessed great improvements toward high-performance face recognition. \begin{table*}[t] \begin{center} \caption{Representative surveys of face recognition} \label{fr_surveys} \resizebox{\linewidth}{!}{ \begin{tabular}{|p{10cm}|c|p{8cm}|} \hline {Title}&{Year}&{Description}\\ \hline\hline Face Recognition: A Literature Survey~\cite{W2003Face}&2003& Traditional image- and video-based methods in face recognition. Not covering deep face recognition.\\ \hline Face Recognition from a Single Image per Person: A Survey~\cite{2006fr}&2006& The methods to address the single sample problem in face recognition, not covering deep face recognition. \\ \hline A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition~\cite{bowyer2006survey}&2006& A survey of 3D and multi-modal face recognition, not covering deep face recognition. \\ \hline Illumination Invariant Face Recognition: A Survey~\cite{Zou2007IlluminationIF}&2007& Focus on the illumination-invariant face recognition task, not covering deep face recognition. \\ \hline A Survey of Face Recognition Techniques~\cite{2009fr}&2009& Traditional face recognition methods on different modal face data, not covering deep face recognition. \\ \hline A Comprehensive Survey on Pose-Invariant Face Recognition~\cite{Ding2016ACS}&2016& Focus on the pose-invariant face recognition task. \\ \hline A survey of local feature methods for 3D face recognition~\cite{Soltanpour2017ASO}&2017& A review of feature extraction based methods for 3D face recognition. \\ \hline Deep Learning for Understanding Faces~\cite{Ranjan2018deep}&2018& Provides a brief overview of end-to-end deep face recognition, not covering the recent works. \\ \hline Deep Face Recognition: A Survey~\cite{wang2018deep}&2018& Focus on deep face representation learning.\\ \hline Past, Present, and Future of Face Recognition: A Review~\cite{electronics9081188}&2020& A review of 2D and 3D face recognition, not covering end-to-end deep face recognition. \\ \hline \end{tabular}} \end{center} \end{table*} This survey focuses on reviewing and analyzing the recent advances in each element of end-to-end deep face recognition. An important fact is that the performance of face recognition depends on the contribution of all the elements (i.e., face detection, preprocessing and representation). In other words, inferiority in any one of the elements becomes the bottleneck of the whole system and harms the final performance. In order to establish a high-performance end-to-end face recognition system, it is essential to discuss every element of the holistic framework and their mutual effects on each other.
A number of face recognition surveys have been published in the past twenty years. The main differences between our survey and existing ones are summarized in Table~\ref{fr_surveys}. Specifically, there are certain surveys~\cite{W2003Face,2006fr,2009fr} about face recognition that do not cover deep learning based methods, since they were published before the deep learning era; besides, some surveys focus on 3D face recognition~\cite{bowyer2006survey,Soltanpour2017ASO} and specific tasks~\cite{Zou2007IlluminationIF,Ding2016ACS}. Instead, we focus on 2D face recognition, which is the most needed in practical applications. Ranjan et al.~\cite{Ranjan2018deep} provided a brief overview of the three elements, while they did not cover the recent techniques that rapidly evolved in the past few years. As shown in Fig.~\ref{publications}, the number of published works has been increasing dramatically during these years. Wang et al.~\cite{wang2018deep} presented a systematic review of deep face recognition, in which they mainly focused on deep face representation learning, and their categorization of training losses is sub-optimal. For instance, they sorted the supervised learning of deep face representation into euclidean-distance based loss, angular/cosine-margin-based loss, and softmax loss and its variations; however, almost all the angular/cosine-margin-based losses are implemented as variations of the softmax loss rather than as an individual set. In contrast, we suggest a more reasonable categorization of the training supervision with three subsets, i.e., the classification, feature embedding and hybrid methods (in Section~\ref{sec:face_representation:supervision}). More recently, Insaf et al.~\cite{electronics9081188} provided a review of 2D and 3D face recognition from the traditional to the deep-learning era, while the scope was still limited to face representation. In summary, face recognition techniques need to be systematically reviewed with a wide scope covering all the elements of the end-to-end pipeline, while few of the existing surveys have fulfilled this job. Therefore, we systematically review the deep learning based approaches of each element in end-to-end face recognition, respectively. The review of each element covers many aspects: algorithm designs, evaluation metrics, datasets, performance comparisons, remaining challenges, and promising directions for future research. We hope this survey brings helpful insights for a better understanding of the big picture of end-to-end face recognition and for deeper exploration in a systematic way. Specifically, the main contributions can be summarized as follows: \begin{itemize} \item We provide a comprehensive survey of the recent advances of the elements in end-to-end deep face recognition, including face detection, face preprocessing, and face representation. \item We discuss the three elements from many aspects: algorithm designs, evaluation metrics, datasets, performance comparisons, etc. \item We further collect the existing challenges and promising directions for each element to facilitate future research, and also discuss the future trends from the view of the holistic framework.
\end{itemize} \section{Overview} \label{sec:overview} \begin{figure*}[t] \centering \includegraphics[height=4.2cm]{./figures/overview.png} \caption{The standard pipeline of an end-to-end deep face recognition system. First, the face detection stage localizes the face region on the input image. Then, the face preprocessing stage normalizes the detected face to a canonical view. Finally, the face representation stage extracts discriminative features for face recognition.} \label{Pipeline} \end{figure*} A typical end-to-end deep face recognition system includes three basic elements: face detection, face preprocessing, and face representation, as shown in Fig.~\ref{Pipeline}. First, face detection localizes the face region on the input image. Then, face preprocessing normalizes the detected face into a canonical layout. Finally, face representation extracts discriminative features from the preprocessed face. The features are used to calculate the similarity between faces, in order to decide whether the faces belong to the same identity. We structure the body sections (Sections~\ref{sec:face_detection},~\ref{sec:face_preprocessing} and~\ref{sec:face_representation}) with respect to the three elements, each of which is a research topic that covers abundant literature in computer vision. We give a brief overview of the three elements in this section, and dive into each of them in the following body sections. \subsection{Face Detection} \label{sec:overview:face_detection} Face detection is the first procedure of the face recognition system. Given an input image, face detection aims to find all the faces in the image and give the coordinates of their bounding boxes with confidence scores. The major challenges of face detection include varying resolution, scale, pose, illumination, occlusion, etc. The traditional methods focus on designing hand-crafted features that distinguish facial regions from the background. With the development of deep learning, deep features have been extensively used in face detection. In Section~\ref{sec:face_detection}, we provide a categorization of the deep learning based face detection methods from multiple dimensions, which includes multi-stage, single-stage, anchor-based, anchor-free, multi-task learning, CPU real-time and problem-oriented methods. Generally, the criterion distinguishing multi-stage from single-stage methods is whether the face detectors generate candidate boxes that the following one or more stages further refine for accurate predictions. Most anchor-based methods preset a number of anchors on the feature maps and then perform classification and regression on these anchors. The anchors play a crucial role in this routine. Recently, another routine, i.e., the anchor-free design, has attracted growing attention in object detection due to its flexibility and efficiency. So, we also discuss the anchor-free methods and compare them with the anchor-based ones. In addition, as face detection is the prior step in face recognition systems, the computational efficiency of face detectors is important in real-world applications. Although detectors can achieve good performance with DCNNs, it is impractical to deploy heavy-weight networks, especially on non-GPU devices. Thus, we introduce the CPU real-time methods for practical applications.
Certainly, we should not ignore another set of problem-oriented methods for face detection, since they are explicitly motivated to tackle specific challenges. From the above-mentioned perspectives, we provide an in-depth discussion of the existing deep face detection methods in Section~\ref{sec:face_detection}. It is worth noting that there exist overlapping techniques between the categories, because, as explained above, the categorization is built up from multiple perspectives. This will help readers better recognize the deep learning based methods for face detection. \begin{figure} \centering \includegraphics[height=2cm]{./figures/lmk.png} \caption{Visualization of facial landmarks of different versions. The 4-point and 5-point landmarks are often used for face alignment.} \label{fig:lmk_sample} \end{figure} \subsection{Face Preprocessing} \label{sec:overview:face_preprocessing} In the second stage, face preprocessing aims to calibrate the detected face to a canonical view (i.e., face alignment or frontalization), which is an essential procedure for improving the end-to-end performance of face recognition. Since the human face appears with a regular structure, in which the facial parts (eyes, nose, mouth, etc.) have a constant arrangement, face alignment is of great benefit to the subsequent feature computation for face recognition. Commonly, face alignment utilizes spatial transformation techniques to calibrate faces to a normalized layout. For most existing methods of face alignment, the facial landmarks, or so-called facial keypoints (as shown in Fig.~\ref{fig:lmk_sample}), are indispensable, because they are involved as the reference for similarity or affine transformations. So, facial landmark localization is a prerequisite for face alignment. The DCNN based facial landmark localization methods can be divided into three subcategories: coordinate regression based approaches, heatmap regression based approaches and 3D model fitting based approaches. The coordinate regression based approaches take the landmark coordinates as the target of the regression objective, and aim to learn the nonlinear mapping from the input face image to the landmark coordinates. The heatmap regression based methods output a likelihood response map for each landmark. Moreover, the 3D model fitting based methods predict a 3D face shape from a 2D image, and then project it onto the image plane to obtain the 2D landmarks. Without relying on facial landmarks, several methods can directly output the aligned face from the input by learning the transformation parameters. In addition, face frontalization techniques can also be applied in face preprocessing to tackle large pose variations by synthesizing identity-preserving frontal faces from non-frontal views. Both face alignment and face frontalization are common practices for calibrating an unconstrained face to a canonical view and facilitating the subsequent face representation. We will review this set of methods in Section~\ref{sec:face_preprocessing}. \subsection{Face Representation} \label{sec:overview:face_representation} As the key step of face recognition systems, face representation devotes to learning a deep face model and using it to extract features from preprocessed faces for recognition. The features are used to calculate the similarity of the matched faces.
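As a concrete illustration of this matching step, a common practice is to compare L2-normalized deep features by cosine similarity and threshold the score. A minimal Python sketch, in which the feature dimension and the decision threshold are illustrative values that are tuned per application in practice:
\begin{verbatim}
import numpy as np

def cosine_similarity(f1, f2):
    # deep features are L2-normalized before comparison
    f1 = f1 / np.linalg.norm(f1)
    f2 = f2 / np.linalg.norm(f2)
    return float(np.dot(f1, f2))

def same_identity(f1, f2, threshold=0.35):
    # threshold is illustrative; chosen on a validation set in practice
    return cosine_similarity(f1, f2) >= threshold

# usage with two hypothetical 512-dimensional face features
f_a, f_b = np.random.randn(512), np.random.randn(512)
print(cosine_similarity(f_a, f_b), same_identity(f_a, f_b))
\end{verbatim}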
In Section~\ref{sec:face_representation}, we provide a review of deep learning based methods for learning discriminative face features. We revisit these methods with respect to the network architecture and the training supervision, which are two important aspects of learning face representation. For network architecture, we introduce the general architectures which are designed for a wide range of computer vision tasks, and the special architectures which are specialized for face representation. As for training supervision, we mainly introduce four schemes: the classification, feature embedding, hybrid and semi-supervised schemes. Specifically, the classification scheme regards face representation learning as a classification problem (each identity is regarded as a class), which generally uses the softmax loss and its variants as the training supervision. The feature embedding scheme learns the representation by optimizing the distances between samples according to their identities. The hybrid scheme refers to the joint employment of classification and feature embedding for training the deep face model. These three schemes focus on supervised training. More recently, deep semi-supervised face representation learning has drawn increasing attention because it can improve face representation learning by using large amounts of unlabeled face data. Besides, we also present several specific face recognition scenarios, including cross-domain, low-shot learning and video based scenarios. \section{Face Detection} \label{sec:face_detection} Face detection is the first step in the end-to-end face recognition system, which aims to locate the face regions in the input images. Recently, with the great progress of deep convolutional neural networks, deep face detection has been extensively studied. In this section, we first categorize and compare the existing deep learning methods for face detection. Next, we introduce several popular datasets for face detection and the common metrics for evaluation. Finally, we describe some existing challenges and promising future directions. \begin{figure}[ht] \centering \includegraphics[height=2.9cm]{./figures/develop_fd.png} \caption{The development of representative face detection methods. The blue and gray represent multi-stage and single-stage methods; according to the anchor usage, the rectangle, oval, and diamond denote anchor-based, anchor-free and other methods. One can refer to Table~\ref{fd_class} for the references of these methods.} \label{Development_fd} \end{figure} \subsection{Categorization of Face Detection} \label{sec:face_detection:categorization} \begin{table*}[t] \begin{center} \caption{The categorization of deep face detection methods} \label{fd_class} \resizebox{\linewidth}{!}{ \begin{tabular}{|p{3cm}|p{5cm}|p{10cm}|} \hline {Category}&{Description}&{Method}\\ \hline {Multi-stage} & Detectors first generate candidate boxes, then the following one or more stages refine the candidates for face detection.
& Faceness~\cite{2015Faceness}, HyperFace~\cite{HyperFace}, STN~\cite{2016stn}, ConvNet-3D~\cite{Li2016FaceDW}, WIDER FACE~\cite{Yang_2016_CVPR}, SAFD~\cite{hao2017scale}, CMS-RCNN~\cite{Zhu2017CMS-RCNN}, Wan et al.~\cite{wan2016bootstrapping}, Face Faster RCNN~\cite{Jiang2017650}, DeepIR~\cite{SUN201842}, Grid loss~\cite{2016Grid_Loss}, Face R-CNN~\cite{wang2017facercnn}, Face R-FCN~\cite{wang2017facerfcn}, ZCC~\cite{Zhu2018}, FDNet~\cite{zhang2018face}, FA-RPN~\cite{Najibi_2019_CVPR}, Cascaded CNN~\cite{Cascade_CNN}, MTCNN~\cite{mtcnn}, Qin et al.~\cite{Qin2016Joint}, LLE-CNNs~\cite{Ge_2017_CVPR}, PCN~\cite{pcn}, PPN~\cite{ZENG2019PPN} \\ \hline Single-stage & Detectors accomplish face classification and bounding box regression from feature maps at one time. & DDFD~\cite{farfade2015multiview}, HR~\cite{2017HR}, Faceboxes~\cite{Zhang2017Faceboxes}, SSH~\cite{2017SSH}, S$^3$FD~\cite{Zhang2017S3FD}, DCFPN~\cite{Zhang2018DCFPN}, RetinaFace~\cite{deng2020retinaface}, FAN~\cite{wang2017fan}, FANet~\cite{zhang2017fanet}, RSA~\cite{Liu2016rsa}, S$^2$AP~\cite{Song_2018_CVPR}, PyramidBox~\cite{tang2018pyramidbox}, DF$^2$S$^2$~\cite{tian2018df2s2}, SFace~\cite{wang2018sface}, DSFD~\cite{2019DSFD}, RefineFace~\cite{zhang2019refineface}, SRN~\cite{chi2019selective}, PyramidBox++~\cite{li2019pyramidbox}, VIM-FD~\cite{zhang2019robust}, ISRN~\cite{zhang2019improved}, AInnoFace~\cite{zhang2019accurate}, ASFD~\cite{Zhang2020ASFDAA}, HAMBox~\cite{Liu_2020_HAMBox}, DenseBox~\cite{huang2015densebox}, UnitBox~\cite{UnitBox}, CenterFace~\cite{xu2019centerface} \\ \hline Anchor-based& Detectors deploy a number of dense anchors on the feature maps, and then proceed with the classification and regression on these anchors. & Wan et al.~\cite{wan2016bootstrapping}, Face Faster RCNN~\cite{Jiang2017650}, RSA~\cite{Liu2016rsa}, Face R-CNN~\cite{wang2017facercnn}, FDNet~\cite{zhang2018face}, DeepIR~\cite{SUN201842}, SAFD~\cite{hao2017scale}, SSH~\cite{2017SSH}, S$^3$FD~\cite{Zhang2017S3FD}, DCFPN~\cite{Zhang2018DCFPN}, Faceboxes~\cite{Zhang2017Faceboxes}, FAN~\cite{wang2017fan}, FANet~\cite{zhang2017fanet}, PyramidBox~\cite{tang2018pyramidbox}, ZCC~\cite{Zhu2018}, S$^2$AP~\cite{Song_2018_CVPR}, DF$^2$S$^2$~\cite{tian2018df2s2}, SFace~\cite{wang2018sface}, RetinaFace~\cite{deng2020retinaface}, DSFD~\cite{2019DSFD}, RefineFace~\cite{zhang2019refineface}, SRN~\cite{chi2019selective}, VIM-FD~\cite{zhang2019robust}, PyramidBox++~\cite{li2019pyramidbox}, FA-RPN~\cite{Najibi_2019_CVPR}, ISRN~\cite{zhang2019improved}, AInnoFace~\cite{zhang2019accurate}, Group Sampling~\cite{Ming_2019_Group_Sampling}, HAMBox~\cite{Liu_2020_HAMBox} \\ \hline Anchor-free & Detectors directly find faces without preset anchors. &DenseBox~\cite{huang2015densebox}, UnitBox~\cite{UnitBox}, CenterFace~\cite{xu2019centerface} \\ \hline Multi-task learning & Detectors jointly learn the classification and bounding box regression with other tasks (e.g., landmark localization) in one framework. & STN~\cite{2016stn}, ConvNet-3D~\cite{Li2016FaceDW}, HyperFace~\cite{HyperFace}, MTCNN~\cite{mtcnn}, Face R-CNN~\cite{wang2017facercnn}, RetinaFace~\cite{deng2020retinaface}, DF$^2$S$^2$~\cite{tian2018df2s2}, FLDet~\cite{2019fldet}, PyramidBox++~\cite{li2019pyramidbox}, CenterFace~\cite{xu2019centerface} \\ \hline CPU real-time & Detectors can run on a single CPU core in real-time for VGA-resolution images.
& Cascade CNN~\cite{Cascade_CNN}, STN~\cite{2016stn}, MTCNN~\cite{mtcnn}, DCFPN~\cite{Zhang2018DCFPN}, Faceboxes~\cite{Zhang2017Faceboxes}, PCN~\cite{pcn}, RetinaFace~\cite{deng2020retinaface}, FLDet~\cite{2019fldet}, FBI~\cite{2019fbi}, PPN~\cite{ZENG2019PPN}, CenterFace~\cite{xu2019centerface} \\ \hline Problem-oriented & Detectors focus on solving specific challenges in face detection, such as tiny faces, occluded faces, rotated and blurry faces. & HR~\cite{2017HR}, SSH~\cite{2017SSH}, S$^3$FD~\cite{Zhang2017S3FD}, Bai et al.~\cite{bai2018finding}, PyramidBox~\cite{tang2018pyramidbox}, Grid loss~\cite{2016Grid_Loss}, FAN~\cite{wang2017fan}, LLE-CNNs~\cite{Ge_2017_CVPR}, PCN~\cite{pcn} \\ \hline \end{tabular}} \end{center} \end{table*} In order to present the current face detection methods with a clear categorization, we group them into seven sets, i.e., multi-stage, single-stage, anchor-based, anchor-free, multi-task learning, CPU real-time, and problem-oriented methods. These sets are not necessarily exclusive, because we establish the categorization from multiple perspectives. For example, multi-stage and single-stage methods are distinguished by the evidence of detection proposals and stage-wise learning, while anchor-based and anchor-free ones are divided according to the anchor usage. So, a detection method could be single-stage and anchor-based simultaneously. This does not impede our presentation, but facilitates readers in identifying the very approaches they are interested in. \subsubsection{Multi-stage methods} Following the coarse-to-fine manner or the proposal-to-refine strategy, multi-stage detectors first generate a number of candidate boxes, and then refine the candidates in one or more additional stages. The first stage employs a sliding window to propose the candidate bounding boxes at a given scale, and the latter stages reject the false positives and refine the remaining boxes with higher resolution. In such a regime, the cascaded architecture~\cite{Cascade_CNN,mtcnn,pcn,ZENG2019PPN} is naturally an effective solution for coarse-to-fine face detection. Face detection can be considered as a specific objective of general object detection. Thus, many works~\cite{HyperFace,Zhu2017CMS-RCNN,Jiang2017650,wang2017facercnn,zhang2018face,SUN201842,Ge_2017_CVPR,2016stn,Li2016FaceDW,Najibi_2019_CVPR} inherited the remarkable achievements of general object detectors. For example, Faster R-CNN~\cite{ren2015faster} is a classic and effective detection framework which employs a region proposal network (RPN) to generate region proposals with a set of dense anchor boxes in the first stage and then refines the proposals in the second stage. Based on the proposal-to-refine scheme of Faster R-CNN, CMS-RCNN~\cite{Zhu2017CMS-RCNN} presented a contextual multi-scale region based CNN that exploits the features around faces and bodies to accomplish small face detection. Several works~\cite{Jiang2017650,wang2017facercnn,zhang2018face,SUN201842} improved Faster R-CNN for face detection from multiple aspects, such as improved loss design, online hard example mining, multi-scale training and test strategies, feature concatenation, etc.
Like the efforts on the refinement stages, the improvement of the proposal stage~\cite{2016stn,Li2016FaceDW,Najibi_2019_CVPR} also draws much interest, such as using auxiliary facial information, or sharing the classification parameters across anchors of different ratios. Besides, since CNNs lack the ability of scale invariance, additional parameters and computational costs are required to handle the facial scale variation, which is a key challenge of face detection. Therefore, estimating the facial scale is a reasonable practice~\cite{hao2017scale,Song_2018_CVPR} to help detect faces at the appropriate scale. Apart from the modeling, how to train the multi-stage detector is another interesting topic. The multi-stage detectors are commonly trained stage by stage, since each stage is supervised by its own objective. This may lead to inferior optimization. To handle this issue, a joint training strategy~\cite{Qin2016Joint} was designed for both the Cascaded CNN~\cite{Cascade_CNN} and Faster R-CNN to achieve end-to-end optimization and better performance on face detection. \begin{figure}[t] \centering \includegraphics[height=4.2cm]{./figures/face_detectors.png} \caption{The illustration of single-stage and multi-stage face detectors. The single-stage detector directly accomplishes the face detection from the entire feature maps, whereas the multi-stage detector adopts a proposal stage to generate candidates and one or more stages to refine these candidates.} \label{face_detectors} \end{figure} \subsubsection{Single-stage methods} The single-stage methods accomplish the candidate classification and bounding box regression from the entire feature maps directly, without involving a proposal stage. A classic single-stage structure comes from a general object detector named the Single Shot multibox Detector (SSD)~\cite{Liu_2016_ssd}. Similar to RPN, SSD presets dense anchor boxes over different ratios and scales on the feature maps. SSD is a prevailing framework in object detection because it runs much faster than Faster R-CNN while maintaining comparable accuracy. So, many developers employed SSD for face detection in applications. However, SSD is not robust enough to large scale variation, especially to small faces. Afterward, many methods~\cite{Zhang2017S3FD,Zhang2017Faceboxes,Zhang2018DCFPN,2019fbi,tang2018pyramidbox} studied modifications of SSD for face detection. For example, Zhang et al.~\cite{Zhang2017S3FD} designed a scale-equitable version to obtain adequate features from faces of different scales. Many state-of-the-art face detectors resort to the feature pyramid network (FPN)~\cite{lin2016feature}, which consists of a top-down architecture with skip connections and merges the high-level and low-level features for detection. The high-level feature maps have more semantic information, while the low-level layers have smaller receptive fields but more detailed local information. The feature fusion preserves the advantages from both sides, and brings great progress in detecting objects with a wide range of scales. Therefore, many single-stage face detectors~\cite{2017SSH,wang2017fan,zhang2017fanet,tang2018pyramidbox,tian2018df2s2,li2019pyramidbox,deng2020retinaface,chi2019selective,2019DSFD,zhang2019improved} are developed with the advantage of FPN. These methods not only handle the scale issue in face detection via FPN, but also attempt to solve the inherent shortcomings of the original FPN, such as the conflict of receptive fields.
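To make the top-down fusion concrete, the following PyTorch-style sketch shows the upsample-and-add merging that FPN performs between two pyramid levels; the channel sizes and the $3\times3$ smoothing convolution are illustrative choices, not tied to any particular detector above:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """FPN-style merge of a high-level map into a lower-level one."""
    def __init__(self, c_low, c_high, c_out=256):
        super().__init__()
        self.lat_low = nn.Conv2d(c_low, c_out, kernel_size=1)   # lateral 1x1
        self.lat_high = nn.Conv2d(c_high, c_out, kernel_size=1)
        self.smooth = nn.Conv2d(c_out, c_out, kernel_size=3, padding=1)

    def forward(self, low, high):
        # upsample the semantically strong map to the finer resolution
        up = F.interpolate(self.lat_high(high), size=low.shape[-2:],
                           mode='nearest')
        # element-wise addition merges local detail and semantics
        return self.smooth(self.lat_low(low) + up)

# usage on hypothetical feature maps from two backbone stages
fuse = TopDownFusion(c_low=512, c_high=1024)
merged = fuse(torch.randn(1, 512, 40, 40), torch.randn(1, 1024, 20, 20))
\end{verbatim}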
The special feature fusion operations~\cite{tang2018pyramidbox,2019DSFD,li2019pyramidbox} are also suitable for tackling the hard cases of face detection, such as blurry and occluded faces. Although the single-stage methods have the advantage of high efficiency, their detection accuracy is below that of the two-stage methods. This is partially because of the imbalance problem of positives and negatives brought by the dense anchors, whereas the proposal-to-refine scheme is able to alleviate this issue. Accordingly, RefineDet~\cite{Zhang_2018_refinedet} set up an anchor refinement module in its network to remove a large number of negatives. Inspired by RefineDet, SRN~\cite{chi2019selective} presented a selective two-step classification and regression method; the two-step classification is performed at the low-level layers to reduce the search space of the classifier, and the two-step regression is performed at the high-level layers to obtain accurate locations. Later on, VIM-FD~\cite{zhang2019robust}, ISRN~\cite{zhang2019improved}, AInnoFace~\cite{zhang2019accurate} and RefineFace~\cite{zhang2019refineface} improved SRN with several effective techniques, such as training data augmentation, improved feature extractors and training supervision, anchor assignment and matching strategies, multi-scale test strategies, etc. Most of the aforementioned methods need to preset anchors for face detection, while some representative single-stage detectors, such as DenseBox~\cite{huang2015densebox}, UnitBox~\cite{UnitBox} and CenterFace~\cite{xu2019centerface}, fulfil the detection without preset anchors. We will present them as the anchor-free type in the next subsection. \subsubsection{Anchor-based and anchor-free methods} As shown in Table~\ref{fd_class}, most current face detectors are anchor-based due to the long-time development and superior performance. Generally, the dense anchors are preset on the feature maps; the classification and bounding box regression are then performed on these anchors one or more times, and the accepted ones are finally output as the detection results. Therefore, the anchor allocation and matching strategy is crucial to the detection accuracy. For example, the scale compensation for anchor matching, proposed by S$^3$FD~\cite{Zhang2017S3FD}, can effectively improve the recall of tiny and outer faces. Besides, S$^3$FD utilized a max-out label mechanism to reduce the large number of negatives, which is a frequent issue in the anchor-based mechanism as well. Zhu et al.~\cite{Zhu2018} introduced an expected max overlapping score (EMO) to evaluate the quality of matched anchors, and proposed several techniques to encourage the true positives to achieve high EMO scores. Since the scale distribution of faces is imbalanced in the training dataset, Group Sampling~\cite{Ming_2019_Group_Sampling} sorts the anchor boxes by their scales and maintains the same number of samples for each group during training. More recently, HAMBox~\cite{Liu_2020_HAMBox} proposed an online anchor compensation strategy to help the detection of outer faces, taking advantage of unmatched anchors that nonetheless provide favorable regression. The anchor-based methods have dominated the state of the art in face detection, but they have several weaknesses. The hyperparameters (e.g., scale, stride, ratio, number) of preset anchors need to be carefully tuned for each particular dataset, which limits the generalization ability of the detectors.
Besides, the dense anchors increase the computational cost and bring the imbalance problem of positive and negative anchors. Anchor-free methods~\cite{Law2018CornerNet,zhu2019fs,Tian2019FCOS} attract growing attention in general object detection. As for face detection, certain pioneering works have emerged in recent years. DenseBox~\cite{huang2015densebox} and UnitBox~\cite{UnitBox} attempt to predict the pixel-wise bounding box and the confidence score. Besides, CenterFace~\cite{xu2019centerface} regards face detection as a generalized task of keypoint estimation, which predicts the facial center point and the size of the bounding box on the feature map. In brief, the anchor-free detectors get rid of the preset anchors and achieve better generalization capacity. Regarding the detection accuracy, further exploration is needed for better robustness to false positives and stability in the training process. \subsubsection{Multi-task learning methods} Multi-task learning has been widely studied in the computer vision community. Generally, the multi-task learning based approaches are designed for solving a problem together with other related tasks by sharing the visual representation. Here, we introduce the multi-task learning methods that train the face detector with associated facial tasks or auxiliary supervision branches to enrich the feature representation and detection robustness. Many multi-task learning methods~\cite{zhang2014mt,huang2015densebox,2016stn,mtcnn,Li2016FaceDW,2019fldet,xu2019centerface} have explored the joint learning of face detection and facial landmark localization. Among them, MTCNN~\cite{mtcnn} is the most representative one, which exploits the inherent correlation between facial bounding boxes and landmarks by a three-stage cascaded network. Subsequently, HyperFace~\cite{HyperFace} fused the low-level features as well as the high-level features to simultaneously conduct four tasks, including face detection, facial landmark localization, gender classification and pose estimation. Based on RetinaNet~\cite{lin2017focal}, RetinaFace~\cite{deng2020retinaface} integrated face detection, facial landmark localization and dense 3D face regression in one framework. From the multi-task routine, we can see that face detectors can benefit from the associated facial tasks. Moreover, certain methods~\cite{wang2017facercnn,tian2018df2s2,wang2018sface,li2019pyramidbox} exploited auxiliary supervision branches, such as segmentation branches, anchor-free branches, etc. These branches are used to boost the training of face detection. \begin{table}[t] \begin{center} \caption{Running efficiency of CPU real-time face detectors. ``Accuracy (\%)'' denotes the true positive rate at 1000 false positives on FDDB.
} \label{cpu_detection} \resizebox{0.6\linewidth}{!}{ \begin{tabular}{|c|c|c|c|} \hline {Method}&{CPU-model}&{Speed (FPS)}&{Accuracy ($\%$)}\\ \hline\hline Faceboxes~\cite{Zhang2017Faceboxes} &-&20&96.0\\ \hline STN~\cite{2016stn} &i7-4770K&30&-\\ \hline DCFPN~\cite{Zhang2018DCFPN} &2.60GHz&30&-\\ \hline FBI~\cite{2019fbi} &-&20&96.8\\ \hline PCN~\cite{pcn}&3.40GHz&29&-\\ \hline PPN~\cite{ZENG2019PPN} &i5&60&-\\ \hline RetinaFace~\cite{deng2020retinaface}&i7-6700K&60&-\\ \hline CenterFace~\cite{xu2019centerface}&-&30&98.0\\ \hline \end{tabular}} \end{center} \end{table} \subsubsection{CPU real-time methods} Although state-of-the-art face detectors have achieved great success in accuracy, their efficiency is not sufficient for real-world applications, especially on non-GPU devices. According to the demand for inference speed on CPUs, we collect the CPU real-time face detectors~\cite{2016stn,Zhang2017Faceboxes,Zhang2018DCFPN,pcn,2019fldet,ZENG2019PPN,xu2019centerface,deng2020retinaface} here for convenient retrieval. These detectors are able to run at a speed of at least 20 frames per second (FPS) on a single CPU with VGA-resolution input images. Table~\ref{cpu_detection} shows their running efficiency. Usually, the convolution operation consumes more time when the input size or the number of channels gets larger. To speed up, a lightweight backbone~\cite{deng2020retinaface} and rapidly digested convolutional layers~\cite{Zhang2017Faceboxes,Zhang2018DCFPN} are common practices concerning the network architecture. Knowledge distillation is another choice to boost the performance of lightweight face detectors~\cite{2019fbi}. Moreover, region-of-interest (RoI) convolution~\cite{2016stn} was introduced to calculate the convolution only on the RoI region. \subsubsection{Problem-oriented methods} In this subsection, we highlight some problem-oriented methods which are designed against a variety of specific challenges in face detection. Detecting faces with a wide range of scales is a long-existing challenge in face detection. Many methods~\cite{2017HR,2017SSH,Zhang2017S3FD,tang2018pyramidbox,Ming_2019_Group_Sampling} were designed for scale-invariant face detection, including scale selection, multi-scale detection, dense anchor settings, scale balancing strategies, etc. Besides, generating clear super-resolution images~\cite{bai2018finding} is a feasible approach to locate blurry and tiny faces. The partially visible faces (i.e., with occlusion) harm the performance of conventional face detectors. A number of methods~\cite{2015Faceness,2016Grid_Loss,wang2017fan,Ge_2017_CVPR} exploited specific techniques for detecting occluded faces. For example, Faceness~\cite{2015Faceness} computes the confidence score according to the occurrence and spatial arrangement of the facial parts, so occluded faces will be recalled with high confidence. FAN~\cite{wang2017fan} generates occluded faces for data augmentation, and introduces an anchor-level attention algorithm to emphasize the features from facial regions. Likewise, in-plane rotation is another factor that impedes face detection. To deal with this problem, PCN~\cite{pcn} progressively calibrates the candidates against the rotation towards the upright orientation.
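Across the anchor-based designs reviewed above, training ultimately reduces to labeling the preset anchors by their overlap with the ground-truth boxes (the IoU measure is defined formally in the next subsection). A minimal Python sketch, in which the positive and negative thresholds are illustrative rather than taken from any particular detector:
\begin{verbatim}
def iou(box_a, box_b):
    # boxes given as (x1, y1, x2, y2)
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def label_anchor(anchor, gt_boxes, pos_thr=0.5, neg_thr=0.3):
    best = max(iou(anchor, gt) for gt in gt_boxes)
    if best >= pos_thr:
        return 1    # positive anchor: matched to a face
    if best < neg_thr:
        return 0    # negative anchor: background
    return -1       # ambiguous anchor: ignored during training
\end{verbatim}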
\subsection{Evaluation Metrics and Datasets} \label{sec:face_detection:evaluation} \subsubsection{Metrics} \label{metrics} As in general object detection, average precision (AP) is a widely used metric for evaluating the performance of face detection. AP is derived from the detection precision-recall curve. To obtain precision and recall, the Intersection over Union (IoU) is used to measure the overlap of the predicted bounding box ($Box_{p}$) and the ground-truth bounding box ($Box_{gt}$), which can be formulated as: \begin{equation} \mathrm{IoU}=\frac{area(Box_{p}\cap Box_{gt})} {area(Box_{p} \cup Box_{gt})}. \end{equation} The output of a face detector contains a confidence score and a predicted bounding box. The confidence score, compared with a confidence threshold, is used to determine whether to accept the prediction. An accepted prediction is regarded as a true positive (TP) when the IoU is larger than a preset threshold (usually 0.5 for face detection). Otherwise, it is regarded as a false positive (FP). After determining the TPs and FPs, a precision-recall curve can be obtained by varying the confidence threshold. AP is computed as the mean precision at a series of uniformly-spaced discrete recall levels~\cite{Everingham2010The}. The receiver operating characteristic (ROC) curve is also used to evaluate the performance of face detection in FDDB~\cite{fddbTech}. FDDB proposed two metrics (i.e., discrete and continuous) to draw ROC curves of the true positive rate over the number of false positives. For the discrete metric, a predicted bounding box is regarded as a true positive if the IoU is larger than 0.5; for the continuous metric, the score equals the matched IoU, reflecting how well the prediction fits the ground truth. In addition, frames per second (FPS) is used to measure the runtime efficiency in practical applications. \begin{table}[t] \begin{center} \caption{Popular datasets for face detection.} \label{fd_dataset} \resizebox{0.9\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline {Datasets}&{Year}&{$\#$Image}&{$\#$Face}&{$\#$ of faces per image}&{Description}\\ \hline\hline \multicolumn{6}{|c|}{Training}\\ \hline AFLW~\cite{Kostinger2011alfw}&2011&21,997&25,993&1.18& Training source for face detection.\\ \hline WIDER FACE~\cite{Yang_2016_CVPR}&2016&16k&199k&12.43&The largest face detection dataset.\\ \hline\hline \multicolumn{6}{|c|}{Test}\\ \hline FDDB~\cite{fddbTech}&2010&2,845&5,171&1.82& A classic face detection benchmark. \\ \hline AFW~\cite{Zhu2012Face}&2012&205&473&2.31& Multiple facial annotations.\\ \hline PASCAL faces~\cite{YAN2014790}&2014&851&1,335&1.57&Large facial variations.\\ \hline MALF~\cite{faceevaluation15}&2015&5,250&11,931&2.27& Fine-grained evaluation.\\ \hline WIDER FACE~\cite{Yang_2016_CVPR}&2016&16k&194k&12.12&The largest face detection dataset.\\ \hline MAFA~\cite{Ge_2017_CVPR} &2017&30,811&35,806&1.16&Masked face detection.\\ \hline \end{tabular}} \end{center} \end{table} \subsubsection{Datasets} We introduce several widely used datasets for face detection. Their statistics are given in Table~\ref{fd_dataset}. Among them, FDDB~\cite{fddbTech} is a classic dataset for unconstrained face detection, which includes low resolution faces, occluded faces and difficult pose variations. It is noteworthy that FDDB uses ellipses as face annotations instead of rectangular boxes. AFW~\cite{Zhu2012Face} is collected from Flickr, and includes cluttered backgrounds and large variations, such as ages, sunglasses, make-ups and expressions.
The images in the PASCAL faces dataset~\cite{YAN2014790} are taken from the Pascal person layout dataset~\cite{everingham2011pascal}. MALF~\cite{faceevaluation15} is designed for fine-grained evaluation of face detection in the wild. MAFA~\cite{Ge_2017_CVPR} is a masked face detection benchmark with various orientations and occlusion degrees. The above datasets are used for performance evaluation, while the AFLW~\cite{Kostinger2011alfw} dataset is used for training only. In addition, the WIDER FACE dataset~\cite{Yang_2016_CVPR} has subsets for training, validation and test. Each subset has three difficulty levels: easy, medium and hard, assessed with EdgeBox~\cite{Zitnick2014Edge}. WIDER FACE has promoted the face detection community in the past few years, providing a large amount of training data and a challenging test benchmark with large variations. Fig.~\ref{WIDER_FACE_test} shows the precision-recall curves of state-of-the-art methods on the WIDER FACE test subsets. \subsection{Challenges and Future Work} \label{sec:face_detection:challenge} In this section, we have provided a review of deep learning based face detection from multiple aspects. As we can see, face detection techniques have made great progress in recent years. The advance of face detection has also promoted other facial tasks, such as face recognition and facial attribute analysis. However, certain difficulties and challenges still remain. \begin{itemize} \item ~\textbf{Running efficiency}: The state-of-the-art detectors have made great progress, but a trade-off between detection accuracy and efficiency is still needed. For example, in many applications, resizing the input image is a common practice to accelerate detection, while it harms the recall of tiny faces as well. \item ~\textbf{Image variations}: In unconstrained conditions, such as surveillance video, human faces with large variations of pose and occlusion tend to be missed by detectors, whereas the diverse image backgrounds often lead to false positives. Besides, detecting faces with a wide range of scales is also a great challenge. \end{itemize} Face detection is the most advanced technique in the deep face recognition system. With consideration of the remaining issues and the state of the art, we collect several promising directions of future work for deep face detection. \begin{itemize} \item ~\textbf{Effective and unified anchor settings}: The existing anchor-based methods design the anchor setting from many aspects, such as assignment and matching strategies~\cite{Zhang2017S3FD,tang2018pyramidbox,2019DSFD,li2019pyramidbox,Liu_2020_HAMBox}, attribute tuning~\cite{Zhang2017Faceboxes,Zhu2018,chi2019selective}, and sampling strategies~\cite{Ming_2019_Group_Sampling}. The well-tuned anchors may limit the generalization ability of face detectors. Hence, it is worth exploring an effective and unified anchor setting that can be used for different application demands. \item ~\textbf{Anchor-free face detection framework}: Anchor-free detectors~\cite{Law2018CornerNet,zhu2019fs,Tian2019FCOS} attract increasing attention in general object detection because they show flexible designs and more potential in generalization ability. However, only a small number of works~\cite{huang2015densebox,UnitBox,xu2019centerface} have explored the anchor-free mechanism for face detection. The advantages of the anchor-free framework can further promote the development of face detection.
\item ~\textbf{More efficient detection framework}: Because face detection is the first step in face recognition systems, the computational efficiency of face detectors is important for real-world applications. Many face detectors achieve high detection accuracy with heavy backbone networks, whereas the efficiency of lightweight detectors is much more important on mobile and embedded devices. Therefore, it is crucial to design a more efficient detection framework while preserving detection accuracy. \end{itemize} \begin{figure}[t] \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=3cm]{./figures/test_easy.png} \caption{Test: Easy} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=3cm]{./figures/test_med.png} \caption{Test: Medium} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=3cm]{./figures/test_hard.png} \caption{Test: Hard} \end{subfigure} \caption{The precision-recall curves~\cite{WIDERFACE} on the WIDER FACE test subsets. The ranking is updated according to~\cite{Zhang2020ASFDAA}.} \label{WIDER_FACE_test} \end{figure} \section{Face Preprocessing} \label{sec:face_preprocessing} Given the detected face region, face preprocessing aims to calibrate unconstrained faces to a canonical layout to facilitate the downstream tasks of recognition and analysis, and is an essential intermediate step in the end-to-end face recognition system. In this section, we review two mainstream routines for face preprocessing, namely face alignment and face frontalization. In order to remove scale, rotation and translation variations, face alignment employs spatial transformation to calibrate faces to a predefined canonical layout with the help of facial landmarks. Several other methods can generate aligned faces without relying on facial landmarks. Thus, we categorize face alignment methods into landmark-based and landmark-free ones. Furthermore, face frontalization aims to synthesize frontal faces from non-frontal views, which can help large-pose face recognition and face data augmentation. Fig.~\ref{developmen_fp} shows the development of representative methods for face preprocessing. \begin{figure}[ht] \centering \includegraphics[height=2.5cm]{./figures/develop_fp.png} \caption{The development of representative methods for face preprocessing. The orange, blue, green, yellow and gray represent coordinate regression, heatmap regression, 3D model fitting, landmark-free face alignment methods and face frontalization, respectively.
One can refer to Table~\ref{fp_class} for the references of these methods.} \label{developmen_fp} \end{figure} \begin{table*}[t] \begin{center} \caption{The categorization of face preprocessing methods.} \label{fp_class} \resizebox{\linewidth}{!}{ \begin{tabular}{|p{3cm}|p{1.5cm}|p{5cm}|p{8cm}|} \hline \multicolumn{2}{|c|}{Category}&\multicolumn{1}{|c|}{Description}&\multicolumn{1}{|c|}{Method}\\ \hline \multicolumn{1}{|c|}{Landmark-based Face Alignment} & Coordinate regression& Take the landmark coordinates as the target of regression, and learn the nonlinear mapping from the input face image to the landmark coordinates.& DCNC~\cite{sun2013}, EFLL~\cite{Zhou2013ExtensiveFL}, CFAN~\cite{Jie2014Coarse}, TCDCN~\cite{Zhang2014Facial}, RAR~\cite{Xiao2016Robust}, MDM~\cite{Trigeorgis2016MDM}, TSR~\cite{Lv2016TSR}, JFA~\cite{Xu2017JFA}, RDN~\cite{Liu2020LearningRN}, SIR~\cite{Fan2018SelfReinforcedCR}, TCNN~\cite{Wu2018TCNN}, DSRN~\cite{Miao2018DirectSR}, SBR~\cite{Dong2018SBR}, Wing loss~\cite{Feng2018wingloss}, AAN~\cite{Yue2018AttentionalAN}, Lai~\emph{et al.}~\cite{Lai2019EnhancedNM}, ODN~\cite{Zhu_2019_CVPR}, HyperFace~\cite{HyperFace}, MTCNN~\cite{mtcnn}, RetinaFace~\cite{deng2020retinaface}, FLDet~\cite{2019fldet}, CenterFace~\cite{xu2019centerface}\\ \cline{2-4} & Heatmap regression& Output the likelihood response maps of each landmark. & CALE~\cite{Bulat2016Convolutional}, RED~\cite{Peng2016RED}, Yang~\emph{et al.}~\cite{Jing2017Stacked}, JMFA~\cite{Deng2019JMVFA}, FAN~\cite{Bulat2017HowFA}, LAB~\cite{Wu2018LAB}, SAN~\cite{Dong_2018_CVPR}, FALGCN~\cite{Merget_2018_CVPR}, PCD-CNN~\cite{Kumar2018Disentangling3P}, ELT~\cite{Honari2018ImprovingLL}, HRNet~\cite{wang2020deep}, Zhang~\emph{et al.}~\cite{Zhang2019StackedHN}, SA~\cite{Liu2019SemanticAF}, FHR~\cite{Tai2019TowardsHA}, Awing loss~\cite{Wang2019adawing}, DeCaFA~\cite{Dapogny2019decafa}, HSLE~\cite{Zou2019LearningRF}, FAB~\cite{Sun2019FABAR}, KDN~\cite{Chen2019FaceAW}, Dong~\emph{et al.}~\cite{Dong2019TeacherSS}, Robinson~\emph{et al.}~\cite{Robinson2019LaplaceLL}, LUVLi~\cite{Kumar2020LUVLiFA}, PropagationNet~\cite{Huang_2020_PropagationNet}\\ \cline{2-4} &3D model fitting& Infer a 3D face shape from a 2D face image, and then project it back to the image plane to obtain 2D landmarks.
& LPFA~\cite{Jourabloo2016_D3PF}, 3DDFA~\cite{Zhu2016_3DDFA}, FacePoseNet~\cite{Chang2017FacePoseNet}, PIFASCNN~\cite{Jourabloo2017}, DeFA~\cite{Liu2017DenseFA}, RDR~\cite{Xiao2017RDR}, Bhagavatula~\emph{et al.}~\cite{Bhagavatula2017FasterTR}, Zhang~\emph{et al.}~\cite{Zhang2018FaceAA}, PR-Net~\cite{feng2018joint}, PAFA~\cite{Li2019PoseAwareFA}\\ \hline \multicolumn{2}{|c|}{Landmark-free Face Alignment} &Directly output aligned faces without the explicit use of landmarks.& Hayat~\emph{et al.}~\cite{Hayat2017JointRA}, E2e~\cite{Zhong2017e2e}, ReST~\cite{Wu2017ReST}, GridFace~\cite{Zhou2018GridFaceFR}, Wei~\emph{et al.}~\cite{Wei2020BalancedAF}, RDCFace~\cite{Zhao_2020_CVPR}\\ \hline \multicolumn{2}{|c|}{Face Frontalization} &Synthesize frontal faces from non-frontal views.& FIP~\cite{Zhu2013Deep}, Zhang~\emph{et al.}~\cite{Zhang2013RandomFG}, Zhu~\emph{et al.}~\cite{zhu2014recover}, MVP~\cite{Zhu2014MultiViewPA}, SPAE~\cite{Kan2014StackedPA}, CPF~\cite{Yim2015CPF}, Yang~\emph{et al.}~\cite{Yang2015WeaklysupervisedDW}, HPEN~\cite{Zhu2015HighfidelityPA}, Cole~\emph{et al.}~\cite{Cole2017SynthesizingNF}, DR-GAN~\cite{Tran2017DisentangledRL}, FF-GAN~\cite{yin2017ffgan}, TP-GAN~\cite{Huang2017tpgan}, PIM~\cite{Zhao2018pim}, CAPG-GAN~\cite{Hu2018CAPG}, CR-GAN~\cite{Yu2018CR}, UV-GAN~\cite{Deng2018UVGANAF}, 3D-PIM~\cite{Jian20183D}, PW-GAN~\cite{Zhang2019PoseWeightedGF}, A3F-CNN~\cite{Zhang2019FaceFU}, HF-PIM~\cite{Cao2019TowardsHF}, FNM~\cite{Qian2019UnsupervisedFN}\\ \hline \end{tabular}} \end{center} \end{table*} \subsection{Landmark-based Face Alignment} \label{sec:face_preprocessing:lm_based_algin} Landmark-based face alignment utilizes spatial transformation to calibrate faces to a predefined canonical layout, with facial landmarks as the reference. Therefore, accurate facial landmark localization is the core task of landmark-based alignment. According to the existing landmark localization methods, we sort the landmark-based alignment methods into three subcategories, \emph{i.e.}, coordinate regression based methods, heatmap regression based methods and 3D model fitting based methods. \subsubsection{Coordinate regression} The coordinate regression based methods regard the landmark coordinates as the objective of regression via neural networks. In other words, they focus on learning the nonlinear mapping from the face images to the landmark coordinate vectors. Following the coarse-to-fine manner, most methods~\cite{sun2013,Zhou2013ExtensiveFL,Jie2014Coarse,Lv2016TSR} employed cascaded regression to progressively refine the landmark coordinates predicted in previous stages. Besides, since the recurrent neural network (RNN) is able to model the historical information in the cascaded refinement process, RAR~\cite{Xiao2016Robust} and MDM~\cite{Trigeorgis2016MDM} employed CNN and RNN together to extract global features and refine the prediction. Multi-task learning is also a common routine to facilitate landmark localization with related facial tasks. A number of methods~\cite{HyperFace, mtcnn,deng2020retinaface,2019fldet,xu2019centerface} are designed to jointly detect faces and predict facial landmarks. In fact, these methods are initially designed for the face detection task, and output a set of facial landmarks simultaneously.
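As a minimal illustration of the coordinate regression scheme, the following PyTorch-style sketch maps a backbone feature vector to $2K$ values interpreted as $K$ landmark coordinates; the feature dimension, the number of landmarks and the plain linear head are illustrative assumptions rather than the configuration of any cited method.
\begin{verbatim}
import torch.nn as nn

class CoordRegressionHead(nn.Module):
    def __init__(self, feat_dim=512, num_landmarks=68):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 2 * num_landmarks)

    def forward(self, feat):  # feat: (N, feat_dim) backbone features
        # Predict K (x, y) pairs as one flat vector, then reshape.
        return self.fc(feat).view(feat.size(0), -1, 2)

# Training minimizes a distance (e.g. L2 or smoothed L1) between the
# predicted and ground-truth coordinates; see the loss discussion below.
\end{verbatim}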
Among methods specifically designed for facial landmark localization, TCDCN~\cite{Zhang2014Facial} and JFA~\cite{Xu2017JFA} benefit from auxiliary facial attributes, such as expression, gender and head pose. The cascaded regression based methods have shown great advantages in solving the facial landmark localization problem. However, since they employ multi-stage regressors to refine the prediction, the performance largely depends on the initial prediction, which means that an improper shape initialization will likely lead to inaccurate predictions. Besides, the multiple regressors increase the computational cost. To address these shortcomings, several methods~\cite{Fan2018SelfReinforcedCR,Miao2018DirectSR,Yue2018AttentionalAN,Wu2018TCNN,Zhu_2019_CVPR,Liu2020LearningRN} developed new regression approaches for facial landmark localization, such as self-iterative regression, direct shape regression, reasoning-decision regression, \emph{etc}. The learning of coordinate regression mainly employs the L1, L2, or smoothed L1 loss functions. These objectives drive the learning process to pay more attention to large-error samples, promoting convergence towards accurate predictions; on the other hand, they bring high sensitivity to outliers. To tackle this problem, Feng~\emph{et al.}~\cite{Feng2018wingloss} improved the loss function with the wing loss, which amplifies the impact of samples with small or medium range errors. Another problem is that optimization with respect to the Euclidean distance of landmarks might result in a gap between training and test, since the test metric usually employs the normalized mean error (NME). To solve this issue, Lai~\emph{et al.}~\cite{Lai2019EnhancedNM} proposed an enhanced normalized mean error loss to optimize the landmark localization network. The above methods studied facial landmark localization on still images. For facial landmark localization in video, leveraging the temporal information across frames becomes necessary. TSTN~\cite{Liu2018TwoStreamTN} developed a two-stream architecture, which locates the landmarks from a single frame and captures the temporal consistency for refinement. Besides, SBR~\cite{Dong2018SBR} proposed to encourage the optical flow coherency of detected landmarks when training with video data. \subsubsection{Heatmap regression} In contrast to coordinate regression, the heatmap regression based methods output a likelihood response map for each landmark. The early exploration~\cite{Bulat2016Convolutional} of heatmap regression studied how to aggregate the score maps and refine the prediction with DCNNs. Later on, Newell~\emph{et al.}~\cite{Newell2016StackedHN} designed the stacked hourglass (HG) network to generate heatmaps for human pose estimation. The hourglass is a bottom-up and top-down architecture; a deep stack of such modules built from bottleneck blocks, trained with intermediate supervision, plays an important role. Fig.~\ref{hourglass} is an illustration of the stacked hourglass network. The stacked hourglass network has achieved great success in human pose estimation. As the facial landmark localization task is similar to human pose estimation, many recent works~\cite{Jing2017Stacked,Bulat2017HowFA,Deng2019JMVFA,Zhang2019StackedHN,Wang2019adawing,Huang_2020_PropagationNet} adopted the stacked hourglass network for facial landmark localization and greatly improved the state-of-the-art performance.
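To illustrate the heatmap regression target itself, independent of the hourglass architecture, the sketch below renders a Gaussian training heatmap for one landmark and decodes a predicted heatmap back to coordinates via the argmax response; the grid size and $\sigma$ are illustrative values.
\begin{verbatim}
import numpy as np

def gaussian_heatmap(center, size=64, sigma=1.5):
    # Ground-truth target: a 2D Gaussian bump at the landmark location.
    xs, ys = np.meshgrid(np.arange(size), np.arange(size))
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def decode(heatmap):
    # Recover (x, y) from the peak of the predicted response map.
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return x, y
\end{verbatim}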
The dense pixel-wise classification provided by fully convolutional networks offers an effective way to approach the heatmap regression task. The hourglass structure can be regarded as an instance of the fully convolutional network. Beyond the hourglass structure, a number of effective network architectures~\cite{Merget_2018_CVPR,Kumar2018Disentangling3P,Dong_2018_CVPR,Dapogny2019decafa,wang2020deep} have been newly designed for heatmap regression. Among them, DeCaFA~\cite{Dapogny2019decafa} utilized stacked fully convolutional U-nets to preserve the spatial resolution, and landmark-wise attention maps to extract local information around the current estimation. The high-resolution network (HRNet)~\cite{wang2020deep} was designed to maintain high-resolution representations and showed its advantage for landmark-related tasks. \begin{figure}[t] \centering \includegraphics[height=4cm]{./figures/hourglass.png} \caption{The illustration of the stacked hourglass network~\cite{Newell2016StackedHN} for facial landmark localization. In each hourglass structure, the width (\emph{i.e.}, the number of feature channels) is consistent, and the boxes represent the residual modules.} \label{hourglass} \end{figure} The abovementioned wing loss, which is designed for coordinate regression, does not guarantee convergence for heatmap regression, due to the imbalanced numbers of foreground and background pixels. To address this issue, Wang~\emph{et al.}~\cite{Wang2019adawing} penalized foreground pixels more and background pixels less; similarly, PropagationNet~\cite{Huang_2020_PropagationNet} presented a focal wing loss which adjusts the loss weights of samples in each mini-batch. Some facial landmarks have ambiguous definitions, such as those on the cheek, which leads to inconsistent annotations by different annotators. Besides, the landmarks in occluded facial regions also cause imprecise annotations. These two issues result in semantic bias and thus degraded performance of landmark localization. Many methods~\cite{Wu2018LAB,Zou2019LearningRF,Liu2019SemanticAF,Chen2019FaceAW,Kumar2020LUVLiFA} are devoted to alleviating these issues. The facial boundary heatmap~\cite{Wu2018LAB} is a good choice for providing facial geometric structure to reduce the semantic ambiguities. Regarding the semantic ambiguities as noisy annotations, Liu~\emph{et al.}~\cite{Liu2019SemanticAF} provided another path that estimates the real landmark locations with a probabilistic model. More recently, KDN~\cite{Chen2019FaceAW} and LUVLi~\cite{Kumar2020LUVLiFA} proposed to simultaneously estimate the facial landmarks and the uncertainty of the predictions. The uncertainty can be used to identify the images in which face alignment fails. Considering the expensive cost of constructing large-scale facial landmark datasets with precise annotations, some methods~\cite{Honari2018ImprovingLL,Robinson2019LaplaceLL,Dong2019TeacherSS} explored semi-supervised learning for facial landmark localization. Honari~\emph{et al.}~\cite{Honari2018ImprovingLL} presented an equivariant landmark transformation loss to make the predictions consistent with respect to different transformations of the same image. Based on the adversarial learning mechanism, Robinson~\emph{et al.}~\cite{Robinson2019LaplaceLL} applied a generator to produce heatmaps for the unlabeled data and a discriminator to distinguish the generated heatmaps from real ones.
Moreover, assigning pseudo landmark labels to unlabeled data~\cite{Dong2019TeacherSS} is yet another promising routine of semi-supervised learning for boosting landmark localization. For face alignment in video frames, several methods~\cite{Peng2016RED,Tai2019TowardsHA,Sun2019FABAR} were designed to solve specific issues in videos. For instance, to alleviate the quantification errors of resized heatmaps in high-resolution videos, fractional heatmap regression~\cite{Tai2019TowardsHA} estimated the fractional coordinates by sampling multiple points in the heatmaps. To cope with the motion blur issue in videos, FAB~\cite{Sun2019FABAR} utilized a structure-aware deblurring module to recover a clear face by keeping the structural consistency across neighboring frames. \subsubsection{3D model fitting} Considering the explicit relationship between 2D facial landmarks and the 3D face shape, the 3D model fitting based methods reconstruct a 3D face shape from a 2D image, and then project it onto the image plane to obtain the 2D landmarks. Compared with the regular 2D methods which estimate a set of landmarks, 3D model fitting based methods are able to fit faces with a 3D model of thousands of vertices and align faces with large poses. \begin{figure}[t] \centering \includegraphics[height=3.5cm]{./figures/3d_fitting.png} \caption{The process of 3D model fitting for face alignment. A dense 3D Morphable Model is used to map a 2D face to a 3D mesh. The regression network estimates the parameters of the 3D shape and the projection matrix, and then the 3D shape is projected onto the image plane to obtain the 2D landmarks. } \label{D3PF} \end{figure} Since cascaded regression is an effective way to estimate model parameters, LPFA~\cite{Jourabloo2016_D3PF} and 3DDFA~\cite{Zhu2016_3DDFA} combined the cascaded CNN regressor with a dense 3D Morphable Model (3DMM)~\cite{BlanzVolker2003FaceRB} to estimate the 3D face shape parameters. Besides, they both designed special features to make the regressor robust to pose variations. Moreover, DeFA~\cite{Liu2017DenseFA} employed not only the landmarks as the regression constraint but also the projected contour of the 3D shape and the local descriptors. Despite many advantages, the cascaded CNNs often suffer from the lack of end-to-end training. As a workaround, Jourabloo~\emph{et al.}~\cite{Jourabloo2017} attempted to fit a 3D face model through a single CNN, which consists of several visualization blocks that adjust the 3D shape and projection matrix according to the features and predictions from the previous blocks. Although the above methods benefit greatly from the 3DMM, diverse facial shapes can lead to inaccurate 2D landmark locations, especially when the 3D shape coefficients are sparse. To tackle this problem, RDR~\cite{Xiao2017RDR} proposed to fit 3D faces with a dynamic expression model and use a recurrent 3D-2D dual learning model to alternately refine the 3D face model and the 2D landmarks. Beyond regressing the parameters of a 3D face shape, Faster-TRFA~\cite{Bhagavatula2017FasterTR} and FacePoseNet~\cite{Chang2017FacePoseNet} estimated the warping parameters for rendering a different view of a general 3D face model. Some methods~\cite{feng2018joint,Zhang2018FaceAA} regress the landmarks from the 3D coordinates of the face shape. For instance, PR-Net~\cite{feng2018joint} adopted the UV position map to record the 3D coordinates of the face shape with semantic correspondence, in which the predefined landmarks are included.
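The projection step shared by these 3D fitting methods can be sketched with a generic linear shape model and a weak-perspective camera; the array shapes and the weak-perspective assumption are common simplifications, not the exact formulation of any cited work.
\begin{verbatim}
import numpy as np

def project_3dmm_landmarks(alpha, R, t, s, mean_shape, basis, lmk_idx):
    # alpha: (K,) shape coefficients; mean_shape: (3, V); basis: (3, V, K)
    # R: (3, 3) rotation; t: (2,) translation; s: scalar scale.
    shape3d = mean_shape + np.tensordot(basis, alpha, axes=([2], [0]))
    verts = shape3d[:, lmk_idx]           # 3D vertices of the landmarks
    rotated = R @ verts                   # rigid rotation of the face
    lmk2d = s * rotated[:2] + t[:, None]  # drop depth, scale and shift
    return lmk2d.T                        # (L, 2) image-plane landmarks
\end{verbatim}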
\subsection{Landmark-free Face Alignment} \label{sec:face_preprocessing:lm_free_align} \begin{figure}[ht] \centering \includegraphics[height=2.5cm]{./figures/lmk_free.png} \caption{The illustration of the landmark-free face alignment framework. The face alignment and representation form an integrated trainable network.} \label{e2e} \end{figure} Landmark-free face alignment methods integrate the alignment transformation into DCNNs and output aligned faces without relying on facial landmarks. These methods generally employ the spatial transformer network (Spatial-TN)~\cite{Jaderberg2015SpatialTN} for geometric warping, where the transformation parameters are learned via end-to-end training. Based on Spatial-TN, Hayat~\emph{et al.}~\cite{Hayat2017JointRA} and Zhong~\emph{et al.}~\cite{Zhong2017e2e} jointly optimize face alignment with a subsequent face representation module. Since facial variations are quite complex with various factors, some methods~\cite{Wu2017ReST,Zhou2018GridFaceFR} are designed to improve the deformation ability of Spatial-TN. For example, in ReST~\cite{Wu2017ReST}, a further transformation is performed based on the previously transformed face in each iteration, tackling large facial variations progressively. Besides, the radial distortion of face images, introduced by wide-angle cameras, is another common problem. RDCFace~\cite{Zhao_2020_CVPR} proposed a cascaded network which learns the rectification against radial lens distortion, the face alignment transformation, and the face representation in an end-to-end manner. More recently, Wei~\emph{et al.}~\cite{Wei2020BalancedAF} provided a comprehensive analysis of the effect of face alignment. The results showed that excessive alignment hurts the subsequent face recognition, while recognition is robust to alignment performed on feature maps. Accordingly, they proposed to learn face alignment on feature maps with the joint supervision of face recognition. \subsection{Face Frontalization} \label{sec:face_preprocessing:frontalization} In uncontrolled environments, pose variation is a serious issue for face recognition. To eliminate the pose influence, face frontalization aims to synthesize identity-preserving frontal faces from non-frontal views. Fig.~\ref{face_frontalization} is an illustration of face frontalization with the downstream task of face representation. In the previous subsection, we have introduced some 3D model fitting based methods~\cite{Zhu2016_3DDFA,Jourabloo2016_D3PF,Jourabloo2017,Liu2017DenseFA,feng2018joint} which can construct frontalized 3D faces by rotating the 3D face model and projecting it back to the 2D plane. Apart from the 3D routine, many approaches~\cite{Zhu2013Deep,Zhang2013RandomFG,zhu2014recover,Yim2015CPF,Zhu2014MultiViewPA,Kan2014StackedPA} employ deep neural networks with an encoder-decoder architecture for recovering faces in a canonical view. For them, the identity-preserving property, which ensures the recognition performance in the downstream task, is not an easy goal to achieve. Recently, high quality image generation has made great progress thanks to generative adversarial networks (GANs)~\cite{Goodfellow2014GenerativeAN}. Many face frontalization methods~\cite{Huang2017tpgan,Qian2019UnsupervisedFN,Zhang2019FaceFU,Zhao2018pim} benefit from GANs for the synthesis.
Among them, Huang~\emph{et al.}~\cite{Huang2017tpgan} developed a two-pathway generative adversarial network (TP-GAN) to infer the global and local facial structures in the frontal view. Towards recognition-oriented generation, Zhao~\emph{et al.}~\cite{Zhao2018pim} presented a pose invariant model (PIM), which consists of a face frontalization sub-net and a discriminative learning sub-net to mutually learn face frontalization and face representation. Beyond recovering the frontal view from profile views, several GAN-based methods~\cite{Tran2017DisentangledRL,Yu2018CR,Hu2018CAPG} render rotated faces in various poses. For instance, Tran~\emph{et al.}~\cite{Tran2017DisentangledRL} proposed a disentangled representation learning GAN, which generates an identity-related representation and synthesizes identity-preserving faces at arbitrary poses with a pose encoding. Moreover, some approaches~\cite{yin2017ffgan,Jian20183D,Deng2018UVGANAF,Zhang2019PoseWeightedGF,Cao2019TowardsHF} combine GANs with the 3D face model to exploit facial prior knowledge for face synthesis. Resorting to the 3D face model, they leverage the global shape and appearance information to improve the quality of frontalized images, especially when synthesizing from large-pose views. \begin{figure}[t] \centering \includegraphics[height=3cm]{./figures/face_frontal.png} \caption{Face frontalization aims to synthesize frontal faces from the profile view, in order to facilitate the downstream tasks such as face representation computing. The face images are taken from Multi-PIE~\cite{Gross2008MultiPIE}.} \label{face_frontalization} \end{figure} \subsection{Evaluation Metrics and Datasets} \label{sec:face_preprocessing:evaluation} We introduce the commonly used evaluation metrics and datasets for face preprocessing, especially for landmark-based alignment. As presented in the following part of this subsection, most landmark-based methods employ quantitative metrics, such as the normalized mean error, whereas face frontalization methods generally investigate the visual quality of the synthesized frontal faces, which lacks a standard evaluation metric. Besides, some methods employ an evaluation oriented to large-pose face recognition, and we will describe their metrics in the face representation section. \subsubsection{Metrics} For facial landmark localization, the widely used evaluation metric is to measure the point-to-point Euclidean distance by the normalized mean error (NME), which can be defined as: \begin{equation}{NME}=\frac{1}{M} \sum_{k=1}^{M} \frac{\left\|p_{k}-{g}_{k}\right\|_{2}}{d},\end{equation} where $M$ is the number of landmarks, $p_{k}$ and ${g}_{k}$ represent the predicted and ground-truth coordinates of the $k$-th landmark, and $d$ refers to the normalizing distance, defined as either the inter-ocular distance or the inter-pupil distance. $d$ is used to alleviate the abnormal measurements caused by different face scales and large poses. A smaller NME indicates better performance. The cumulative error distribution (CED) curve is also used as an evaluation criterion. CED is a distribution function of the NME. The vertical axis of the CED represents the proportion of test images that have an error value less than or equal to the error value on the horizontal axis.
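As a concrete reading of these definitions, the sketch below computes the NME for one face and the CED values over a test set, assuming NumPy arrays as inputs.
\begin{verbatim}
import numpy as np

def nme(pred, gt, d):
    # pred, gt: (M, 2) landmark arrays; d: inter-ocular or
    # inter-pupil normalizing distance.
    return np.linalg.norm(pred - gt, axis=1).mean() / d

def ced(errors, thresholds):
    # Fraction of test faces whose NME is at or below each threshold.
    errors = np.asarray(errors)
    return [float((errors <= t).mean()) for t in thresholds]
\end{verbatim}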
The area under the curve (AUC) also provides a reference of how the algorithm performs up to a given error: \begin{equation}{AUC}_{\alpha}=\int_{0}^{\alpha} f(e) d e,\end{equation} where ${\alpha}$ is the given error that serves as the upper bound of the integration, $e$ is the normalized error and $f(e)$ refers to the CED curve. A larger AUC indicates better performance. Based on the CED curve, the failure rate can be used to measure the performance and robustness of an algorithm; it denotes the percentage of samples in the test set whose NME is larger than a threshold. \subsubsection{Datasets} \begin{table}[t] \begin{center} \caption{Statistics of face landmark datasets. ``-'' indicates no official protocol for splitting the training and test sets.} \label{flmk_dataset} \resizebox{0.95\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline {Datasets}&{Year}&{$\#$ Total}&{$\#$ Training}&{$\#$ Test}&{$\#$ Point}&{Description}\\ \hline\hline Multi-PIE~\cite{Gross2008MultiPIE}&2008&755,370&-&-&68& The largest facial dataset in controlled conditions.\\ \hline LFPW~\cite{Belhumeur2011LocalizingPO}&2010&2,845&-&-&35& Images taken from uncontrolled settings.\\ \hline AFLW~\cite{Kostinger2011alfw}&2011&24,386&20,000&4,386&21&A large-scale facial landmark dataset.\\ \hline AFW~\cite{Zhu2012Face}&2012&473&-&-&6& Multiple facial annotations.\\ \hline HELEN~\cite{Le2012InteractiveFF}&2012&2,330&2,000&330&194&Providing dense landmark annotations.\\ \hline COFW~\cite{BurgosArtizzu2013RobustFL}&2013&1,852&1,345&507&29& Containing occluded faces.\\ \hline 300-W~\cite{Sagonas2013300FI} &2013&3,837&3,148&689&68&The most frequently used facial landmark dataset.\\ \hline 300-VW~\cite{Shen2015TheFF}&2015&114&50&64&68& A video facial landmark dataset.\\ \hline Menpo~\cite{Zafeiriou2017TheMF} &2017&28,273&12,014&16,259&68&Containing both semi-frontal and profile faces.\\ \hline WFLW~\cite{Wu2018LAB} &2018&10,000&7,500&2,500&98& Multiple annotations and large variations.\\ \hline JD-landmark~\cite{Liu2019GrandCO} &2019&15,393&13,393&2,000&106& Covering large facial variations.\\ \hline \end{tabular}} \end{center} \end{table} The facial landmark datasets can be divided into those collected under constrained conditions and those in the wild. The statistics of these datasets are given in Table~\ref{flmk_dataset}. CMU Multi Pose, Illumination, and Expression (Multi-PIE)~\cite{Gross2008MultiPIE} is the largest facial dataset collected under constrained conditions, which provides 337 subjects with 15 predefined poses, 19 illumination conditions and 6 facial expressions. The annotated facial landmarks are 68 points for frontal faces and 39 points for profile faces. Because it contains a wide range of pose variations, Multi-PIE often serves as a dataset for face frontalization. In addition, more in-the-wild datasets~\cite{Belhumeur2011LocalizingPO,Kostinger2011alfw,Zhu2012Face,Le2012InteractiveFF,BurgosArtizzu2013RobustFL,Sagonas2013300FI,Shen2015TheFF,Zafeiriou2017TheMF,Wu2018LAB,Liu2019GrandCO} have been proposed for facial landmark localization. Among them, 300-W~\cite{Sagonas2013300FI,Sagonas2016300FI} is the most frequently used dataset, which follows the annotation configuration of Multi-PIE and re-annotates the images in LFPW, AFW, HELEN, and a newly collected dataset, iBug. Fig.~\ref{fig:performance_aglinment} shows the performance comparison of different landmark localization algorithms on the 300-W test set.
Besides, Menpo~\cite{Zafeiriou2017TheMF} is a large-scale facial landmark dataset with more difficult cases for facial landmark localization. JD-landmark~\cite{Liu2019GrandCO} annotates face images with 106 facial landmarks to provide more structural information of facial components. The aforementioned datasets focus on still images, while 300-VW~\cite{Shen2015TheFF} provides 50 video clips for training and 64 for testing facial landmark localization in video. \subsection{Challenges and Future Work} \label{sec:face_preprocessing:challenge} In this survey, face preprocessing (\emph{i.e.}, face alignment and face frontalization) refers to normalizing an unconstrained face to a canonical view to facilitate the downstream tasks. Face alignment aims at spatially transforming faces to a canonical location, and face frontalization focuses on synthesizing identity-preserving frontal faces from non-frontal views. Both can be applied as an intermediate procedure to improve the performance of face recognition. Although both have made significant progress, research continues to advance. The following are the major challenges of face alignment and face frontalization. \begin{itemize} \item ~\textbf{Facial variations}: Facial landmark localization is still not robust enough under a variety of extreme variations, such as motion blur, severe occlusion, large pose, low illumination, \emph{etc}. \item ~\textbf{Runtime efficiency}: In many practical applications, the face recognition system allocates a low runtime budget to the intermediate procedure, especially for deployment on mobile and embedded devices. \item ~\textbf{The annotation ambiguity}: Due to the fuzzy location of some facial landmarks, such as those on the cheek, annotation ambiguity is a common problem in facial landmark datasets. \item ~\textbf{The annotation granularity}: Most of the existing facial landmark datasets provide annotations of 68 or 106 points. Generally, we desire more landmark points in the annotation to depict the abundant facial structure. \item ~\textbf{High-fidelity face frontalization}: High-fidelity face frontalization demands high-resolution, identity-preserving output, which is an ill-posed problem from the profile view. \end{itemize} \begin{figure} \centering \includegraphics[height=5.5cm]{./figures/lmk_performance.png} \caption{Performance comparison of different landmark localization methods on the 300-W test set. The metric is NME ($\%$) with inter-pupil normalization. Lower NME indicates better performance. } \label{fig:performance_aglinment} \end{figure} To cope with these challenges, we outline a number of promising directions for future work. \begin{itemize} \item ~\textbf{High robustness and efficiency}: There is a large amount of facial variation in real-world applications, which requires the preprocessing method to be robust to various input faces. As an intermediate step in the system, facial landmark localization must also be efficient. \item ~\textbf{Dense landmark localization}: Most datasets employ 68 or 106 keypoints as the annotation configuration. These are enough for face alignment (usually only 5 keypoints are needed), but not sufficient for complex face analysis tasks, such as facial motion capture. Besides, a dense landmark configuration will help to locate the alignment-needed keypoints more accurately.
Therefore, dense facial landmark localization datasets and algorithms are worth exploring for many face analysis tasks. \item ~\textbf{Video-based landmark localization}: Most existing methods accomplish the job on still images. Several methods~\cite{Peng2016RED,Liu2018TwoStreamTN,Dong2018SBR,Tai2019TowardsHA,Sun2019FABAR} focus on video-based facial landmark localization, which is still a promising research direction. How to make better use of the temporal information is a major challenge for video-based landmark localization. Other problems, such as motion blur, low resolution and detection efficiency, are also interesting topics. \item ~\textbf{Semi-supervised landmark localization}: Most research on landmark localization belongs to the regime of supervised learning, which needs precisely annotated landmarks. However, it is expensive and inefficient to obtain large-scale datasets with precise annotations. As explored by the pioneering works~\cite{Dong2018SBR,Honari2018ImprovingLL,Robinson2019LaplaceLL,Dong2019TeacherSS}, the semi-supervised routine is a feasible and valuable solution for facial landmark localization. \item ~\textbf{High-fidelity face frontalization and its metrics}: It is still a challenging task to synthesize high-fidelity frontal faces from the profile view. For evaluating face frontalization methods, the current practice measures the accuracy of frontalized face recognition to prove the identity-preserving ability. A metric of visual quality needs to be developed as well. \end{itemize} \begin{figure*}[t] \centering \includegraphics[height=5cm]{./figures/face_representaion.png} \caption{The pipeline of the face representation training phase and test phase. In the training phase, two schemes, \emph{i.e.}, classification and feature embedding, are often used for learning face representation. In the test phase, face verification and face identification are the major tasks.} \label{face_representaion} \end{figure*} \section{Face Representation} \label{sec:face_representation} Subsequent to face preprocessing, the goal of the face representation stage is to map the aligned face images to a feature space, where the features of the same identity are close and those of different identities are far apart. In practical applications, there are two major tasks of face recognition, \emph{i.e.}, face verification and face identification. Face verification refers to predicting whether a pair of face images belongs to the same identity. Face identification can be regarded as an extension of face verification, which aims to determine the specific identity of a face (\emph{i.e.}, the probe) among a set of identities (\emph{i.e.}, the gallery); moreover, in the case of open-set face identification, a prior task is needed, which predicts whether the face belongs to one of the gallery identities. For either face verification or face identification, the face representation is used to calculate the similarity between face images. Therefore, how to learn a discriminative face representation is the core target of the face recognition system. With the advanced feature learning ability of DCNNs, face representation has made great progress. In the following, we provide a systematic review of face representation learning methods from two major aspects, \emph{i.e.}, network architecture and training supervision.
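At test time, both tasks reduce to similarity comparisons between embeddings; the following sketch, assuming PyTorch tensors and an arbitrary decision threshold, shows cosine-similarity based verification and closed-set identification.
\begin{verbatim}
import torch.nn.functional as F

def verify(feat_a, feat_b, threshold=0.3):
    # Verification: same identity iff the cosine similarity of the two
    # (D,) embeddings clears a tuned threshold (the value is arbitrary).
    sim = F.cosine_similarity(feat_a.unsqueeze(0), feat_b.unsqueeze(0))
    return sim.item() > threshold

def identify(probe, gallery):
    # Closed-set identification: index of the most similar gallery
    # embedding; probe: (D,), gallery: (M, D).
    sims = F.cosine_similarity(probe.unsqueeze(0), gallery)
    return int(sims.argmax())
\end{verbatim}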
\begin{table*}[t] \begin{center} \caption{The categorization of face representation.} \label{fr_class} \resizebox{\linewidth}{!}{ \begin{tabular}{|p{2cm}|p{3cm}|p{5cm}|p{10cm}|} \hline \multicolumn{2}{|c|}{Category}&{Description}&{Method}\\ \hline {Network Architectures} &General architectures & The basic and universal designs for common visual recognition tasks. & AlexNet~\cite{Krizhevsky2012ImageNetCW}, VGGNet~\cite{Simonyan2015VeryDC}, GoogleNet~\cite{Szegedy2015GoingDW}, ResNet~\cite{He2016DeepRL}, Xception~\cite{Chollet2017XceptionDL}, DenseNet~\cite{Huang2017DenselyCC}, AttentionNet~\cite{Wang2017ResidualAN}, SENet~\cite{Hu2018SqueezeandExcitationN}, SqueezeNet~\cite{Iandola2017SqueezeNetAA}, MobileNet~\cite{Howard2017MobileNetsEC}, ShuffleNet~\cite{Zhang2018ShuffleNetAE}, MobileNetV2~\cite{Sandler2018MobileNetV2IR}, ShuffleNetV2~\cite{Ma2018ShuffleNetVP} \\ \cline{2-4} & Specialized architectures & The modified or ensemble designs oriented to face recognition. & ConvNet-RBM~\cite{Sun2013HybridDL}, DeepID~\cite{sun2014deep,Sun2015DeeplyLF,Sun2015DeepID3FR}, MM-DFR~\cite{Ding2015RobustFR}, B-CNN~\cite{Chowdhury2016OnetomanyFR}, ComparatorNet~\cite{Xie2018ComparatorN}, Contrastive CNN~\cite{Han2018FaceRW}, PRN~\cite{Kang2018PairwiseRN}, AFRN~\cite{Kang2019AttentionalFR}, FANface~\cite{Yang2020FANFaceAS}, Sparse ConvNet~\cite{sun2015sparsifying}, Light-CNN~\cite{Wu2015ALC,Wu2018ALC}, MobileFaceNet~\cite{chen2018mobilefacenets}, Mobiface~\cite{Duong2018MobiFaceAL}, ShuffleFaceNet~\cite{MartnezDaz2019ShuffleFaceNetAL}, Hayat~\emph{et al.}~\cite{Hayat2017JointRA}, E2e~\cite{Zhong2017e2e}, ReST~\cite{Wu2017ReST}, GridFace~\cite{Zhou2018GridFaceFR}, RDCFace~\cite{Zhao_2020_CVPR}, Wei~\emph{et al.}~\cite{Wei2020BalancedAF}, Co-Mining~\cite{wang2019co}, GroupFace~\cite{Kim2020GroupFaceLL}, DB~\cite{Cao2020DomainBF} \\ \hline {Training Supervision} & Classification& Considering the face representation learning as a classification task.& DeepFace~\cite{taigman2014deepface}, DeepID series~\cite{Sun2014DeepID,sun2014deep,Sun2015DeeplyLF,Sun2015DeepID3FR}, NormFace~\cite{wang2017normface}, L2-softmax~\cite{ranjan2017l2}, COCO loss~\cite{Liu2017RethinkingFD}, Ring loss~\cite{zheng2018ring}, L-softmax~\cite{liu2016large}, SphereFace~\cite{liu2017sphereface}, AM-softmax~\cite{wang2018additive}, CosFace~\cite{wang2018cosface}, ArcFace~\cite{deng2019arcface}, AdaptiveFace~\cite{liu2019adaptiveface}, Fair loss~\cite{Fair_Loss}, MV-softmax~\cite{Wang2019MisclassifiedVG}, ArcNeg~\cite{liu2019towards}, CurricularFace~\cite{Huang2020CurricularFaceAC}, Adacos~\cite{zhang2019adacos}, P2SGrad~\cite{Zhang2019P2SGradRG}, NTP~\cite{Hu2019NoiseTolerantPF}, UT~\cite{Zhong_2019_CVPR}, Co-Mining~\cite{wang2019co}, Shi~\emph{et al.}~\cite{shi2020universal}, GroupFace~\cite{Kim2020GroupFaceLL}, DB~\cite{Cao2020DomainBF}, RCM loss~\cite{Wu_2020_CVPR}, PFE~\cite{Shi2019ProbabilisticFE}, DUL~\cite{Chang2020DataUL} \\ \cline{2-4} & Feature embedding&Optimizing the feature distance according to the label of the sample pair.
& DeepID2~\cite{sun2014deep}, DeepID2+~\cite{Sun2015DeeplyLF}, DeepID3~\cite{Sun2015DeepID3FR}, FaceNet~\cite{Schroff2015FaceNetAU}, N-pair loss~\cite{sohn2016improved}, Lifted structured~\cite{oh2016deep}, Smart mining~\cite{Manmatha2017SamplingMI}, Doppelganger mining~\cite{smirnov2017doppelganger} \\ \cline{2-4} &Hybrid & Applying classification and feature embedding together as the supervisory signals.& DeepID2~\cite{sun2014deep}, DeepID2+~\cite{Sun2015DeeplyLF}, DeepID3~\cite{Sun2015DeepID3FR}, TUA~\cite{liu2015targeting}, Doppelganger mining~\cite{smirnov2017doppelganger}, Center loss~\cite{wen2016discriminative}, Range loss~\cite{Zhang2017RangeLF}, UniformFace~\cite{UniformFace}, RegularFace~\cite{zhao2019regularface}, UT~\cite{Zhong_2019_CVPR}, LBL~\cite{zhu2019large}, Circle loss~\cite{sun2020circle} \\ \cline{2-4} &Semi-supervised & Exploiting labeled and unlabeled faces for representation learning. & CDP~\cite{Zhan2018ConsensusDrivenPI}, GCN-DS~\cite{yang2019learning}, GCN-VE~\cite{Yang2020LearningTC}, UIR~\cite{Yu2019UnknownIR}, Shi~\emph{et al.}~\cite{Shi2020GeneralizingFR}, RoyChowdhury~\emph{et al.}~\cite{RoyChowdhury2020ImprovingFR} \\ \hline {Specific Tasks} & Cross-age& Identifying faces across a wide range of ages. & LF-CNNs~\cite{Wen2016LatentFG}, CAN~\cite{Xu2017AgeIF}, AFRN~\cite{Du2019AgeFR}, DAL~\cite{Wang2019DecorrelatedAL}, AE-CNN~\cite{Zheng2017AgeEG}, OE-CNN~\cite{Wang2018OrthogonalDF}, IPCGANs~\cite{Wang2018FaceAW}, LMA~\cite{Antipov2017BoostingCF}, Dual cGANs~\cite{song2018dual}, AIM~\cite{Zhao2019LookAE} \\ \cline{2-4} & Cross-pose& Identifying faces across a wide range of poses. & TP-GAN~\cite{Huang2017tpgan}, PIM~\cite{Zhao2018pim}, DREAM~\cite{Cao2018PoseRobustFR}, DA-GAN~\cite{Zhao2017DualAgentGF}, DR-GAN~\cite{Tran2017DisentangledRL}, UV-GAN~\cite{Deng2018UVGANAF}, CAPG-GAN~\cite{Hu2018CAPG}, PAMs~\cite{Masi2016PoseAwareFR}, AbdAlmageed~\emph{et al.}~\cite{AbdAlmageed2016FaceRU}, MvDN~\cite{kan2016multi} \\ \cline{2-4} &Racial bias& Addressing the imbalanced race distribution of training datasets. & IMAN~\cite{Wang2019RacialFI}, RL-RBN~\cite{Wang_2020_CVPR}\\ \cline{2-4} &Cross-modality& Performing face recognition on a pair of images captured by different sensing modalities. & Reale~\emph{et al.}~\cite{Reale2016SeeingTF}, HFR-CNNs~\cite{Saxena2016HeterogeneousFR}, TRIVET~\cite{Liu2016TransferringDR}, IDR~\cite{He2017LearningID}, DVR~\cite{Wu2018DisentangledVR}, MC-CNN~\cite{Deng2019MutualCC}, WCNN~\cite{He2019WassersteinCL}, NAD~\cite{Lezama2017NotAO}, ADHFR~\cite{Song2018AdversarialDH}, CFC~\cite{He2020AdversarialCF}, Mittal~\emph{et al.}~\cite{Mittal2015CompositeSR}, ForensicFR~\cite{Galea2017ForensicFP}, TDFL~\cite{Wan2019TransferDF}, E2EPG~\cite{Zhang2015EndtoEndPG}, CASPG~\cite{Zhang2017ContentAdaptiveSP}, DualGAN~\cite{Yi2017DualGANUD}, PS2-MAN~\cite{Wang2018HighQualityFP}, DTFS~\cite{Zhang2019DualTransferFS}, Cascaded-FS~\cite{Zhang2020CascadedFS}, PTFS~\cite{Zhang2019SynthesisOH} \\ \cline{2-4} &Low-shot & Training and testing with data that have a small number of samples per identity.
& SSPP-DAN~\cite{Hong2017SSPPDANDD}, Guo~\emph{et al.}~\cite{guo2017one}, Choe~\emph{et al.}~\cite{Choe2017FaceGF}, Hybrid Classifiers~\cite{Wu2017LowShotFR}, Cheng~\emph{et al.}~\cite{cheng2017know}, Doppelganger mining~\cite{smirnov2017doppelganger}, Yin~\emph{et al.}~\cite{yin2019feature} \\ \cline{2-4} &Video-based & Performing face recognition with video sequences. & TBE-CNN~\cite{Ding2018TrunkBranchEC}, NAN~\cite{Yang2017NeuralAN}, C-FAN~\cite{Gong2019VideoFR}, FANVFR~\cite{Liu2019FeatureAN}, MARN~\cite{Gong2019LowQV}, Rao~\emph{et al.}~\cite{Rao2017LearningDA}, CFR-CNN~\cite{Parchami2017UsingDA}, ADRL~\cite{Rao2017AttentionAwareDR}, DAC~\cite{Liu2018DependencyAwareAC} \\ \hline \end{tabular}} \end{center} \end{table*} \subsection{Network Architectures} \label{sec:face_representation:architecture} The recent improvement of face representation partly benefits from the advance of deep architecture design. Thus, we first review the literature on network architectures for face representation learning. According to the design purpose, we divide them into general architectures and specialized architectures. The general architectures are basic and universal designs initially proposed for common visual recognition tasks and applied to face representation learning afterward. The specialized architectures include the modified or ensemble designs oriented to face recognition. \subsubsection{General architectures} With the advanced feature learning ability of deep convolutional neural networks~\cite{Krizhevsky2012ImageNetCW,Simonyan2015VeryDC,Szegedy2015GoingDW,He2016DeepRL,Chollet2017XceptionDL,Huang2017DenselyCC,Wang2017ResidualAN,Hu2018SqueezeandExcitationN}, deep face representation has made great progress. Among them, AlexNet~\cite{Krizhevsky2012ImageNetCW} obtained the first place in the ImageNet competition (ILSVRC) in 2012~\cite{deng2009imagenet} and achieved significant improvement over traditional methods. Then, VGGNet~\cite{Simonyan2015VeryDC} presented a more generic network, which replaced large convolutional kernels with stacked 3$\times$3 ones, enabling the network to grow in depth. In order to enlarge the network without an extra increase of the computational budget, GoogleNet~\cite{Szegedy2015GoingDW} developed an inception architecture to concatenate the feature maps generated by convolutions with different receptive fields. Soon, GoogleNet was applied to face representation learning, namely FaceNet~\cite{Schroff2015FaceNetAU}. More recently, ResNet~\cite{He2016DeepRL} proposed a residual mapping framework which makes it possible to train deep networks with hundreds of layers. ResNet is a modern network that has been widely used in many visual tasks, including face recognition. Moreover, AttentionNet~\cite{Wang2017ResidualAN} introduced the attention module into residual networks for leveraging spatial attention in feature inference. SENet~\cite{Hu2018SqueezeandExcitationN} presented squeeze-and-excitation (SE) blocks to fuse channel-wise and spatial information, which can also be regarded as an attention mechanism in the channel dimension. Additionally, several lightweight neural networks~\cite{Iandola2017SqueezeNetAA,Zhang2018ShuffleNetAE,Howard2017MobileNetsEC,Sandler2018MobileNetV2IR,Ma2018ShuffleNetVP} were proposed to achieve a speed-accuracy trade-off.
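Among the modules above, the SE block is compact enough to sketch in full; the reduction ratio is the usual hyperparameter, and the code below is a minimal PyTorch rendering rather than the reference implementation.
\begin{verbatim}
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, r=16):  # r: channel reduction ratio
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                    # squeeze: global pooling
        w = self.fc(w).view(x.size(0), -1, 1, 1)  # excitation: channel gates
        return x * w                              # re-weight the channels
\end{verbatim}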
All of the architectures above have subsequently been employed as backbone networks for representation learning in the face recognition literature. \subsubsection{Specialized architectures} The aforementioned architectures were initially proposed for general visual tasks. In addition, many works have developed specialized architectures for face representation learning. At first, many studies~\cite{Sun2013HybridDL,Sun2014DeepID,sun2014deep,Ding2015RobustFR} attempted to assemble multiple convolutional networks together for learning multiple local features from a set of facial patches. Given that the human face appears with a regular arrangement of facial parts (eyes, nose, mouth, \emph{etc}.), such a combination of multiple networks with respect to facial parts can be more reliable than a single network. Later, based on the Bilinear CNN~\cite{Lin2015BilinearCM}, Chowdhury~\emph{et al.}~\cite{Chowdhury2016OnetomanyFR} utilized a bilinear architecture for face representation learning. Besides, Xie~\emph{et al.}~\cite{Xie2018ComparatorN} designed an end-to-end architecture, namely the Comparator Network, to measure the similarity of two sets containing variable numbers of face images. Similar to multi-network assembling, the Comparator Network employs local attention to facial parts to boost set-wise representation learning. Han~\emph{et al.}~\cite{Han2018FaceRW} proposed a contrastive CNN to deal with the task of face verification by generating contrastive kernels for convolution so that the features are adaptive to the input face pair. Kang~\emph{et al.}~\cite{Kang2018PairwiseRN} introduced a pair-wise relational network to capture the relations between a pair of local appearance patches. Further, AFRN~\cite{Kang2019AttentionalFR} improved the pair-wise relational network with the attention mechanism. More recently, FANFace~\cite{Yang2020FANFaceAS} integrates the face representation network with a facial landmark localization network, so that the landmark heatmaps boost the features for recognition. To achieve a speed-accuracy trade-off, some studies~\cite{sun2015sparsifying,Wu2015ALC,Wu2018ALC,chen2018mobilefacenets,Duong2018MobiFaceAL,MartnezDaz2019ShuffleFaceNetAL} focus on developing lightweight architectures. To reduce the parameters of deep networks, Sparse ConvNet~\cite{sun2015sparsifying} proposed sparsifying neural network connections, which can iteratively learn sparse structures from the previously learned dense models. Besides, Light-CNN~\cite{Wu2015ALC,Wu2018ALC} introduced a max-feature-map (MFM) activation function to gain better generalization ability than ReLU for face recognition; based on MFM, the authors developed a lightweight architecture with advantages in terms of speed and model size. MobileFaceNet~\cite{chen2018mobilefacenets} replaced the global average pooling layer of MobileNetV2~\cite{Sandler2018MobileNetV2IR} with a global depth-wise convolution layer, so that the output feature is weighted by spatial importance in the last layer. Mobiface~\cite{Duong2018MobiFaceAL} modified MobileFaceNet by employing fast downsampling and bottleneck residual blocks with expansion layers. ShuffleFaceNet~\cite{MartnezDaz2019ShuffleFaceNetAL} extended ShuffleNetV2~\cite{Ma2018ShuffleNetVP} by using the global depth-wise convolution layer and the parametric rectified linear unit (PReLU) for real-time face recognition applications.
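The MFM activation at the heart of Light-CNN admits a very short sketch: the channels are split in half and reduced by an element-wise maximum, which halves the width while acting as a feature selector (a minimal PyTorch rendering, assuming an even channel count).
\begin{verbatim}
import torch

def max_feature_map(x):
    # x: (N, C, H, W) with C even; the output has C/2 channels.
    a, b = torch.chunk(x, 2, dim=1)
    return torch.max(a, b)
\end{verbatim}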
It is worth mentioning that, in some landmark-free face alignment methods~\cite{Hayat2017JointRA,Zhong2017e2e,Wu2017ReST,Zhou2018GridFaceFR,Zhao_2020_CVPR,Wei2020BalancedAF} which we have presented in the face preprocessing section, the network can be optimized with respect to the objectives of face representation learning and face alignment jointly. Most of them calibrate the face by using a spatial transformer network~\cite{Jaderberg2015SpatialTN} followed by the representation module; the two are jointly learned with respect to the face recognition objective. The following architecture developments are oriented to specific targets. To handle the label noise problem in training datasets, Co-Mining~\cite{wang2019co} employs two peer networks to collaboratively distinguish the noisy-label samples and take the remaining clean samples for training. Kim~\emph{et al.}~\cite{Kim2020GroupFaceLL} presented an architecture called GroupFace that can learn the latent grouping scheme of faces and facilitate recognition with a group-aware representation. To deal with the long-tail domain issue, Cao~\emph{et al.}~\cite{Cao2020DomainBF} introduced a residual balancing mapping block to combine the face representation with the domain-related feature. \subsection{Training Supervision} \label{sec:face_representation:supervision} Besides network architectures, the training supervision also plays a key role in learning face representation. The objective of supervision for face representation learning is to encourage the faces of the same identity to be close and those of different identities to be far apart in the feature space. \begin{figure}[ht] \centering \includegraphics[height=3.5cm]{./figures/develop_fr.png} \caption{The development of training supervision for face representation learning. The orange, green, gray and blue represent classification, feature embedding, hybrid, and semi-supervised methods, respectively. One can refer to Table~\ref{fr_class} for the detailed references.} \label{Development_re} \end{figure} Following the convention of representation learning, we categorize the existing methods of training supervision for face representation into supervised, semi-supervised, and unsupervised schemes. Although there has been recent progress in deep unsupervised learning methods~\cite{shi2018face,lin2018deep,wang2019linkage,GuoDensityAwareFE} for face clustering, in this review we focus on the supervised and semi-supervised ones, which comprise the major literature of state-of-the-art face recognition. Fig.~\ref{Development_re} shows the development of training methods for face representation learning. In the supervised scheme, we can further categorize the existing works into three subsets, \emph{i.e.}, classification, feature embedding and hybrid methods. The classification methods accomplish face representation learning with an $N$-way classification objective, regarding each of the $N$ classes as an identity. The feature embedding methods aim to optimize the feature distance between samples with respect to the identity label, which means maximizing the inter-person distance and minimizing the intra-person distance. Besides, several works, namely hybrid methods, employ both the classification and feature embedding routines to jointly train the representation network.
As for the semi-supervised scheme, there are also several studies that exploit labeled and unlabeled faces for representation learning. \subsubsection{Classification scheme} The classification based deep face representation learning is derived from the general object classification task. Each class corresponds to an identity that contains a number of faces of the same person. The softmax training loss is the most widely used supervision for the classification task, which consists of a fully-connected (FC) layer, the softmax function and the cross-entropy loss. For face representation learning, DeepFace~\cite{taigman2014deepface} and DeepID~\cite{Sun2014DeepID} are the pioneers of utilizing softmax to predict the probability over a large number of identities in the training data. Their training loss function can be formulated as follows: \begin{equation} \mathcal{L}= -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{W_{y_{i}}^{T} {x}_{i}+b_{y_{i}}}}{\sum_{j=1}^{c} e^{W_{j}^{T} {x}_{i} + b_{j}}}, \end{equation} where $N$ is the batch size, $c$ is the number of classes (identities), $W_{y_{i}}$ is the ground-truth weight vector of sample $x_{i}$ in the FC layer, and $b_{j}$ is the bias term. The term inside the logarithm is the predicted probability on the ground-truth class. The training objective is to maximize this probability. Based on the softmax training loss, some methods studied the effect of normalization on the feature and weight vectors, and reformulated the objective with the cosine similarity between them. L2-softmax~\cite{ranjan2017l2} first proposed to normalize the feature vectors to lie on a hypersphere of a fixed radius. Besides, NormFace~\cite{wang2017normface} and COCO loss~\cite{Liu2017RethinkingFD} studied the necessity of the normalization operation and applied an $L_{2}$ normalization constraint on both features and weights, omitting the bias term $b_{j}$. To effectively train the normalized features, they employ a scale factor to re-scale the cosine similarity between the features and the weights. Moreover, instead of directly using $L_{2}$ normalization on features, Ring loss~\cite{zheng2018ring} introduced a soft normalization that gradually constrains the norm of the features to the target norm value. In summary, the normalized softmax can be reformulated as: \begin{equation}\mathcal{L}=-\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{s\cos(\theta_{y_{i}})}}{e^{s\cos(\theta_{y_{i}})}+\sum_{j=1, j \neq y_{i}}^{c} e^{s \cos \theta_{j}}},\end{equation} where $\cos(\theta_{j})$ derives from the inner product ${W_{j}^{T}x_{i}}$ with $L_{2}$ normalization on the weights, $W_j = \frac{W_j}{\|{W_{j}}\|_{2}}$, and the features, $x_i = \frac{x_i}{\|{x_i}\|_{2}}$; $s$ is the scale parameter, and $y_i$ is the ground-truth label of sample $x_i$. To further improve the intra-class compactness and inter-class separateness, several methods introduced a margin into the loss function. L-softmax~\cite{liu2016large} replaced the ground-truth logit $\cos \left( \theta_{y_{i}}\right)$ with $\psi\left(\theta_{y_{i}}\right)$, which is defined as \begin{equation}\psi(\theta_{y_{i}})=(-1)^{k} \cos (m \theta_{y_{i}})-2k,\quad \theta_{y_{i}} \in\left[\frac{k \pi}{m}, \frac{(k+1) \pi}{m}\right],\end{equation} where $m$ is the angular margin, a positive integer, and $k$ is an integer with $k \in[0, m-1]$. The modified logit makes the learning objective harder.
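Before turning to the margin-based variants below, the normalized softmax above can be sketched in a few lines of PyTorch; the scale value used here is merely a common choice from the literature.
\begin{verbatim}
import torch.nn.functional as F

def normalized_softmax_loss(features, weights, labels, s=30.0):
    # features: (N, D) embeddings; weights: (C, D) class weights.
    # Cosine logits: inner products of L2-normalized rows, scaled by s.
    logits = s * F.linear(F.normalize(features), F.normalize(weights))
    return F.cross_entropy(logits, labels)
\end{verbatim}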
Similar to L-softmax, SphereFace~\cite{liu2017sphereface} applied an angular margin to the ground-truth logit $\cos \left( \theta_{y_{i}}\right)$ to make the learned face representation more discriminative on a hypersphere manifold. However, the multiplicative angular margin in $\cos \left( m \theta_{y_{i}}\right)$ leads to potentially unstable convergence during training. To overcome the problem, AM-softmax~\cite{wang2018additive} and CosFace~\cite{wang2018cosface} presented an additive margin penalty to the logit, $\cos \left( \theta_{y_{i}}\right) + m_{1} $, which brings more stable convergence. Subsequently, ArcFace~\cite{deng2019arcface} introduced an additive angular margin inside the cosine, $\cos \left( \theta_{y_{i}} + m_{2}\right)$, which corresponds to the geodesic distance margin penalty on a hypersphere manifold. The following is a unified formulation of AM-softmax, CosFace, and ArcFace: \begin{equation}\mathcal{L}=-\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{s\left(\cos \left(\theta_{y_{i}}+m_{2}\right)+m_{1}\right)}}{e^{s\left(\cos \left( \theta_{y_{i}}+m_{2}\right)+m_{1}\right)}+\sum_{j=1, j \neq y_{i}}^{c} e ^{s \cos \theta_{j}}},\end{equation} where $m_{1} < 0$ represents the additive cosine margin of AM-softmax and CosFace (with $m_{2}=0$), and $m_{2} > 0$ denotes the additive angular margin of ArcFace (with $m_{1}=0$). These losses are easy to implement and achieve better performance than the original softmax. Going further with the margin based supervision, AdaptiveFace~\cite{liu2019adaptiveface} presented a learnable margin that adapts to each identity in the training data. The purpose is to address the imbalanced distribution problem in the training dataset, where identities have different numbers and varying diversity of samples. Similarly, Fair loss~\cite{Fair_Loss} introduced an adaptive margin strategy, which applies reinforcement learning to select an appropriate margin against the imbalanced distribution problem. Resorting to the advantage of hard sample mining strategies~\cite{shrivastava2016training,lin2017focal}, MV-softmax~\cite{Wang2019MisclassifiedVG} proposed to re-weight the negative (non-ground-truth) logits to emphasize the supervision on mis-classified samples, and thus improve face representation learning from the non-ground-truth perspective. ArcNeg~\cite{liu2019towards} reformulated the negative logit in softmax with a distance-aware Gaussian function to conduct hard negative mining and weaken the influence of label noise. Considering that the relative importance of easy and hard samples changes during training, CurricularFace~\cite{Huang2020CurricularFaceAC} introduced the idea of curriculum learning into face representation learning, emphasizing easy samples in the early stage and hard samples in the later stage. AdaCos~\cite{zhang2019adacos} studied the effect of the scale and margin parameters in the aforementioned formulation of the margin-based loss function, and found that they substantially influence the prediction probability. Thus, AdaCos proposed a unified and adaptive way to reformulate the mapping between the logits and the predicted probability without preset parameters. P2SGrad~\cite{Zhang2019P2SGradRG} analyzed the effects of the margin-based softmax loss from the perspective of training gradients, and proposed to replace the classification probability with the cosine similarity in the backward propagation for better optimization.
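Summarizing the margin-based family above, a minimal sketch of the unified formulation is given below; setting $m_1<0$, $m_2=0$ recovers AM-softmax/CosFace and $m_1=0$, $m_2>0$ recovers ArcFace, while the scale and margin values shown are typical rather than canonical.
\begin{verbatim}
import torch
import torch.nn.functional as F

def margin_softmax_loss(features, weights, labels,
                        s=64.0, m1=0.0, m2=0.5):
    # Cosine similarities between embeddings and class weight vectors.
    cos = F.linear(F.normalize(features), F.normalize(weights))
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    # Apply the margins only to the ground-truth logit of each sample.
    onehot = F.one_hot(labels, cos.size(1)).bool()
    target = torch.cos(theta + m2) + m1
    logits = s * torch.where(onehot, target, cos)
    return F.cross_entropy(logits, labels)
\end{verbatim}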
High-quality label annotation is expensive for large-scale face datasets. Thus, label noise is an inevitable problem, and certain training methods pursue noise-robust face representation learning. Hu~\emph{et al.}~\cite{Hu2019NoiseTolerantPF} proposed a noise-tolerant paradigm that re-weights the training samples for the supervision according to the angular distribution and adjusts the weights at different training stages. Zhong~\emph{et al.}~\cite{Zhong_2019_CVPR} analyzed the discrepancy between the incorrect label and the prediction, and proposed a noise resistance loss to handle the noisy label problem. Assuming the noise rate of the training dataset is known a priori, Co-Mining~\cite{wang2019co} employed two peer networks to find the clean samples and discard the noisy samples based on the loss value, and emphasized the supervision of clean samples during the training process. More recently, some methods go further with the classification supervision for face representation learning. In order to improve the generalization to various test conditions, Shi~\emph{et al.}~\cite{shi2020universal} presented a confidence-aware softmax loss to emphasize hard samples and split the feature representation into sub-embeddings for learning complementary information. To learn the inherent grouping information, \emph{i.e.}, each group contains a set of people whose faces share common characteristics, GroupFace~\cite{Kim2020GroupFaceLL} introduced a feature aggregation method to combine the features from two perspectives. The first one comes from the branch that can be regarded as the original representation of the individual face; the second one comes from the self-grouping branch, which indicates to which group the face most likely belongs. Such a scheme facilitates representation learning with a classification objective over a wide range of identities with various characteristics. Moreover, Cao~\emph{et al.}~\cite{Cao2020DomainBF} proposed a domain balancing mechanism to address the long-tailed domain distribution problem. Specifically, they presented a feature enhancement module to extract domain related features, and a domain balancing margin to optimize the features of the tail domains. To alleviate the performance degradation of low-bit quantized models of face representation, Wu~\emph{et al.}~\cite{Wu_2020_CVPR} regarded the quantization error as the combination of a class error and an individual error, and proposed a rotation consistent margin loss to reduce the latter error, which is more critical for face representation. Some recent works, such as PFE~\cite{Shi2019ProbabilisticFE} and DUL~\cite{Chang2020DataUL}, proposed to take data uncertainty into account when modeling deep face representation, in order to address the uncertainty caused by low-quality face images. \subsubsection{Feature embedding scheme} The feature embedding scheme aims to optimize the feature distance according to the label of a sample pair. If the pair belongs to the same identity, \emph{i.e.}, a positive pair, the objective is to minimize the distance or maximize the similarity; otherwise, \emph{i.e.}, for a negative pair, to maximize the distance or minimize the similarity. Like the conventional metric learning approaches~\cite{Xing2002DistanceML,Schultz2003LearningAD,Weinberger2005DistanceML}, the feature embedding scheme for face representation pursues the same goal.
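This pair-wise objective can be illustrated with a minimal sketch, assuming Euclidean feature distances; it is in fact an instance of the contrastive loss formalized next (the function and variable names are ours).

\begin{verbatim}
import numpy as np

def pairwise_loss(fi, fj, same_identity, m_d=1.0):
    """fi, fj: feature vectors of a pair; m_d: distance margin."""
    d = np.linalg.norm(fi - fj)           # Euclidean feature distance
    if same_identity:                     # positive pair: pull together
        return 0.5 * d**2
    return 0.5 * max(0.0, m_d - d)**2     # negative pair: push beyond m_d
\end{verbatim}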
Contrastive loss~\cite{yi2014learning,sun2014deep,Sun2015DeeplyLF,Sun2015DeepID3FR} directly optimizes the pair-wise distance with a margin, encouraging positive pairs to be close together and negative pairs to be far apart. The loss function to be minimized is written as \begin{equation} \mathcal{L}_{c} = \begin{cases} \frac{1}{2} \|f(x_{i})-f(x_{j})\|_{2}^{2} & \text{if } y_i = y_j, \\ \frac{1}{2} \max (0, m_{d}- \|f(x_{i})-f(x_{j})\|_{2})^{2} & \text{if } y_i \neq y_j, \end{cases} \end{equation} where $y_{i}=y_{j}$ denotes that $x_{i}$ and $x_{j}$ are a positive pair, $y_{i}\not=y_{j}$ denotes a negative pair, $f(\cdot)$ is the feature embedding function, and $ m_{d}$ is the non-negative distance margin. Therefore, the contrastive loss drives the supervision on all the positive pairs and on those negative pairs whose distance is smaller than the margin. The margin can be set to a fixed value, or updated according to the distance distribution during the training process. FaceNet~\cite{Schroff2015FaceNetAU} first employed the triplet loss~\cite{Schultz2003LearningAD,Weinberger2005DistanceML} for deep face representation learning. Different from the contrastive loss, the triplet loss encourages the positive pairs to have smaller distances than the negative pairs with respect to a margin, \begin{equation}\mathcal{L}_{t}=\sum_{i}^{N}\left[\left\|f\left(x_{i}^{a}\right)-f\left(x_{i}^{p}\right)\right\|_{2}^{2}-\left\|f\left(x_{i}^{a}\right)-f\left(x_{i}^{n}\right)\right\|_{2}^{2}+m_{d} \right]_{+},\end{equation} where $m_{d}$ is the distance margin, $x_{i}^{a}$ denotes the anchor sample, and $x_{i}^{p}$ and $x_{i}^{n}$ refer to the positive sample and negative sample, respectively. The contrastive loss and triplet loss take into account only one negative example at a time, while negative pairs are abundant in the training data and deserve thorough involvement in the training supervision. Thus, N-pair loss~\cite{sohn2016improved} generalized the triplet loss to a form with multiple negative pairs, and gained further improvement on face verification and identification. Compared with the classification supervision, feature embedding saves the parameters of the FC layer in softmax, especially when the training dataset is large-scale. But the batch size of training samples limits the performance of feature embedding. To alleviate this problem, some methods~\cite{oh2016deep,Manmatha2017SamplingMI,smirnov2017doppelganger} proposed hard sample mining strategies to enrich the effective information in each batch, which is crucial to promote the performance of feature embedding. \subsubsection{Hybrid methods} The hybrid methods refer to those which apply classification and feature embedding together as the supervisory signals. DeepID2~\cite{sun2014deep}, DeepID2+~\cite{Sun2015DeeplyLF} and DeepID3~\cite{Sun2015DeepID3FR} utilized the softmax loss and contrastive loss jointly for learning face representation. Additionally, Liu~\emph{et al.}~\cite{liu2015targeting} proposed a two-stage training method with the softmax loss in the first stage and the triplet loss in the second stage. Later, several methods improved the feature embedding portion within the hybrid scheme, by utilizing either intra-class or inter-class constraints, such as Center loss~\cite{wen2016discriminative}, UniformFace~\cite{UniformFace} and RegularFace~\cite{zhao2019regularface}.
Many hybrid methods~\cite{Zhang2017RangeLF,Zhong_2019_CVPR,zhu2019large} show advantages in handling long-tail distributed data, which is a widely-existing problem in face recognition. Generally, the classification scheme works well on the head data but poorly on the tail data, because it requires each class to have sufficient training samples. Compared to the classification scheme, the feature embedding scheme is able to provide complementary supervision on the tail data. Thus, the combination of classification and feature embedding can improve the training on long-tail distributed data. Following this path, Range loss~\cite{Zhang2017RangeLF} optimizes the largest intra-class distance and the nearest inter-class distance in one mini-batch to effectively utilize the tail data, and Zhong~\emph{et al.}~\cite{Zhong_2019_CVPR} proposed to reduce the inter-class similarity on the tail data. Moreover, Zhu~\emph{et al.}~\cite{zhu2019large} introduced a three-stage (\emph{i.e.}, classification-verification-classification) strategy to address the training problem on large-scale ID versus Spot face data, which contains only two samples for each identity. Sun~\emph{et al.}~\cite{sun2020circle} proposed a circle loss from a unified perspective of classification and embedding learning, which integrates the triplet loss with the cross-entropy loss to simultaneously learn deep features with pair-wise labels and class-wise labels. \subsubsection{Semi-supervised scheme} The aforementioned methods focus on supervised learning. Constructing a labeled dataset requires considerable annotation effort, while large amounts of unlabeled data are easily available. Therefore, exploiting the labeled and unlabeled data together for training deep models is an attractive direction. For semi-supervised face representation learning, assuming the identities of the unlabeled data are disjoint from those of the labeled data, several existing works~\cite{Zhan2018ConsensusDrivenPI,yang2019learning,Yang2020LearningTC,Yu2019UnknownIR} focus on generating pseudo labels for the unlabeled data or on minimizing the softmax classification probabilities of unlabeled data over the labeled identities. Moreover, considering the domain gaps between the labeled and unlabeled data, Shi~\emph{et al.}~\cite{Shi2020GeneralizingFR} developed a domain generalization framework to reduce their gap in the feature space. However, these methods assume non-overlapping identities between unlabeled and labeled data, which is generally impractical in real-world scenarios. Consequently, the unlabeled samples of an overlapping identity will be incorrectly clustered as a new class by the pseudo-labeling methods. Moreover, the label noise in pseudo-labeled data is another problem. To address these issues, RoyChowdhury~\emph{et al.}~\cite{RoyChowdhury2020ImprovingFR} proposed to separate unlabeled data into samples of disjoint and overlapping classes via an out-of-distribution detection algorithm. Besides, they designed an improved training loss based on uncertainty to alleviate the label noise of pseudo-labeled data.
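One basic ingredient of such pipelines, assigning pseudo labels to unlabeled faces by their similarity to labeled class centers, can be sketched as follows; the threshold and naming here are illustrative assumptions of ours, not the exact procedure of any cited method.

\begin{verbatim}
import numpy as np

def pseudo_label(U, centers, thresh=0.6):
    """U: (M, d) unlabeled features; centers: (c, d) class centers."""
    Un = U / np.linalg.norm(U, axis=1, keepdims=True)
    Cn = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    sim = Un @ Cn.T                       # cosine similarity to each center
    best = sim.argmax(axis=1)
    conf = sim.max(axis=1)
    # Confident matches are treated as overlapping identities; the rest
    # (label -1) are left to be clustered into new classes.
    return np.where(conf > thresh, best, -1)
\end{verbatim}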
\subsection{Specific Face Recognition Tasks} \label{sec:face_representation:specific_scene} \subsubsection{Cross-domain face recognition} Here, the term cross-domain refers to a generalized definition that includes various factors, such as cross-age and cross-pose face recognition. As deep learning is a data-driven technique, a deep network usually works well on the training domains but poorly on unseen domains. In real-world applications of face recognition, it is essential to improve the generalization ability of the face representation across different domain factors. In the following, we discuss certain aspects of cross-domain face recognition that include cross-age, cross-pose, cross-race and cross-modality; also, we review the current methods that specifically study cross-domain face recognition. \textbf{Cross-age}: As the facial appearance exhibits large intra-class variation with growing age, identifying faces across a wide range of ages is a challenging task. For such cross-age face recognition, there are two directions followed by the current works. In the first direction, many approaches~\cite{Wen2016LatentFG,Xu2017AgeIF,Du2019AgeFR,Wang2019DecorrelatedAL,Zheng2017AgeEG,Wang2018OrthogonalDF} aim to learn age-invariant face representation by decomposing deep face features into age-related and identity-related components. The second direction is based on generative mechanisms. In this vein, several face aging methods~\cite{KemelmacherShlizerman2014IlluminationAwareAP,Wang2016RecurrentFA,Antipov2017FaceAW} attempt to synthesize faces of a target age, but they present imperfect preservation of the original identities in the aged faces. Thus, more methods~\cite{Wang2018FaceAW,Antipov2017BoostingCF,song2018dual,Zhao2019LookAE} focus on improving the identity-preserving ability during face aging. \textbf{Cross-pose}: In unconstrained conditions, such as surveillance video, the cameras cannot always capture a frontal face image for every appearing subject. Thus, the captured faces have large pose variation from frontal to profile view. As aforementioned in the face preprocessing section, converting profile face images to the frontal pose is a feasible way for cross-pose face recognition, such as TP-GAN~\cite{Huang2017tpgan}, FF-GAN~\cite{yin2017ffgan}, PIM~\cite{Zhao2018pim}, and FNM~\cite{Qian2019UnsupervisedFN}. However, generating the frontal faces increases the burden of face recognition systems. Cao~\emph{et al.}~\cite{Cao2018PoseRobustFR} alleviated this issue by transforming the representation of a profile face to the frontal view in the feature space. Another problem is that profile faces are much fewer than frontal faces in the training data. Thus, some generative approaches~\cite{Zhao2017DualAgentGF,Tran2017DisentangledRL,Deng2018UVGANAF,Hu2018CAPG} proposed to synthesize identity-preserving faces of arbitrary poses to enrich the training data. Moreover, certain methods~\cite{Masi2016PoseAwareFR,AbdAlmageed2016FaceRU,kan2016multi} developed multiple pose-specific deep models to compute multi-view face representations. \textbf{Racial bias}: Racial bias is another issue in face recognition. Due to the imbalanced distribution of races in the training data, the deep face feature shows favorable recognition performance for the races that constitute a large proportion of the training data, and degraded performance for those of small proportion. A few works~\cite{Wang2019RacialFI,Wang_2020_CVPR} have studied this problem recently. Wang~\emph{et al.}~\cite{Wang2019RacialFI} constructed an in-the-wild face dataset (RFW) with both identity and race annotations, which consists of four racial subsets, namely Caucasian, Asian, Indian, and African.
Besides, they proposed an information maximization adaptation network to alleviate the racial bias in face recognition. Later on, in the work of RL-RBN (reinforcement learning based race balance network)~\cite{Wang_2020_CVPR}, they set a fixed margin for the large-proportion races and automatically select an optimal margin for the small-proportion races, in order to achieve balanced performance against the racial bias issue. \textbf{Cross-modality}: Cross-modality face recognition generally refers to heterogeneous face recognition, which operates on a pair of input face images captured by different sensing modalities, such as infrared vs. visible or sketch vs. photo. Many traditional methods~\cite{galoogahi2012inter,Yi2015SharedRL,Jin2015LargeMC,Li2016MutualCA,Shi2017CrossModalityFR} have comprehensively studied this topic before. How to alleviate the domain gaps between different modalities is a major challenge for deep face recognition methods. Besides, compared with the large-scale training sets of regular visible face images, the available infrared or sketch face images are of very limited number. The existing works mainly deal with these two issues. For infrared-visible face recognition, several methods~\cite{Reale2016SeeingTF,Saxena2016HeterogeneousFR,Liu2016TransferringDR} employ the transfer learning mechanism, \emph{i.e.}, pretraining on a large amount of visible-light (VIS) images and finetuning with the near-infrared (NIR) data. Another set of methods~\cite{He2017LearningID,Wu2018DisentangledVR,Deng2019MutualCC,He2019WassersteinCL} aim to decompose the NIR and VIS representations into modality-specific and modality-invariant components, and use the latter for the recognition task. Moreover, many methods~\cite{Lezama2017NotAO,Song2018AdversarialDH,He2020AdversarialCF} study synthesizing VIS faces from NIR inputs, and then perform the regular face recognition algorithms in the VIS domain. Similarly, to reduce the domain discrepancy between photo and sketch, transfer learning~\cite{Mittal2015CompositeSR,Galea2017ForensicFP,Wan2019TransferDF} is employed in sketch-photo face recognition. Besides, sketch face synthesis is another direction. To convert photos to sketches, some works~\cite{Zhang2015EndtoEndPG,Zhang2017ContentAdaptiveSP} try to learn a fully convolutional network (FCN) with a generative loss, which provides a dense pixel-wise synthesis of the sketch face. However, the synthesized sketch face is often degraded by severe noise. Inspired by the GAN-based methods~\cite{Yi2017DualGANUD,Zhu2017UnpairedIT} in image generation, many approaches~\cite{Wang2018HighQualityFP,Zhang2019DualTransferFS,Zhang2019SynthesisOH,Zhang2020CascadedFS} developed GAN-based frameworks to recover realistic facial structures and preserve the identity-related information for sketch face synthesis. \subsubsection{Low-shot face recognition} Low-shot learning in face recognition focuses on the task of low-shot identification of face IDs, each of which has a small number of face samples. Most of these methods attempt to address the low-shot problem mainly on the MS-Celeb-1M low-shot learning benchmark~\cite{guo2017one}, which has about 50 to 100 training samples for each ID in a base set and only one training sample for each ID in a novel set. The target is to recognize the IDs in both the base and novel sets. The key challenge is to correctly recognize the subjects in the novel set, which has only one training sample per identity.
To handle this problem, Choe~\emph{et al.}~\cite{Choe2017FaceGF} and Hong~\emph{et al.}~\cite{Hong2017SSPPDANDD} proposed to augment the number of low-shot training samples with different attributes and poses via face synthesis. Wu~\emph{et al.}~\cite{Wu2017LowShotFR} developed a hybrid classifier, which is composed of a CNN and a nearest neighbor model. Smirnov~\emph{et al.}~\cite{smirnov2017doppelganger} proposed to construct better training mini-batches by sampling pairs of similar-looking identities together. Moreover, several methods~\cite{guo2017one,cheng2017know,yin2019feature} improved low-shot face recognition with better training supervision. Generally, the norm of the weights for low-shot classes is smaller than that of the regular classes in the softmax classifier, leading to weak discrimination for low-shot classes. Accordingly, Guo~\emph{et al.}~\cite{guo2017one} proposed to optimize the classifier by aligning the norms of the weight vectors of the low-shot and regular classes. Yin~\emph{et al.}~\cite{yin2019feature} found that the feature distribution of low-shot classes is under-represented due to the insufficient training samples. Thus, they proposed a feature transfer method to enrich the feature space of low-shot classes to mimic that of the regular classes. \subsubsection{Video face recognition} The above algorithms focus on still image-based face recognition. For video face recognition, a common way~\cite{Chen2017UnconstrainedSF,Ding2018TrunkBranchEC} is to consider the importance of each frame equally and simply average a set of deep features to obtain a template face representation. However, this routine considers neither the varying quality of the frames nor the temporal information across frames. How to obtain an optimal template face representation from a video is the major challenge of video-based face recognition. Many methods~\cite{Yang2017NeuralAN,Gong2019VideoFR,Liu2019FeatureAN,Gong2019LowQV} proposed to aggregate the frame-level features with attention weights or quality scores. Besides, Rao~\emph{et al.}~\cite{Rao2017LearningDA} aggregated multiple frames to synthesize a representative face image. Parchami~\emph{et al.}~\cite{Parchami2017UsingDA} employed an autoencoder to generate high-quality canonical faces to handle the problem of low-quality frames. Most methods exploit the spatial information of each frame independently without considering the temporal information across frames. Accordingly, some methods~\cite{Rao2017AttentionAwareDR,Liu2018DependencyAwareAC} model the temporal-spatial information with a sequential attention mechanism to exploit the rich correlations and find the focus of the video frames.
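A minimal sketch of such quality- or attention-weighted aggregation of frame-level features into a single video template is given below; the softmax weighting is one plausible choice of ours, not the specific scheme of any cited method.

\begin{verbatim}
import numpy as np

def aggregate_template(frame_feats, quality):
    """frame_feats: (T, d) per-frame features; quality: (T,) scores."""
    w = np.exp(quality) / np.exp(quality).sum()   # softmax attention weights
    f = (w[:, None] * frame_feats).sum(axis=0)    # weighted average feature
    return f / np.linalg.norm(f)                  # L2-normalized template
\end{verbatim}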
\subsection{Evaluation Metrics and Datasets} \label{sec:face_representation:evaluation} \subsubsection{Metrics} The performance of face recognition is usually evaluated on two tasks: verification and identification, each of which has its corresponding evaluation metrics. Specifically, two sets of samples, \emph{i.e.}, gallery and probe, are required for the evaluation. The gallery refers to a set of faces registered in the face recognition system with known identities, while the probe denotes a set of faces that need to be recognized in verification or identification. Before discussing the commonly used evaluation metrics, we first introduce some basic concepts. A face recognition system determines whether to accept the matching of a probe face and a gallery face by comparing their similarity, computed by some measurement between their features, with a given threshold. Specifically, when a probe face and a gallery face are of the same identity, a true acceptance (TA) means their similarity is above the threshold, and a false rejection (FR) means their similarity is below the threshold; if they are of different identities, a true rejection (TR) means their similarity is below the threshold, and a false acceptance (FA) means their similarity is above the threshold. These are the basic concepts used to build the evaluation metrics in the following. One can refer to~\cite{Grother2003FaceRV,Grother2014Face} for more details. \textbf{Verification task:} Face verification is often applied in identity authentication systems, which measure the similarity of face pairs. One presents his or her face and claims an enrolled identity in the gallery. Then, the system determines whether to accept that the person matches the claimed identity by calculating the similarity between the presented face and the claimed face. In other words, given a pair of photos, the system compares the faces in the two photos to determine if they are of the same identity. Thus, the verification task can be regarded as a one-to-one face matching process. The false accept rate (FAR) and true accept rate (TAR) are used to evaluate the verification performance. FAR is the fraction of impostor pairs with similarity above the threshold, which can be calculated by $\frac{FA}{FA+TR}$; TAR represents the fraction of genuine pairs with similarity above the threshold, which can be calculated by $\frac{TA}{TA+FR}$. Then, by varying the threshold, the ROC curve can be drawn through many operating points, each of which is determined by a pair of TAR vs. FAR. The ROC curve (with the TAR value at a selected FAR) and its AUC (\emph{i.e.}, area under the curve) are widely used to evaluate the performance of the face verification task. \textbf{Identification task:} The face identification task determines whether a probe face belongs to an enrolled identity in the gallery set. To this end, the probe face needs to be compared with every person in the gallery set. Thus, the identification task can also be referred to as one-to-$N$ face matching. Generally, face identification comprises two tasks, \emph{i.e.}, open-set and closed-set identification. The open-set identification task refers to the case where the probe face does not necessarily correspond to an identity contained in the gallery set, which is the general case in practice. The true positive identification rate (TPIR) and false positive identification rate (FPIR) are the most used metrics for the following two situations. The first situation is that the probe corresponds to an enrolled identity in the gallery set. This situation is called mate searching, and the probe is called a mate probe. A successful mate search means that the true match is ranked within the target rank and, meanwhile, its similarity is above the threshold. In such a case, the mate probe is correctly identified as its true identity, and mate searching is measured by the TPIR, which represents the proportion of successful mate-searching trials. The second is non-mate searching, in which the probe does not correspond to any enrolled identity (\emph{i.e.}, a non-mate probe).
The non-mate searching is measured by the FPIR, which reports the proportion of non-mate probes wrongly identified as enrolled identities. By fixing the rank and varying the threshold, the ROC curve can be drawn through many operating points, each of which is determined by a pair of TPIR vs. FPIR. The ROC curve (the TPIR value at a given FPIR) is used to evaluate performance in the open-set face identification task. In the closed-set scenario, the identity of each probe face is included in the gallery set. The cumulative match characteristic (CMC) curve is used for evaluating closed-set face identification. The CMC curve is drawn through operating points that are determined by a pair of identification rate vs. rank. The identification rate refers to the fraction of probe faces that are correctly identified as their true identities; thus the CMC curve reports the fraction of true matches within a given rank, and the identification rate at rank one is the most commonly used indicator of performance. It is noteworthy that the CMC is a special case of the TPIR when we relax the threshold.
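As a concrete illustration of the verification metrics above, the TAR at a target FAR can be computed schematically by sweeping the decision threshold over the impostor scores; the sketch below assumes precomputed arrays of genuine and impostor pair similarities.

\begin{verbatim}
import numpy as np

def tar_at_far(genuine, impostor, target_far=1e-6):
    """genuine/impostor: 1-D arrays of pair similarities."""
    thr = np.sort(impostor)[::-1]               # descending impostor scores
    k = max(int(target_far * len(impostor)), 1) # impostors allowed above thr
    t = thr[k - 1]                              # threshold achieving the FAR
    return (genuine >= t).mean()                # TAR = TA / (TA + FR)
\end{verbatim}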
\subsubsection{Datasets} With the development of deep face recognition in recent years, another key factor promoting face representation learning is the growing body of datasets for training and test. In the past few years, face datasets have become large-scale and diverse, and the testing scenes have been approaching real-world unconstrained conditions. Here, we provide a review of the datasets used for training and test in deep face recognition. Their statistics are presented in Table~\ref{fr_data}. \begin{table*}[t] \begin{center} \caption{The performance ($\%$) comparison on LFW and MegaFace Challenge. ``Training Data'' denotes the number of training face images used by the corresponding method. For the evaluation on MegaFace, ``Id.'' refers to the rank-1 face identification accuracy with 1M distractors, and ``Veri.'' refers to the face verification TAR at 1e-6 FAR. The performance with ``*'' refers to the evaluation on the refined version of MegaFace. ``-'' indicates that the authors did not report the performance with the corresponding protocol. } \centering \resizebox{0.7\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method}&\multirow{2}{*}{Training Data}&\multirow{2}{*}{Architecture}&\multirow{2}{*}{LFW}&\multicolumn{2}{c|}{MegaFace} \\ \cline{5-6}&&&&Id.&Veri.\\ \hline\hline DeepFace~\cite{taigman2014deepface}&4M&CNN-8&97.35&-&-\\ \hline DeepID2~\cite{sun2014deep}&0.3M&CNN-8&99.15&65.21&78.86\\ \hline FaceNet~\cite{Schroff2015FaceNetAU}&400M&GoogleNet&99.63&-&-\\ \hline VGG Face~\cite{Parkhi2015DeepFR}&2.6M&VGGNet&98.95&64.79&78.32\\ \hline Center Loss~\cite{wen2016discriminative}&0.7M&CNN-11&99.28&65.49&80.14\\ \hline L-Softmax~\cite{liu2016large}&0.5M&VGGNet-18&99.10&67.12&80.42\\ \hline SphereFace~\cite{liu2017sphereface}&0.5M&ResNet-64&99.42&72.72&85.56\\ \hline Ring loss~\cite{zheng2018ring}&3.5M&ResNet-64&99.50&74.93&-\\ \hline AM-softmax~\cite{wang2018additive}&0.5M&ResNet-20&98.98&72.47&84.44\\ \hline CosFace~\cite{wang2018cosface}&0.5M&ResNet-64&99.42&77.11&89.88\\ \hline ArcFace~\cite{deng2019arcface}& 0.5M&ResNet-50&99.53&77.50&92.34\\ \hline RegularFace~\cite{zhao2019regularface}& 3.1M&ResNet-20&99.61&75.61&91.13\\ \hline UniformFace~\cite{UniformFace}&3.8M&ResNet-34&99.80&79.98&95.36\\ \hline Fair Loss~\cite{Fair_Loss}&0.5M&ResNet-50&99.57&77.45&92.87\\ \hline PFE~\cite{Shi2019ProbabilisticFE}&4.4M&ResNet-64& 99.82&78.95&92.51\\ \hline TURL~\cite{shi2020universal} &0.6M&ResNet-100& 99.78&78.60&95.04\\ \hline \hline AdaCos~\cite{zhang2019adacos} &2.35M&ResNet-50&99.73&97.41$^*$&-\\ \hline P2SGrad~\cite{Zhang2019P2SGradRG}&2.35M&ResNet-50&99.82&97.25$^*$&-\\ \hline AdaptiveFace~\cite{liu2019adaptiveface}&5M&ResNet-50&99.62&95.02$^*$&95.61$^*$\\ \hline Circle Loss~\cite{sun2020circle}&3.6M&ResNet-100&99.73&98.50$^*$&-\\ \hline DUL~\cite{Chang2020DataUL}&3.6M&ResNet-64&99.83&98.60$^*$&-\\ \hline DB~\cite{Cao2020DomainBF}&5.8M&ResNet-50&99.78&96.35$^*$&96.56$^*$\\ \hline ArcFace~\cite{deng2019arcface}&5.8M &ResNet-100&99.82 &98.35$^*$&98.48$^*$\\ \hline MV-AM-softmax~\cite{Wang2019MisclassifiedVG}&3.2M&Attention-56&99.78&97.14$^*$&97.57$^*$\\ \hline CurricularFace~\cite{Huang2020CurricularFaceAC} &5.8M&ResNet-100&99.80 &98.71$^*$&98.64$^*$\\ \hline GroupFace~\cite{Kim2020GroupFaceLL} & 5.8M &ResNet-100&99.85 &98.74$^*$&98.79$^*$\\ \hline \end{tabular} } \label{performance_fr} \end{center} \end{table*} \begin{table*}[ht] \begin{center} \caption{The commonly used training and test public datasets for deep face recognition.} \label{fr_data} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline {Dataset}&{Year}&{$\#$Subject}&{$\#$Image/Video}&{$\#$ of Img/Vid per Subj}&{Description}\\ \hline\hline \multicolumn{6}{|c|}{Training}\\ \hline CASIA-WebFace~\cite{yi2014learning}&2014&10,575&494,414/-&47&The first public large-scale face dataset\\ \hline VGGFace~\cite{Parkhi2015DeepFR}&2015&2,622&2.6M/-&1,000&Containing a large number of images per subject \\ \hline CelebA~\cite{liu2015faceattributes}&2015&10,177 &202,599/-&20& Rich annotations of attributes and identities \\ \hline UMDFaces~\cite{Bansal2017UMDFacesAA}&2015&8,277&367K/-&45& Abundant variation of facial pose\\ \hline MS-Celeb-1M~\cite{guo2016ms}&2016&100k&10M/-&100&The largest public dataset of celebrity faces\\ \hline MegaFace~\cite{kemelmacher2016megaface,Nech2017LevelPF}&2016&672,057&4.7M/-&7&A long-tail dataset of non-celebrities \\ \hline VGGFace2~\cite{Cao2018VGGFace2AD}&2017&9,131&3.31M/-&363&A high-quality dataset with a wide range of variation\\ \hline UMDFaces-Videos~\cite{Bansal2017TheDA}&2017&3,107&-/22,075&7&A video
training dataset collected from YouTube\\ \hline MS-Celeb-1M Low-shot~\cite{guo2017one}&2017&20k,1k&1M,1k/-&58,1&Low-shot face recognition\\ \hline IMDb-Face~\cite{Wang2018TheDO}&2018&57k&1.7M/-&29&A large-scale noise-controlled dataset\\ \hline QMUL-SurvFace~\cite{Wang2018TheDO}&2018&5,319&220,890/-&41&A low-resolution surveillance dataset\\ \hline\hline \multicolumn{6}{|c|}{Test}\\ \hline LFW~\cite{LFWTech}&2007&5,749&13,233/-&2.3& A classic benchmark in unconstrained conditions\\ \hline YTF~\cite{Wolf2011FaceRI}&2011&1,595&-/3,425&2.1&Face recognition in unconstrained videos\\ \hline CUFSF~\cite{Zhang2011CoupledIE}&2011&1,194&2,388/-&2&Photo-sketch face recognition\\ \hline CASIA NIR-VIS v2.0~\cite{Li2013TheCN}&2013&725&17,580/-&24.2&Near-infrared vs. RGB face recognition\\ \hline IJB-A~\cite{Klare2015PushingTF}&2015&500&5,712/2,085&11.4/4.2& Set-based face recognition with large variation\\ \hline CFP~\cite{sengupta2016frontal}&2016&500&7,000/-&14&Frontal to profile cross-pose face verification\\ \hline MS-Celeb-1M Low-shot~\cite{guo2017one}&2016&20k,1k&100k,20k/-&5,20&Low-shot face recognition\\ \hline MegaFace~\cite{kemelmacher2016megaface,Nech2017LevelPF}&2016&690,572&1M/-&1.4&A large-scale benchmark with one million faces\\ \hline IJB-B~\cite{Whitelam2017IARPAJB}&2017&1,845&11,754/7,011&6.37/3.8&Set-based face recognition with full pose variation \\ \hline CALFW~\cite{zheng2017cross}&2017&4,025&12,174/-&3& Cross-age face verification\\ \hline AgeDB~\cite{Moschoglou2017AgeDBTF}&2017&570&16,516/-&29& Cross-age face verification\\ \hline SLLFW~\cite{deng2017fine}&2017&5,749&13,233/-&2.3& Improving the difficulty of negative pairs in LFW\\ \hline CPLFW~\cite{zheng2018cross}&2017&3,968&11,652/-&2.9&Cross-pose face verification\\ \hline Trillion Pairs~\cite{trillionpairs.org}&2018&1M&1.58M/-&1.6&A large-scale benchmark with massive distractors\\ \hline IJB-C~\cite{Maze2018IARPAJB}&2018&3,531&31,334/11,779&6/3&Set-based face recognition with large variation\\ \hline IJB-S~\cite{Kalka2018IJBSIJ}&2018&202&5,656/552&28/12&Real-world surveillance videos \\ \hline RFW~\cite{Wang2019RacialFI}&2018&11,429&40,607/-&3.6& For reducing racial bias in face recognition\\ \hline DFW~\cite{Kushwaha2018DisguisedFI}&2018&600&7,771/-&13&Disguised face recognition\\ \hline QMUL-SurvFace~\cite{Wang2018TheDO}&2018&10,254&242,617/-&23.7&Low-resolution surveillance videos\\ \hline \end{tabular}} \end{center} \vspace{-2em} \end{table*} \textbf{Training data:} Large-scale training datasets are essential for learning deep face representations. The early works often employed private face datasets, such as DeepFace~\cite{taigman2014deepface}, FaceNet~\cite{Schroff2015FaceNetAU} and DeepID~\cite{sun2014deep}. To make fair comparisons possible, Yi~\emph{et al.}~\cite{yi2014learning} released the CASIA-WebFace dataset, which contains 10,575 subjects with 494,414 images and has become one of the most widely-used training datasets. Afterward, more public training datasets were published to provide abundant face images for training deep face models. Among them, VGGFace~\cite{Parkhi2015DeepFR} and VGGFace2~\cite{Cao2018VGGFace2AD} contain many training samples for each subject. In contrast, MS-Celeb-1M~\cite{guo2016ms}, MegaFace~\cite{kemelmacher2016megaface} and IMDb-Face~\cite{Wang2018TheDO} provide a large number of subjects with limited training samples per subject. Label noise is a common problem when collecting large-scale face datasets.
IMDb-Face~\cite{Wang2018TheDO} estimated the noise distribution in the existing datasets and showed that they suffer from serious noise problems. They also found that cleaning the data can effectively improve the performance of face recognition. \textbf{Test data:} As for testing, Labeled Faces in the Wild (LFW)~\cite{LFWTech} is a classic and the most widely used benchmark for face recognition in unconstrained environments. The original protocol of LFW contains 3,000 genuine and 3,000 impostor face pairs, and evaluates the mean verification accuracy on these 6,000 pairs. So far, the state-of-the-art accuracy has saturated on LFW, whereas LFW contains more samples in total than are used in the original protocol. Based on this, BLUFR~\cite{Liao2014ABS} proposed to exploit all the face images in LFW for a large-scale unconstrained face recognition evaluation; SLLFW~\cite{deng2017fine} replaced the negative pairs of LFW with more challenging ones. In addition, CFP~\cite{sengupta2016frontal}, CPLFW~\cite{zheng2018cross}, CALFW~\cite{zheng2017cross}, AgeDB~\cite{Moschoglou2017AgeDBTF} and RFW~\cite{Wang2019RacialFI} adopt evaluation metrics similar to those of LFW to test face recognition under various challenges, such as cross pose, cross age and multiple races. MegaFace~\cite{kemelmacher2016megaface,Nech2017LevelPF} and Trillion Pairs~\cite{trillionpairs.org} focus on the performance at strict false accept rates (\emph{i.e.}, 1e-6 and 1e-9) on the face verification and identification tasks with million-scale distractors. Table~\ref{performance_fr} shows the performance comparison of many methods on LFW and MegaFace. The above test datasets focus on image-to-image face recognition, whereas YouTube Faces (YTF)~\cite{Wolf2011FaceRI}, IJB-A~\cite{Klare2015PushingTF}, IJB-B~\cite{Whitelam2017IARPAJB}, IJB-C~\cite{Maze2018IARPAJB}, IJB-S~\cite{Kalka2018IJBSIJ} and QMUL-SurvFace~\cite{Wang2018TheDO} serve as evaluation benchmarks for video-based face recognition. Especially, IJB-S and QMUL-SurvFace are constructed from real-world surveillance videos, which are much more difficult and realistic than the tasks on still images. CASIA NIR-VIS v2.0~\cite{Li2013TheCN} and CUFSF~\cite{Zhang2011CoupledIE} focus on cross-modality face recognition, such as near-infrared vs. RGB face verification and identification. Besides, DFW~\cite{Kushwaha2018DisguisedFI} aims to study disguised face recognition, such as faces with make-up, beards, moustaches and sunglasses, etc. Moreover, the MS-Celeb-1M low-shot dataset~\cite{guo2017one} provides a benchmark for low-shot face recognition. \subsection{Challenges and Future Work} \label{sec:face_representation:challenge} In this section, we have reviewed the recent advances of deep face representation from many perspectives including network architecture, training supervision, specific face recognition tasks and datasets. Since the prevalence of deep learning, deep face representation has made remarkable progress and been successfully applied in many real-world scenarios. The remaining major challenges are given as follows: \begin{itemize} \item ~\textbf{Under limited conditions}: Although existing methods achieve high accuracy on various benchmarks, it is still challenging when the development and application (\emph{i.e.}, training and inference) are limited in computational cost and in the amount of training data.
\item ~\textbf{Surveillance video face recognition}: In many real-world applications, surveillance face recognition is a common scenario, where the challenges include various facial variations, such as large poses, motion blur, low illumination, low resolution, occlusion, etc. \item ~\textbf{Label noise}: Label noise in large-scale face datasets occurs frequently and harms the training. There is still large room for improvement in noise-robust approaches. \item ~\textbf{Imbalance data}: Imbalanced distribution of training data also brings issues to face representation learning, such as long-tail distributions over face identities or domains, etc. \end{itemize} To address these challenges, a number of worthwhile research directions need to be explored in the future. We present them in the following. \begin{itemize} \item ~\textbf{Lightweight face recognition}: The large memory and computational costs often make it impractical to employ heavy-weight networks on mobile or embedded devices. Although many works ~\cite{Wu2015ALC,Wu2018ALC,chen2018mobilefacenets,Duong2018MobiFaceAL,MartnezDaz2019ShuffleFaceNetAL,Wu_2020_CVPR} have studied lightweight face recognition, it is still essential to improve lightweight models with high efficiency and accuracy. \item ~\textbf{Robustness to variations in video}: Robust face representation models against varying conditions are always required, especially for the face recognition task in surveillance video. Robustness against low image quality and large facial pose is the core demand in many practical applications. \item ~\textbf{Noisy label learning}: Label noise is an inevitable problem when collecting large-scale face datasets. Certain works~\cite{Wang2018TheDO,deng2019arcface,trillionpairs.org,Zhang_2020_FaceGraph} study how to remove the noisy data to build a cleaned dataset, and some others~\cite{Hu2019NoiseTolerantPF,wang2019co,Zhong_2019_CVPR} aim at learning noise-robust face representations. But most of them are susceptible to the quality of the initial model, and need to be more flexible in real-world scenarios. Noisy label learning in face recognition therefore remains an open issue. \item ~\textbf{Cross domain face recognition}: There are many different domain factors in face data, such as facial age, pose, race, imaging modality, etc., and some works~\cite{Wen2016LatentFG,Du2019AgeFR,Tran2017DisentangledRL,Cao2018PoseRobustFR,Wang2019RacialFI,Wang_2020_CVPR,Zhang2020CascadedFS} have studied face recognition across a small fraction of them. How to obtain a universal representation for cross domain face recognition is a challenging research topic. \item ~\textbf{Learning with imbalance data}: Representation learning on long-tail data is an existing problem in many face datasets. With under-represented intra-class variations, the subjects with limited samples are usually neglected during training. The domain bias caused by imbalanced data scale is another common issue in face recognition. It is worthwhile to handle these problems in a unified framework. \item ~\textbf{Learning with unlabeled faces}: There are large amounts of unlabeled face data in practical applications. However, it is excessively expensive to manually annotate them when the dataset keeps growing. Recently, semi-supervised learning and face clustering methods have attracted increasing attention for face representation.
How to effectively employ unlabeled data for boosting face recognition is a promising direction. \end{itemize} \section{Discussion and Conclusion} \subsection{Discussion} \label{sec:discussion} Deep face recognition has achieved great progress in recent years, while a number of challenging issues remain for each element. Readers can refer to the ending part of each body section, where we have provided a detailed analysis of these issues. Here, we go deeper with the discussion of the challenges and future work. The top half of Table~\ref{conclusion_challenges} elaborates the common issues shared among face detection, preprocessing and representation. We can find that the issues mainly include three aspects, \emph{i.e.}, facial and image variations, data and label distribution, and computational efficiency. For example, in the first aspect, the facial variations include large facial pose, extreme expression, occlusion and facial scale, while the image variations include objective factors such as motion blur, low illumination and low resolution, which occur frequently in video face recognition. Another example concerns the need for training efficiency, including faster training and fast convergence, both of which aim to accelerate the learning of large face representation networks (normally hundreds of layers) from weeks to hours; the former generally focuses on mixed precision training or distributed frameworks for large-scale training (over millions of identities), while the latter focuses on improving the supervision, initialization, updating manner, activations, architectures, etc. Here, rather than replaying every detail, we leave Table~\ref{conclusion_challenges} to readers for exploring the common challenges and further improvement. It is worth mentioning that all the elements will benefit from solutions to these issues, since they are the common issues across the elements. \begin{table*}[t] \begin{center} \caption{Summary of the major challenges towards end-to-end deep face recognition.} \label{conclusion_challenges} \resizebox{\linewidth}{!}{ \begin{tabular}{|p{1cm}|p{4cm}|p{6cm}|} \hline \multicolumn{2}{|c|}{Challenges} &\multicolumn{1}{|c|}{Description} \\ \hline \multicolumn{1}{|l|}{The common issues across the elements.} & Facial / image variations & \vspace{-7pt} \begin{itemize}[leftmargin=*] \item Large pose, extreme expression, occlusion, facial scale. \item Motion blur, low illumination, low resolution. \end{itemize} \vspace{-12pt} \\ \cline{2-3} & Data / label distribution & \vspace{-7pt} \begin{itemize}[leftmargin=*] \item Limited labeled data, label noise. \item Usage of unlabeled data. \item Imbalance over scale, identity, race, domain, modality. \end{itemize} \vspace{-12pt} \\ \cline{2-3} & Computational efficiency & \vspace{-7pt} \begin{itemize}[leftmargin=*] \item Inference on non-GPU servers and edge computing. \item Fast training and convergence. \end{itemize} \vspace{-12pt} \\ \hline \multicolumn{1}{|l|}{The issues concerning the entire system.} & Joint modeling and optimization & \vspace{-7pt} \begin{itemize}[leftmargin=*] \item End-to-end training and inference. \item Unified learning objective. \item Mutual promotion. \end{itemize} \vspace{-12pt} \\ \cline{2-3} & Interpretability & \vspace{-7pt} \begin{itemize}[leftmargin=*] \item Explainable learning and inference.
\end{itemize} \vspace{-12pt} \\ \hline \end{tabular}} \end{center} \end{table*} Despite the recent advances of the individual elements, it is still necessary to discuss and explore the future development trends from the view of the holistic framework, because each element has a significant impact on the whole system. Inferiority in any one of the elements becomes the shortest stave of the cask and harms the final performance. For example, face detection is the very first step of end-to-end face recognition, and the accuracy of the face bounding box directly influences the subsequent preprocessing; inaccurate face localization will bring in mis-aligned information and disturbance from non-face regions, leading to improper feature computation and thus damaging the performance. For another example, a set of inaccurate facial landmarks will harm the alignment and then impede the following feature computation as well, even if the afore-detected bounding box is fair. As for the final step of face representation, which is the core operation of the end-to-end face recognition system, it is crucial to pursue its own performance given the cropped and aligned face. The bottom half of Table~\ref{conclusion_challenges} indicates the major challenges from the perspective of the entire system. The collection includes two main aspects. The first aspect relates to the interpretability of deep face recognition. Although explainable artificial intelligence, so-called XAI, has been studied for a long time, explainable deep face recognition is in its infancy~\cite{zhong2018deep,zee2019enhancing,yin2019towards,williford2020explainable}. We believe there are two ways to access the interpretability of deep face recognition, \emph{i.e.}, top-down and bottom-up, respectively. The top-down way resorts to human prior knowledge for algorithm exploration, since humans show superior face recognition ability compared with deep models in many adverse conditions. The bottom-up way denotes exploration from the perspective of the face data itself, such as modeling explainable deep face recognition in the spatial and scale dimensions. The second aspect refers to the joint modeling and optimization of face detection, preprocessing and representation. Ideally, the three elements should be jointly modeled and optimized with respect to the end-to-end accuracy. On one hand, such integration provides a possibility to search for a globally optimal solution for the holistic system; on the other hand, the individual elements of the system can benefit from the upstream ones. However, the elements have different learning objectives regarding their own tasks. For example, face detection aims to regress the correct bounding box for the real face, while face representation learning aims to span a discriminative feature space from the given cropped faces. Therefore, how to unify these learning objectives is a challenging and critical issue for the joint optimization. A group of works~\cite{mtcnn,deng2020retinaface,Wu2017ReST,Zhao_2020_CVPR,Hayat2017JointRA,Zhong2017e2e,Zhou2018GridFaceFR,Zhao2018pim,Wei2020BalancedAF} attempt to integrate face detection and alignment, or face alignment and representation, for a joint boost. But face detection is still difficult to integrate with face representation, because they have quite different objectives and implementation mechanisms.
Nevertheless, it is still worthwhile to exploit end-to-end trainable deep face recognition and to study how the elements can be further improved through joint learning. Furthermore, beyond the topic of this survey, there is also an open question of how to develop a single network that performs end-to-end face recognition. \subsection{Conclusion} \label{sec:conclusion} In this survey, we systematically review the recent advances in the elements of end-to-end deep face recognition, which consist of face detection, face preprocessing and face representation. Although there are many surveys about face recognition, they mostly focus on the face representation problem without considering the mutual effects of the other elements in the pipeline, whereas this survey is the first one that provides a comprehensive review of the elements of end-to-end deep face recognition. We present a detailed discussion and comparison of many approaches in each element from multiple aspects. Additionally, we analyze the existing challenges and collect promising future research directions for them. Moreover, we discuss their mutual effects and future work on the holistic framework. We hope this survey can bring helpful insights toward a better understanding of the big picture of end-to-end face recognition and toward deeper exploration in a systematic way. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} The transport and stirring of reactive scalars is a problem that naturally arises in many environmental and geophysical situations as well as in engineering applications. Important examples of reactive scalars may be found in oceanic ecosystems, e.g. interacting nutrient and plankton populations, in atmospheric chemistry, e.g. stratospheric ozone, as well as in microfluidics and combustion. In all of these examples, fine-scale strongly-inhomogeneous structures, usually in the form of filaments, characterize the spatial structure of the corresponding reactive scalar fields \cite{Abraham_etal2000, Neufeld_etal2002, Nieves_etal2007, TuckHovde1999, Stremler_etal2004, Kiss_etal2003}. Understanding the main mechanisms controlling the nature of these small-scale structures is important as they can have a large-scale impact, for instance on global ozone depletion \cite{Edouard_etal1996} or on the total plankton production \cite{MahadevanArcher2000}. It is now well known that small-scale filamentary structures arise naturally through chaotic advection in spatially smooth (differentiable) and time-dependent velocity fields \cite{Aref1984,Crisanti_etal1999,Cartwright_etal1999}, relevant to a broad set of applications ranging from stably stratified flows in the atmosphere and the ocean \cite{Haynes1999} to microfluidic devices \cite{Aref2002}. Scalar mixing is induced through the continual stretching and folding of fluid elements by which large-scale scalar variability is transferred into small scales until it is dissipated by molecular diffusion. The rate at which the scalar is mixed is insensitive to the details of the diffusion and depends primarily on the stirring strength of the flow. A measure for the latter is given by the exponential rate at which neighboring fluid parcel trajectories separate in backward time. Following previous work \cite{Ottino1989,Ott1993} on dynamical systems theory applied to chaotic advection, we call this rate the flow Lyapunov exponent. More precisely, it is the most positive Lyapunov exponent associated with the backward dynamics. A non-trivial stationary-state spatial distribution is obtained in the presence of a large-scale space-dependent forcing \cite{Batchelor1959}. In the presence of reactions whose dynamics are stable and for a spatially smooth force, the distribution is filamental or smooth depending on whether the stirring of the flow is stronger or weaker than the rate of convergence of the reaction dynamics. The latter is measured by the set of Lyapunov exponents associated with the reaction dynamics, better known as the chemical Lyapunov exponents \cite{Neufeld_etal1999}, whose values depend on the reaction system and, to a lesser extent, on the driving induced by chaotic advection. A useful way to characterize the scaling behavior of the spatial distribution is by investigating the scaling exponents of statistical quantities such as structure functions. For closed chaotic flows (bounded flow domain) and at scales for which diffusion can be neglected, the small-scale structure of all the reactive scalar fields is shared and characterized by a single scaling regime (special conditions that give rise to exceptions will be discussed later).
The theoretical prediction for the H\"older exponent, the scaling exponent associated with the field's first-order structure function, was found by \cite{Neufeld_etal1999} to be determined by the ratio of the least negative chemical Lyapunov exponent to the flow Lyapunov exponent (as defined previously) (see also \cite{Hernandez-Garcia_etal2002} for an extension to a multi-species reaction model). This theoretical prediction, deduced for reaction systems that are based on ordinary differential equations, was found to be in contradiction with the numerical results that \cite{Abraham1998} obtained for a reaction model that is based on delay differential equations. The latter is a model that describes the biological interactions among nutrients, phytoplankton and zooplankton and is in this paper referred to as the {\it delay plankton model}. The numerical results of \cite{Abraham1998} appeared to show that introducing a delay time into the reactions led to a decoupling between the phytoplankton and zooplankton distributions at all length scales. Moreover, as the value of the delay time was increased, the zooplankton distribution was found to become increasingly filamental, ultimately behaving like a passive, non-reactive scalar, in agreement with most oceanic observations at the mesoscale \cite{MackasBoyd1979,Mackas_etal1985, Tsuda1995}. The relation between the numerical work of \cite{Abraham1998} for the system with delay and the theoretical and numerical work of \cite{Neufeld_etal1999} and \cite{Hernandez-Garcia_etal2002} for the system without delay has recently been addressed in \cite{TzellaHaynes2007}. Based on an alternative numerical method that permits the study of smaller length scales, a new set of carefully performed numerical simulations revealed that for sufficiently small length scales, the phytoplankton and zooplankton distributions share the same small-scale structure, as would be expected in the absence of delay. However, at scales larger than a transition length scale, a second scaling regime appeared in which the scaling behavior that \cite{Abraham1998} observed was reproduced. The main focus of this paper is to present a theory for the spatial properties of reactive scalar fields whose reactions explicitly contain a delay time and which are stirred by a chaotic advection flow. One motivation is a better understanding of the delay plankton model discussed above, but broader motivation comes from the wide application of delay equations to model chemical \cite{Roussel1996} and biological \cite{Murray1993} systems. By varying the delay time as well as the stirring strength of the flow and the reactions, two main issues are here investigated: firstly, the origin of the second scaling regime and, secondly, the parameters that control the transition length scale and the scaling behavior in each of these two regimes. In order to obtain a theoretical understanding of such a system, models of increasing complexity will be considered, starting with a single linear delay reactive scalar field and moving on to a system of nonlinearly interacting scalar fields. Scalar fields evolving according to reaction equations containing a delay time are in the following referred to as {\it delay reactive scalar fields}. The theoretical development is accompanied by a set of numerical results obtained for (i) a single linear delay reactive scalar and (ii) the delay plankton model, both coupled to a two-dimensional, unsteady and incompressible flow via a large-scale spatially smooth source.
This paper is organized into two parts. The first part, Sec. \ref{sec:DelayTheory}, is solely devoted to the theoretical development of a single delay reactive scalar, complemented in the Appendix for a system of such fields. A set of scaling laws are deduced describing the H\"older exponents associated with three scaling regimes. The transition length scale dividing small-scale and intermediate-scale regimes is found to depend on the product of the delay time and the stirring strength of the flow. The second part of the paper, Sec. \ref{sec:DelayNumericalResults}, consists of the numerical simulations to verify the theoretical results obtained in Sec. \ref{sec:DelayTheory}. The paper ends with a summary and conclusions. \section{Theoretical Development}\label{sec:DelayTheory} \subsection{Reactive Scalar Evolution Models} The spatial and temporal evolution of passively advected reactive tracers is described by the Advection Diffusion Reaction (ADR) equations. For the case of an incompressible velocity field, $\bm{v}(\bm{x},t)$, and for $t>0$, the typical form of these equations is \begin{equation}\label{eqn:ADR} \frac{\partial}{\partial t} \bm{c}(\bm{x},t)+ \bm{v}(\bm{x},t)\cdot \nabla \bm{c}(\bm{x},t) = \bm{\mathcal{F}}_{-\tau}+D\nabla^{2}\bm{c}(\bm{x},t), \end{equation} where the fields $\bm{c}(\bm{x},t)=(c_1(\bm{x},t), c_2(\bm{x},t),\ldots,c_n(\bm{x},t))$, $n$ being the number of chemical species, are assumed to diffuse independently from one another with the same constant diffusivity $D$. The interactions among these scalar fields, e.g. chemical reactions or predator-prey interactions, are described by the forcing term $\bm{\mathcal{F}}_{-\tau}\equiv\bm{\mathcal{F}}(\bm{c}(\bm{x},t), \bm{c}(\bm{x},t-\tau), \bm{x})$ in which the effects of sources and sinks are also included. The main feature of the forcing term is its dependence on a delay time $\tau$ associated with, e.g. the time it takes for a biological species to mature. Note that for Eq. (\ref{eqn:ADR}) to be well-defined, $\bm{c}(\bm{x},t)$ needs to be initialized for $t\in[-\tau,0]$. The explicit dependence of the forcing term on the spatial coordinate $\bm{x}$ accounts for the inhomogeneous distributions of these sources and sinks, e.g. due to a spatially varying nutrient field, or for the spatial dependence of the reproduction and predation rates of biological species, e.g. due to a temperature dependence. If the forcing term does not depend on the spatial coordinate, the reactions are not coupled to the flow and any initial inhomogeneity in the concentration fields is stirred down by advection and eventually smoothed out by diffusion. We will here concentrate on a forcing term that in the absence of advection has a single, stable, fixed point of equilibrium. In this case, as will become clear later, for a time $t$ that is large enough, $\bm{c}(\bm{x},t)$ is assumed \cite{Neufeld_etal1999} to reach a statistical equilibrium. To tackle Eq. (\ref{eqn:ADR}) one can either consider the fields in the spatial domain in which the fluid is defined \cite{Corrsin1961} - the Eulerian approach - or instead consider their evolution along the trajectory traced by each fluid parcel that constitutes the fluid - the Lagrangian approach. The approach we will adopt is the Lagrangian one. For cases in which advective transport dominates diffusion, i.e. large P\'eclet number, a natural approach is to set $D=0$. The chemical evolution of a fluid parcel is then independent of that of all other parcels and Eq.
(\ref{eqn:ADR}) reduces to a low-dimensional dynamical system given by \begin{subequations} \begin{align} \frac{d\bm{X}(t)}{dt}&=\bm{v}(\bm{X}(t),t) \label{eqn:traj},\\ \frac{d\bm{C}_{\bm{X}(t)}(t)}{dt}&=\bm{\mathcal{F}}_{-\tau}(\bm{C}_{\bm{X}(t)}(t),\bm{X}(t)),\label{eqn:chem} \end{align} \end{subequations} where $\bm{X}(t)$ denotes the fluid parcel's trajectory and $\bm{C}_{\bm{X}(t)}(t)$ is the vector of its chemical concentration fields, satisfying $\bm{C}_{\bm{X}(t)}(t)=\bm{c}(\bm{x}=\bm{X}(t),t)$. The implication of the neglect of diffusion is that any predictions concerning the spatial structure apply only above a certain spatial cut-off scale whose value approaches zero for smaller and smaller diffusivities (see \cite{LopezHernandez-Garcia2002}, where this argument is developed for a linearly decaying reactive scalar). The principal aim here is to examine the small-scale structure of the scalar fields once statistical equilibrium has been attained and to characterize this structure in terms of H\"older exponents. To do so, the concentration difference between neighboring points, given by $\delta\bm{c}(\delta\bm{x};\bm{x},t)\equiv\bm{c}(\bm{x}+\delta\bm{x},t)-\bm{c}(\bm{x},t)$, needs to be investigated as a function of $\delta \bm{x}$, from which the H\"older exponents $\bm{\gamma}=(\gamma_1, \gamma_2,\ldots,\gamma_n)$, defined by \begin{equation}\label{eqn:concentrationdiff2} |\delta c_i(\delta\bm{x};\bm{x},t)|\sim |\delta \bm{x}|^{\gamma_i}, \quad |\delta \bm{x}|\rightarrow 0, \end{equation} can be deduced. For a field that is smooth (i.e. differentiable) at $\bm{x}$, $\gamma_i=1$, while the range $0<\gamma_i<1$ corresponds to an irregular (e.g. filamental) field. This concentration difference can be estimated by considering the concentration difference between two neighboring fluid parcels $\bm{X}(t)$, $\bm{X}+\delta \bm{X}(t)$ with \begin{equation}\label{eqn:LagrangianEulerian2} \delta\bm{c}(\delta\bm{x};\bm{x},t) =\delta\bm{C}_{\delta\bm{X}(t);\bm{X}(t)} \equiv\bm{C}_{\bm{X}+\delta\bm{X}(t)}-\bm{C}_{\bm{X}(t)}. \end{equation} To simplify the analysis, we will in the following concentrate on the simple example \begin{equation}\label{eqn:1Ddelay} \frac{d}{dt}C(t)=-aC(t)-bC(t-\tau)+C_0(\bm{X}(t)), \end{equation} where $a,b,\tau$ are constants with $a,\tau>0$ and $C_0(\bm{x})$ is a spatially smooth source. The more general case (\ref{eqn:chem}) is considered in the Appendix. We will only consider two-dimensional flows; however, the theory presented is readily extendable to large-scale flows in higher dimensions. \subsection{Key Properties of Forced Linear Delay Equations}\label{subsec:KeyProperties} To understand the role that a delay time plays in the fields' scaling behavior, the general properties of forced linear delay differential equations (DDEs) need to be considered. An overview of these is now presented. For more complete treatments see \cite{HaleLunel1993}, \cite{BellmanCooke1963} and \cite{Diekmann_etal1995}. Take the one-dimensional forced, linear DDE \begin{equation}\label{eqn:forced1D} \dot{y}=-ay(t)-by(t-\tau)+f(t), \end{equation} where $a$, $b$ and $\tau$ are the same as before and $f$ is a real continuous function. In order for $y(t)$ to be uniquely determined, it is necessary to prescribe an initial function on the interval $[-\tau,0]$. 
Denoting this function by $\phi(t)$, it follows that \begin{subequations}\label{eqn:general} \begin{align} y(t)&=\phi (t), \quad \text{for $t\in [-\tau,0]$,} \label{eqn:initialcd}\\ y(t)&= e^{-at}\phi(0)+ \nonumber \\ \phantom{y(t)}&\phantom{=} \int_0^t e^{-a(t-t')}[-by(t'-\tau)+f(t')]dt',\quad \text{for $t>0$,} \label{eqn:variationconstants} \end{align} \end{subequations} where Eq. (\ref{eqn:variationconstants}) is easily deduced using the well-known variation of constants (or parameters) formula. Based on Eq. (\ref{eqn:general}), an expression for $y(t)$ for $t\in[0,\tau]$ is readily determined. Substituting this expression into (\ref{eqn:variationconstants}), $y(t)$ can be calculated for $t\in[\tau,2\tau]$, and so on for successive time intervals. This method is called the {\it method of steps}. In a similar way to ordinary differential equations (ODEs), the {\it characteristic equation} for the homogeneous part of Eq. (\ref{eqn:forced1D}) is obtained by looking for solutions of the form $ce^{\lambda t}$, where $c$ is a constant and $\lambda$ is complex. The scalar equation \begin{equation}\label{eqn:1Dhom} \dot{y}=-ay(t)-by(t-\tau) \end{equation} has a nontrivial solution, $ce^{\lambda t}$, if and only if \begin{equation}\label{eqn:1Dchar} h(\lambda)\equiv\lambda+a+be^{-\lambda \tau}=0. \end{equation} Eq. (\ref{eqn:1Dchar}) is transcendental and thus has infinitely many roots. At the same time, because $h(\lambda)$ is an entire function, the number of roots is finite within any compact region of the complex plane. Because $a$, $b$ and $\tau$ are real, the roots are either real or come in complex conjugate pairs. It can be shown \cite{HaleLunel1993} that the real part of each root is bounded. Moreover, for $a>|b|$ and for all $\tau>0$, $\mathrm{Re\,}\lambda<0$ for every root, which guarantees that the solution to Eq. (\ref{eqn:1Dhom}) is asymptotically stable. The solution to the forced delay equation (\ref{eqn:forced1D}) depends closely on a particularly initialized solution of the homogeneous delay equation (\ref{eqn:1Dhom}), called the {\it fundamental solution}. This function, denoted by $Y(t)$, is defined as the solution of (\ref{eqn:1Dhom}) which satisfies the initial condition \begin{equation}\label{initialcds} Y(t) = \left\{ \begin{array}{rl} 0, &\mbox{ $t<0$,} \\ 1, &\mbox{ $t=0$.} \end{array} \right. \end{equation} For $0\leq t\leq\tau$, an exact expression for $Y(t)$ may be obtained using the {\it method of steps}. Substituting into Eq. (\ref{eqn:variationconstants}) the initial conditions given by Eq. (\ref{initialcds}) and setting $f=0$ gives \begin{equation}\label{eqn:relationa_eigenval} Y(t)=e^{-at}. \end{equation} For $t>\tau$, an expression for $Y(t)$ obtained using the {\it method of steps} is no longer useful. This is because the expression involves terms in powers of $t$, and thus for large values of $t$ it is difficult to extract any insight into the behavior of $Y(t)$. Using Laplace transforms, it is instead possible to express $Y(t)$ in terms of an infinite sum of eigenfunctions. Taking the Laplace transform of Eq. (\ref{eqn:1Dhom}) with initial conditions given by Eq. (\ref{initialcds}) leads to \begin{equation}\label{eqn:fundamental_aux} \mathcal{L}(Y)(\lambda)\equiv\int_0^{\infty}e^{-\lambda t}Y(t)\,dt =h^{-1}(\lambda), \end{equation} where $\mathcal{L}$ stands for the Laplace transform. 
Employing the inversion theorem, \begin{equation}\label{eqn:fundamental_formal} Y(t)=\int_{(\gamma)}e^{\lambda t}h^{-1}(\lambda)d\lambda, \quad t>0, \end{equation} where $\int_{(\gamma)}\equiv\lim_{T\rightarrow\infty} \,\frac{1}{2\pi i}\,\int_{\gamma-iT}^{\gamma+iT}$ with $\gamma>\text{max}\{\mathrm{Re\,}\lambda:h(\lambda)=0\}$. Using the Cauchy residue theorem to integrate $e^{\lambda t}h^{-1}(\lambda)$ along a suitably chosen contour, $Y(t)$ can be expressed as an infinite series of eigenfunctions \begin{equation}\label{eqn:Y_fundamental_alt} Y(t)=\sum_{j=1}^{\infty} \underset{\lambda=\lambda_j}{\text{Res}}\phantom{x} e^{\lambda t}h^{-1}(\lambda), \quad t>0, \end{equation} that is uniformly convergent in $t$ (see \cite{Lunel1995}). Since the roots are either real or come in complex conjugate pairs, Eq. (\ref{eqn:Y_fundamental_alt}) can be re-written as \begin{subequations}\label{eqn:Y_fundamental_alt23} \begin{equation}\label{eqn:Y_fundamental_alt2} Y(t)=\lim\limits_{N\rightarrow\infty}{Y_N(t)}, \quad t>0, \end{equation} with $Y_N(t)$ defined by \begin{equation}\label{eqn:Y_fundamental_alt3} Y_N(t)\equiv \sum_{\substack{j=1\\ \{\lambda_j^+:\,\mathrm{Im\,}\lambda_{j}\geq 0\}}}^N P_j(\lambda_j^+,t) \,e^{\mathrm{Re\,}\lambda_j^+ t},\quad t>0 \end{equation} where $\lambda_j^+$ represents a root of (\ref{eqn:1Dchar}) with a positive or zero imaginary part satisfying $\mathrm{Re\,}\lambda_j^+>\mathrm{Re\,}\lambda_{j+1}^+$ for all $j$, with \begin{equation}\label{eqn:Pjlambdat} P_j(\lambda_j^+,t)= 2^{\mathcal{H}(\mathrm{Im\,}\lambda_j^+)}\cos(\mathrm{Im\,}\lambda_j^+t-\phi_j^+)|h'(\lambda_j^+)|^{-1}, \end{equation} and \begin{equation}\label{eqn:angle1} \phi_j^+=\tan^{-1}\left(\frac{\mathrm{Im\,} h'(\lambda_j^+)}{\mathrm{Re\,} h'(\lambda_j^+)}\right). \end{equation} $\mathcal{H}(x)$ is defined as \begin{equation}\label{eqn:H(x)_back2} \mathcal{H}(x)= \begin{cases} \phantom{x}1,& \text{if $x> 0$},\\ \phantom{x}0,& \text{if $x\leq 0$}. \end{cases} \end{equation} \end{subequations} Note that for a suitable choice of parameters all roots of (\ref{eqn:1Dchar}) are distinct, and thus $e^{\lambda t}h^{-1}(\lambda)$ has only simple poles. It follows that for sufficiently large values of $t$, $Y(t)$ is dominated by its slowest decaying eigenfunction and thus \begin{equation} Y(t)\sim Y_1(t). \end{equation} $Y(t)$ is numerically determined and plotted for two sets of parameters $(a,b,\tau)$ in Fig. \ref{fig:SeriesFundamental1}. Both sets share the same $\mathrm{Re\,}\lambda_1\simeq-0.68$; the difference is that $\lambda_1$ is real in Fig. \ref{fig:SeriesFundamental1}(a) and complex in Fig. \ref{fig:SeriesFundamental1}(b). Also plotted in Fig. \ref{fig:SeriesFundamental1} are the functions $Y_1(t)$ and $Y_5(t)$. The roots of the characteristic equation are determined using DDE-BIFTOOL \cite{DDE-BIFTOOL}. In both cases, $Y_1(t)$ is found to be in good agreement with $Y(t)$ for $t\gtrsim\tau$. This indicates that within this period, the remaining eigenfunctions have decayed sufficiently for $Y_1(t)$ to dominate the behavior of $Y(t)$. However, for $0\leq t \leq \tau$, the behavior of $Y(t)$ depends on the contribution of many eigenfunctions, whose number increases as $t\rightarrow 0$; in this range one should instead use the exact expression (\ref{eqn:relationa_eigenval}). 
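As an aside, for the scalar characteristic equation (\ref{eqn:1Dchar}) the roots can be written in closed form in terms of the branches $W_k$ of the Lambert $W$ function: setting $u=(\lambda+a)\tau$ turns Eq. (\ref{eqn:1Dchar}) into $ue^{u}=-b\tau e^{a\tau}$, so that $\lambda_k=-a+W_k(-b\tau e^{a\tau})/\tau$ and, at a simple root, $h'(\lambda_k)=1-b\tau e^{-\lambda_k\tau}=1+W_k(-b\tau e^{a\tau})$. The following minimal Python sketch (an illustration only; the figures in this paper rely on DDE-BIFTOOL instead) uses this representation to evaluate the truncated eigenfunction expansion and compares it, for $\tau\leq t\leq 2\tau$, with the exact expression $Y(t)=e^{-at}-b(t-\tau)e^{-a(t-\tau)}$ that follows from a single application of the method of steps:
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

a, b, tau = 1.0, -0.16, 1.0     # first parameter set considered above
z = -b * tau * np.exp(a * tau)  # argument of the Lambert W function

# characteristic roots lambda_k = -a + W_k(z)/tau, one per branch k
K = 50
W = np.array([lambertw(z, k) for k in range(-K, K + 1)])
lam = -a + W / tau
print("least negative root:", lam[np.argmax(lam.real)])  # ~ -0.68

def Y_series(t):
    # residue of exp(lambda*t)/h(lambda) at a simple root lambda_k
    # is exp(lambda_k*t)/h'(lambda_k), with h'(lambda_k) = 1 + W_k(z)
    return np.sum(np.exp(lam * t) / (1.0 + W)).real

def Y_steps(t):
    # exact fundamental solution for tau <= t <= 2*tau
    return np.exp(-a * t) - b * (t - tau) * np.exp(-a * (t - tau))

for t in (1.2 * tau, 1.5 * tau, 2.0 * tau):
    print(f"t = {t:.1f}: series = {Y_series(t):+.6f},"
          f" method of steps = {Y_steps(t):+.6f}")
\end{verbatim}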
\begin{figure}[t] \begin{minipage}{\linewidth} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=7cm]{1a-0_16b1tau.pdf}} \centerline{(a) $a=1$, $b=-0.16$, $\tau=1$} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=7cm]{1a0_9b1tau.pdf}} \centerline{(b) $a=1$, $b=0.9$, $\tau=1$} \end{minipage} \end{minipage} \caption{The fundamental solution, $Y(t)$, plotted as a function of $t/\tau$ (solid black). Also plotted are $Y_1(t)$ (dashed gray) and $Y_5(t)$ (dashed/dotted gray) (see Eq. (\ref{eqn:Y_fundamental_alt3})). In both parameter sets $\mathrm{Re\,}\lambda_1\simeq - 0.68$.} \label{fig:SeriesFundamental1} \end{figure} \begin{figure}[!] \begin{minipage}{\linewidth} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=7cm]{FundamentalPerfectCasesLongShort1par-0_16.pdf}} \centerline{(a) $a=1$, $b=-0.16$, $\tau=1$} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=7cm]{FundamentalPerfectCasesLongShort1par0_9.pdf}} \centerline{(b) $a=1$, $b=0.9$, $\tau=1$} \end{minipage} \end{minipage} \caption{Same as Fig. \ref{fig:SeriesFundamental1} but this time the fundamental solution is compared to expression (\ref{eqn:YLongShort}). $Y_1(t)$ is plotted (dashed gray) for $t>\tau$ and $e^{-at}$ for $0\leq t\leq \tau$ (dashed gray). } \label{fig:SeriesFundamental2} \end{figure} The above can be summarized into the following expression for the fundamental solution: \begin{equation}\label{eqn:YLongShort} Y(t) = \left\{ \begin{array}{rll} &e^{-at}, &\mbox{ $0\leq t\leq\tau$,} \\ \sim& Y_1(t) , &\mbox{ $t>\tau$.} \end{array} \right. \end{equation} The validity of expression (\ref{eqn:YLongShort}) is clearly depicted in Fig. \ref{fig:SeriesFundamental2}, where it is plotted and compared to $Y(t)$ for the two sets of parameters already shown in Fig. \ref{fig:SeriesFundamental1}. Notice the central difference between the fundamental solutions of an ODE and of a DDE. While in the former case the behavior of the fundamental solution remains unaltered at all times, in the latter case a distinct transition takes place at $t=\tau$. At the same time, for $t\leq \tau$, the fundamental solution of a DDE is identical to the fundamental solution of the ODE that is obtained by omitting from the DDE the terms that contain a delay time, i.e. equivalent to setting $b=0$ in Eq. (\ref{eqn:forced1D}). The reason so much attention is given to the fundamental solution is that the general solution to the forced delay equation (\ref{eqn:forced1D}) can be expressed in terms of it. To see this, consider the Laplace transform of Eq. (\ref{eqn:forced1D}) with initial conditions given by (\ref{eqn:initialcd}). Provided that the forcing $f(t)$ is exponentially bounded, \begin{equation}\label{eqn:variation_aux} \begin{split} h(\lambda)\,\mathcal{L}(y)(\lambda)=\phi(0) -be^{-\lambda\tau}&\int_{-\tau}^0 e^{-\lambda\theta}\phi(\theta)\,d\theta \\ +&\int_0^\infty e^{-\lambda t}f(t)dt. \end{split} \end{equation} Use of the convolution and inversion theorems leads to the following expression for the general solution \begin{subequations}\label{eqn:variation_tot} \begin{equation}\label{eqn:variation} y(\phi,f)(t)=y(\phi,0)(t)+\int_{0}^{t}Y(t-t')f(t')\,dt', \end{equation} where $y(\phi,0)(t)$ represents the solution to the (unforced) homogeneous delay equation (\ref{eqn:1Dhom}) and is given by \begin{equation}\label{eqn:variation_homogeneous} y(\phi,0)(t)=Y(t)\phi (0)-b\int_{-\tau}^{0}Y(t-\theta-\tau)\phi (\theta)\,d\theta. 
\end{equation} \end{subequations} Because of its similarity to the corresponding formula for ordinary differential equations, the representation of $y(\phi,f)(t)$ in this form is often referred to \cite{HaleLunel1993} as the variation of constants formula. Using this representation, it is easily deduced that the solution of any linear delay equation, homogeneous or forced, is governed by its fundamental solution, with the roots of the characteristic equation controlling its asymptotic behavior. \subsection{Scaling Behavior} Having presented some basic properties concerning linear DDEs, the next objective is to consider their coupling to a chaotic advection flow. For a chemical system satisfying Eq. (\ref{eqn:1Ddelay}), the evolution of the chemical difference between a pair of fluid parcels can be obtained by simultaneously linearizing the chemical (\ref{eqn:1Ddelay}) and trajectory (\ref{eqn:traj}) evolution equations around a fluid parcel trajectory. Using the variation of constants formula (\ref{eqn:variation_tot}), \begin{equation}\label{eqn:chemicaldifference_delay} \begin{split} \delta C(t)= Y(t)\delta C(0) &-b \int_{-\tau}^0 Y(t-\theta-\tau)\, \delta C(\theta)\,d\theta \\ &+\phantom{b}\int_0^t Y(t-t') \, \left(\frac{\partial C_0}{\partial \bm{X}}\cdot \delta\bm{X}(t')\right) \,dt', \end{split} \end{equation} where $\{\delta\bm{X}(t);\bm{X}(t)\}$, the label on the fluid parcel difference, has been suppressed for brevity. For $t\in [-\tau,0]$, $\delta C(t)=\phi(t)$, where $\phi(t)$ is a prescribed initial function. To analyze the scaling behavior of the delay scalar field at statistical equilibrium, the long-time limit of Eq. (\ref{eqn:chemicaldifference_delay}) needs to be considered. A useful property of $Y(t)$ is that it is bounded, with $|Y(t)|<K\exp[\mathrm{Re\,}\lambda_1t]$ where $K>0$ (see \cite{HaleLunel1993}). We impose that $a>|b|$, thus ensuring that $\mathrm{Re\,}\lambda_1<0$ for all $\tau>0$ (see \S\ref{subsec:KeyProperties}). It follows that in the long-time limit, the first two terms, which describe the evolution of the initial conditions, vanish. Note that this is not the case for either marginally stable ($\mathrm{Re\,}\lambda_1=0$) or unstable ($\mathrm{Re\,}\lambda_1>0$) chemical dynamics. At the same time, since the source depends smoothly on space, its spatial derivatives do not increase or decrease in a systematic way. Thus, the evolution of $|\delta_{\bm{X}}C_0(t')|$ is closely related to the evolution of the separation between the pair of fluid parcels, i.e. $\delta_{\bm{X}} C_0(t')=\frac{\partial C_0}{\partial\bm{X}}\cdot\delta\bm{X}(t')\sim |\delta \bm{X}(t')|$. To obtain an expression for $|\delta\bm{X}(t')|$ in terms of $|\delta\bm{X}(t)|$, Eq. (\ref{eqn:traj}) is linearized around $\bm{X}(t')$, from which it can be deduced that for $t>t'$, \begin{subequations} \begin{equation} \delta\bm{X}(t')=\bm{N}(t',t) \delta \bm{X}(t), \end{equation} with \begin{equation} \bm{N}(t',t)=\exp\left[\int_{t}^{t'}\frac{\partial \bm{v}}{\partial\bm{X}}ds\right], \end{equation} \end{subequations} where $|\delta\bm{X}(t)|$ is considered to be much less than the characteristic length scale of the velocity field, $L$ (here $L=1$). Consequently, the evolution of $|\delta \bm{X}(t')|$ is dictated by $\bm{N}^T\bm{N}(t',t)$ once calculated along the fluid parcel trajectory in backward time. Because $\bm{N}^T\bm{N}$ is a real, symmetric, positive-definite matrix, its eigenvalues are positive. 
Therefore, depending on its orientation at time $t$, as time $t'$ decreases $|\delta \bm{X}(t')|$ increases or decreases exponentially according to a set of rates whose number equals the dimension of the flow and whose values depend on the eigenvalues of $\bm{N}^T\bm{N}$. In the limit $t-t'\rightarrow\infty$, these rates define the Lyapunov exponents \cite{Ottino1989,Ott1993}. For a two-dimensional, incompressible flow that is both ergodic and hyperbolic, all trajectories share the same set of Lyapunov exponents $\{h_0,-h_0\}$ with $h_0>0$. It follows that for almost all orientations at time $t$, the typical separation between a pair of neighboring fluid parcels increases exponentially in backward time at a rate given by the flow Lyapunov exponent $h_0$, with $|\delta \bm{X}(t')|\sim |\delta\bm{X}(t)|\exp[h_0(t-t')]$. The exponential increase of $|\delta \bm{X} (t)|$ can only persist while its length remains considerably less than the characteristic length scale of the velocity field. This is because for larger length scales ($\gtrsim 0.1$), linearizing the trajectory Eq. (\ref{eqn:traj}) is no longer valid. For these larger length scales, finite-size effects become important and the value of $|\delta\bm{X}(t)|$ saturates at the characteristic length scale of the velocity field. The time it takes for $|\delta \bm{X}(t)|$ to saturate in backward-evolving time is here referred to as the {\it stir-down time} and is denoted by $T_{\delta X}$. By choosing $|\delta \bm{X}(t)|$ to be sufficiently small, an approximate expression for $T_{\delta X}$ is given by \begin{equation}\label{eqn:stirring} T_{\delta X}=\frac{1}{h_0}\log(1/|\delta \bm{X}|),\quad \text{for $|\delta \bm{X}| \ll 1$}. \end{equation} It follows that, qualitatively, the evolution of a typical separation between two fluid parcels can be divided into two parts: the first corresponding to the period during which it increases exponentially and the second to the rest of the time, during which its value remains saturated. Therefore, \begin{equation}\label{eqn:distance_asymptotic} |\delta \bm{X} (t')| \sim \begin{cases} \,|\delta \bm{X} (t)| \, e^{h_0 (t-t')}, & \text{ for $0<t-t'\leq T_{\delta X}$},\\ \,1, & \text{ for $t-t'>T_{\delta X}$}. \end{cases} \end{equation} The asymptotic behavior of the chemical difference between any two fluid parcels, and thus, from (\ref{eqn:LagrangianEulerian2}), between any two neighboring points, may be deduced by substituting expression (\ref{eqn:distance_asymptotic}) into Eq. (\ref{eqn:chemicaldifference_delay}) (replacing $\bm{X}$ with $\bm{x}$ and $\delta\bm{X}$ with $\delta\bm{x}$). After making a change of variables from $t'$ to $\Delta t=t-t'$ and taking the limit $t\rightarrow\infty$, the small-scale behavior ($|\delta \bm{x}|\ll 1$) is given by \begin{equation}\label{eqn:delayconcdiff} \delta c_\infty(\delta\bm{x})\sim \int_0^\infty Y(\Delta t)\,\text{min}\{|\delta \bm{x}|e^{h_0 \Delta t},1\}\, d\Delta t \end{equation} where a number of space- and time-dependent factors are omitted since they do not affect the scaling laws. Note that within the approximation made here, the rate of exponential increase of the separation between fluid parcels, $h_0$, is taken to be independent of the individual trajectories and therefore the dependence of $\delta c_\infty$ on $\bm{x}$ is dropped. In reality, this rate depends on the trajectory, thus modifying the average scaling behavior of the field (see \cite{Neufeld_etal2000a} for a discussion of the implications of this for a linearly decaying reactive scalar; the extension of this discussion to the delay case is left for future work). \subsection*{Transition length scale} An expression equivalent to (\ref{eqn:delayconcdiff}) was obtained by \cite{Neufeld_etal1999,Hernandez-Garcia_etal2002} in the context of an ordinary reactive scalar whose reactions involve no delay time. In both cases, delay and ordinary, the asymptotic behavior of the concentration field is governed by the convolution in time of the fundamental solution associated with the chemical subsystem with the separation between fluid parcels. However, a fundamental difference between the delay reactive scalar and the ordinary reactive scalar significantly modifies the asymptotic scaling behavior of the delay scalar field with respect to that of the ordinary reactive scalar. This difference lies in the fundamental solution. As discussed in \S\ref{subsec:KeyProperties}, the behavior of $Y(t)$ associated with a linear DDE is distinctly different depending on whether $t/\tau$ is larger than or less than $1$. It follows that the asymptotic behavior of $\delta c_\infty(\delta\bm{x})$ must differ according to whether $T_{\delta x}/\tau$ is larger than or less than $1$. Since the value of $T_{\delta x}$ depends on $|\delta \bm{x}|$, this transition must occur at a certain length scale, denoted by $\delta x_c$ and here named the {\it transition length scale}. An approximate expression for $\delta x_c$ may be obtained by considering the value of $|\delta \bm{x}|$ for which \begin{align} T_{\delta x_c}&\sim\tau,\\ \intertext{from which it can be deduced that $\delta x_c$ must approximately be equal to} \delta x_c&\sim e^{-h_0\tau}.\label{eqn:characteristiclength scale} \end{align} Thus, the magnitude of the transition length scale is controlled by the product of the delay time with the flow Lyapunov exponent, while it is independent of the detailed parameters of the reactions. Expression (\ref{eqn:characteristiclength scale}) represents the first key theoretical result of the paper. \subsection*{Scaling regimes} The scaling behavior of the field is now examined separately for length scales less than and larger than the transition length scale. A good way to gain insight into this behavior is to consider the absolute value of the integrand of Eq. (\ref{eqn:delayconcdiff}). The first factor, $Y(\Delta t)$, decays exponentially (perhaps with oscillations), while the second factor initially increases exponentially and then saturates. Thus the absolute value of the integrand has a distinct maximum, and the dominant contribution to the integral comes from the neighborhood of this maximum. The corresponding dependence of the integral on the value of $\delta x$ implies up to three possible scaling regimes (depending on $\delta x$ and on the other parameters of the problem). \begin{figure}[!] 
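Before proceeding, note that Eq. (\ref{eqn:delayconcdiff}) is straightforward to evaluate numerically. The minimal Python sketch below (the parameter values are illustrative, and a crude forward-Euler step and finite integration horizon are assumed) computes $Y(t)$ directly from Eq. (\ref{eqn:1Dhom}) and prints the local logarithmic slope of $|\delta c_\infty|$ as a function of $|\delta\bm{x}|$; the change of slope anticipates the transition length scale discussed next:
\begin{verbatim}
import numpy as np

a, b, tau = 1.0, 0.3, 10.0   # illustrative reaction parameters
h0 = 2.33 / 5.0              # illustrative flow Lyapunov exponent

# fundamental solution Y(t) by forward-Euler stepping of the DDE
dt = tau / 2000.0
n_tau = 2000                 # steps per delay time
N = int(120.0 / dt)          # finite horizon, assumed long enough
Y = np.zeros(N)
Y[0] = 1.0
for i in range(N - 1):
    delayed = Y[i - n_tau] if i >= n_tau else 0.0  # Y(t)=0, t<0
    Y[i + 1] = Y[i] + dt * (-a * Y[i] - b * delayed)

t = np.arange(N) * dt

def dc_inf(dx):
    # convolution of Y with min{dx*exp(h0*t), 1}
    return np.trapz(Y * np.minimum(dx * np.exp(h0 * t), 1.0), t)

dxs = np.logspace(-6.0, -0.5, 23)
vals = np.abs([dc_inf(dx) for dx in dxs])
slopes = np.diff(np.log(vals)) / np.diff(np.log(dxs))
for dx, s in zip(dxs, slopes):
    print(f"dx = {dx:.1e}   local slope = {s:.2f}")
print("slope changes near exp(-h0*tau) =", np.exp(-h0 * tau))
\end{verbatim}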
\centering \begin{minipage}{\linewidth} \centerline{$|Y(t)\,\text{min}\{|\delta \bm{x}| e^{h_0 t},1\}|$} \centerline{\includegraphics[width=6cm]{ProductRegimeI.pdf}} \centerline{(a) $T_{\delta X}>\tau$} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{\includegraphics[width=6cm]{ProductRegimeIandII.pdf}} \centerline{(b) $T_{\delta X}<\tau, \; |b|/(ae) \lesssim \delta x_c$} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{\includegraphics[width=6cm]{ProductRegimeIandIIandIII.pdf}} \centerline{(c) $T_{\delta X}<\tau, \; |b|/(ae) \gg \delta x_c$} \end{minipage} \vfill \caption{(a) $|Y(t)\,\text{min}\{\delta x e^{h_0 t},1\}|$ plotted for $\delta X=10^{-4}$ ($T_{\delta X}\approx 4\tau$) for the two sets of parameters $(a,b,\tau)$ previously considered in Fig. \ref{fig:SeriesFundamental1}: $(1,-0.16,1)$ (black) and $(1, 0.9,1)$ (gray). (b) The same as (a), this time for $(1,0.05,10)$ (black) and $(1, 0.1, 10)$ (gray), with $\delta X=10^{-1}$ so that $T_{\delta X}\approx \tau/2$. (c) The same as (b), this time for $(1,0.3,10)$ (black), $(1, 0.5, 10)$ (dark gray) and $(1, 0.75, 10)$ (light gray). } \label{fig:Product} \end{figure} \subsection*{Regime I \qquad $\bm{|\delta x|<\delta x_c}$} The first scaling regime, Regime I, concerns length scales that are smaller than $\delta x_c$. For these length scales, the stir-down times are larger than the delay time, and thus the chemical dynamics converge at a rate which, for the linear case considered here, is exactly given by $-\mathrm{Re\,}\lambda_1$ (see expressions (\ref{eqn:Y_fundamental_alt3}) and (\ref{eqn:YLongShort}), where the `+' sign is omitted since $\mathrm{Re\,}\lambda_1=\mathrm{Re\,}\lambda_1^+$). In analogy to the flow Lyapunov exponent that controls the strength of the flow dynamics, this rate is called the chemical Lyapunov exponent \cite{Neufeld_etal1999}. It therefore follows that within this regime, the scaling behavior of the delay reactive scalar field is no different from the scaling behavior of an ordinary reactive scalar. For both delay and ordinary scalars, the small-scale structure is controlled by the relative strength of the chemical to the flow dynamics: if $-\mathrm{Re\,}\lambda_1/h_0<1$, the chemical processes are too slow to erase the different spatial histories experienced by the fluid parcels. In this case, the maximum of $|Y(t)\,\text{min}\{|\delta \bm{x}|e^{h_0 t},1\}|$ occurs at $t=T_{\delta x}$ (see Fig. \ref{fig:Product}(a)) and its value scales as $|\delta \bm{x}|^{-\mathrm{Re\,}\lambda_1/h_0}$. Thus, in this case, the field's spatial structure is filamental, i.e. non-differentiable in every direction except the direction along which the filaments grow \cite{Neufeld_etal1999}. On the other hand, for $-\mathrm{Re\,}\lambda_1/h_0>1$ the chemical processes converge to their equilibrium value faster than the trajectories diverge from each other. The maximum of $|Y(t)\,\text{min}\{|\delta \bm{x}|e^{h_0 t},1\}|$ then occurs at $t=0$, from which it can be deduced that the field's structure is everywhere smooth. Thus, the H\"older exponent within Regime I is equal to $\gamma_1=\text{min}\{-\mathrm{Re\,}\lambda_1/h_0,1\}$. \subsection*{Regimes II \& III\qquad $\bm{|\delta x|> \delta x_c}$} Consider now length scales that are larger than $\delta x_c$. The corresponding stir-down times are smaller than the delay time, and thus the chemical dynamics converge at a rate given by $-a$, i.e. the decay rate obtained once the delay term is ignored (see expression (\ref{eqn:YLongShort})). 
There exist two local maxima for $|Y(t)\,\text{min}\{|\delta \bm{x}|e^{h_0 t},1\}|$; the first one is scale-dependent, the second one is a constant (see Figs. \ref{fig:Product}(b-c)). The value of the first local maximum is given by $\underset{t}{\text{max}}\,|e^{-at}\,\text{min}\,\{|\delta \bm{x}|e^{h_0 t},1\}|=|\delta \bm{x}|^{\text{min}\{a/h_0,1\}}$ (where $Y(t\leq \tau)=e^{-a t}$ was employed; see expression (\ref{eqn:YLongShort})). It therefore follows that if this first local maximum is a global maximum, the field's scaling behavior is described by a H\"older exponent that satisfies $\gamma_2=\text{min}\{a/h_0,1\}$. This scaling regime is denoted by Regime II. Now focus on the second local maximum, which is given by $\text{max}\,|Y(t\geq\tau)|$. Since this is a constant, a flat scaling regime will ensue if the second local maximum is larger than the first local maximum. This scaling regime is denoted by Regime III. However, if the second local maximum is smaller than the first local maximum, Regime III does not appear. To investigate the range of length scales for which Regime III appears, consider in more detail $\text{max}\,|Y(t\geq\tau)|$. Now $|Y(t\geq\tau)|$ has a maximum either at $t=\tau$ or at some $t=t^\ast$, where $t^\ast$ is defined as the value of $t$ for which $dY(t)/dt$ first equals $0$ and thus $aY(t^\ast)=-bY(t^\ast-\tau)$. First consider the local maximum of $|Y(t\geq\tau)|$ to occur for $\tau<t^\ast\leq 2\tau$. Within this time period and using the method of steps (see \S \ref{subsec:KeyProperties}), $Y(t)$ can be exactly expressed as $Y(t)=e^{-a t}-b(t-\tau)e^{-a(t-\tau)}$. Combining this expression with expression (\ref{eqn:relationa_eigenval}) for $Y(t)$ for $0<t\leq \tau$, $t^\ast$ must satisfy $e^{-a t^\ast}-b(t^\ast-\tau)e^{-a(t^\ast-\tau)}=-(b/a)\,e^{-a(t^\ast-\tau)}$, i.e. $t^\ast=\tau+1/a+(1/b)\,e^{-a\tau}$, from which we can deduce that a local maximum of $|Y(t\geq\tau)|$ occurs if $0<1/a+(1/b)\,e^{-a\tau}<\tau$. In this case, $\text{max}\,|Y(t\geq\tau)|=(|b|/a)\,e^{-1-(a/b)e^{-a\tau}}$. Now consider $t^\ast>2\tau$. In this case $Y(t\leq 2\tau)$ is monotonically decreasing and thus $\text{max}\,|Y(\tau\leq t\leq 2\tau)|=e^{-a\tau}$. Since $|Y(t^\ast)|=(|b|/a)\,|Y(t^\ast-\tau)|$ and $a>|b|$, $\text{max}\,|Y(t\geq 2\tau)|\leq (|b|/a)\,e^{-a\tau}<e^{-a\tau}$ and therefore $\text{max}\,|Y(t\geq \tau)|=e^{-a\tau}$. Finally, if no $t^\ast$ exists, then again $\text{max}|Y(t\geq \tau)|=e^{-a\tau}$. All three cases can be summarized by \begin{equation} \label{secondmax} \text{max}|Y(t\geq\tau)|=\text{max}\{e^{-a\tau},\,\frac{|b|}{a}\,e^{-1-(a/b)e^{-a\tau}}\}. \end{equation} Comparing the value of the first local maximum, $|\delta \bm{x}|^{\text{min}\{a/h_0,1\}}$, with the value of the second local maximum, given by Eq. (\ref{secondmax}), we are able to obtain the following estimate for $\delta x_2$, the length scale that separates Regimes II and III: \begin{equation}\label{eqn:estimatedx2} \delta x_2\sim\text{max}\,|Y(t\geq\tau)|^{\text{max}\{h_0/a,1\}}. \end{equation} Note that for $a\tau\gg 1$ and $h_0<a$, Eq. (\ref{eqn:estimatedx2}) reduces to $\delta x_2\sim |b|/(ae)$. The following points can then be noted: \begin{enumerate} \item If $|b|/(ae)\ll \delta x_c$, then $\delta x_2\ll\delta x_c$ and thus Regime III does not appear. \item If $|b|/(a e) \sim 1$, then $\delta x_2\gg\delta x_c$ and thus Regime III will ensue at all length scales larger than $\delta x_c$. 
\end{enumerate} \subsection*{H\"older Exponents} To summarize, the following set of scaling laws describes the spatial structure of the stationary-state delay reactive scalar field as $|\delta \bm{x}|$ varies: \begin{subequations}\label{eqn:delay_holders} \begin{equation}\label{eqn:corrected_char_length} |\delta c_\infty(\delta\bm{x})|\sim \begin{cases} |\delta \bm{x}|^{\gamma_1}, &\;\text{for $|\delta \bm{x}|<\delta x_c$}\\ \text{flat}, &\;\text{for $\delta x_c<|\delta \bm{x}|<\text{max}\{\delta x_2,\delta x_c\}$}\\ |\delta \bm{x}|^{\gamma_2}, &\;\text{for $|\delta \bm{x}|>\text{max}\{\delta x_2,\delta x_c\}$} \end{cases} \end{equation} where the H\"older exponents $\gamma_1$ and $\gamma_2$ are given by \begin{align} \gamma_1&=\text{min}\{1,-\mathrm{Re\,}\lambda_1/h_0\}, \label{eqn:Holder1}\\ \gamma_2&=\text{min}\{1,a/h_0\} \label{eqn:Holder2}. \end{align} \end{subequations} Therefore, Regime II occurs for $|\delta \bm{x}|>\delta x_2$ and Regime III for $\delta x_2>|\delta \bm{x}|>\delta x_c$. Regime III will not be present if $\delta x_2$ and $\delta x_c$ are not well separated. Similarly, Regime II will not be present if $\delta x_2$ is not sufficiently small compared with the characteristic length scale of the flow. Expression (\ref{eqn:delay_holders}) represents the second key theoretical result of this paper. The more general case, in which several interacting chemical species are present, is shown in the Appendix to be a slight variant of this expression. Special cases in which the species are not symmetrically coupled with each other may give rise to structures that are characterized by different H\"older exponents for different species. Such a case is the delay plankton model, whose behavior is examined in \S\ref{subsec:numerics_delayplanktonmodel}. \section{Numerical Results: Two Examples}\label{sec:DelayNumericalResults} To complement the theoretical results obtained in the previous section, a set of numerical simulations is performed here, first for the single linear delay reactive scalar whose evolution within a fluid parcel was introduced in Eq. (\ref{eqn:1Ddelay}), and second for the delay plankton model that \cite{Abraham1998} first used for his numerical investigations. This model, shortly to be described, serves not only as a test bed for the theory presented in Sec. \ref{sec:DelayTheory} but also as an interesting application of it. In both examples, the fluid parcels are advected by a model strain flow whose velocity field is given by \begin{equation}\label{eqn:v} \bm{v}(\bm{x},t)= \left[ \begin{array}{rl} -\displaystyle{\frac{2 }{T}}\Theta(T/2-t\mod T)\cos(2\pi y+\phi) \\[0.2 cm] -\displaystyle{\frac{2 }{T}}\Theta(t\mod T-T/2)\cos(2\pi x+\theta) \end{array} \right], \end{equation} where $\Theta(t)$ is the Heaviside step function, defined to be equal to unity for $t\geq0$ and zero otherwise, and $x$ and $y$ are the domain's horizontal and vertical coordinates, respectively. The phase angles $\theta$ and $\phi$ change randomly at each period $T$, varying the directions of expansion and contraction and hence ensuring that all parts of the flow are equally mixed \cite{Bohr_etal1998,Ott1993}. Variation of $T$ affects the magnitude of the flow Lyapunov exponent, $h_0$, without changing the shape of the trajectories or the spatial structure of the flow. It may be shown that $h_0$ is inversely proportional to $T$, with \begin{equation}\label{eqn:Lyapunov_T} h_0\approx 2.33/T, \end{equation} where the constant is numerically determined. 
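The constant in Eq. (\ref{eqn:Lyapunov_T}) is easily reproduced: since the velocity (\ref{eqn:v}) is constant over each half-period, integrating it shows that one period of the flow is the map $x\rightarrow x-\cos(2\pi y+\phi)$ followed by $y\rightarrow y-\cos(2\pi x+\theta)$, whose Jacobians are elementary shears. A minimal Python sketch (the sample size and seed are arbitrary) that follows a single tangent vector reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_periods = 100_000

x, y = rng.random(2)
v = np.array([1.0, 0.0])   # tangent vector
log_growth = 0.0
for _ in range(n_periods):
    phi, theta = 2.0 * np.pi * rng.random(2)
    # first half-period: shear in x, x -> x - cos(2 pi y + phi)
    x = (x - np.cos(2.0 * np.pi * y + phi)) % 1.0
    v[0] += 2.0 * np.pi * np.sin(2.0 * np.pi * y + phi) * v[1]
    # second half-period: shear in y, y -> y - cos(2 pi x + theta)
    y = (y - np.cos(2.0 * np.pi * x + theta)) % 1.0
    v[1] += 2.0 * np.pi * np.sin(2.0 * np.pi * x + theta) * v[0]
    norm = np.linalg.norm(v)
    log_growth += np.log(norm)
    v /= norm

print("h0 * T =", log_growth / n_periods)  # close to 2.33
\end{verbatim}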
A large-scale inhomogeneity is injected into the system by introducing a spatially smooth forcing \begin{equation}\label{eqn:force} C_0(\bm{x})=1-\tfrac{1}{2}\cos[2\pi(x+y)], \end{equation} oriented along the diagonal of the domain so that it has no preferred alignment with the flow. The space dependence of the force couples the reaction dynamics with the flow dynamics and results in the formation of complex spatial patterns. A statistical steady state is reached after approximately $20T$. To reconstruct the stationary distributions of the corresponding reactive scalars, an ensemble of fluid parcels whose final positions are fixed onto a grid is followed. Using Eq. (\ref{eqn:v}), the parcels are tracked backwards in time up to a point at which their initial concentrations are known. Thereafter, knowing their trajectory, their final concentration is determined by integrating the reaction equations forward in time using a second-order Runge-Kutta method. This way, to obtain the concentration fields along a one-dimensional transect, it is not necessary to determine the whole two-dimensional field. The absence of interpolation permits greater accuracy at smaller length scales. The initial concentrations are chosen to be equal to their mean equilibrium values, though as long as the reaction dynamics are stable, the final result is independent of this choice. \begin{figure}[!] \centering \begin{minipage}{\linewidth} \centerline{\includegraphics[width=6cm]{1DSnapshots3par0par1tau1T.pdf}} \centerline{(a) $\tau=0$} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{\includegraphics[width=6cm]{1DSnapshots3par1par1tau1T.pdf}} \centerline{(b) $\tau=1$} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{\includegraphics[width=6cm]{Intersections3par1_0par1tau1T.pdf}} \centerline{(c) Intersection} \caption{(Color online) Snapshots of reactive scalar distributions whose reactions evolve according to Eq. (\ref{eqn:1Ddelay}) at statistical equilibrium ($t=20T$). The two cases depict (a) a linearly decaying reactive scalar ($a=3$, $b=0$) for which no delay time is present and (b) a linear delay reactive scalar ($a=3$, $b=1$, $\tau=1$). The period $T=1$, such that $h_0\approx 2.33$ with $a>h_0$. The smoothly varying force is diagonally oriented, given by Eq. (\ref{eqn:force}). The bars on the right give the concentration values. (c) One-dimensional transects ($y=0.5$) for the linearly decaying reactive scalar (black line) and the delay reactive scalar (gray line).} \label{fig:1DSnapshots} \end{minipage} \end{figure} The stationary distributions of a linearly decaying reactive scalar and a linear delay reactive scalar, with reactions evolving according to Eq. (\ref{eqn:1Ddelay}), are depicted respectively in Figs. \ref{fig:1DSnapshots}(a) and (b). Notice the distinct difference between the two distributions: the reactions in Fig. \ref{fig:1DSnapshots}(a) contain no delay term, whereas those in Fig. \ref{fig:1DSnapshots}(b) do, and it is this delay term that is responsible for the filamental structure of the concentration field. This difference is more easily observed in the corresponding one-dimensional transects shown in Fig. \ref{fig:1DSnapshots}(c). 
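The reconstruction procedure just described is straightforward to implement for the flow (\ref{eqn:v}): within each half-period only one coordinate of a parcel moves, and it does so linearly in time, so the backward trajectory can be sampled exactly at any resolution. The following Python sketch (with illustrative resolutions and a forward-Euler step in place of the second-order Runge-Kutta scheme used for the figures) reconstructs the delay reactive scalar of Eq. (\ref{eqn:1Ddelay}) along the $y=0.5$ transect:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
a, b, tau = 3.0, 1.0, 1.0        # delay scalar parameters
T, n_per = 1.0, 20               # flow period; ~20T for equilibrium
C0 = lambda x, y: 1.0 - 0.5 * np.cos(2.0 * np.pi * (x + y))
phases = 2.0 * np.pi * rng.random((n_per, 2))  # (phi, theta)/period

m = 50                           # sampling sub-steps per half-period
dt = (T / 2.0) / m

def concentration(xf, yf):
    # backward pass: reconstruct the trajectory ending at (xf, yf)
    pos = [(xf, yf)]
    x, y = xf, yf
    for p in range(n_per - 1, -1, -1):
        phi, theta = phases[p]
        dy = -np.cos(2.0 * np.pi * x + theta)  # 2nd half, y moves
        for s in range(1, m + 1):
            pos.append((x, (y - s * dy / m) % 1.0))
        y = (y - dy) % 1.0
        dx = -np.cos(2.0 * np.pi * y + phi)    # 1st half, x moves
        for s in range(1, m + 1):
            pos.append(((x - s * dx / m) % 1.0, y))
        x = (x - dx) % 1.0
    pos = pos[::-1]              # forward in time, uniform step dt
    # forward pass: Euler step of the delay reaction along the path
    n_tau = int(round(tau / dt))
    C_eq = 1.0 / (a + b)         # equilibrium for the mean source
    C = np.full(len(pos), C_eq)  # history C(t <= 0) = C_eq
    for i in range(len(C) - 1):
        delayed = C[i - n_tau] if i >= n_tau else C_eq
        C[i + 1] = C[i] + dt * (-a * C[i] - b * delayed
                                + C0(*pos[i]))
    return C[-1]

xs = np.linspace(0.0, 1.0, 401)
transect = np.array([concentration(xv, 0.5) for xv in xs])
print("transect range:", transect.min(), transect.max())
\end{verbatim}
From such transects the statistics discussed below can be accumulated without any interpolation.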
The most common method to characterize the scaling behavior of the distributions is to consider their Fourier power spectra. An alternative method is to consider the concentration difference between points separated by a fixed distance. The latter is called the structure function \cite{MoninYaglom1975} and is the method we employ here, since it allows an easy comparison between the theoretical results of the previous section and the numerical results of this section. The first-order structure function associated with the field $c(\bm{x},t)$ is defined as \begin{equation}\label{eqn:def_structurefunction} S(\delta x)\equiv\langle |\delta c(\delta \bm{x};\bm{x},t) |\rangle\sim \delta x^\gamma, \end{equation} where $\langle\ldots\rangle$ denotes averaging over different values of $\bm{x}$ and $\delta x\equiv|\delta \bm{x}|$. Recall that $\delta c(\delta \bm{x};\bm{x},t)\equiv c(\bm{x}+\delta \bm{x},t)-c(\bm{x},t)$. For the time being we assume that the $\gamma$ appearing in (\ref{eqn:def_structurefunction}) is precisely the H\"older exponent as predicted by the previous theoretical arguments. For both the delay reactive scalar and the delay plankton model, the parameters are chosen in such a way that all three scaling regimes, described by expression (\ref{eqn:delay_holders}), emerge within the range of length scales considered. To control this range, the magnitude of the characteristic length scale that separates Regime I from Regimes II and III, denoted by $\delta x_c$, needs to be considered. Substituting expression (\ref{eqn:Lyapunov_T}) into (\ref{eqn:characteristiclength scale}), the expression for $\delta x_c$ for the model strain flow (\ref{eqn:v}) becomes \begin{equation}\label{eqn:estimate_char} \delta x_c\approx \exp(-2.33 \, \tau/T). \end{equation} Thus, the value of $\delta x_c$ is modified by varying the value of $\tau/T$ (see Table \ref{table:characteristic_length scale}, where the value of $\delta x_c$ is calculated for some key values of $\tau/T$). \begin{table}[!] \topcaption{ An estimate for the characteristic length scale, calculated for the model strain flow (\ref{eqn:v}) for $L=1$ using expression (\ref{eqn:estimate_char}).} \label{table:characteristic_length scale} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}} c c c c c } \hline \hline $\tau/T$ & $1$ & $ 2$ & $ 3$ & $4$ \\ $\delta x_c$ & $\approx 10^{-1}$ & $\approx 10^{-2}$ & $\approx 10^{-3}$ & $\approx 10^{-4}$ \\ \hline \hline \end{tabular*} \end{table} \subsection{The Linear Delay Reactive Scalar}\label{subsec:numericsscalarfield} We now examine the scaling behavior of the delay reactive scalar distribution as the value of $\tau/T$ varies. In each case, the first-order structure function is calculated over $10$ evenly spaced transects. The scaling exponent is obtained from the slope of the first-order structure function and is then compared to the set of scaling laws (\ref{eqn:delay_holders}). \begin{figure}[!] \centering \begin{minipage}{\linewidth} \centerline{Regime I} \centerline{\includegraphics[width=6cm]{RegimeIRealComplexT110inters.pdf}} \centerline{(a) $-\mathrm{Re\,}\lambda_1/h_0=0.3$} \vfill \centerline{\includegraphics[width=6cm]{RegimeIRealComplexT2.pdf}} \centerline{(b) $-\mathrm{Re\,}\lambda_1/h_0=0.6$} \vfill \centerline{\includegraphics[width=6cm]{RegimeI5tau10tau20T_b.pdf}} \centerline{(c) $-\mathrm{Re\,}\lambda_1/h_0=0.26$} \caption{ (a) First-order structure functions for the linear delay reactive scalar (\ref{eqn:1Ddelay}) averaged over $10$ evenly spaced transects (parallel to the $x$-axis). These are calculated at statistical equilibrium ($t=20T$) for the two sets of parameters $(a,b,\tau)$ that were considered in Fig. 
\ref{fig:SeriesFundamental1}, both with $\mathrm{Re\,}\lambda_1=-0.68$ but different $\mathrm{Im\,}\lambda_1$: for $(1,-0.16,1)$ (gray solid line) $\lambda_1$ is real while for $(1, 0.9,1)$ (black solid line) $\lambda_1$ is complex. The flow period is $T=1$. The theoretical prediction is depicted by the dotted line. (b) Same as (a) but with $T=2$. (c) Same as (a) but this time $\mathrm{Re\,}\lambda_1=-0.03$, with different $b$, $\tau$ and $T$: $(1,0.92,5)$ (gray solid line), $(1, 0.7,10)$ (black solid line) and $T=20$. In all cases, $\tau/T\leq 1$. Black dotted lines correspond to the theoretical prediction (\ref{eqn:Holder1}).} \label{fig:RegimeIrealComplexRegimeI5tau10tau20T} \end{minipage} \end{figure} \subsection*{Regime I} Initially, $\tau/T\leq 1$ so that $\delta x_c\gtrsim 0.1$ (see Table \ref{table:characteristic_length scale}). This way only Regime I will appear within the range of length scales considered (recall that finite-size effects become important for $\delta x>0.1$). The validity of $-\mathrm{Re\,}\lambda_1/h_0$, the ratio associated with the H\"older exponent within Regime I (see (\ref{eqn:Holder1})), is tested. Three different aspects are examined: the first aspect investigates the impact that the imaginary part of $\lambda_1$ may have on the scalar field. Recall that $\lambda_1$ denotes the root of the characteristic equation (\ref{eqn:1Dchar}) that has the least negative real part. According to expression (\ref{eqn:Holder1}), $\mathrm{Im\,}\lambda_1$ does not contribute to the field's scaling behavior. This is confirmed by the numerical results that are shown in Fig. \ref{fig:RegimeIrealComplexRegimeI5tau10tau20T}(a). There, the first-order structure functions obtained from two parameter sets, chosen so that both share the same $\mathrm{Re\,}\lambda_1$ but different $\mathrm{Im\,}\lambda_1$, are found to share the same scaling exponent (their slopes are equal). In particular, for the first set of parameters $\lambda_1$ is real, while for the second $\lambda_1$ is complex. The second aspect investigates how the scaling behavior varies as the value of $T$ (and therefore $h_0$) varies. We consider the same sets of parameters as the ones in Fig. \ref{fig:RegimeIrealComplexRegimeI5tau10tau20T}(a), this time with $T=2$, thus leading to a larger value for the H\"older exponent (twice its previous value). The corresponding scaling exponents are in good agreement with the theoretical prediction (\ref{eqn:Holder1}) (see Fig. \ref{fig:RegimeIrealComplexRegimeI5tau10tau20T}(b)). Finally, the third aspect explores larger values of both $\tau$ and $T$. For two sets of parameters, both of which share the same $\mathrm{Re\,}\lambda_1$, the scaling exponents are again in good agreement with the theoretical prediction (\ref{eqn:Holder1}) (see Fig. \ref{fig:RegimeIrealComplexRegimeI5tau10tau20T}(c)). \subsection*{Regimes I \& II } The coexistence of Regimes I and II is now investigated by setting $\tau/T=2$ so that $\delta x_c\approx 10^{-2}$. At the same time, $\delta x_2\sim |b|/(ae)$ is chosen to be of the same order of magnitude as $\delta x_c$. This way, Regime III, whose appearance depends on the value of $\delta x_2$ relative to $\delta x_c$ (see Eq. (\ref{eqn:delay_holders})), is suppressed. Note that because $-\mathrm{Re\,}\lambda_1$ decreases as $|b|/a$ increases (to verify, consider Eq. (\ref{eqn:1Dchar})), a smaller value of $|b|/(ae)$ results in a larger value for the H\"older exponent within Regime I. Therefore, to obtain an interesting change of behavior from Regime I to Regime II, we are limited in how small we can choose $|b|/(ae)$ to be. 
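The positioning of the two length scales in these runs can be checked directly: a few lines suffice to compare the estimate (\ref{eqn:estimatedx2}) with $\delta x_c$ (a Python sketch; the values of $b$ anticipate those of Fig. \ref{fig:RegimeI_II}):
\begin{verbatim}
import numpy as np

a, tau, T = 1.0, 10.0, 5.0
h0 = 2.33 / T                # flow Lyapunov exponent
dx_c = np.exp(-h0 * tau)     # transition length scale

def dx2(b):
    # second local maximum of |Y(t >= tau)|, raised to the
    # power max{h0/a, 1}; for a*tau >> 1 this is ~ |b|/(a*e)
    m = max(np.exp(-a * tau),
            abs(b) / a * np.exp(-1.0 - (a / b) * np.exp(-a * tau)))
    return m ** max(h0 / a, 1.0)

print(f"dx_c ~ {dx_c:.3f}")
for b in (0.05, 0.1, 0.3, 0.5, 0.75):
    print(f"b = {b:4.2f}:  dx_2 ~ {dx2(b):.2f}")
\end{verbatim}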
To test the validity of the set of scaling laws (\ref{eqn:delay_holders}), we examine the structure functions obtained from two sets of parameters with different values of $|b|$, shown in Fig. \ref{fig:RegimeI_II}(a) (see also Fig. \ref{fig:Product}(b) for comparison with theory). For the first parameter set the value of $|b|$ is smaller than for the second, which implies that the first parameter set has a larger $-\mathrm{Re\,}\lambda_1$ than the second. Thus, within Regime I, the first parameter set has a larger H\"older exponent. At the same time, $a/h_0>1$ for both parameter sets and thus the H\"older exponent within Regime II is equal to $1$. Comparing the theory to the numerics, we can deduce that there is good agreement. $\delta x_c$ captures sufficiently well the transition between Regimes I and II. This transition occurs at slightly larger length scales for the second parameter set, since it possesses a larger value of $\delta x_2$. Within Regime I, the field's scaling exponent is close to its theoretical value, though this agreement is expected to become better at smaller length scales (see e.g. Fig. \ref{fig:RegimeIrealComplexRegimeI5tau10tau20T}). Within Regime II, the scaling exponent is, as expected, equal to $1$. A flatter structure than predicted by theory appears at the intermediate length scales ($10^{-3}<\delta x<10^{-2}$), for which Regime I should continue to hold. It appears that this intermediate structure can be explained by noticing that the rate of exponential increase of the separation between neighboring fluid parcels is in fact distributed over the ensemble of trajectories. A complete development of this argument is left for future work. \subsection*{Regimes I \& II \& III } The coexistence of Regimes I, II and III is now investigated by keeping $\tau/T=2$ while increasing the value of $\delta x_2\sim |b|/(ae)$ to an order of magnitude larger than $\delta x_c$. This is achieved by considering the same sets of parameters as in Fig. \ref{fig:RegimeI_II}(a) but increasing the value of $|b|$. This increase results in an increase in the value of $\delta x_2$ and a decrease in the value of $-\mathrm{Re\,}\lambda_1$ (the value of $\delta x_c$ remains the same). The structure functions corresponding to three sets of parameters, shown in Fig. \ref{fig:RegimeI_II}(b) (see also Fig. \ref{fig:Product}(c) for comparison with theory), are now examined. As expected, Regime III appears within a wide range of length scales, whose extent increases as the value of $|b|$ increases. The value of $\delta x_2$ provides a good estimate for the length scale separating Regime II from Regime III. When $|b|\sim0.75$, $\delta x_2\approx 0.27$, in which case Regime III appears at all length scales larger than $\delta x_c$, thus displacing Regime II (see Fig. \ref{fig:RegimeI_II}(b)). Similarly to the numerical results shown in Fig. \ref{fig:RegimeI_II}(a), a good agreement between theory and numerics is obtained within Regime I, the agreement being better at smaller length scales (the flat intermediate structure also appearing here). As before, the field's scaling behavior within Regime II is smooth. \begin{figure}[!] 
\begin{minipage}{\linewidth} \begin{minipage}{0.48\linewidth} \centerline{Regimes I \& II} \centerline{\includegraphics[width=7cm]{tworegimesCorrected.pdf}} \centerline{(a) $|b|/(ae)\sim \delta x_c$} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{Regimes I \& II \& III} \centerline{\includegraphics[width=7cm]{threeregimesCorrectedB.pdf}} \centerline{(b) $|b|/(ae) \gg \delta x_c$} \end{minipage} \end{minipage} \caption{Same as Fig. \ref{fig:RegimeIrealComplexRegimeI5tau10tau20T} but this time with $\delta x_c\approx 0.01$ ($\tau/T=2$). The sets of parameters $(a, b, \tau)$ are: (a) $(1, 0.05, 10)$ (black) with $\delta x_2\approx 0.02$ and $(1, 0.1, 10)$ (gray) with $\delta x_2\approx 0.04$; (b) $(1, 0.3, 10)$ (black) with $\delta x_2\approx 0.11$, $(1, 0.5, 10)$ (dark gray) with $\delta x_2\approx 0.18$ and $(1,0.75,10)$ (light gray) with $\delta x_2\approx 0.27$. In all cases $T=5$, which leads to $a>h_0$. Also shown are the predictions for the H\"older exponents (black dashed lines), $\delta x_c$ (black dotted line) and $\delta x_2$, based on the estimate given by (\ref{eqn:estimatedx2}) (dashed-dotted lines with different shades for each parameter set). } \label{fig:RegimeI_II} \end{figure} \subsection{The Delay Plankton Model}\label{subsec:numerics_delayplanktonmodel} Having investigated the scaling behavior of the linear delay reactive scalar field, the focus now turns to the delay plankton model. This is a typical nutrient-predator-prey system \cite{Murray1993} in which the effect of the nutrient is parameterized by the prey carrying capacity, denoted by $C$. The interactions among the biological species are given by the following set of nonlinear delay-differential equations \begin{subequations}\label{eqn:biology_convenience} \begin{align} \frac{dC}{dt}&=\alpha(C_0(\bm{x})-C),\label{eqn:biology_conveniencea}\\ \frac{dP}{dt}&=P(1-P/C)-PZ,\label{eqn:biology_convenienceb}\\ \frac{dZ}{dt}&=P(t-\tau)Z(t-\tau)-\delta Z^2,\label{eqn:biology_conveniencec} \end{align} \end{subequations} where $P$ stands for phytoplankton and $Z$ for zooplankton, $t$ is a dimensionless time scaled by the phytoplankton production rate $r$ ($t/r$ is the real time) and $\alpha$ denotes the rate at which the carrying capacity relaxes to the background source $C_0(\bm{x})$. The phytoplankton growth is logistic and grazing takes place according to a simple $PZ$ term. Zooplankton death occurs at a rate $\delta$ and is described by a term quadratic in $Z$, representing grazing by higher trophic levels. The key feature of this model is the introduction of the time $\tau$, which represents the time it takes for the zooplankton to mature ($\tau/r$ in real time): while it is reasonable to assume an instantaneous change in the prey population once prey and predator encounter each other, it is not reasonable to assume an instantaneous change in the predator population. The stationary distributions for $C$, $P$ and $Z$, attained when coupled to the model strain flow (\ref{eqn:v}), are depicted for a particular set of parameters in Fig. \ref{fig:SnapshotsDelayPlanktonModel}. Before analyzing any numerical simulations, the particular plankton dynamics need first to be examined. While the scaling behavior of a general system of delay reactive scalar fields has been set out in the Appendix, certain non-generic features are easier to address for each model in question. 
For the delay plankton model, the non-generic feature is the existence of asymmetrical couplings between the phytoplankton's carrying capacity and the subsystem comprising the phytoplankton and the zooplankton. \cite{Hernandez-Garcia_etal2002} considered the case of a zero delay time and deduced that the phytoplankton and zooplankton should always share the same small-scale structure. The numerical results that \cite{TzellaHaynes2007} obtained show that the same holds for a non-zero delay time, provided the length scales remain sufficiently small. However, on larger scales, a second scaling regime appears in which the zooplankton structure is flat while the phytoplankton has a structure similar to that of its carrying capacity. Although the appearance of a second scaling regime is inherent to any system of delay reactive scalar fields, the decoupling among the species is particular to the delay plankton model. \begin{figure}[!] \centering \begin{minipage}{\linewidth} \centerline{\includegraphics[width=6cm]{C20tau2delta20T.pdf}} \centerline{(a) Carrying Capacity} \vfill \centerline{\includegraphics[width=6cm]{P20tau2delta20T.pdf}} \centerline{(b) Phytoplankton} \vfill \centerline{\includegraphics[width=6cm]{Z20tau2delta20T.pdf}} \centerline{(c) Zooplankton} \caption{(Color online) Snapshots of the biological distributions at statistical equilibrium ($t=20T$) for the delay plankton model (\ref{eqn:biology_convenience}) stirred by the model strain flow (\ref{eqn:v}) with $\tau=20$, $\alpha=0.25$, $\delta=2$ and $T=20$. As before, the force is diagonally oriented, as described by (\ref{eqn:force}). } \label{fig:SnapshotsDelayPlanktonModel} \end{minipage} \end{figure} \begin{figure}[!] \begin{minipage}{\linewidth} \begin{minipage}{0.45\linewidth} \centerline{\includegraphics[width=7cm]{TauEvalsdiff_source.pdf}} \end{minipage} \hfill \begin{minipage}{0.45\linewidth} \centerline{\includegraphics[width=7cm]{TauHoldersdiff_source.pdf}} \end{minipage} \caption{(a) The value of $\mathrm{Re\,}\lambda_1$, associated with the rate of the slowest decaying eigenfunction of the linearized phytoplankton-zooplankton subsystem, calculated and plotted as a function of $\tau$ for $C_0=1$ (black solid line), $C_0=0.5$ (gray dashed line) and $C_0=1.5$ (gray dashed-dotted line) ($\delta =2$). Its value is determined by considering the roots of the characteristic Eq. (\ref{eqn:delayplankton_char}). (b) The value of $\text{min}\{-\mathrm{Re\,}\lambda_1/h_0,1\}$, the theoretical value for the H\"older exponent shared between the phytoplankton and the zooplankton, plotted as a function of $\tau$ for $h_0\approx 0.117$ ($T=20$) and $C_0=1$ (black solid line), $C_0=0.5$ (gray dashed line) and $C_0=1.5$ (gray dashed-dotted line). } \label{fig:RealLambda1} \end{minipage} \end{figure} To fully explain the scaling behavior of the delay plankton model, the theory of Sec. \ref{sec:DelayTheory} must be extended in order to accommodate the particularities of this model. In the absence of advection and within a certain range of parameters, the delay plankton model has a single fixed point of equilibrium, given by \begin{equation}\label{eqn:fixedpoint} C^{\star}=C_0(\bm{x}), \; P^{\star}=\delta C^{\star}/(\delta+C^{\star}) \; \text{and} \; Z^{\star}=P^{\star}/\delta. \end{equation} This point is stable for $\tau=0$. For $0.5\leq C_0(\bm{x})\leq 1.5$ and $\delta=2$, as in the simulations performed here, this point remains stable for any $\tau> 0$. 
Linearizing the delay plankton model around this point of equilibrium results in the following expressions for the matrices $\bm{A}$ and $\bm{B}$: \begin{subequations}\label{eqn:matricesAB} \begin{align} \bm{A}= \left(\begin{matrix} \alpha & 0 & 0 \\ -(P^\ast/C^\ast)^2 & P^\ast/C^\ast & P^\ast\\ 0 & 0 & 2P^\ast \end{matrix} \right)&\\ \intertext{and} \bm{B}=-P^\ast\left(\begin{matrix} 0 & 0 & 0 \\ 0 & 0 & 0\\ 0 & 1/\delta & 1 \end{matrix} \right)&, \end{align} \end{subequations} where $\bm{A}$ and $\bm{B}$ are the matricial equivalents of $a$ and $b$ for the one-dimensional linear delay reactive scalar (\ref{eqn:1Ddelay}) (for further details see App. \ref{app:key}). Certain matrix coefficients (i.e. $-(P^\ast/C^\ast)^2$, $P^\ast/C^\ast$, $2P^\ast$) were simplified using (\ref{eqn:fixedpoint}). From Eq. (\ref{eqn:charsystem}), the characteristic matrix is given by $\bm{H}(\lambda)=\lambda\bm{I}+\bm{A}+\bm{B}e^{-\lambda \tau}$. Thus, using Eq. (\ref{eqn:matricesAB}), \begin{equation}\label{eqn:matrices_char_matNPZ} \begin{split} &\bm{H}(\lambda)=\\ &\left(\begin{matrix} \lambda+\alpha & 0 & 0 \\ -\left(P^\ast/C^\ast \right)^2 & \lambda+P^\ast/C^\ast & P^\ast\\ 0 & -e^{-\lambda\tau}P^{\ast}/\delta & \lambda-P^{\ast} e^{-\lambda\tau}+2P^\ast \end{matrix} \right). \end{split} \end{equation} It follows that the characteristic equation corresponding to the linearized delay plankton model satisfies (see Eq. (\ref{eqn:charsystem})) \begin{subequations}\label{eqn:delayplankton_char} \begin{equation} h(\lambda)\equiv\text{det}\bm{H}(\lambda)= (\lambda+\alpha)\,g(\lambda)=0 , \end{equation} where $g(\lambda)=0$ is the characteristic equation associated with the phytoplankton-zooplankton subsystem, with \begin{equation} g(\lambda)= \left|\begin{matrix} \lambda +P^{\ast}/C^{\ast} & P^{\ast}\\ -e^{-\lambda\tau}P^{\ast}/\delta & \lambda-P^{\ast} e^{-\lambda\tau}+2P^{\ast} \end{matrix} \right|. \end{equation} \end{subequations} As in the one-dimensional case, the number of roots is infinite for $g(\lambda)=0$ (and therefore for $h(\lambda)=0$). At the same time, the magnitude of $\mathrm{Re\,}\lambda_1$, where $\lambda_1$ denotes the root with the least negative real part, decreases as $\tau$ increases, with $\mathrm{Re\,}\lambda_1\rightarrow 0$ as $\tau\rightarrow\infty$. Its value is determined for fixed $C_0$ and $\delta$ and plotted in Fig. \ref{fig:RealLambda1}(a) as a function of $\tau$ for $\delta=2$ and three key values of $C_0(\bm{x})$: $1$, $1.5$ and $0.5$, i.e. its average, maximum and minimum values (see Eq. (\ref{eqn:force})). Notice that the difference between the values of $\mathrm{Re\,}\lambda_1$ calculated for these three values of $C_0(\bm{x})$ is minor. It is therefore expected that the value of the least negative chemical Lyapunov exponent associated with the nonlinear dynamics of the delay plankton model is close to $-\mathrm{Re\,}\lambda_1$. In the theoretical considerations made in Sec. \ref{sec:DelayTheory}, the scaling behavior of a linear delay reactive scalar was described by the set of scaling laws (\ref{eqn:delay_holders}). A similar set of scaling laws holds for a system of nonlinearly interacting scalars (see App. \ref{app:scaling}): the H\"older exponent within Regime I is governed by the ratio of the least negative chemical Lyapunov exponent, $-\mathrm{Re\,}\lambda_1$, to the flow Lyapunov exponent, $h_0$; within Regime II, the H\"older exponent is governed by $-a_1/h_0$, where $-a_1$ is the slowest decay rate associated with the reduced system that is obtained once all delay terms are ignored. 
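The curves in Fig. \ref{fig:RealLambda1}(a) can be reproduced, at least approximately, by locating the rightmost root of $g(\lambda)=0$ numerically. A minimal Python sketch using a grid-seeded root search (the seeding ranges are ad hoc and may miss roots; DDE-BIFTOOL, used for the figures, is the more robust route):
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

delta, C0 = 2.0, 1.0            # delta and a (constant) value of C_0
Cs = C0
Ps = delta * Cs / (delta + Cs)  # fixed-point value of P

def g(lam, tau):
    # characteristic function of the P-Z subsystem
    return ((lam + Ps / Cs)
            * (lam - Ps * np.exp(-lam * tau) + 2.0 * Ps)
            + np.exp(-lam * tau) * Ps**2 / delta)

def re_lambda1(tau):
    roots = []
    for re0 in np.linspace(-1.5, 0.2, 8):
        for im0 in np.linspace(0.0, 8.0 / tau, 9):
            f = lambda u: [g(complex(u[0], u[1]), tau).real,
                           g(complex(u[0], u[1]), tau).imag]
            sol, _, ok, _ = fsolve(f, [re0, im0], full_output=True)
            if ok == 1 and abs(g(complex(*sol), tau)) < 1e-9:
                roots.append(complex(*sol))
    return max(r.real for r in roots)

for tau in (1.0, 5.0, 10.0, 20.0):
    print(f"tau = {tau:5.1f}   Re lambda_1 = {re_lambda1(tau):+.4f}")
\end{verbatim}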
As for the single delay reactive scalar, the appearance of a flat scaling regime, Regime III, depends on whether $\delta x_2$, the length scale associated with this regime, is larger than $\delta x_c$, the transition length scale. Note that the value of $\delta x_2$ is not necessarily the same for each species (see \S\ref{app:scaling}). This set of scaling laws was deduced for the general case in which the product of the {\it fundamental matrix}, the matricial equivalent of the fundamental solution (see \S\ref{app:key}), with the direction of the forcing in the chemical space has non-zero entries (see Eq. (\ref{eqn:chemicaldifference_delay})). If that is not the case, the set of scaling laws (\ref{eqn:delay_holders}) may need to be modified and different regimes for different species are expected. Note, however, that in all cases the value of $\delta x_c$ is unaffected, as it depends only on $\tau$ and not on the particular chemical dynamics. To examine the existence of zero entries for the linearized delay plankton model, consider first the form of the eigenfunctions that comprise its fundamental matrix. This matrix, denoted by $\bm{M}_{\bm{Y}}(t)$, can be written as an infinite sum of eigenfunctions, each proportional to $e^{\lambda_i t}\text{adj}\bm{H}(\lambda_i)$, where $\text{adj}$ denotes the adjugate (classical adjoint) matrix (see Eq. (\ref{eqn:fundamental_matrix2})). In the delay plankton model, the forcing is given by the source $C_0(\bm{x})$. Since this is applied only to the carrying capacity, the product of $\text{adj}\bm{H}(\lambda_i)$ with the forcing direction is given by \begin{equation}\label{eqn:adj_char_matNPZ} \text{adj}\bm{H}(\lambda_i)\cdot \left( \begin{matrix} 1\\ 0\\ 0 \end{matrix} \right) = \left( \begin{matrix} \phantom{m}g(\lambda_i)\\ m_1(\lambda_i)\\ m_2(\lambda_i) \end{matrix} \right) \end{equation} with \begin{subequations} \begin{align} m_1(\lambda_i)&=(\lambda_i-P^{\ast}[e^{-\lambda_i\tau}-2])(P^{\ast}/C^{\ast})^2,\\ m_2(\lambda_i)&=e^{-\lambda_i\tau}{P^\ast}^3(\delta C^\ast)^{-1}, \end{align} \end{subequations} where, to deduce the above, Eqs. (\ref{eqn:matrices_char_matNPZ}) and (\ref{eqn:delayplankton_char}) were employed. Examining the behavior of Eq. (\ref{eqn:adj_char_matNPZ}) as a function of $\lambda_i$, where $h(\lambda_i)=0$ and $i=1\ldots\infty$, it can be deduced that, as long as these roots are distinct (achieved by appropriately choosing the parameter range), the only $\lambda_i$ for which $g(\lambda_i)\neq 0$ is $\lambda_i=-\alpha$. Therefore, a single eigenfunction governs the scaling behavior of $C$, from which it can be inferred that a single H\"older exponent characterizes its spatial structure. Its value is given by \begin{equation}\label{eqn:HolderNdelay} \gamma_C=\text{min}\{1,\alpha/h_0\}. \end{equation} This result is hardly surprising: the carrying capacity evolves independently of the other species, behaving as the much-studied linearly decaying scalar with chemical Lyapunov exponent equal to $-\alpha$. On the other hand, $m_1(\lambda_i),\,m_2(\lambda_i)\neq 0$ for all $\lambda_i$, where $i=1\ldots\infty$, and thus no special considerations are necessary for the phytoplankton and zooplankton; their scaling behavior within Regime I is shared and governed by the least negative chemical Lyapunov exponent, $\mathrm{Re\,}\lambda_1$. However, within Regime II a different scenario takes place. The fundamental matrix corresponding to this regime may be exactly written as $\bm{M}_{\bm{Y}}(t)=\exp[-\bm{A}t]$ (see App. \ref{app:key}).
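Before turning to Regime II, we note that Eq. (\ref{eqn:adj_char_matNPZ}) can be checked symbolically. The sketch below (ours, assuming the SymPy library) forms the product of $\text{adj}\bm{H}(\lambda)$ with the forcing direction; its first entry is $g(\lambda)$, which vanishes at every root of $g$, while the remaining entries, corresponding to $m_1$ and $m_2$, do not:
\begin{verbatim}
import sympy as sp

lam, tau, alpha, delta, C, P = sp.symbols('lambda tau alpha delta C P')
E = sp.exp(-lam * tau)
H = sp.Matrix([[lam + alpha, 0, 0],
               [-(P/C)**2, lam + P/C, P],
               [0, -E*P/delta, lam - P*E + 2*P]])
# Product of the adjugate of H with the forcing direction (1, 0, 0)^T.
col = (H.adjugate() * sp.Matrix([1, 0, 0])).applyfunc(sp.factor)
print(col)   # first entry is the 2x2 determinant g(lambda)
\end{verbatim}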
Using a spectral decomposition, $\bm{M}_{\bm{Y}}(t)$ can be re-written in terms of $\bm{\hat{a}}_i e^{a_it}\bm{\hat{a}}_i^{\dag}$ where $\bm{\hat{a}}_i$ and $\bm{\hat{a}}_i^{\dag}$ correspond to the normalized right and left eigenvectors of $-\bm{A}$ with eigenvalue $a_i$ where $i=1\ldots 3$. To understand the scaling behavior of the phytoplankton and zooplankton, it is necessary to consider, for each $i$, the product of $\bm{\hat{a}}_i e^{a_it}\bm{\hat{a}}_i^{\dag}$ with the forcing direction. The eigenvalues of $-\bm{A}$ are given by \begin{subequations} \begin{align} \{a_1, a_2, a_3\}&=\{-\alpha, -P^{\ast}/C^{\ast}, -2P^{\ast}\}, \\ \intertext{while the product of $\bm{\hat{a}}_i e^{a_it}\bm{\hat{a}}_i^{\dag}$ with the forcing direction is given by} \left\{\bm{\hat{a}}_i e^{a_i t} \bm{\hat{a}}_i^{\dag}\cdot \left( \begin{matrix} 1 \\ 0 \\ 0 \end{matrix} \right) \right\} &= \left\{ e^{-\alpha t} \left( \begin{matrix} \cdot \\ \cdot \\ 0 \end{matrix} \right) , e^{- P^\ast/C^\ast t} \left( \begin{matrix} 0 \\ \cdot\\ \cdot \end{matrix} \right) , e^{- 2 P^\ast t} \left( \begin{matrix} 0 \\ 0 \\ 0 \end{matrix} \right) \right\}, \end{align} \end{subequations} where $(\cdot)$ indicates a (non-zero) constant whose value has been suppressed for brevity: since such constants neither grow nor decay systematically in time, they do not contribute to the fields' scaling laws (see \S\ref{app:scaling}). It follows that, within Regime II, two exponentially decaying terms, with rates $-\alpha$ and $\lambda_{P}=-P^{\ast}/C^{\ast}$, contribute to the scaling behavior of the phytoplankton. The term that corresponds to the smallest decay rate will dominate the scaling behavior of the phytoplankton (see also App. \ref{app:scaling}). Note that in all the simulations performed here, $\alpha<P^\ast/C^\ast$, where the value of $P^\ast/C^\ast$ is calculated for $0.5\leq C_0(\bm{x})\leq 1.5$, the range of values of $C_0(\bm{x})$ (see Eqs. (\ref{eqn:force}) and (\ref{eqn:fixedpoint})). It is therefore expected that the scaling behavior of the phytoplankton is dominated by $-\alpha$, the same rate that determines its carrying capacity. Conversely, none of these terms contribute to the scaling behavior of the zooplankton, implying that within Regime II, the zooplankton is decoupled from the biological forcing and thus evolves like a passive tracer. Therefore, within this regime, its spatial structure is flat. As a consequence Regime III appears in the scaling behavior of the phytoplankton only. The value of $\delta x_2$ separating Regimes II and III may be estimated using Eq. (\ref{eqn:estimatedx2system}). In all numerical simulations performed here, $\delta x_2\lesssim \delta x_c$ and therefore this flat regime is not prevalent in the scaling behavior of the phytoplankton. The following set of expressions for the H\"older exponents associated with the phytoplankton, $\gamma_P$, and the zooplankton, $\gamma_Z$, describes the distributions' scaling behavior within the two regimes: \begin{list}{}{\topsep 10pt} {\usecounter{Lcount} \setlength{\rightmargin}{\leftmargin}} \item For Regime I, \begin{subequations}\label{eqn:HolderPZRegimeI_II} \begin{equation}\label{eqn:HolderPZRegimeI} \phantom{X} \gamma_{PZ}=\gamma_{P}=\gamma_{Z}=\text{min}\{\gamma_C,-\mathrm{Re\,}\lambda_1/h_0\}. \end{equation} \item For Regime II, \begin{equation}\label{eqn:HolderPZRegimeII} \begin{split} \quad \gamma_{P}\neq\gamma_{Z}\;\text{with}\; &\gamma_{P}=\text{min}\{\gamma_{C},-\lambda_{P}/h_0\},\\ &\gamma_{Z}=0.
\end{split} \end{equation} \end{subequations} \end{list} To summarize, within Regime I, $P$ and $Z$ share the same small-scale structure characterized by the H\"older exponent $\gamma_{PZ}$. This structure is either shared by $C$, i.e. $\gamma_{PZ}=\gamma_{C}$ (see Eq. (\ref{eqn:HolderNdelay}) for $\gamma_C$), or is more filamental than $C$, i.e. $\gamma_{PZ}<\gamma_{C}$. Within Regime II, the small-scale structure of $Z$ is flat (zero H\"older exponent) while that of $P$ is either shared with $C$ or is more filamental than $C$. The numerical results obtained from a set of simulations performed first for a varying value of $\tau$, and second for a varying value of $T$, are now analyzed. \begin{figure}[!] \begin{minipage}{\linewidth} \centerline{Varying $\tau$} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=5.5cm]{inter1tau2delta20T.pdf}} \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \centerline{\includegraphics[width=6cm]{dcstruct1tau2delta20T.pdf}} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{(a) $\tau=1$} \end{minipage} \begin{minipage}{0.47\linewidth} \centerline{\includegraphics[width=5.3cm]{inter10tau2delta20T.pdf}} \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \centerline{\includegraphics[width=5.8cm]{dcstruct10tau2delta20T.pdf}} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{(b) $\tau=10$} \end{minipage} \begin{minipage}{0.48\columnwidth} \centerline{\includegraphics[width=5.3cm]{inter20tau2delta20T.pdf}} \end{minipage} \hfill \begin{minipage}{0.48\columnwidth} \centerline{\includegraphics[width=5.8cm]{dcstruct20tau2delta20T.pdf}} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{(c) $\tau=20$} \end{minipage} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=5.3cm]{inter30tau2delta20T.pdf}} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=5.8cm]{dcstruct30tau2delta20T.pdf}} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{(d) $\tau=30$} \end{minipage} \caption{(Color online) Representative intersections ($y=0.5$) (left) and their corresponding first-order structure functions averaged over 500 evenly spaced intersections (right) at statistical equilibrium ($t=20T$) for the delay plankton model (\ref{eqn:biology_convenience}) advected by (\ref{eqn:v}) for $\delta=2$, $T=20$ and $\alpha=0.25>h_0\approx 0.117$. Graphs show carrying capacity in black, phytoplankton in light gray (green) and zooplankton in dark gray (red). The value of $\delta x_c$ is marked by a vertical dotted black line (if $\delta x_c<0.5$) and the value of $\delta x_2$ is marked if it is larger than $\delta x_c$. A dotted line for each slope of gradient equal to the theoretical value of the H\"older exponent is drawn for reference. }\label{fig:VariationTau} \end{minipage} \end{figure} \subsection*{Variation of $\bm{\tau}$} In the set of numerical results shown in Fig. \ref{fig:VariationTau}, the evolution of the concentration fields (calculated over an intersection) and their first-order structure functions (calculated over $500$ evenly spaced horizontal intersections) corresponding to the zooplankton, phytoplankton and its carrying capacity, are examined as a function of $\tau$. Note that the structure functions have been offset to emphasize that for small $\tau$ all species share the same behavior at all length scales.
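The structure-function diagnostic itself is simple to compute. The sketch below (ours, assuming NumPy; the synthetic Brownian field merely stands in for the simulation output, which is not reproduced here) evaluates the first-order structure function along horizontal intersections and reads the H\"older exponent off the log-log slope at small separations:
\begin{verbatim}
import numpy as np

def structure_function(field, separations):
    # S1(dx) = <|f(x + dx) - f(x)|>, averaged over all rows
    # (horizontal intersections) and positions of a 2D field.
    return np.array([np.abs(field[:, s:] - field[:, :-s]).mean()
                     for s in separations])

# Sanity check on rows of Brownian motion (Holder exponent 1/2).
rng = np.random.default_rng(1)
field = rng.standard_normal((500, 4096)).cumsum(axis=1)
seps = np.arange(1, 65)
S1 = structure_function(field, seps)
gamma = np.polyfit(np.log(seps), np.log(S1), 1)[0]
print(f"estimated Holder exponent: {gamma:.2f}")   # close to 0.5
\end{verbatim}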
For larger $\tau$, the phytoplankton and zooplankton share the same structure at small length scales while at larger length scales the phytoplankton shares the same structure as its carrying capacity. Starting from a small value for $\tau$ for which only Regime I appears and in which all the planktonic distributions are smooth (Fig. \ref{fig:VariationTau}(a)), the behavior of both the phytoplankton and the zooplankton becomes increasingly filamental as the value of $\tau$ increases (Fig. \ref{fig:VariationTau}(b-d)). This behavior is in agreement with the prediction that the magnitude of their shared chemical Lyapunov exponent decreases as $\tau$ increases, approaching zero for large values of $\tau$ (see Fig. \ref{fig:RealLambda1}(a)). A comparison between theory and numerics within Regime I may be made by consulting Fig. \ref{fig:RealLambda1}(b), where the H\"older exponent, given by $\text{min}\{-\mathrm{Re\,}\lambda_1/h_0,1\}$, is calculated and plotted as a function of $\tau$ for the same values of $C_0(\bm{x})$ as in Fig. \ref{fig:RealLambda1}(a). As a reference, a line of the same slope as the theoretical value for the H\"older exponent is drawn for each case depicted in Fig. \ref{fig:VariationTau}(a-d). The agreement between theory and numerics is very close. At the same time, as the value of $\tau$ increases, the value of the transition length scale decreases according to the theoretical expression (\ref{eqn:estimate_char}). This leads to the appearance of Regime II. Within the latter regime, the theoretical prediction is confirmed: the distribution of the phytoplankton is smooth and similar to the distribution of its carrying capacity while that of the zooplankton is flat, equivalent to the distribution of a passive (non-reactive) tracer. The theoretical value for $\delta x_c$ predicts the transition between the first and second scaling regimes sufficiently well. For Figs. \ref{fig:VariationTau}(a-c), $\delta x_2<\delta x_c$, thus explaining why no Regime III is observed for the phytoplankton (where, to estimate $\delta x_2$, Eq. (\ref{eqn:estimatedx2system}) was used). The only exception is shown in Fig. \ref{fig:VariationTau}(d), for which $\tau=30$. For this case $\delta x_2\sim\delta x_c$ and thus, within a short region of length scales, a flat regime is predicted to appear for the phytoplankton. Indeed, a flat regime is observed but, as in the case of the single delay scalar (see \S\ref{subsec:numericsscalarfield}), this flat regime extends to length scales that lie within Regime I (though still close to $\delta x_c$). A larger value of $\delta x_2$ is obtained by further increasing the value of $\tau$. This is clearly depicted in Fig. \ref{fig:VariationT}(a), where Regime III appears for a substantial range of length scales. For scales larger than $\delta x_2$, Regime II appears. \subsection*{Variation of $\bm{T}$} The evolution of the concentration fields and their first-order structure functions are now examined as a function of the stirring strength of the flow, the latter parameterized by the value of $T$, and shown in Fig. \ref{fig:VariationT}. Starting from Fig. \ref{fig:VariationT}(a), as the value of $T$ increases, so does the value of $\delta x_c$ (see Eq. (\ref{eqn:estimate_char})), along with the range of length scales for which Regime I appears. Again, the agreement between theory and numerics is close, with $\delta x_c$ providing a good prediction for when the transition from Regime I to either Regime III (see Fig.
\ref{fig:VariationT}(a)) or Regime II (see Figs. \ref{fig:VariationT}(b-c)) occurs. \begin{figure}[!] \begin{minipage}{\linewidth} \centerline{Varying $T$} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=5.3cm]{inter40tau2delta20T.pdf}} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=5.8cm]{dcstruct40tau2delta20T.pdf}} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{(a) $T=20$} \end{minipage} \begin{minipage}{0.47\linewidth} \centerline{\includegraphics[width=5.3cm]{inter40tau2delta30T.pdf}} \end{minipage} \hfill \begin{minipage}{0.47\linewidth} \centerline{\includegraphics[width=5.8cm]{dcstruct40tau2delta30T.pdf}} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{(b) $T=30$} \end{minipage} \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=5.3cm]{inter40tau2delta40T.pdf}} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \centerline{\includegraphics[width=5.8cm]{dcstruct40tau2delta40T.pdf}} \end{minipage} \vfill \begin{minipage}{\linewidth} \centerline{(c) $T=40$} \end{minipage} \caption{(Color online) Same as Fig. \ref{fig:VariationTau} but this time the flow parameter $T$ varies ($\tau=40$, $\delta=2$). } \label{fig:VariationT} \end{minipage} \end{figure} \section{Summary and Conclusions}\label{sec:DelayConclusions} This paper has considered the spatial properties of chaotically advected delay reactive scalar fields, i.e. scalar fields whose reactions explicitly contain a delay time. The investigation was motivated by the need for a theoretical explanation for previous numerical results obtained for a delay plankton model \cite{Abraham1998,TzellaHaynes2007} but the results are relevant to other chemical and biological systems \cite{Roussel1996,Murray1993}. The system considered had stable reaction dynamics in which spatial inhomogeneity is forced by a spatially smooth source and in which the reacting species are advected by a two-dimensional, unsteady and incompressible flow. The case of reactions described by a single linear delay equation was considered in detail as a simple prototype and the results were then extended to a reaction described by a system of nonlinearly interacting delay equations. Two main conclusions were drawn concerning the scaling behavior of the delay reactive scalar fields. The first was that, no matter how large the value of the delay time, at sufficiently small length scales the scaling behavior is characterized by a H\"older exponent whose value depends on the ratio of the slowest decay rate associated with the reaction dynamics, i.e. the least negative chemical Lyapunov exponent, to the flow Lyapunov exponent. Thus, within this scaling regime, denoted as Regime I, the introduction of a delay time into the reactions results in a scaling behavior that is a straightforward generalization of that for which there is no delay time. For the particular case of the delay plankton model, this implies that the phytoplankton and zooplankton share the same scaling behavior at small scales. On the other hand, when the stirring of the flow is sufficiently strong or the delay time is sufficiently large, the scaling behavior undergoes a change beyond a transition lengthscale. The expression for the transition length scale was deduced to depend on both the stirring strength and the delay time, exponentially decreasing as a function of their product. 
This change of behavior is inherent to the delay system and may be described by three different scenarios. The first scenario occurs when a second scaling regime, denoted as Regime II, is created to accompany the first scaling regime. This new scaling regime appears at all small scales that are larger than the transition length scale. The scaling behavior within this second regime is essentially captured by a reduced reaction system in which all reaction terms that contain a delay time are ignored. The value of the corresponding H\"older exponent depends on the ratio of the slowest decay rate associated with the reduced reactive processes to the flow Lyapunov exponent. For the particular case of the delay plankton model, this result explains why the zooplankton assumes a similar distribution to a passive (non-reactive) scalar while the phytoplankton assumes a different, less filamental distribution. A second scenario occurs when the second scaling regime is preceded by a flat scaling regime, denoted as Regime III. In this case there are three scaling regimes present: Regimes I, II and III. For this to happen, the transition length scale needs to be small compared to the ratio of the reaction terms that contain a delay time to those terms that do not. As this ratio increases, so does the range of length scales for which Regime III appears. When this ratio reaches the order of unity, a last scenario occurs in which Regime III appears at all small scales that are larger than the transition length scale. In this case Regime II does not appear. We believe that the investigation presented here resolves the main issues concerning the small-scale spatial structure of chaotically advected delay reactive scalar fields. Although the models under consideration are highly simplified, they can be readily extended to include any number of interacting species or space-dependent productivity and death rates. As long as the reactions are stable, the above conclusions remain unchanged. There are, however, details that need further examination. This paper has avoided the implications of a distribution of finite-time flow and chemical Lyapunov exponents. Some of the implications of a distribution of finite-time flow Lyapunov exponents have been addressed by \cite{Neufeld_etal2000a}. The implications of a distribution of finite-time chemical Lyapunov exponents, avoided in this paper by basing discussion on solutions of model chemical systems with constant coefficients, could be incorporated in a similar way. It is believed that including these effects may give a better description of the fields' scaling behavior within Regime I for length scales close to the transition length scale. The primary theoretical predictions of this paper are the parameter dependence of the scaling behavior in three different regimes and the transition length scales between those regimes. This makes it possible to develop a quantitative evaluation of the theory, for example, as applied to observations of ocean plankton distributions at the mesoscale, one of the principal motivations for the line of investigation in this paper. Depending on the time it takes for the zooplankton to mature and the stirring induced by the straining activity of the mesoscale eddies, three, instead of one, scaling regimes may characterize the plankton distributions.
Given the differing spatial distributions exhibited by plankton in the open ocean at the mesoscale \cite{MackasBoyd1979,Tsuda1995,MartinSrokosz2002}, it is worth taking into account the existence of these two new scaling regimes when trying to interpret oceanic measurements over a large range of length scales. A degree of care should be taken, however, as the ocean is highly complex and small-scale forcing is ubiquitous, reflecting not only individual zooplankton behavior but also the presence of strong localized upwelling. Because the impact that these processes have on larger scales may be significant \cite{MahadevanArcher2000,Martin_etal2002}, it is important to build on the idealized models considered here by including both more realistic dynamics, in which vertical effects and frontal circulation are taken into account, and some of the characteristics of individual zooplankton behavior, such as diurnal vertical migration. Finally, the distinct role that a delay time plays in the formation of structures in reactive scalar distributions is expected to prompt further research on the subject. It should be emphasized, however, that the results presented in this paper have potential application beyond the field of ocean sciences, to any system involving fluid flow and chemical or biological interactions.\\ \noindent \textbf{Acknowledgments.} The authors are grateful to B. Legras, J. H. P. Dawes and A. P. Martin for their useful comments as well as A. Iserles for his insight. AT is currently supported by the Marie Curie Individual fellowship HydraMitra No. 221827.
\subsection*{Our results and techniques} We begin our investigation with the revenue objective and give an exact characterization of the optimal mechanism for a single agent with a public budget. While in the absence of budgets the optimal mechanism is a fixed sale price and therefore deterministic, with budgets the optimal mechanism may need to randomize and offer multiple buying options to the agent. This complicates the design of optimal mechanisms in more general settings involving multiple agents or private budgets. We therefore consider approximations. When budgets are known publicly, we obtain constant factor approximations in nearly all settings where constant factor approximations are known for unconstrained mechanism design. This includes, for example, all single-parameter settings with a downwards closed feasibility constraint, but also multi-parameter settings with ``unit-demand'' agents and a matroid feasibility constraint (see, e.g., \cite{CHMS10}). Our mechanisms are for the most part direct reductions to unconstrained settings, and are extremely simple. For private budgets, the problem becomes much harder and we focus on settings with single-dimensional values. We design a novel mechanism based on ``lotteries'' that obtains a good approximation whenever each agent's value distribution satisfies the monotone hazard rate (MHR) condition (see Section~\ref{sec:def} for a definition). Our mechanism's novelty lies in offering each agent a carefully constructed set of different buying options such that the best option for the agent is to either spend his entire budget or pay a fraction of the monopoly price for that agent. The MHR assumption is frequently used in the mechanism design literature and many natural distributions satisfy it. In fact, the mechanism obtains a good approximation more generally under mild technical conditions on the values and budgets. We believe that our techniques should extend to provide good approximations for arbitrary distributions. Next we examine the welfare objective. While for revenue the budget of an agent is a natural upper bound on that agent's contribution to the revenue and allows us to ``cap'' values at the budget, for welfare this does not work. In fact, a mechanism can generate a non-trivial guarantee on welfare even when budgets are $0$. Consider a setting with two unit-demand buyers and two items, and the following mechanism: the mechanism asks each agent to give a preference list of the items. If the top choices of the buyers are different, then each buyer gets allocated his top choice and welfare is maximized. Otherwise, the mechanism ignores the preferences of the buyers and computes the allocation that maximizes the social welfare ex-ante. Note that this mechanism is truthful. When agents' values for the items are i.i.d., the obtained social welfare from this example is at least $3/4$ of the maximum social welfare we can obtain with no budget constraint. On the other hand, a mechanism that ``ignores'' values above the budget (i.e. does not distinguish between them in the allocation function) cannot obtain an approximation better than $1/2$. The gap between the two mechanisms increases as the number of agents grows. We again focus on single-parameter settings and public budgets, but with arbitrary downwards closed feasibility constraints.
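To make the two-buyer example above concrete, the following Monte-Carlo sketch (ours; the i.i.d. uniform value distribution is an illustrative assumption) estimates the expected welfare of the preference-list mechanism against the unconstrained optimum:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
v = rng.random((N, 2, 2))               # v[t, buyer, item]
top = v.argmax(axis=2)                  # reported top choices

# If tops differ, each buyer gets his top item; otherwise a fixed
# matching is used (ex-ante optimal by symmetry of i.i.d. values).
idx = np.arange(N)
differ = top[:, 0] != top[:, 1]
w_mech = np.where(differ,
                  v[idx, 0, top[:, 0]] + v[idx, 1, top[:, 1]],
                  v[:, 0, 0] + v[:, 1, 1])
w_opt = np.maximum(v[:, 0, 0] + v[:, 1, 1], v[:, 0, 1] + v[:, 1, 0])
print(w_mech.mean() / w_opt.mean())     # well above the 3/4 bound
\end{verbatim}
For uniform values the simulated ratio is roughly $0.95$, comfortably above the $3/4$ guarantee stated for general i.i.d. values.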
For these single-parameter settings, we show a tradeoff between an approximation on budget and an approximation on welfare: for any $\epsilon\in(0,1)$, we can get a $1/{\epsilon}$ approximation to welfare with respect to the welfare that an optimal mechanism can get when budgets are scaled down by a factor of $1-{\epsilon}$. This mechanism has an extremely simple form: it replaces every value larger than its budget by its expectation conditioned on being larger than the budget, and runs the VCG mechanism on these modified values. Moreover, if we are willing to sacrifice EPIR in favor of the less restrictive IIR, we can convert this mechanism into a 4-approximate IIR mechanism (with no approximation on budgets). Finally, if the value distributions satisfy the MHR condition, we achieve a $2(1+e)$-approximation to welfare via an EPIR mechanism by reducing budget-feasible welfare maximization to budget-feasible revenue maximization. One nice property of our reductions from budget feasible mechanism design to unconstrained mechanism design is that they are for the most part oblivious to the feasibility constraint imposed on the mechanism. They therefore work for a broad range of feasibility constraints and add minimal complexity to the mechanism design problem. \subsection*{Related work} Several works in economics have studied characterizations of optimal BIC IIR budget-feasible mechanisms (e.g., \cite{PV08, LR96, CG2000, M00}). However, these works are generally limited in the kinds of settings they consider (typically just single-item auctions) and in the kinds of value distributions they allow\footnote{E.g., \cite{PV08} and \cite{M00} make the assumption that value distributions have a monotone hazard rate as well as a nondecreasing density function, unnatural conditions that few distributions satisfy simultaneously.}. Laffont and Robert \cite{LR96} considered single-item settings where bidders have a private value and a public common budget. Che and Gale~\cite{CG2000} considered the setting with a single item and a single buyer, but allowed both the value and the budget to be private. Pai and Vohra~\cite{PV08} gave a more general result in which they designed an optimal auction for a single item and multiple buyers with private i.i.d. values and private budgets. Bhattacharya et al.~\cite{BGGM10} were the first to study settings beyond single-item auctions and focused on revenue maximization. They considered a setting with heterogeneous items and additive values, and exhibited a (large) constant factor DSIC approximation mechanism as well as an all-pay auction which admits truthtelling as a BNE and in that BNE obtains a $4$-approximation. However, these results required the value distributions to satisfy the MHR condition. The mechanisms are LP-based. In contrast, most of our mechanisms are easy to compute, work for general distributions, enforce EPIR, and achieve small approximation factors. In prior-free settings, few results are known for revenue maximization. Borgs et al.~\cite{BCIMS05} looked at multi-unit auctions for homogeneous goods where agents have private values and budgets and considered the worst-case competitive ratio (see also \cite{Abrams06}). They designed a mechanism based on random sampling that maximizes revenue when the number of bidders is large. Social welfare maximization has also been considered under budget constraints. Maskin~\cite{M00} considered the setting of a single item and multiple buyers with public budgets.
He defined and showed how to compute the constrained efficient mechanism, the truthful feasible mechanism under budget constraints that maximizes the expected social welfare (however, the result holds only for some distribution functions \cite{PV08}). In prior-free settings with multiple units of a homogeneous good, Nisan et al.~\cite{DNL08} studied Pareto efficient DSIC mechanisms with budget constraints. They showed that if the budgets are private there is no Pareto optimal incentive compatible mechanism; for public budgets they showed that there exists a unique such mechanism, based on the \emph{clinching auction}. Chen et al.~\cite{CDG10} considered a setting with multiple goods and unit-demand buyers and showed how to compute competitive prices that enforce truthfulness under budget constraints if such prices exist. Finally, the work of Alaei et al.~\cite{AJM10} stands out for its study of ``soft'' budget constraints, where buyers pay an increasing interest rate for payments made above their budgets. They showed how to exactly compute the smallest competitive prices in this setting that result in an incentive compatible mechanism with an outcome in the core.
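Returning to the welfare mechanism for public budgets described in our results above, the value transformation on which it is built is easy to state in code. The sketch below (ours; a single item and uniform values are purely illustrative assumptions, and payments are omitted since the full payment rule is not reproduced here) replaces each value above the budget by its conditional expectation and allocates to the highest modified value:
\begin{verbatim}
import numpy as np

def modified_values(v, B):
    # For v ~ Uniform[0,1], E[v | v > B] = (1 + B) / 2.
    return np.where(v <= B, v, (1.0 + B) / 2.0)

rng = np.random.default_rng(0)
v = rng.random(5)               # one private value per agent
B = 0.6                         # common public budget
w = modified_values(v, B)
winner = int(np.argmax(w))      # VCG allocation of a single item
print(v.round(2), "->", w.round(2), "winner:", winner)
# All agents with v > B map to the same modified value: the mechanism
# deliberately does not distinguish among values above the budget.
\end{verbatim}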
\section{Introduction and Statement of Results} A level $p$ modular function $f(\tau)$ is a holomorphic function on the complex upper half-plane which satisfies \[ f\left(\frac{a\tau+b}{c\tau+d}\right)=f(\tau)\text{ for all }\left(\begin{smallmatrix}a & b\\ c & d\end{smallmatrix}\right)\in\Gamma_{0}(p)\] and is meromorphic at the cusps of $\Gamma_{0}(p)$. Equivalently, $f(\tau)$ is a weakly holomorphic modular form of weight $0$ on $\Gamma_{0}(p)$. Such a function will necessarily have a $q$-expansion of the form $f(\tau)=\sum_{n=n_{0}}^{\infty}a(n)q^{n}$, where $q=e^{2\pi i\tau}$. Of particular interest in the study of modular forms is the classical $j$-invariant, $j(\tau)=q^{-1}+744 + \sum_{n=1}^{\infty} c(n)q^{n}$, which is a modular function of level $1$. The coefficients $c(n)$ of the $j$-function, like the Fourier coefficients of many other modular forms, are of independent arithmetic interest; for instance, they appear as dimensions of a special graded representation of the Monster group. In 1949 Lehner showed \cite{Lehner:1,Lehner:2} that \[ c(2^{a}3^{b}5^{c}7^{d}n)\equiv0\pmod{2^{3a+8}3^{2b+3}5^{c+1}7^{d}},\] proving that the coefficients $c(n)$ are often highly divisible by small primes. Similar results have recently been proven for other modular functions in~\cite{Griffin}, and for modular forms of level 1 and small weight in~\cite{Doud:padic}, \cite{DoudJenkinsLopez}. It is natural to ask whether such congruences hold for the Fourier coefficients of modular functions of higher level, such as those studied by Ahlgren~\cite{Ahlgren:theta} in his work on Ramanujan's $\theta$-operator. Lehner's results for $j(\tau)$ are in fact more general; in~\cite{Lehner:2} he pointed out that for $p=2,3,5,7$, similar congruences hold for the coefficients of level $p$ modular functions which have integral coefficients at both cusps and have poles of order less than $p$ at the cusp at infinity. In this paper, for $p\in\{2,3,5,7\}$, we examine canonical bases for spaces of level $p$ modular functions which are holomorphic at the cusp $0$. To construct these bases, we introduce the level $p$ modular function $\psi^{(p)}(\tau)$, defined as \[ \psi^{(p)}(\tau)=\left(\frac{\eta(\tau)}{\eta(p\tau)}\right)^{\frac{24}{p-1}} \text{ where }\eta(\tau) = q^{\frac{1}{24}}\prod_{n=1}^{\infty}(1-q^{n}).\] The integer $\frac{24}{p-1}$ for $p=2,3,5,7$ will appear frequently, so we will denote it $\lambda^{(p)}$, or simply $\lambda$ where no confusion arises. The function $\psi^{(p)}(\tau)$ is a modular function of level $p$ with a simple pole at $\infty$ and a simple zero at 0. We will also use the modular function \[\phi^{(p)}(\tau) = (\psi^{(p)}(\tau))^{-1}.\] Following Ahlgren \cite{Ahlgren:theta}, and using the notation of Duke and Jenkins \cite{Duke:zeros}, for $p=2,3,5,7$ we construct a basis $\{f_{0,m}^{(p)}(\tau)\}_{m=0}^{\infty}$ for the space of level $p$ modular functions which are holomorphic at 0 as follows: \[ f_{0,0}^{(p)}(\tau)=1,\] \[f_{0,m}^{(p)}(\tau)=q^{-m}+O(1) = \psi^{(p)}(\tau)^{m}-Q(\psi^{(p)}(\tau)),\] where $Q(x)$ is a polynomial of degree $m-1$ with no constant term, chosen to eliminate all negative powers of $q$ in $\psi^{(p)}(\tau)^{m}$ except for $q^{-m}$. Since $\psi^{(p)}(\tau)$ vanishes at $0$ and the polynomial $Q$ has no constant term, we see that the functions $f_{0,m}^{(p)}$ also vanish at $0$ when $m>0$. 
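The construction just described is easy to carry out with exact integer arithmetic. The following Python sketch (ours; the truncation order and helper names are arbitrary choices) builds the $q$-expansion of $\psi^{(p)}$ from its eta-quotient definition and then the elements $f_{0,m}^{(p)}$ by eliminating the intermediate negative powers of $q$; it reproduces the expansions displayed below and the $2$-adic valuations reported in Table \ref{tab:2-Adic-Table}:
\begin{verbatim}
def mul_one_minus_qk(a, k):
    """Multiply the truncated integer series a by (1 - q^k)."""
    b = a[:]
    for i in range(k, len(a)):
        b[i] -= a[i - k]
    return b

def div_one_minus_qk(a, k):
    """Divide the truncated integer series a by (1 - q^k)."""
    b = a[:]
    for i in range(k, len(b)):
        b[i] += b[i - k]
    return b

def psi_series(p, prec):
    """s[0..prec-1] with psi^(p) = q^{-1} * sum_n s[n] q^n."""
    lam = 24 // (p - 1)
    s = [0] * prec
    s[0] = 1
    for n in range(1, prec):
        for _ in range(lam):
            s = mul_one_minus_qk(s, n)        # (1 - q^n)^lam
        for _ in range(lam):
            s = div_one_minus_qk(s, p * n)    # (1 - q^{pn})^{-lam}
    return s

def poly_mul(a, b, prec):
    """Product of two truncated series, truncated at order prec."""
    c = [0] * prec
    for i, ai in enumerate(a[:prec]):
        if ai:
            for j in range(min(len(b), prec - i)):
                c[i + j] += ai * b[j]
    return c

def basis_element(p, m, prec):
    """f_{0,m}^(p); entry [i] is the coefficient of q^(i-m)."""
    L = m + prec
    s = psi_series(p, L)
    powers = {1: s}
    for j in range(2, m + 1):
        powers[j] = poly_mul(powers[j - 1], s, L)
    shift = lambda j: [0] * (m - j) + powers[j][:L - (m - j)]
    f = shift(m)                      # psi^m = q^{-m} + ...
    for k in range(m - 1, 0, -1):     # subtract Q(psi); no constant term
        c = f[m - k]                  # coefficient of q^{-k}
        if c:
            f = [fi - c * pi for fi, pi in zip(f, shift(k))]
    return f

def v2(n):
    """2-adic valuation of a nonzero integer."""
    n, k = abs(n), 0
    while n % 2 == 0:
        n, k = n // 2, k + 1
    return k

print(basis_element(2, 1, 5))    # [1, -24, 276, -2048, 11202, -49152]
f3 = basis_element(2, 3, 10)
print([v2(f3[3 + n]) for n in (2, 4, 6)])   # valuations 13, 16, 15
\end{verbatim}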
We write\[ f_{0,m}^{(p)}=q^{-m}+\sum_{n=0}^{\infty}a_{0}^{(p)}(m,n)q^{n}\] so that for $n\geq0$, the symbol $a_{0}^{(p)}(m,n)$ denotes the coefficient of $q^{n}$ in the $m^{th}$ basis element of level $p$. Note that the function $f_{0, m}^{(p)}$ corresponds to Ahlgren's $j_m^{(p)}$. For an example of some of these functions, consider the case $p=2$: \begin{align*} f_{0,1}^{(2)}(\tau) & =\psi^{(2)}(\tau)\\ & =q^{-1}-24+276q-2048q^{2}+11202q^{3}-49152q^{4}+\ldots\\ f_{0,2}^{(2)}(\tau) & =\psi^{(2)}(\tau)^{2}+48\psi^{(2)}(\tau)\\ & =q^{-2}-24-4096q+98580q^{2}-1228800q^{3}+10745856q^{4}+\ldots\\ f_{0,3}^{(2)}(\tau) & =\psi^{(2)}(\tau)^{3}+72\psi^{(2)}(\tau)^{2}+900\psi^{(2)}(\tau)\\ & =q^{-3}-96+33606q-1843200q^{2}+43434816q^{3}-648216576q^{4}+\ldots\end{align*} The function $f_{0,m}^{(p)}$ is a level $p$ modular function that vanishes at $0$ (if $m\neq0$) and has a pole of order $m$ at $\infty$. The conditions at the cusps determine this function uniquely; if two such functions exist, their difference is a holomorphic modular function, which must be a constant. Since both functions vanish at 0, this constant must be $0$. The functions comprising these bases for $p=2,3,5,7$ have divisibility properties which bear a striking resemblance to the divisibility properties of $j(\tau)$; in many cases they are identical. As an example of some of the divisibility properties we encounter with this basis, we experimentally examine the $2$-adic valuation of the even indexed coefficients of $f_{0,m}^{(2)}(\tau)$ for $m=1,3,5,7$ in Table \ref{tab:2-Adic-Table}. As the data in the table suggest, the $2$-divisibility which $j(\tau)$ exhibits gives us a lower bound on the $2$-divisibility of the odd-indexed $p=2$ basis elements. \begin{table}[h] \noindent \begin{centering} \label{Flo:2-Adic-Float} \par\end{centering} \noindent \begin{centering} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c}{} & & \multicolumn{1}{c}{$a_{0}^{(2)}(m,2)$} & \multicolumn{1}{c}{$a_{0}^{(2)}(m,4)$} & \multicolumn{1}{c}{$a_{0}^{(2)}(m,6)$} & \multicolumn{1}{c}{$a_{0}^{(2)}(m,8)$} & \multicolumn{1}{c}{$a_{0}^{(2)}(m,10)$} & $a_{0}^{(2)}(m,12)$\tabularnewline \hline & $1$ & $11$ & $14$ & $13$ & $17$ & $12$ & $16$\tabularnewline \cline{2-8} & $3$ & $13$ & $16$ & $15$ & $19$ & $14$ & $18$\tabularnewline \cline{2-8} $m$ & $5$ & $12$ & $15$ & $14$ & $18$ & $13$ & $17$\tabularnewline \cline{2-8} & $7$ & $14$ & $17$ & $16$ & $20$ & $15$ & $19$\tabularnewline \cline{2-8} & min & $11$ & $14$ & $13$ & $17$ & $12$ & $16$\tabularnewline \hline \multicolumn{8}{|c|}{}\tabularnewline \hline $j(\tau)$ & & $11$ & $14$ & $13$ & $17$ & $12$ & $16$\tabularnewline \hline \end{tabular} \par\end{centering} \noindent \centering{}\caption{\label{tab:2-Adic-Table}$2$-adic valuation of $a_{0}^{(2)}(m,n)$ compared to corresponding coefficients in $j(\tau)$} \end{table} Note that these functions form a basis for $M_0^\infty(p)$, the space of modular forms of weight 0 and level $p$ with poles allowed only at the cusp at $\infty$. A full basis for the space $M_0^!(p)$ of weakly holomorphic modular forms of weight $0$ and level $p$ is generated by the $f_{0, m}^{(p)}(\tau)$ and the functions $(\phi^{(p)}(\tau))^n$ for integers $n\geq 1$. Recall that the concluding remarks of Lehner's second paper \cite{Lehner:2} state that the coefficients of certain level $p$ modular functions having a pole of order less than $p$ at $\infty$ have the same $p$-divisibility properties as the coefficients $c(n)$ of $j(\tau)$. 
More precisely, we have the following theorem. \begin{thm}[Lehner] \label{thm:Lehner-Main} Let $p\in\{2,3,5,7\}$ and let $f(\tau)$ be a modular function on $\Gamma_{0}(p)$ having a pole at $\infty$ of order $<p$ and $q$-series of the form \[f(\tau)=\sum_{n=n_{0}}^{\infty}a(n)q^{n},\] \[ f(-1/p\tau) = \sum_{n=m_0}^\infty b(n) q^n,\] where $a(n), b(n) \in\mathbb{Z}$. Then the coefficients $a(n)$ satisfy the following congruence properties: \[ \begin{array}{rll} a(2^{a}n)\equiv0 & \pmod{2^{3a+8}} & \text{if }p=2\\ a(3^{a}n)\equiv0 & \pmod{3^{2a+3}} & \text{if }p=3\\ a(5^{a}n)\equiv0 & \pmod{5^{a+1}} & \text{if }p=5\\ a(7^{a}n)\equiv0 & \pmod{7^{a}} & \text{if }p=7.\end{array}\] \end{thm} Note that Lehner's original statement of this theorem mistakenly states that a function on $\Gamma_{0}(p)$ inherits the $p$-divisibility property for \emph{every }prime in $\{2,3,5,7\}$, not just the prime matching the level. A necessary condition in the statement of Lehner's theorem is that the function must have an integral $q$-expansion at $0$. This condition is quite strong; in fact, neither the function $\phi^{(p)}(\tau)$ nor any of its powers satisfies it, although the functions $f_{0, m}^{(p)}(\tau)$ do. Further, Lehner's theorem assumes that the order of the pole at $\infty$ must be less than $p$. In this paper, we remove this restriction on the order of the pole to show that every function in the $f_{0,m}^{(p)}$ basis has divisibility properties similar to those in Theorem \ref{thm:Lehner-Main}. Specifically, we prove the following theorem. \begin{thm} \label{thm:Andersen-Main}Let $p\in\{2,3,5,7\}$, and let \[ f_{0,m}^{(p)}(\tau)=q^{-m}+\sum\limits _{n=0}^{\infty}a_{0}^{(p)}(m,n)q^{n}\] be an element of the basis described above, with $m=p^{\alpha}m'$ and $(m',p)=1$. Then, for $\beta > \alpha$, \[ \begin{array}{rll} a_{0}^{(2)}(2^{\alpha}m',2^{\beta}n)\equiv0 & \pmod{2^{3(\beta-\alpha)+8}} & \text{if }p=2\\ a_{0}^{(3)}(3^{\alpha}m',3^{\beta}n)\equiv0 & \pmod{3^{2(\beta-\alpha)+3}} & \text{if }p=3\\ a_{0}^{(5)}(5^{\alpha}m',5^{\beta}n)\equiv0 & \pmod{5^{(\beta-\alpha)+1}} & \text{if }p=5\\ a_{0}^{(7)}(7^{\alpha}m',7^{\beta}n)\equiv0 & \pmod{7^{(\beta-\alpha)}} & \text{if }p=7.\end{array}\] \end{thm} Note that for basis elements $f_{0,m}^{(p)}$ with $(m,p)=1$, these divisibility properties match those in Theorem \ref{thm:Lehner-Main}; in fact, Lehner's proof is easily extended to prove the congruences in these cases. For basis elements with $m=p^{\alpha}m'$ and $\alpha\ge1$, the divisibility is ``shifted.'' This shifting occurs in the $(\beta-\alpha)$ factor in the exponent of the modulus. For the coefficients $a_0^{(p)}(p^\alpha m', p^\beta n)$ with $\alpha > \beta$, computations suggest that similar congruences should hold. Additionally, it appears that powers of the function $\phi^{(p)}(\tau)$ have Fourier coefficients with slightly weaker divisibility properties, despite the fact that their Fourier coefficients at $0$ are not integral. It would be interesting to more fully understand these congruences. \section{Preliminary Lemmas and Definitions} In this section, we provide the necessary definitions and background for the proof of the main theorem. For a prime $p$ we define the level $p$ Hecke operator $U_{p}$ by\[ U_{p}f(\tau)=\frac{1}{p}\sum_{\ell=0}^{p-1}f\left(\frac{\tau+\ell}{p}\right),\] using the notation $U_{p}^{n}f=U_{p}U_{p}\cdots U_{p}f$ for repeated application of $U_{p}$.
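In terms of $q$-expansions, $U_{p}$ simply extracts the coefficients whose index is divisible by $p$ (this is made precise in the next paragraph). A minimal Python sketch (ours), applied to the expansion of $f_{0,2}^{(2)}$ displayed earlier:
\begin{verbatim}
# U_p on a truncated q-expansion stored as {exponent: coefficient}:
# keep the coefficients whose exponent is divisible by p, re-indexed.
def U(p, f):
    return {n // p: c for n, c in f.items() if n % p == 0}

# f_{0,2}^(2) = q^{-2} - 24 - 4096 q + 98580 q^2 - ... (truncated)
f = {-2: 1, 0: -24, 1: -4096, 2: 98580, 3: -1228800, 4: 10745856}
print(U(2, f))   # {-1: 1, 0: -24, 1: 98580, 2: 10745856}
\end{verbatim}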
When $f$ has the Fourier expansion $f(\tau)=\sum_{n=n_{0}}^{\infty}a(n)q^{n}$, this operator takes the form\[ U_{p}f(\tau)=\sum_{n=n_{0}}^{\infty}a(pn)q^{n},\] essentially ``pulling out'' all of the coefficients of $f$ whose index is divisible by $p$. This operator preserves modularity: if $f$ is a level $p$ modular function, then $U_{p}f$ is also a level $p$ modular function. For the primes $p=2,3,5,7$ the topological genus of $\Gamma_{0}(p)\backslash\mathcal{H}$ is zero, so the field of level $p$ modular functions is generated by a single modular function called a Hauptmodul. For the primes under consideration, one such function is $\psi^{(p)}(\tau)$. Note that the modular function $\phi^{(p)}(\tau)=\psi^{(p)}(\tau)^{-1}=q+O(q^{2})$ is also a Hauptmodul. Further, for these primes, the fundamental domain for $\Gamma_{0}(p)$ has precisely two cusps, which may be taken to be at $\infty$ and at $0$. Hence, we are most concerned with the behavior of these functions at $\infty$ and at $0$. To switch between cusps, we make the substitution $\tau\mapsto-1/p\tau$. The following lemma gives relations for $\psi^{(p)}(\tau)$ and $\phi^{(p)}(\tau)$ at $0$, and makes clear that powers of $\phi^{(p)}$ do not satisfy Lehner's integrality condition. \begin{lem} \label{lem:Psi-At-0}The functions $\psi^{(p)}(\tau)$ and $\phi^{(p)}(\tau)$ satisfy the relations \begin{align} \psi^{(p)}(-1/p\tau) & =p^{\lambda/2}\phi^{(p)}(\tau)\label{eq:Psi-At-0}\\ \phi^{(p)}(-1/p\tau) & =p^{-\lambda/2}\psi^{(p)}(\tau)\label{eq:Phi-At-0}\end{align} \end{lem} \begin{proof} The functional equation for $\eta(\tau)$ is $\eta(-1/\tau)=\sqrt{-i\tau}\eta(\tau)$. Using this, we find that \[ \psi^{(p)}\left(\frac{-1}{p\tau}\right) = \left(\frac{\eta(-1/p\tau)} {\eta(-1/\tau)}\right)^{\lambda} = \left(\frac{\sqrt{-ip\tau} \eta(p\tau)}{\sqrt{-i\tau} \eta(\tau)}\right)^{\lambda} = (\sqrt{p})^{\lambda} \left(\frac{\eta(p\tau)}{\eta(\tau)}\right)^{\lambda} = p^{\lambda/2}\phi^{(p)}(\tau).\] The second statement follows after replacing $\tau$ by $-1/p\tau$ in the first statement. \end{proof} We next state a well-known lemma which gives a formula for determining the behavior of a modular function at $0$ after $U_{p}$ has been applied. A proof can be found in \cite{Apostol:modular}. \begin{lem} \label{lem:Main-Up-Formula}Let $p$ be prime and let $f(\tau)$ be a level $p$ modular function. Then\begin{equation} p(U_{p}f)(-1/p\tau)=p(U_{p}f)(p\tau)+f(-1/p^{2}\tau)-f(\tau).\label{eq:Main-Up-Formula}\end{equation} \end{lem} Lehner's original papers included the following lemma and its proof, which gives two important equations. The first gives a formula for $U_{p}\phi^{(p)}$ as a polynomial with integral coefficients in $\phi^{(p)}$; the second gives an algebraic relation which is satisfied by $\phi^{(p)}(\tau/p)$. \begin{lem} \label{lem:Modular-Eq-Phi}Let $p\in\{2,3,5,7\}$. Then there exist integers $b_{j}^{(p)}$ such that \[ \begin{array}{cc} \text{(a)} & U_{p}\phi^{(p)}(\tau)=p\sum\limits _{j=1}^{p}b_{j}^{(p)}\phi^{(p)}(\tau)^{j}.\end{array}\] Further, let $h^{(p)}(\tau)=p^{\lambda/2}\phi^{(p)}(\tau/p).$ Then\[ \begin{array}{cc} \text{(b)} & \big(h^{(p)}(\tau)\big)^{p}+\sum\limits _{j=1}^{p}(-1)^{j}g_{j}(\tau)\big(h^{(p)}(\tau)\big)^{p-j}=0\end{array}\] where $g_{j}(\tau)=(-1)^{j+1}p^{\lambda/2+2}\sum\limits _{\ell=j}^{p}b_{\ell}\phi^{(p)}(\tau)^{\ell-j+1}$.\end{lem} \begin{proof} (a) Since $\phi$ vanishes at $\infty$, $U_{p}\phi$ also vanishes at $\infty$; we will now consider its behavior at $0$.
Using (\ref{eq:Main-Up-Formula}) and replacing $\tau$ by $p\tau$ in (\ref{eq:Phi-At-0}) we obtain\begin{align*} U_{p}\phi(-1/p\tau) & =U_{p}\phi(p\tau)+p^{-1}\phi(-1/p^{2}\tau)-p^{-1}\phi(\tau)\\ & =U_{p}\phi(p\tau)+p^{-1-\lambda/2}\psi(p\tau)-p^{-1}\phi(\tau)\\ & =O(q^{p})+p^{-\lambda/2-1}q^{-p}+O(1)-p^{-1}q+O(q^{2})\\ p^{\lambda/2+1}U_{p}\phi(-1/p\tau) & =q^{-p}+O(1)\end{align*} The right side of this equation is a level $p$ modular function with integer coefficients, so we may write it as a polynomial in $\psi(\tau)$ with integer coefficients. The polynomial will not have a constant term since the left side vanishes at $0$. Therefore,\[ p^{\lambda/2+1}U_{p}\phi(-1/p\tau)=\sum_{j=1}^{p}c_{j}\psi(\tau)^{j}.\] Now, replacing $\tau$ by $-1/p\tau$, we find\[ p^{\lambda/2+1}U_{p}\phi(\tau)=\sum_{j=1}^{p}c_{j}p^{\lambda j/2}\phi(\tau)^{j}.\] After cancelling the $p^{\lambda/2+1}$, we find that $U_{p}\phi(\tau)=\sum\limits _{j=1}^{p}c_{j}'\phi(\tau)^{j}$ and we compute the coefficients $c_{j}'$ (the authors used \noun{Mathematica}). The computation is finite, and we find that each coefficient $c_{j}'$ has a factor of $p$, so the coefficients $b_{j}^{(p)}$ are integral. A complete table of values of the $b_{j}^{(p)}$ is found in Table \ref{tab:b_j-Table}. \begin{table}[h] \noindent \begin{centering} \label{Flo:b_j-Float} \par\end{centering} \noindent \begin{centering} \begin{tabular}{|c|c|c|c|c|c|} \cline{3-6} \multicolumn{1}{c}{} & & \multicolumn{4}{c|}{$p$}\tabularnewline \cline{3-6} \multicolumn{1}{c}{} & & 2 & 3 & 5 & 7\tabularnewline \hline & 1 & $3\cdot2^{2}$ & $10\cdot3^{1}$ & $63\cdot5^{0}$ & $82\cdot7^{0}$\tabularnewline \cline{2-6} & 2 & $2^{10}$ & $4\cdot3^{6}$ & $52\cdot5^{3}$ & $176\cdot7^{2}$\tabularnewline \cline{2-6} & 3 & & $3^{10}$ & $63\cdot5^{5}$ & $845\cdot7^{3}$\tabularnewline \cline{2-6} $j$ & 4 & & & $6\cdot5^{8}$ & $272\cdot7^{5}$\tabularnewline \cline{2-6} & 5 & & & $5^{10}$ & $46\cdot7^{7}$\tabularnewline \cline{2-6} & 6 & & & & $4\cdot7^{9}$\tabularnewline \cline{2-6} & 7 & & & & $7^{10}$\tabularnewline \hline \end{tabular} \par\end{centering} \noindent \centering{}\caption{\label{tab:b_j-Table}Values of $b_{j}^{(p)}$} \end{table} (b) We again apply (\ref{eq:Main-Up-Formula}) to $\phi(\tau)$, this time using what we know from $(a)$.\[ pU_{p}\phi(-1/p\tau)=pU_{p}\phi(p\tau)+\phi(-1/p^{2}\tau)-\phi(\tau)\] \[ p^{2}\sum_{j=1}^{p}b_{j}\phi(-1/p\tau)^{j}=p^{2}\sum_{j=1}^{p}b_{j}\phi(p\tau)^{j}+\phi(-1/p^{2}\tau)-\phi(\tau).\] We now use Lemma \ref{lem:Psi-At-0} with the knowledge that $\psi(\tau)=\phi(\tau)^{-1}$ to obtain \[p^{2}\sum_{j=1}^{p}b_{j}p^{-\lambda j/2}\phi(\tau)^{-j}-p^{2}\sum_{j=1}^{p}b_{j}\phi(p\tau)^{j}+\phi(\tau)-p^{-\lambda/2}\phi(p\tau)^{-1}=0.\] After replacing $\tau$ by $\tau/p$ and multiplying by $p^{\lambda/2}$, this becomes\begin{equation} p^{\lambda/2+2}\sum_{j=1}^{p}b_{j}\big(h(\tau)^{-j}-\phi(\tau)^{j}\big)+h(\tau)-\phi(\tau)^{-1}=0.\label{eq:Intermediate-Modular-Eq-Phi}\end{equation} We now divide by $h^{-1}-\phi$.
Note two facts: \[ h^{-j}-\phi^{j}=(h^{-1}-\phi)\sum_{\ell=0}^{j-1}h^{-\ell}\phi^{j-\ell-1}\] \[ \frac{h-\phi^{-1}}{h^{-1}-\phi}=\frac{h(h\phi-1)}{\phi(1-h\phi)}=-\frac{h}{\phi}.\] So (\ref{eq:Intermediate-Modular-Eq-Phi}) becomes\[ p^{\lambda/2+2}\sum_{j=1}^{p}b_{j}\sum_{\ell=0}^{j-1}h^{-\ell}\phi^{j-\ell-1}-\phi^{-1}h=0\] which, after multiplying by $\phi h^{p-1}$, becomes\[ p^{\lambda/2+2}\sum_{j=1}^{p}b_{j}\sum_{\ell=0}^{j-1}h^{p-\ell-1}\phi^{j-\ell}-h^{p}=0.\] We now change the order of summation and rearrange terms to obtain the desired formula:\[ h(\tau)^{p}=\sum_{j=1}^{p}\big(p^{\lambda/2+2}\sum_{\ell=j}^{p}b_{\ell}\phi(\tau)^{\ell-j+1}\big)h(\tau)^{p-j}.\] \end{proof} The next lemma states that applying $U_{p}$ to a certain type of polynomial in $\phi^{(p)}$ yields a similar polynomial which has picked up a power of $p$. The details of this lemma are found in both \cite{Lehner:1} and \cite{Lehner:2}, scattered throughout the proofs of the main theorems. For our purposes, it will be more useful in the following form. \begin{lem} \label{lem:Phi-Polynomials}Let $p\in\{2,3,5,7\}$ and let $R^{(p)}$ denote the set of polynomials in $\phi^{(p)}$ of the form\[ d_{1}\phi^{(p)}(\tau)+\sum_{n=2}^{N}d_{n}p^{\gamma}\phi^{(p)}(\tau)^{n}\] \[ \begin{array}{ll} \text{where }\gamma= & \begin{cases} 8(n-1) & \text{if }p=2\\ 4(n-1) & \text{if }p=3\\ n & \text{if }p=5\\ n & \text{if }p=7.\end{cases}\end{array}\] Then $U_{p}$ maps $R^{(p)}$ to $p^{\delta}R^{(p)}$ where $\delta=3,2,1,1$ for $p=2,3,5,7$, respectively. That is, applying $U_{p}$ to a polynomial of the above form yields a polynomial of the same form with an extra factor of $p^{\delta}$.\end{lem} \begin{proof} Consider the function\[ d_{1}U_{p}\phi(\tau)+\sum_{n=2}^{N}d_{n}p^{\gamma}U_{p}\phi(\tau)^{n}.\] For the first term, Lemma \ref{lem:Modular-Eq-Phi}(a) shows that $U_{p}\phi(\tau)\in p^{\delta}R^{(p)}$ since, by inspection, the $b_{j}^{(p)}$ integers are divisible by sufficiently high powers of $p$. For the remaining terms, we will prove\begin{equation} p^{\gamma}U_{p}\phi^{n}=p^{\delta}r\label{eq:Up-Phi^t}\end{equation} where $r\in R^{(p)}$. The result will immediately follow. By the definition of $U_{p}$ we have\begin{equation} U_{p}\phi^{n}=p^{-1}\sum_{\ell=0}^{p-1}\phi\left(\frac{\tau+\ell}{p}\right)^{n}=p^{-1-\lambda n/2}\sum_{\ell=0}^{p-1}h_{\ell}(\tau)^{n}\label{eq:Up-Def-W-Sum}\end{equation} where $h_{\ell}(\tau)=p^{\lambda/2}\phi\left(\frac{\tau+\ell}{p}\right)$ is related to $h$ from Lemma \ref{lem:Modular-Eq-Phi}(b). Let $S_{n}$ be the sum of the $n^{th}$ powers of the $h_{\ell}$ so that \[ S_{n}=\sum_{\ell=0}^{p-1}h_{\ell}^{n}.\] Define the polynomial $F(x)=\sum_{j=0}^{p}(-1)^{j}g_{j}(\tau)x^{p-j}$ where $g_{0}(\tau)=1$ and the $g_{j}(\tau)$ are as in Lemma \ref{lem:Modular-Eq-Phi}. In the same lemma, if we replace $\tau$ with $\tau+\ell$, the $g_{j}(\tau)$ are unaffected since $\phi(\tau+1)=\phi(\tau)$. Therefore, that lemma tells us that the $p$ roots of the polynomial $F(x)$ are precisely the $h_{\ell}$. Using Newton's formula for the $n^{th}$ power sum of the roots of a polynomial, we obtain\begin{equation} S_{n}=\sum_{\ell=0}^{p-1}h_{\ell}^{n}=\sum_{j=1}^{n}(-1)^{j+1}g_{j}S_{n-j}\label{eq:Newtons-Formula}\end{equation} where $g_{j}=0$ for $j>p$ and $S_{0}=n$. We now proceed case-by-case. The $p=2$ case illustrates the method, so we will only include the intermediate steps in the $p=3,5,7$ cases. \begin{caseenv} \item $p=2$.
Then, using (\ref{eq:Up-Def-W-Sum}), equation (\ref{eq:Up-Phi^t}) is equivalent to\[ 2^{8(n-1)}\big(2^{-1-12n}S_{n}\big)=2^{3}r,\text{ or}\] \begin{equation} S_{n}=2^{4n+12}r.\label{eq:St-Powers-of-2}\end{equation} We now use (\ref{eq:Newtons-Formula}) to calculate $S_{1}$ and $S_{2}$:\[ S_{1}=g_{1}\] \[ S_{2}=g_{1}S_{1}-2g_{2}=g_{1}^{2}-2g_{2}.\] From Lemma \ref{lem:Modular-Eq-Phi} we can compute the values of the $g_{j}$. Using the $b_{j}$ values from the table in that lemma, we have\[ g_{1}=2^{14}(b_{1}\phi_{2}+b_{2}\phi_{2}^{2})=2^{16}(3\phi_{2}+2^{8}\phi_{2}^{2})\] \[ g_{2}=-2^{14}b_{2}\phi_{2}=-2^{24}\phi_{2}.\] We can now see that \[ S_{1}=g_{1}=2^{16}(3\phi_{2}+2^{8}\phi_{2}^{2})\] \[ S_{2}=2^{32}(3\phi_{2}+2^{8}\phi_{2}^{2})^{2}+2^{25}\phi_{2}=2^{20}(2^{5}\phi_{2}+2^{12}3^{2}\phi_{2}^{2}+2^{21}3\phi_{2}^{3}+2^{28}\phi_{2}^{4}).\] Thus (\ref{eq:St-Powers-of-2}) is satisfied for $n=1,2$. We proceed by induction. Assume (\ref{eq:St-Powers-of-2}) is satisfied for all integers $<n$. We show that it is satisfied for $n$. For ease of computation, we introduce the set \[ R^{*}=2^{8}R^{(2)}=\left\{ \sum_{i=1}^{m}d_{i}2^{8i}\phi_{2}^{i}\big|d_{i}\in\mathbb{Z},m\in\mathbb{Z}^{+}\right\} \] which, the reader will notice, is a ring. From (\ref{eq:Newtons-Formula}) we obtain\[ S_{n}=g_{1}S_{n-1}-g_{2}S_{n-2}=2^{8}r_{1}^{*}\cdot2^{4n}r_{2}^{*}+2^{16}r_{3}^{*}\cdot2^{4(n-1)}r_{4}^{*}=2^{4n+8}r_{5}^{*}=2^{4n+16}r\] where $r_{i}^{*}\in R^{*}$ and $r\in R^{(2)}$. \item $p=3$. We want to show \begin{equation} S_{n}=3^{2n+7}r\label{eq:St-Powers-of-3}\end{equation} where $r\in R^{(3)}$. We compute the $g_{j}$ and $S_{n}$ as follows, using the $b_{j}$ from the table:\[ \begin{array}{ccc} g_{1}=3^{9}(3^{9}\phi_{3}^{3}+3^{5}4\phi_{3}^{2}+10\phi_{3}), & g_{2}=3^{14}(-3^{4}\phi_{3}^{2}-4\phi_{3}), & g_{3}=3^{18}\phi_{3},\end{array}\] \[ \begin{array}{ccc} S_{1}=g_{1}, & S_{2}=g_{1}^{2}-2g_{2}, & S_{3}=g_{1}^{3}-3g_{1}g_{2}+3g_{3}.\end{array}\] From this, we obtain\[ S_{1}=3^{9}(3^{9}\phi_{3}^{3}+3^{5}4\phi_{3}^{2}+10\phi_{3})\] \[ S_{2}=3^{14}(8\phi_{3}+3^{5}34\phi_{3}^{2}+3^{9}80\phi_{3}^{3}+3^{13}68\phi_{3}^{4}+3^{18}8\phi_{3}^{5}+3^{25}\phi_{3}^{6})\] \[ S_{3}=3^{19}(\phi_{3}+3^{5}40\phi_{3}^{2}+3^{8}1174\phi_{3}^{3}+3^{15}136\phi_{3}^{4}+3^{18}581\phi_{3}^{5}+3^{25}16\phi_{3}^{6}+3^{27}58\phi_{3}^{7}+3^{32}4\phi_{3}^{8}+3^{35}\phi_{3}^{9})\] which proves (\ref{eq:St-Powers-of-3}) for $n=1,2,3$. For the inductive step, let $R^{*}$ be the ring $3^{4}R^{(3)}$ so that\begin{align*} S_{n} & =g_{1}S_{n-1}-g_{2}S_{n-2}+g_{3}S_{n-3}\\ & =3^{5}r_{1}^{*}3^{2n+1}r_{2}^{*}+3^{10}r_{3}^{*}3^{2n-1}r_{4}^{*}+3^{14}r_{5}^{*}3^{2n-3}r_{6}^{*}\\ & =3^{2n+6}r_{7}^{*}\\ & =3^{2n+10}r\end{align*} where $r_{i}^{*}\in R^{*}$ and $r\in R^{(3)}$. \item $p=5$. We want\begin{equation} S_{n}=5^{2n+2}r\label{eq:St-Powers-of-5}\end{equation} where $r\in R^{(5)}$. Computing the $S_{n}$ we find\[ \begin{array}{ccc} S_{1}=5^{5}r_{1} & S_{2}=5^{8}r_{2} & S_{3}=5^{10}r_{3}\end{array}\] \[ \begin{array}{cc} S_{4}=5^{13}r_{4} & S_{5}=5^{16}r_{5}\end{array}\] for some $r_{1},\ldots,r_{5}\in R^{(5)}$. This proves (\ref{eq:St-Powers-of-5}) for $n=1,\ldots,5$. For the inductive step, let $R^{*}$ be the ring $5R^{(5)}$ so that\begin{align*} S_{n} & =g_{1}S_{n-1}-g_{2}S_{n-2}+g_{3}S_{n-3}-g_{4}S_{n-4}+g_{5}S_{n-5}\\ & =5^{4}r_{1}^{*}5^{2n-1}r_{2}^{*}-\ldots+5^{14}r_{9}^{*}5^{2n-9}r_{10}^{*}\\ & =5^{2n+3}r_{11}^{*}\\ & =5^{2n+4}r\end{align*} where $r_{i}^{*}\in R^{*}$ and $r\in R^{(5)}$ . \item $p=7$.
We want\begin{equation} S_{n}=7^{n+2}r\label{eq:St-Powers-of-7}\end{equation} where $r\in R^{(7)}$. Computing the $S_{n}$ we find\[ \begin{array}{cccc} S_{1}=7^{4}r_{1} & S_{2}=7^{6}r_{2} & S_{3}=7^{7}r_{3} & S_{4}=7^{9}r_{4}\end{array}\] \[ \begin{array}{ccc} S_{5}=7^{11}r_{5} & S_{6}=7^{13}r_{6} & S_{7}=7^{15}r_{7}\end{array}\] for some $r_{1},\ldots,r_{7}\in R^{(7)}$. This proves (\ref{eq:St-Powers-of-7}) for $n=1,\ldots,7$. For the inductive step, let $R^{*}$ be the ring $7R^{(7)}$ so that\begin{align*} S_{n} & =\sum\limits _{i=1}^{7}(-1)^{i+1}g_{i}S_{n-i}\\ & =7^{3}r_{1}^{*}7^{n}r_{2}^{*}-\ldots+7^{13}r_{13}^{*}7^{n-6}r_{14}^{*}\\ & =7^{n+3}r_{15}^{*}\\ & =7^{n+4}r\end{align*} where $r_{i}^{*}\in R^{*}$ and $r\in R^{(7)}$ . \end{caseenv} \end{proof} \section{Proof of the Theorem} To remind the reader of the main result of the paper, we include it here. \begin{thm*} Let $p\in\{2,3,5,7\}$, and let $f_{0,m}^{(p)}(\tau)=q^{-m}+\sum a_{0}^{(p)}(m,n)q^{n}$ be an element of the basis described above, with $m=p^{\alpha}m'$ and $(m',p)=1$. Then, for $\beta > \alpha$, \[ \begin{array}{rll} a_{0}^{(2)}(2^{\alpha}m',2^{\beta}n)\equiv0 & \pmod{2^{3(\beta-\alpha)+8}} & \text{if }p=2\\ a_{0}^{(3)}(3^{\alpha}m',3^{\beta}n)\equiv0 & \pmod{3^{2(\beta-\alpha)+3}} & \text{if }p=3\\ a_{0}^{(5)}(5^{\alpha}m',5^{\beta}n)\equiv0 & \pmod{5^{(\beta-\alpha)+1}} & \text{if }p=5\\ a_{0}^{(7)}(7^{\alpha}m',7^{\beta}n)\equiv0 & \pmod{7^{(\beta-\alpha)}} & \text{if }p=7.\end{array}\] \end{thm*} The proof is in three cases. The first illustrates the method for the simplest basis elements, namely those with $(m,p)=1$. The second demonstrates the ``shifting'' property at its first occurrence, $f_{0,p}^{(p)}$. The third is the general case; it builds inductively upon the methods of the first two cases. \subsection{Case 1: $(m,p)=1$} \begin{proof} This proof is almost identical to Lehner's proof of Theorem 3 in \cite{Lehner:2}; however, it applies not only to functions which have poles of order bounded by $p$, but to all basis elements with $(m,p)=1$. For ease of notation, let $f(\tau)=f_{0,m}^{(p)}(\tau)$. We will demonstrate the method with $m=1$, then generalize it to all $m$ relatively prime to $p$. First, we will write $U_{p}f(\tau)$ as a polynomial in $\phi(\tau)$ with integral coefficients, all of which are divisible by the desired power of $p$. Since $U_{p}$ isolates the coefficients whose index is divisible by $p$, we will have proven the theorem for $\beta=1$. We will then apply $U_{p}$ repeatedly to the polynomial, showing that the result is always another polynomial in $\phi$ with integral coefficients, all of which are divisible by the desired higher power of $p$. Consider the level $p$ modular function $g(\tau)=pU_{p}f(\tau)+p^{\lambda/2}\phi(\tau)$. Notice that $g(\tau)$ is holomorphic at $\infty$ since both $U_{p}f(\tau)$ and $\phi(\tau)$ are holomorphic there. The $q$-expansion at $0$ for $g(\tau)$ is given by\[ g(-1/p\tau)=p(U_{p}f)(-1/p\tau)+p^{\lambda/2}\phi(-1/p\tau)\] which, by Lemmas \ref{lem:Psi-At-0} and \ref{lem:Main-Up-Formula} becomes\[ g(-1/p\tau)=p(U_{p}f)(p\tau)+f(-1/p^{2}\tau)-f(\tau)+\psi(\tau).\] When we notice that $f(\tau)=\psi(\tau)$ in this $m=1$ case, we obtain\begin{align*} g(-1/p\tau) & =p(U_{p}f)(p\tau)+\psi(-1/p^{2}\tau)-\psi(\tau)+\psi(\tau)\\ & =p(U_{p}f)(p\tau)+p^{\lambda/2}\phi(p\tau),\end{align*} which is holomorphic at $\infty$. Hence, $g(\tau)$ is a holomorphic modular function on $\Gamma_{0}(p)$, so it must be constant.
\section{Proof of the Theorem} To remind the reader of the main result of the paper, we include it here. \begin{thm*} Let $p\in\{2,3,5,7\}$, and let $f_{0,m}^{(p)}(\tau)=q^{-m}+\sum a_{0}^{(p)}(m,n)q^{n}$ be an element of the basis described above, with $m=p^{\alpha}m'$ and $(m',p)=1$. Then, for $\beta > \alpha$, \[ \begin{array}{rll} a_{0}^{(2)}(2^{\alpha}m',2^{\beta}n)\equiv0 & \pmod{2^{3(\beta-\alpha)+8}} & \text{if }p=2\\ a_{0}^{(3)}(3^{\alpha}m',3^{\beta}n)\equiv0 & \pmod{3^{2(\beta-\alpha)+3}} & \text{if }p=3\\ a_{0}^{(5)}(5^{\alpha}m',5^{\beta}n)\equiv0 & \pmod{5^{(\beta-\alpha)+1}} & \text{if }p=5\\ a_{0}^{(7)}(7^{\alpha}m',7^{\beta}n)\equiv0 & \pmod{7^{(\beta-\alpha)}} & \text{if }p=7.\end{array}\] \end{thm*} The proof is in three cases. The first illustrates the method for the simplest basis elements, namely those with $(m,p)=1$. The second demonstrates the ``shifting'' property at its first occurrence, $f_{0,p}^{(p)}$. The third is the general case; it builds inductively upon the methods of the first two cases. \subsection{Case 1: $(m,p)=1$} \begin{proof} This proof is almost identical to Lehner's proof of Theorem 3 in \cite{Lehner:2}; however, it applies not only to functions which have poles of order bounded by $p$, but to all basis elements with $(m,p)=1$. For ease of notation, let $f(\tau)=f_{0,m}^{(p)}(\tau)$. We will demonstrate the method with $m=1$, then generalize it to all $m$ relatively prime to $p$. First, we will write $U_{p}f(\tau)$ as a polynomial in $\phi(\tau)$ with integral coefficients, all of which are divisible by the desired power of $p$. Since $U_{p}$ isolates the coefficients whose index is divisible by $p$, we will have proven the theorem for $\beta=1$. We will then apply $U_{p}$ repeatedly to the polynomial, showing that the result is always another polynomial in $\phi$ with integral coefficients, all of which are divisible by the desired higher power of $p$. Consider the level $p$ modular function $g(\tau)=pU_{p}f(\tau)+p^{\lambda/2}\phi(\tau)$. Notice that $g(\tau)$ is holomorphic at $\infty$ since both $U_{p}f(\tau)$ and $\phi(\tau)$ are holomorphic there. The $q$-expansion at $0$ for $g(\tau)$ is given by\[ g(-1/p\tau)=p(U_{p}f)(-1/p\tau)+p^{\lambda/2}\phi(-1/p\tau)\] which, by Lemmas \ref{lem:Psi-At-0} and \ref{lem:Main-Up-Formula}, becomes\[ g(-1/p\tau)=p(U_{p}f)(p\tau)+f(-1/p^{2}\tau)-f(\tau)+\psi(\tau).\] When we notice that $f(\tau)=\psi(\tau)$ in this $m=1$ case, we obtain\begin{align*} g(-1/p\tau) & =p(U_{p}f)(p\tau)+\psi(-1/p^{2}\tau)-\psi(\tau)+\psi(\tau)\\ & =p(U_{p}f)(p\tau)+p^{\lambda/2}\phi(p\tau),\end{align*} which is holomorphic at $\infty$. Hence, $g(\tau)$ is a holomorphic modular function on $\Gamma_{0}(p)$, so it must be constant. Therefore, \begin{equation} U_{p}f(\tau)=c_{0}-p^{\lambda/2-1}\phi(\tau)\label{eq:Up-f1-Phi}\end{equation} for some constant $c_{0}$. The proof is complete for $\beta=1$. Note: the prime $13$, having genus zero, would work in this construction; however, in that case $\lambda=\frac{24}{13-1}=2$, so $13^{\lambda/2-1}=1$, and we gain no new information. We now iterate the above process to prove the theorem for $\beta>1$. Notice that \[ U_{p}(U_{p}f(\tau))=c_{0}-p^{\lambda/2-1}U_{p}\phi(\tau).\] We know from Lemma \ref{lem:Modular-Eq-Phi} that $U_{p}\phi$ is a polynomial in $\phi$; in fact, by inspection of the $b_{j}^{(p)}$ values we see that we may write\begin{align*} U_{2}\phi^{(2)}(\tau) & =2^{3}\big(d_{1}^{(2)}\phi^{(2)}(\tau)+\sum_{n=2}^{2}d_{n}^{(2)}2^{8(n-1)}\phi^{(2)}(\tau)^{n}\big)\\ U_{3}\phi^{(3)}(\tau) & =3^{2}\big(d_{1}^{(3)}\phi^{(3)}(\tau)+\sum_{n=2}^{3}d_{n}^{(3)}3^{4(n-1)}\phi^{(3)}(\tau)^{n}\big)\\ U_{5}\phi^{(5)}(\tau) & =5\big(d_{1}^{(5)}\phi^{(5)}(\tau)+\sum_{n=2}^{5}d_{n}^{(5)}5^{n}\phi^{(5)}(\tau)^{n}\big)\\ U_{7}\phi^{(7)}(\tau) & =7\big(d_{1}^{(7)}\phi^{(7)}(\tau)+\sum_{n=2}^{7}d_{n}^{(7)}7^{n}\phi^{(7)}(\tau)^{n}\big)\end{align*} for some integers $d_{n}^{(p)}$. This shows that the second $U_{p}$ iteration is divisible by the correct power of $p$. Further, it gives us a polynomial of a suitable form to iterate the process using Lemma \ref{lem:Phi-Polynomials}. In each of the polynomials above, notice that $U_{p}\phi(\tau)=p^{\delta}r$ for some $r\in R^{(p)}$. Using Lemma \ref{lem:Phi-Polynomials}, we find that\[ U_{p}(U_{p}\phi)(\tau)=p^{2\delta}r'\] for some $r'\in R^{(p)}$, and further\[ U_{p}^{\beta}\phi(\tau)=p^{\beta\delta}r_{\beta}\] for some $r_{\beta}\in R^{(p)}$. This completes the proof for $m=1$. Now, if $(m,p)=1$, then $U_{p}f(\tau)$ is holomorphic at $\infty$, just as it was with $m=1$. Moving to the cusp at $0$, we find that $(U_{p}f)(-1/p\tau)$ can be written as a polynomial in $\psi(\tau)$ which appears as a polynomial in $\phi(\tau)$ when we return to $\infty$. Similar to (\ref{eq:Up-f1-Phi}), we obtain the equality\[ U_{p}f(\tau)=c_{0}+\sum_{i=1}^{M}p^{\lambda i/2-1}c_{i}\phi(\tau)^{i}\] for some $c_{i}\in\mathbb{Z}$ and $M\in\mathbb{Z}^{+}$. The only difference between this equation and (\ref{eq:Up-f1-Phi}) is that in this more general case, we find that $U_{p}f$ is a higher degree polynomial in $\phi$. This formula can easily be iterated as before to obtain the desired result. \end{proof} \subsection{Case 2: $m=p$} \begin{proof} Again, for ease of notation, denote $f_{0,p}^{(p)}(\tau)$ by $f(\tau)$. For the $m=p$ case, we will proceed as before; however, we will find that $U_{p}f(\tau)$ has poles at both $\infty$ and $0$, and that $U_{p}f(\tau)$ does not possess any interesting divisibility properties, but $U_{p}^{2}f(\tau)$ does. This property will manifest itself as the ``shifting'' previously mentioned. Notice first that $U_{p}f(\tau)=q^{-1}+O(1)$ has a simple pole at $\infty$. Therefore, we shall deal primarily with the function $U_{p}f(\tau)-\psi(\tau)$, which is holomorphic at $\infty$. We can use Lemmas \ref{lem:Psi-At-0} and \ref{lem:Main-Up-Formula} to view this function at $0$: \begin{align*} p(U_{p}f)(-1/p\tau)-p\psi(-1/p\tau) & = p(U_{p}f)(p\tau)+f(-1/p^{2}\tau)-f(\tau)-p^{\lambda/2+1}\phi(\tau) \\ & =pq^{-p}+O(1)+O(1)-q^{-p}+O(1)+O(q)\\ & =c_{0}+\sum_{i=1}^{p}c_{i}\psi(\tau)^{i} \end{align*} for some integers $c_{i}$.
Replacing $\tau$ by $-1/p\tau$, we obtain \begin{equation} (U_{p}f)(\tau)=\frac{c_{0}}{p}+\psi(\tau)+\sum_{i=1}^{p}c_{i}p^{\lambda i/2-1}\phi(\tau)^{i}.\label{eq:Case-2-Upf} \end{equation} The $\psi(\tau)$ term in the equation makes any attempt at $p$-divisibility fail; for example, computation shows that the $7^{th}$ coefficient of $\psi^{(2)}(\tau)$ is odd. However, $\psi(\tau)=f_{0,1}^{(p)}(\tau)$ satisfies Lehner's divisibility properties after one application of $U_{p}$, as shown in Case 1, so $U_{p}^{2}f$ inherits its $p$-divisibility from $\psi(\tau)$. That is, the function\[ U_{p}^{2}f(\tau)=\frac{c_{0}}{p}+U_{p}\psi(\tau)+\sum_{i=1}^{p}c_{i}p^{\lambda i/2-1}U_{p}\phi(\tau)^{i}\] has the same $p$-divisibility as $f_{0,1}^{(p)}$; hence, the shift. \end{proof} \subsection{Case 3: $m=p^{\alpha}m'$} \begin{proof} We prove this case using induction on $\alpha$. Case 1 showed that the theorem is true for all $m'$ relatively prime to $p$, so the $\alpha=0$ base case is complete. Assume Theorem \ref{thm:Andersen-Main} holds for all $m$ of the form $m=p^{\ell}m'$ with $\ell<\alpha$. We will show it holds for $m=p^{\alpha}m'$. To simplify notation, let $f_{\alpha}(\tau)=f_{0,p^{\alpha}m'}^{(p)}(\tau)$. Since $f_{\alpha}(\tau)=q^{-p^{\alpha}m'}+O(1)$, we find that $U_{p}f_{\alpha}(\tau)=q^{-p^{\alpha-1}m'}+O(1)$ has a pole of order $p^{\alpha-1}m'$ at $\infty$. So we focus our attention on $U_{p}f_{\alpha}(\tau)-f_{\alpha-1}(\tau)$, which is holomorphic at $\infty$. Using (\ref{eq:Main-Up-Formula}) we examine this function at $0$:\begin{align*} p(U_{p}f_{\alpha})\left(\frac{-1}{p\tau}\right) - pf_{\alpha-1}\left(\frac{-1}{p\tau}\right) & =p(U_{p}f_{\alpha})(p\tau)+f_{\alpha}\left(\frac{-1}{p^{2}\tau}\right) - f_{\alpha}(\tau)-pf_{\alpha-1}\left(\frac{-1}{p\tau}\right)\\ & =pq^{-p^{\alpha}m'}-q^{-p^{\alpha}m'}+O(1)\\ & =(p-1)q^{-p^{\alpha}m'}+O(1).\end{align*} As before, we write this function as a polynomial in $\psi(\tau)$ with integral coefficients $c_{i}$:\[ p(U_{p}f_{\alpha})(-1/p\tau)-pf_{\alpha-1}(-1/p\tau)=c_{0}+\sum_{i=1}^{p^{\alpha}m'}c_{i}\psi(\tau)^{i},\] which, after switching back to the $q$-expansion at $\infty$, becomes\begin{equation} U_{p}f_{\alpha}(\tau)=\frac{c_{0}}{p}+f_{\alpha-1}(\tau)+\frac{1}{p}\sum_{i=1}^{p^{\alpha}m'}c_{i}p^{\lambda i/2}\phi(\tau)^{i}.\label{eq:Case-3-Upf}\end{equation} Notice that (\ref{eq:Case-3-Upf}) looks very similar to (\ref{eq:Case-2-Upf}), with $\psi(\tau)$ replaced by $f_{\alpha-1}(\tau)$, so $U_{p}f_{\alpha}(\tau)$ inherits whatever divisibility properties $f_{\alpha-1}(\tau)$ has. Our inductive hypothesis states that $f_{\alpha-1}(\tau)$ exhibits Lehner's divisibility properties only after $U_{p}$ is applied $\alpha-1$ times. Therefore, applying $U_{p}$ to (\ref{eq:Case-3-Upf}) $\alpha-1$ times, we obtain\[ U_{p}^{\alpha}f_{\alpha}(\tau)=\frac{c_{0}}{p}+U_{p}^{\alpha-1}f_{\alpha-1}(\tau) +\frac{1}{p}\sum_{i=1}^{p^{\alpha}m'}c_{i}p^{\lambda i/2}U_{p}^{\alpha-1}\phi(\tau)^{i},\] showing that $U_{p}^{\alpha}f_{\alpha}(\tau)$ exhibits Lehner's divisibility properties. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} \label{sec:1} One important way in which non-centrosymmetric superconductors differ from conventional superconductors is in the response to magnetic fields. In particular, the removal of inversion symmetry leads to new terms in the free energy that give rise to magneto-electric effects. These effects are closely related to the appearance of a magnetic-field-generated helical phase in which the superconducting order develops a periodic spatial variation. Here we review this physics, beginning with a detailed examination of the phenomenological theory, followed by an overview of microscopic treatments of these problems, including the interplay of the helical phase and Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phases \cite{ff,lo}. \section{Phenomenology of Single Component Superconductors} \label{sec:2} This section reviews the phenomenology relating Lifshitz invariants in the free energy to magnetoelectric effects, vortex structures, and the helical phase. \subsection{Ginzburg Landau free energy} \label{subsec:2} A key new feature of non-centrosymmetric superconductors is the existence of Lifshitz invariants in the Ginzburg Landau (GL) free energy \cite{MS94,Edel96,Y02,agt03,KAS05,MinSam08}. These give rise to magnetoelectric effects \cite{LNE85,Edel95,Y02,Fuj05,LuY08,LuY09}, helical phases \cite{agt03,DF03,KAS05,dim07,agt07}, and novel magnetic properties \cite{LNE85,KAS05,OIM06,yip07,yip-cond-mat,LuY08} discussed in this chapter. To examine the consequences of these invariants we initially consider a GL theory for a single component order parameter (for example, an $s$-wave superconductor) and add the most general Lifshitz invariant allowed by broken inversion symmetry. Specific Lifshitz invariants are tabulated in Table 1 for different point group symmetries of the material in question. Since the primary goal is to reveal the new physics arising from these invariants, we ignore the role of any anisotropy that might appear in the usual GL free energy. Under these conditions the GL free energy under consideration is (we work in units such that $\hbar=c=1$): \begin{equation} F=\int d^3r\left\{\alpha|\eta|^2 +K\eta^*{\bf D}^2\eta + K_{ij}B_i[\eta^*(D_j\eta)+\eta(D_j\eta)^*] +\frac{\beta}{2}|\eta|^4+\frac{B^2}{8\pi}\right\}, \label{glfree} \end{equation} where $\alpha=\alpha_0(T-T_c)$, $D_i=-i\nabla_i-2eA_i$ and ${\bf B}=\nabla\times {\bf A}$. From this free energy, the GL equations can be found by varying the above with respect to ${\bf A}$ and $\eta$. This results in the following: \begin{equation} \alpha\eta+\beta|\eta|^2\eta+K{\bf D}^2\eta+K_{ij}[2B_i(D_j\eta)-i\eta\nabla_jB_i]=0 \end{equation} and \begin{equation} J_i=\frac{1}{4\pi}[\nabla\times({\bf B}-4\pi {\bf M})]_i=2eK[\eta^*(D_i\eta)+\eta(D_i\eta)^*]+4eK_{ji}B_j|\eta|^2 \label{GLcurrent} \end{equation} where \begin{equation} M_i=-K_{ij}[\eta^*(D_j\eta)+\eta(D_j\eta)^*] \label{mag1}.
\end{equation} These equations are joined by the boundary conditions (which follow from the surface terms that arise from integration by parts in the variation of $F$): \begin{equation} [K\hat{n}_i(D_i\eta)+K_{ij}B_i\hat{n}_j\eta]_{boundary}=0 \label{boundary} \end{equation} where $\hat{n}_j$ is the component of the surface normal along $\hat{j}$, and the usual Maxwell boundary conditions on the continuity of the normal component of ${\bf B}$ and the transverse components of ${\bf H}={\bf B}-4\pi {\bf M}$ (the appearance of ${\bf M}$ due to the Lifshitz invariants makes this boundary condition non-trivial). Note that adding $\eta^*$ times Eq.~\ref{boundary} to $\eta$ times the complex conjugate of Eq.~\ref{boundary} yields ${\bf J}\cdot {\hat n}|_{boundary}=0$. \begin{table} \begin{tabular}{|c|c|} \hline Point Group & Lifshitz Invariants\\ \hline $O$ & $K(B_xj_x+B_yj_y+B_zj_z)$ \\ $T$ & $K(B_xj_x+B_yj_y+B_zj_z)$\\ $D_6$ & $K_1(B_xj_x+B_yj_y+B_zj_z)+K_2B_zj_z$ \\ $C_{6v}$ & $K(B_xj_y-B_yj_x)$ \\ $C_6$ & $K_1(B_xj_x+B_yj_y+B_zj_z)+K_2B_zj_z+K_3(B_xj_y-B_yj_x)$ \\ $D_4$ & $K_1(B_xj_x+B_yj_y+B_zj_z)+K_2B_zj_z$ \\ $C_{4v}$ & $K(B_xj_y-B_yj_x)$ \\ $D_{2d}$ & $K(B_xj_y-B_yj_x)$ \\ $C_4$ & $K_1(B_xj_x+B_yj_y+B_zj_z)+K_2B_zj_z+K_3(B_xj_y-B_yj_x)$ \\ $S_4$ & $K_1(B_xj_x-B_yj_y)+K_2(B_yj_x+B_xj_y)$ \\ $D_3$ & $K_1(B_xj_x+B_yj_y+B_zj_z)+K_2B_zj_z$ \\ $C_{3v}$ & $K(B_xj_y-B_yj_x)$ \\ $C_3$ & $K_1(B_xj_x+B_yj_y+B_zj_z)+K_2B_zj_z+K_3(B_xj_y-B_yj_x)$ \\ $D_2$ & $K_1B_xj_x+K_2B_yj_y+K_3B_zj_z$ \\ $C_{2v}$ & $K_1B_xj_y+K_2B_yj_x$ \\ $C_2$ & $K_1B_xj_x+K_2B_yj_y+K_3B_zj_z+K_4B_yj_x+K_5B_xj_y$ \\ $C_s$ & $K_1B_zj_x+K_2B_zj_y+K_3B_xj_z+K_4B_yj_z$\\ $C_1$ & all components allowed \\ \hline \end{tabular} \caption{Allowed Lifshitz invariants for different point groups. Here $j_i=\eta^*(D_i\eta)+\eta(D_i\eta)^*$.} \end{table} The appearance of ${\bf M}$ in Eq.~\ref{mag1} and the associated magnetization current leads to new physics in non-centrosymmetric superconductors. Also note that, as is the case for centrosymmetric superconductors, the boundary conditions are valid on a length scale greater than $\xi_0$, the zero-temperature coherence length. In the following few subsections, we present the solution to some common problems to provide insight into the role of the Lifshitz invariants. \subsection{Solution with a spatially uniform Magnetic field: Helical Phase} \label{subsec:3} In situations when the magnetic field is spatially uniform, the GL equations describing the physics can be greatly simplified by introducing the following new order parameter: \begin{equation} \tilde{\eta}=\eta \exp\big(i{\bf q}\cdot {\bf x}\big)=\eta \exp\Big(\frac{iB_jK_{jk}x_k}{K}\Big). \label{hel} \end{equation} The GL free energy for $\tilde{\eta}$ no longer has any Lifshitz invariants and is \begin{equation} F=\int d^3r\left\{\Big[\alpha-\frac{B_lK_{lm}B_jK_{jm}}{K}\Big]|\tilde{\eta}|^2 +K\tilde{\eta}^*{\bf D}^2\tilde{\eta} +\frac{\beta}{2}|\tilde{\eta}|^4+\frac{B^2}{8\pi}\right\}. \label{newGL} \end{equation} The resulting new GL equations are now those of a single component superconductor with a magnetic field induced enhancement of $T_c$ (this magnetic field enhancement is discussed in more detail in Chapter 1). These new GL equations follow from a minimization of Eq.~\ref{newGL} with respect to ${\bf A}$ and $\tilde{\eta}$.
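The cancellation of the Lifshitz invariants can be verified by a short calculation in the notation of Eq.~\ref{glfree}. Writing $\eta=\tilde{\eta}\exp(-i{\bf q}\cdot{\bf x})$ with $Kq_j=B_iK_{ij}$, one has $D_j\eta=e^{-i{\bf q}\cdot{\bf x}}(D_j-q_j)\tilde{\eta}$, so that the terms linear in the gradients combine (up to a total derivative) as \begin{equation} -Kq_j[\tilde{\eta}^*(D_j\tilde{\eta})+\tilde{\eta}(D_j\tilde{\eta})^*]+K_{ij}B_i[\tilde{\eta}^*(D_j\tilde{\eta})+\tilde{\eta}(D_j\tilde{\eta})^*]=0, \end{equation} while the remaining constant pieces, $Kq^2-2K_{ij}B_iq_j=-B_lK_{lm}B_jK_{jm}/K$, produce the field-induced shift of $\alpha$ appearing in Eq.~\ref{newGL}.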
Note that the phase factor introduced above cancels the additional current contribution from the Lifshitz invariants in Eq.~\ref{GLcurrent} and also cancels the related Lifshitz invariant contribution to the boundary condition. Furthermore, the magnetization that follows from Eq.~\ref{newGL} by taking the derivative with respect to $B_i$ coincides with that due to Eq.~\ref{mag1} found prior to the redefinition of the order parameter. This modified free energy of Eq.~\ref{newGL} immediately implies that some results from the usual GL theory apply. In particular:\\ \noindent i) the vortex lattice solution near the upper critical field is the same as that of Abrikosov.\\ ii) the surface critical field $H_{c3}$ is the same as that of de Gennes. The order $B^2$ corrections to $T_c$ do not change $H_{c3}$ to leading order in $(T_c-T)/T_c$.\\ iii) the critical current in thin wires will show no unusual asymmetry (this conclusion differs from that of Ref.~\cite{Edel96}).\\ \subsubsection{Helical Phase} The main new feature that appears in a uniform magnetic field is the spatial modulation of the order parameter. Since $\eta$ develops a helical spatial dependence in the complex plane, the resulting thermodynamic phase has been named the helical phase. Since the helicity of the order parameter is related to its phase, an interference experiment based on the Josephson effect would provide the most reliable way to observe it. Indeed, such an experiment has been proposed \cite{KAS05}. In particular, consider the example of a 2D non-centrosymmetric superconductor (with a Rashba spin-orbit interaction) with a Zeeman field applied in the 2D plane. Then consider a Josephson junction between this and another thin film superconductor that is centrosymmetric. For a magnetic field applied in the plane of the film {\it perpendicular} to the junction, and with the non-centrosymmetric superconductor oriented so that the helical wavevector ${\bf q}$ is perpendicular to the field, this gives rise to an interference effect analogous to the standard Fraunhofer pattern. For this experiment, the film must be sufficiently thin so that the magnetic field and the magnitude of the order parameter are spatially uniform. To illustrate this, consider the following free energy of the junction \begin{equation} H_J=-t\int dx[\Psi_1({\bf x})\Psi_2^*({\bf x})+c.c.] \end{equation} where the integral is along the junction. The resulting Josephson current is \begin{equation} I_J=\mathrm{Im}\Big [ t\int dx\Psi_1({\bf x})\Psi_2^*({\bf x})\Big ]. \end{equation} Setting the junction length equal to $2L$ and integrating yields a maximum Josephson current of \begin{equation} I_J=2t|\Psi_{1}^0||\Psi_{2}^0|\frac{|\sin(qL)|}{|q L|}. \end{equation} This demonstrates that the Josephson current will display an interference pattern for a field {\it perpendicular} to the junction. Note that in the usual case the Fraunhofer pattern would be observed for a magnetic field perpendicular to the thin film, for which a finite flux passes through the junction. \subsubsection{Magnetoelectric Effect} Amongst the early theoretical studies of non-centrosymmetric superconductors, it was pointed out that a supercurrent must be accompanied by a spin polarization of the carriers \cite{Edel95}. Within the macroscopic theory given above, this spin polarization is described by the magnetization in Eq.~\ref{mag1}. This magnetization appears when the supercurrent is non-vanishing due to a finite phase gradient.
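As a concrete illustration (our own example, read off from the $C_{4v}$ entry of Table 1, for which $K_{xy}=-K_{yx}=K$): for a uniform phase gradient and ${\bf A}=0$, Eq.~\ref{mag1} gives \begin{equation} M_y=2K|\eta|^2\nabla_x\theta, \qquad M_x=-2K|\eta|^2\nabla_y\theta, \end{equation} so the induced spin polarization lies in the plane and is perpendicular to the superflow, consistent with the microscopic expressions given in Sec.~\ref{subsec:9}.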
Subsequent to this proposal, it was suggested that the converse effect would also appear: a Zeeman field would induce a supercurrent \cite{Y02}. This would follow from the expression for the current of Eq.~\ref{GLcurrent} when the usual GL current ($2eK[\eta^*(D_i\eta)+\eta(D_i\eta)^*]$) vanishes. However, the latter proposal does not include the possibility discussed above that the order parameter develops a spatial modulation in the presence of a spatially homogeneous magnetic field (which leads to a nonvanishing $2eK[\eta^*(D_i\eta)+\eta(D_i\eta)^*]$). Indeed, this new equilibrium state ensures that the resultant supercurrent vanishes. Nevertheless, as pointed out in Ref.~\cite{DF03}, it is possible to create this current using a geometry similar to that used to observe Little-Parks oscillations. In particular, the supercurrent has two contributions: one is the current due to the Lifshitz invariants and the other is the usual GL current $2eK[\eta^*(D_i\eta)+\eta(D_i\eta)^*]$. In the helical phase, these two contributions exactly cancel. If the superconductor is wrapped into a cylinder, the condition that the order parameter is single valued does not allow the helical phase to fully develop, since arbitrary spatial oscillations are not allowed. Consequently, when a magnetic field is applied along the cylindrical axis, a non-zero current can flow. The resulting current will develop a periodic dependence on the applied magnetic field \cite{DF03}. \subsection{London Theory and Meissner State} \label{subsec:4} We now turn to situations in which the magnetic field is not spatially uniform. The Lifshitz invariants lead to new physics both for the single vortex solution and for the usual penetration depth problem. To see this, we begin with the London limit: we set $\eta=|\eta|e^{i\theta}$ and assume that the magnitude $|\eta|$ is fixed. The GL free energy is then minimized with respect to $\theta$ and ${\bf A}$. The minimization with respect to $\theta$ yields \begin{equation} K\nabla\cdot(\nabla\theta-2e{\bf A})+K_{ij}\nabla_iB_j=0 \end{equation} which is equivalent to the continuity equation for the current ($\nabla \cdot {\bf J}=0$). The minimization with respect to ${\bf A}$ yields \begin{equation} J_i= \frac{1}{4\pi}[\nabla\times ({\bf B}-4\pi {\bf M})]_i=-\frac{1}{4\pi \lambda^2}[A_i-\frac{1}{2e}\nabla_i\theta- \sum_j \sigma_{ji}B_j] \label{London} \end{equation} with \begin{equation} 4\pi M_i=\frac{1}{\lambda^2}\sum_j\sigma_{ij}(A_j-\frac{1}{2e}\nabla_j\theta), \label{mag} \end{equation} $1/\lambda^2=8\pi (2e)^2K|\eta|^2$ and $\sigma_{ij}=16\pi e \lambda^2 K_{ij}|\eta|^2$. We take the surface normal to be along the $\hat{z}$ direction and the applied field to be oriented along the $\hat{y}$ direction. Note that by applying an appropriate rotation to the fields in the free energy, this geometry results in no loss of generality. We assume that there are spatial variations only along the direction of the surface normal ($z$). We therefore have from $\nabla\cdot {\bf B}=0$ (together with the requirement that the field vanishes deep inside the superconductor) that $B_z=0$. We further choose ${\bf A}=[A_x(z),A_y(z),A_z(z)]$ so that ${\bf B}=(-\partial A_y/\partial z,\partial A_x/\partial z,0)$ and work in a gauge where $\nabla\theta=0$.
The three components of Eq.~\ref{London} yield \begin{eqnarray} \frac{\partial B_y}{\partial z}=&&\frac{1}{\lambda^2}\frac{\partial}{\partial z}[\sigma_{yy}A_y+\sigma_{zy}A_z]+\frac{1}{\lambda^2}A_x-\frac{1}{\lambda^2}\sigma_{xx}B_x \label{lon1}\\ \frac{\partial B_x}{\partial z}=&&\frac{1}{\lambda^2}\frac{\partial}{\partial z}[\sigma_{xx}A_x+\sigma_{zx}A_z]-\frac{1}{\lambda^2}A_y+\frac{1}{\lambda^2}\sigma_{yy}B_y \label{lon2}\\ 4\pi J_z=&&0=A_z-\sigma_{zx}B_x-\sigma_{zy}B_y\label{lon3}. \end{eqnarray} Note that the contributions from $\sigma_{xy}$ and $\sigma_{yx}$ cancel in the above. Taking derivatives of Eqs.~\ref{lon1} and \ref{lon2} with respect to $z$, and using Eq.~\ref{lon3} to eliminate $A_z$, we find \begin{eqnarray} (1-\frac{\sigma_{zy}^2}{\lambda^2})\frac{\partial^2 B_y}{\partial z^2}=&&\frac{1}{\lambda^2}B_y-\frac{\sigma_{xx}+\sigma_{yy}}{\lambda^2}\frac{\partial B_x}{\partial z}+\frac{\sigma_{zy}\sigma_{zx}}{\lambda^2}\frac{\partial^2 B_x}{\partial z^2}\\ (1-\frac{\sigma_{zx}^2}{\lambda^2})\frac{\partial^2 B_x}{\partial z^2}=&&\frac{1}{\lambda^2}B_x+\frac{\sigma_{xx}+\sigma_{yy}}{\lambda^2}\frac{\partial B_y}{\partial z}+\frac{\sigma_{zy}\sigma_{zx}}{\lambda^2}\frac{\partial^2 B_y}{\partial z^2}. \end{eqnarray} The above must be solved with the boundary conditions $B_i(z\rightarrow \infty)=0$ and \begin{eqnarray} H_y=&&B_y(z=0)-4\pi M_y(z=0)\\ 0=&&B_x(z=0)-4\pi M_x(z=0) \end{eqnarray} where $H_y$ is the applied field. $M_x,M_y$ can be found using Eq.~\ref{mag} together with Eqs.~\ref{lon1}, \ref{lon2}, and \ref{lon3} to eliminate $A_x,A_y,$ and $A_z$ in favor of $B_x$ and $B_y$ and their derivatives. By seeking solutions of the form $B_i=B_{i0}\exp(-\nu z/\lambda)$, the solution can be found analytically. The general form of the solution is quite involved, so here we present the solution for the point groups $O$ and $C_{4v}$. \subsubsection{$O$ point group} A representative material is Li$_2$Pt$_3$B \cite{LiPt-PdB,yua06}. This problem has been solved in Refs.~\cite{LNE85,LuY08}. In this case there is only one Lifshitz invariant: $K_1{\bf B}\cdot{\bf j}$. Since this is a scalar under rotations, the solution is the same for any orientation of the surface normal. The equations for ${\bf B}$ become: \begin{eqnarray} \frac{\partial^2 B_y}{\partial z^2}=&&\frac{1}{\lambda^2} B_y+ \frac{\delta}{\lambda^2}\frac{\partial B_x}{\partial z}\\ \frac{\partial^2 B_x}{\partial z^2}=&&\frac{1}{\lambda^2} B_x- \frac{\delta}{\lambda^2}\frac{\partial B_y}{\partial z}. \end{eqnarray} where $\delta=-2\sigma_{xx}$ (note $\sigma_{xx}=\sigma_{yy}$ in this case). This coupled set of equations can be solved by introducing $B_{\pm}=B_x\pm iB_y$, which obey the decoupled equations $\partial_z^2 B_{\pm}=\frac{1}{\lambda^2}B_{\pm}\pm\frac{i\delta}{\lambda^2}\partial_z B_{\pm}$ \cite{LNE85,LuY08}, with the result that to first order in $\delta/\lambda$: \begin{eqnarray} B_y=&&H_y\big[\cos\frac{\delta z}{\lambda^2}+\frac{\delta}{\lambda}\sin\frac{\delta z}{\lambda^2}\big]e^{-z/\lambda}\\ B_x=&&H_y\big[\frac{\delta}{\lambda}\cos\frac{\delta z}{\lambda^2}-\sin\frac{\delta z}{\lambda^2}\big]e^{-z/\lambda}. \end{eqnarray} Physically, this implies that the magnitude of $B_x$ is discontinuous as the surface is crossed (though not that of $B_y$) and that ${\bf B}$ also rotates inside the superconductor. Note that in a slab geometry, $B_x$ is of opposite sign on the two sides of the slab. It may be possible to observe this through muon spin resonance experiments. \subsubsection{$C_{4v}$ point group} A representative material is CePt$_3$Si \cite{Bauer04}.
In this case, the single Lifshitz invariant is generated by a Rashba spin-orbit coupling and is given by $K_1\hat{z}\cdot {\bf B}\times {\bf j}$. This implies $\sigma=\sigma_{xy}=-\sigma_{yx}\ne0$. The solution of the London problem now depends upon the surface orientation and has been considered in Ref.~\cite{yip07}. We consider two situations here: the surface normal along and perpendicular to $\hat{z}$ (the four-fold symmetry axis). Consider first the normal along the $\hat{z}$ direction (in this case the applied field is $H_y$ and we find that $B_x=0$); then we have the usual London equation \begin{equation} \frac{\partial^2 B_y}{\partial z^2}=\frac{1}{\lambda^2}B_y\\ \end{equation} with the unusual boundary condition $H_y|_{z=0}=(1+\frac{\sigma}{\lambda})B_y|_{z=0}$. This yields the solution \begin{equation} B_y(z)=\frac{H_y}{1+\frac{\sigma}{\lambda}}e^{-z/\lambda}. \end{equation} These equations show that there is no rotation of ${\bf B}$ across the sample surface. However, the magnetic induction ${\bf B}$ is discontinuous as the surface is crossed. Again, in a slab geometry, the discontinuity in $B_y$ is opposite for the two sides of the slab. For the surface normal perpendicular to the $\hat{z}$ direction, the situation is different. To be concrete, consider the normal along the $\hat{x}$ direction and the applied field along the $\hat{y}$ direction (for the field along the $\hat{z}$ direction the usual London equations result). In this case, it is again permissible to set $B_z=0$ and solve for $B_y$ to find \begin{equation} B_y=\frac{H_y}{1-\frac{\sigma^2}{\lambda^2}}e^{-x/\tilde{\lambda}} \end{equation} where $\sigma=\sigma_{xy}$ and $\tilde{\lambda}=\lambda(1-\frac{\sigma^2}{\lambda^2})$. \subsection{Spatial structure of a single vortex} \label{subsec:5} The London theory can also be used to examine the field distribution of a vortex in a strongly type II superconductor. Again, the lack of inversion symmetry introduces some new physics. Here we focus (as above) on two examples with point groups $O$ and $C_{4v}$ and provide the solutions of Refs.~\cite{yip07,yip-cond-mat,LuY08,LuY09}. The approach used in these publications is to consider the parameter $\sigma_{ij}/\lambda$ to be small, so that the Lifshitz invariants perturb the usual London solution. When there are no Lifshitz invariants, the solution to the London equations for a field applied along the ${\hat n}$ direction is $\theta=-\phi$ ($\phi$ is the polar angle) and \begin{equation} {\bf B}=\frac{1}{2e\lambda^2}K_0(r/\lambda) \hat{n} \end{equation} where $K_0(x)$ is a modified Bessel function. The perturbative solutions depend upon the specific form of the Lifshitz invariants, and we discuss the two cases in turn. \subsubsection{$O$ point group} The solution in this case was found in Refs.~\cite{LuY08,LuY09}. The modified London equation is (the problem does not depend upon the field direction) \begin{equation} \nabla\times \nabla \times {\bf A}+\frac{1}{\lambda^2}{\bf A}=\frac{\nabla \phi}{2e\lambda^2}+2\frac{\delta}{\lambda^2}\nabla \times {\bf A}-\frac{\pi\delta}{e\lambda^2}\delta^2({\bf r})\hat{z}. \end{equation} The new term implies that, in addition to the field along $\hat{z}$, there is an additional component along $\hat{\phi}$.
The authors of Refs.~\cite{LuY08,LuY09} find that to first order in $\delta/\lambda$ the additional field is \begin{eqnarray} B_{\phi}^{(1)}(x=r/\lambda)=&\frac{\delta}{e \lambda^3}\Big\{K_1(x)\int_0^xx'dx'I_1(x')K_1(x')+I_1(x)\int_x^{\infty}x'dx'[K_1(x')]^2\Big\} \nonumber \\ &- \frac{\delta}{2e\lambda^3}K_1(x) \end{eqnarray} where $I_1$ and $K_1$ are modified Bessel functions of the first and second kind, respectively. \subsubsection{$C_{4v}$ point group} The solution in this case was found in Refs.~\cite{yip07,yip-cond-mat}. The fields that appear due to the Lifshitz invariants depend in this case upon the orientation of the field. For the field along the $\hat{y}$ direction, it is found that the solution for ${\bf B}$ is given by (correct to first order in $\sigma/\lambda$) \cite{yip07,yip-cond-mat} \begin{equation} {\bf B}=\frac{1}{2e\lambda^2}K_0(|{\bf r}+\frac{\sigma}{\lambda}\hat{z}|/\lambda) \hat{y}. \end{equation} Physically, this implies that the maximum value of $B_y$ is shifted from the vortex center. This shift has also been seen in a full numerical solution of the Ginzburg Landau equations \cite{OIM06}. For the field along the $\hat{z}$ direction (the four-fold symmetry axis), the ${\bf B}$ field is unchanged and there is an induced magnetization along the radial direction \cite{yip07} (this radial magnetization was also found in the vortex lattice solution near $H_{c2}$ \cite{KAS05}). \subsection{Vortex Lattice Solutions} \label{subsec:6} For fields near the upper critical field, there have been a variety of studies of the Abrikosov vortex lattice \cite{KAS05,hia08,mat08,hia09}. Some of these studies predict multiple phase transitions in the vortex lattice state \cite{hia08,mat08,hia09}. These studies are based on microscopic weak-coupling theories and involve an interplay of paramagnetism, orbital diamagnetism, gap symmetry, band structure, and spin-orbit coupling \cite{hia08,mat08,hia09}. While this chapter will not address these vortex lattice transitions, we will address some of the microscopic issues in the next section. Here we focus on the GL theory, for which the predictions are more straightforward. In particular, near the upper critical field, the magnetic field is approximately uniform and the considerations above imply that the vortex lattice is hexagonal (perhaps distorted by uniaxial anisotropy). Consequently (following the arguments of Section \ref{subsec:3}), the order parameter solution near the upper critical field is $\eta({\bf r})=\mathrm{const}\,\exp(i{\bf q}\cdot {\bf r}) \phi_0(x,y)$ where $\phi_0(x,y)$ is a lowest Landau level (LLL) solution. This solution, combining a phase factor and an LLL solution, has been called the helical vortex phase. The primary consequence of this solution is that the upper critical field is enhanced due to the presence of the Lifshitz invariants \cite{KAS05}. We note that due to the degeneracy of the LLL solution, there is an ambiguity in the existence of the phase factor. In particular, the LLL solution ${\tilde \phi}_0(x,y)=e^{i\tau_yx/l_H^2}\phi_0(x,y-\tau_y)$ ($l_H$ is the magnetic length) is degenerate with $\phi_0(x,y)$; consequently, in some circumstances the wavevector ${\bf q}$ can be removed in favor of a shift of origin. This can be done whenever ${\bf q}$ is perpendicular to the applied magnetic field (this is the case for $C_{4v}$ point group symmetry but not for $O$ point group symmetry).
We feel that it is still meaningful to speak of the helical vortex phase for the point group $C_{4v}$ because the same phase factor implies an increase of the in-plane critical field in two dimensions, for which this ambiguity does not exist. The name helical vortex phase reveals the link between the solutions in two and three dimensions. In addition to studies near the upper critical field, there has been one numerical study of the time dependent GL equations in the vortex phase \cite{OIM06}. This study found the surprising result that the vortices flow spontaneously, in spite of the lack of an applied current. The claim is that the paramagnetic supercurrent (the magnetization current $\nabla\times {\bf M}$) is the origin of this spontaneous flux flow. We note that in this study the following boundary condition was used: ${\bf B}_{outside}={\bf B}_{inside}$. This differs from the continuity of ${\bf H}={\bf B}-4\pi {\bf M}$ discussed above. In the problem that was studied, ${\bf M}$ is non-trivial, and an examination of its neglect in the boundary condition can be seen to be equivalent to having a current flow. We argue that this current is the cause of the spontaneous flux flow. We note that the boundary conditions discussed here should be used in problems where the minimum length scale is $\xi_0$, the zero temperature coherence length. However, at length scales smaller than this, a microscopic theory is required and the single particle quantum mechanical wavefunctions will obey quite different boundary conditions. \subsection{Multi-component order parameters} \label{subsec:7} There have not been as many studies on Lifshitz invariants in non-centrosymmetric superconductors in cases when the order parameter contains more than one complex degree of freedom. There has been one noteworthy result, which is the appearance of the helical phase when no magnetic fields are applied \cite{yua06,MinSam08}. In particular, if the ground state of the multi-component order parameter breaks time-reversal symmetry \cite{SU91,Book}, then the lack of both parity and time-reversal symmetries allows the helical phase to appear. As an example, consider the three dimensional irreducible representation of the point group $O$, with an order parameter ${\vec \eta}$ whose components transform as the $(x,y,z)$ components of a vector. The following Lifshitz invariant exists \cite{MinSam08} \begin{equation} iK(\eta_1^*D_y\eta_3+\eta_2^*D_z\eta_1+\eta_3^*D_x\eta_2 -c.c.). \end{equation} This Lifshitz invariant leads to a ground state order parameter ${\vec\eta}=e^{iqz}(1,i,0)$. The state ${\vec\eta}=(1,i,0)$ breaks time reversal symmetry and thus mimics the role of the magnetic field in the single component case. \section{Microscopic Theory} \label{sec:3} The phenomenological arguments of the previous section have also been the subject of many microscopic calculations. These calculations, while all related, focus on and extend different aspects of the phenomenological theory above. In particular, four points of contact exist between the phenomenological theories and the microscopic theories. These are: direct calculations of the Lifshitz invariants in the free energy in Eq.~\ref{glfree}; calculations of the magnetization in Eq.~\ref{mag1}; calculations of the current in Eq.~\ref{GLcurrent}; and calculations of the helical wavevector ${\bf q}$ in Eq.~\ref{hel}.
We briefly review the first three of these and then turn to a more complete overview of microscopic studies of the helical phase, since this turns out to be closely linked to FFLO phases. \subsection{Contact between microscopic and macroscopic theories: Lifshitz Invariants} \label{subsec:8} The direct calculation of the Lifshitz invariants in Eq.~\ref{glfree} has been carried out by a few authors \cite{KAS05,Edel96,Y02,MinSam08} and can be found in Chapter 1 of this book. In particular, the non-interacting Hamiltonian is \begin{equation} \label{H_0} H_0=\sum\limits_{{\bf k}}\sum_{\alpha\beta=\uparrow,\downarrow} [\xi({\bf k})\delta_{\alpha\beta}+\mbox{\boldmath$\gamma$}({\bf k}) \cdot \mbox{\boldmath$\sigma$} _{\alpha\beta}] a^\dagger_{{\bf k}\alpha}a_{{\bf k}\beta} \end{equation} where $ a^\dagger_{{\bf k}\alpha} $ ($a_{{\bf k}\alpha}$) creates (annihilates) an electronic state $ | {\bf k} \alpha \rangle $, $\xi({\bf k})=\varepsilon({\bf k})-\mu$ denotes the spin-independent part of the spectrum measured relative to the chemical potential $ \mu$, $\alpha,\beta=\uparrow,\downarrow$ are spin indices, $ \mbox{\boldmath$\sigma$}$ are the Pauli matrices, and the sum over ${\bf k}$ is restricted to the first Brillouin zone. In the helicity basis, this Hamiltonian is diagonalized with energy bands given by \begin{equation} \xi_{\pm}({\bf k})=\xi({\bf k})\pm |\mbox{\boldmath$\gamma$}({\bf k})| \end{equation} with the Hamiltonian \begin{equation} \label{H_0 band} H_0=\sum_{{\bf k}}\sum_{\lambda=\pm}\xi_\lambda({\bf k})c^\dagger_{{\bf k}\lambda}c_{{\bf k}\lambda} , \end{equation} where the two sets of electronic operators are connected by a unitary transformation, \begin{equation} a_{{\bf k}\alpha}=\sum_{\lambda}u_{\alpha\lambda}({\bf k})c_{{\bf k}\lambda}, \label{trans} \end{equation} with \begin{equation} \label{Rashba_spinors} ( u_{\uparrow\lambda}({\bf k}),~~ u_{\downarrow\lambda}({\bf k})) = \frac{( |\mbox{\boldmath$\gamma$}|+\lambda\gamma_z ,~~ \lambda (\gamma_x+i\gamma_y) )}{\sqrt{2|\mbox{\boldmath$\gamma$}|(|\mbox{\boldmath$\gamma$}|+\lambda\gamma_z)}}. \end{equation} In the limit that only one of the bands crosses the Fermi energy (this can be realized for superconductivity at the surface of a topological insulator \cite{san10}), the following weak-coupling result for the coefficients defining the Lifshitz invariants of Eq.~\ref{glfree} is found: \begin{equation} K_{ij}=-\frac{\mu_B N_0S_3}{2}\langle\phi^2({\bf k}) \hat{\mbox{\boldmath$\gamma$}}_i({\bf k})v_j({\bf k})\rangle \label{lif-1} \end{equation} where $N_0$ is the density of states of the band at the chemical potential, $\phi({\bf k})$ describes the superconducting state and is an even function belonging to one of the one-dimensional representations of the point group of the crystal, $\langle...\rangle$ denotes an average over the Fermi surface, $\mu_B$ is the Bohr magneton, and \begin{equation} S_3(T)=\pi T\sum_n\frac{1}{|\omega_n|^3}=\frac{7\zeta (3) }{4\pi^2T^2}. \label{S_3} \end{equation} Eq.~\ref{lif-1} is valid when there is only a single band present. When two bands are present (as is often the case), and assuming that $\phi({\bf k})$ is the same for both bands, Eq.~\ref{lif-1} must be multiplied by the factor \begin{equation} \delta N = (N_+-N_-)/(N_++N_-), \end{equation} where $N_{\pm}$ are the densities of states of the two bands ($N_0=N_++N_-$). Microscopic calculations of the Lifshitz invariants are limited to the regime near $T_c$ where the GL theory is valid.
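As an illustration (a standard evaluation under the stated single-band assumptions, not taken from the references): for a two-dimensional cylindrical Fermi surface with Rashba coupling, parametrized by ${\bf k}=k_F(\cos\varphi,\sin\varphi,0)$ with $\hat{\mbox{\boldmath$\gamma$}}({\bf k})=(\sin\varphi,-\cos\varphi,0)$ and ${\bf v}({\bf k})=v_F(\cos\varphi,\sin\varphi,0)$, and an $s$-wave state $\phi({\bf k})=1$, the Fermi surface averages in Eq.~\ref{lif-1} give \begin{equation} K_{xy}=-K_{yx}=-\frac{\mu_B N_0 S_3 v_F}{4}, \qquad K_{xx}=K_{yy}=0, \end{equation} which reproduces the single invariant of the form $K(B_xj_y-B_yj_x)$ listed for the point group $C_{4v}$ in Table 1.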
\subsection{Contact between microscopic and macroscopic theories: current and magnetization} \label{subsec:9} In the limit of small magnetic fields (${\bf B}$) and small phase gradients ($\nabla \theta$) in the superconducting order parameter, it is possible to find microscopic extensions of Eq.~\ref{GLcurrent} and Eq.~\ref{mag1} that are valid for all temperatures. This has been carried out in Refs.~\cite{Y02,Edel95}. Here, we follow the notation of Ref.~\cite{Y02}. In the clean limit, for 2D cylindrical bands with a Rashba interaction ($\mbox{\boldmath$\gamma$}({\bf k})=\alpha \hat{n}\times {\bf p}({\bf k})$), Eq.~\ref{GLcurrent} and Eq.~\ref{mag1} can be rewritten as \begin{eqnarray} J_x=&\rho_s\frac{\hbar \nabla_x \theta}{2m}-\kappa B_y \nonumber\\ M_y=&\frac{\kappa}{2} \hbar \nabla_x \theta \end{eqnarray} where $M_y$ is the magnetic moment, $\rho_s$ is the superfluid density, and \begin{equation} \kappa(T)=\frac{\mu}{4\pi \hbar^2}[p_{F+}\{1-Y(T,\Delta_+)\}-p_{F-}\{1-Y(T,\Delta_-)\}] \label{kap} \end{equation} where $p_{F,\pm}$ are the Fermi momenta for the two bands, $\Delta_{\pm}$ are the gaps on the two bands, $\mu$ is the Fermi energy, and $Y(T,\Delta)$ is the Yosida function. Note that Eq.~\ref{kap} is proportional to $\delta N$ in the limit $\delta N \ll 1$. The role of Fermi liquid corrections has also been examined \cite{Fuj05} in this context. This study found that the only Fermi liquid corrections that alter the current contribution from the Lifshitz invariants are ferromagnetic correlations. If there are no ferromagnetic correlations, then Eq.~\ref{kap} is unchanged. This is important in heavy fermion materials, where the effective mass enhancement suppresses the usual supercurrent but does not change Eq.~\ref{kap} \cite{Fuj05}. \subsection{Microscopic Theory of the Helical and FFLO Phases} \label{subsec:10} The helical phase has received a great deal of attention from the microscopic point of view \cite{BG02,KAS05,Sam08,mat08,hia09,DF03,dim07,agt07,san10}. One reason for this is that it is closely related to the FFLO phase \cite{ff,lo}, in which the superconducting order parameter develops a periodic spatial structure. The interplay between these two phases is not trivial. It is perhaps not surprising that spatially oscillating superconducting solutions readily appear in non-centrosymmetric superconductors when magnetic fields are applied. In particular, a state with momentum ${\bf k}$ at the Fermi surface will generally not have a degenerate partner at $ - {\bf k} $ with which to form a Cooper pair when both parity and time reversal symmetries are broken. The state ${\bf k} $ would rather pair with a degenerate state $ -{\bf k} + {\bf q}$ and in this way generate a spatially oscillating superconducting order parameter. \begin{figure} \sidecaption \includegraphics[scale=.65]{figure1.eps} \caption{A magnetic field directed as shown together with a Rashba spin-orbit interaction shifts the centers of the large and small Fermi surfaces by $\pm{\bf q}/2$. The smaller dot represents the point $(0,0)$ (the center of the Fermi surfaces without field) and the two larger dots represent the points $(0,-{\bf q}/2)$ and $(0,{\bf q}/2)$ (the centers of the new Fermi surfaces). Pairing occurs between states of ${\bf k}+{\bf q}/2$ and $-{\bf k}+{\bf q}/2$, leading to a gap function that has a spatial variation $\Delta({\bf x})=\Delta_0\exp(i{\bf q}\cdot{\bf x})$.
From Ref.~\cite{agt07}.} \label{fig1} \end{figure} \vglue 0.5 cm The microscopic origin of the spatially oscillating states can be understood by an examination of the single particle eigenstates when a Zeeman field ${\bf H}$ is included (for now we ignore the vector potential ${\bf A}$): \begin{equation} H_Z=-\sum_{{\bf k},\alpha,\beta}\mu_B{\bf H}\cdot \mbox{\boldmath$\sigma$} _{\alpha\beta} a^\dagger_{{\bf k}\alpha}a_{{\bf k}\beta}.\end{equation} The single particle excitations now become \begin{equation} \xi_{\pm}({\bf k},{\bf H})=\xi({\bf k})\pm \sqrt{\mbox{\boldmath$\gamma$}^2({\bf k})-2\mu_B\mbox{\boldmath$\gamma$}({\bf k})\cdot {\bf H}+ \mu_B^2{\bf H}^2}. \label{e3} \end{equation} In the limit $|\mbox{\boldmath$\gamma$}|\gg\mu_B|{\bf H}|$, this becomes (we ignore the small regions of phase space for which $\mbox{\boldmath$\gamma$}=0$) \begin{equation} \xi_{\pm}({\bf k},{\bf H}) \approx \xi({\bf k}) \pm\mu_B\hat{\mbox{\boldmath$\gamma$}}({\bf k})\cdot{\bf H}. \end{equation} The origin of pairing states with non-zero ${\bf q}$ (that is, $\Delta({\bf x})\propto e^{i{\bf q}\cdot {\bf x}}$) follows from this expression. As an example, consider a Rashba interaction $\mbox{\boldmath$\gamma$}=\gamma_{\perp}(k_y\hat{x}-k_x\hat{y})$ for a cylindrical Fermi surface and a magnetic field along $\hat{x}$. In this case, as shown in Fig.~\ref{fig1}, the Fermi surfaces remain circular and the centers are shifted along the $\hat{y}$ direction. A finite center of mass momentum Cooper pair is stable because the same momentum vector ${\bf q}$ can be used to pair {\it every} state on one of the two Fermi surfaces. In the more general case, for a non-zero ${\bf q}$ state to be stable, the paired states should be degenerate, $\xi_{\pm}({\bf k}+{\bf q},{\bf H})=\xi_{\pm}(-{\bf k}+{\bf q},{\bf H})$; this gives the condition $\hbar {\bf q}\cdot {\bf v}_F=\mu_B{\bf H}\cdot\hat{\mbox{\boldmath$\gamma$}}({\bf k})$. This differs from the condition for the usual FFLO phase, for which $\hbar {\bf q}\cdot {\bf v}_F=\mu_B|{\bf H}|$. The optimal pairing state corresponds to finding the ${\bf q}$ that satisfies the pairing condition for the largest possible region on the Fermi surface. \begin{figure} \includegraphics[scale=.6]{two-phase.EPS} \caption{Typical phase diagram showing both multiple-${\bf q}$ and single-${\bf q}$ (helical phase) phases as a function of Zeeman field in a clean non-centrosymmetric superconductor for two different values of $\delta N$. These calculations were carried out with a Rashba spin-orbit interaction and a 3D spherical Fermi surface (a 2D cylindrical Fermi surface gives similar results). For fields $H\mu_B/T_c <1.5$, $q\approx \delta N H\mu_B/ v_F$ (for $\delta N=0$, this leads to ${\bf q}=0$), while for higher fields $q\approx H\mu_B/ v_F$. From Ref.~\cite{agt07}.} \label{fig2} \end{figure} \vglue 0.5 cm The above paragraph also reveals the origin of the interplay between the helical and FFLO phases. In particular, the two Fermi surface sheets prefer pairing states with opposite signs of ${\bf q}$. Choosing a particular ${\bf q}$ allows pairing on one Fermi surface, but not on the other. This naturally leads to competition between single-${\bf q}$ (helical) and multiple-${\bf q}$ (FFLO-like) states. Which state appears depends upon the details of the system. Without going into further microscopic details, which can be found in Refs.~\cite{BG02,KAS05,Sam08,hia09,mat08,DF03,dim07,agt07,san10}, we summarize some of the main results here.
One important result is that, since there are two sources of the modulation ${\bf q}$ (FFLO-like physics and Lifshitz invariants), there are two typical values for the magnitude of $q$ \cite{dim07,agt07,Sam08,hia09} that appear in different regions of the temperature/magnetic field phase diagram. In particular, $q\approx H\mu_B/ v_F$ stems from the FFLO-like physics related to Fig.~\ref{fig1} and is the value of $q$ in the high-field regime (in clean materials), while $q\approx \delta N H\mu_B/ v_F$ stems from the Lifshitz invariants and is the typical magnitude of $q$ in the low-field regime \cite{dim07,agt07,Sam08}. As shown in Fig.~\ref{fig2}, in the clean limit, both single-${\bf q}$ and multiple-${\bf q}$ phases exist \cite{agt07,dim07}. However, the multiple-${\bf q}$ phase becomes less stable as $\delta N$ increases \cite{agt07}. We note that in the case of superconductivity at the surface of a topological insulator, which is akin to $\delta N=1$, only the single-${\bf q}$ phase exists \cite{san10}. In the dirty limit the multiple-${\bf q}$ phases no longer appear, while the single-${\bf q}$ phase with $q\approx \delta N H\mu_B/ v_F$ is robust \cite{Sam08,dim07}. Finally, we note that when the vector potential is also included, novel vortices and vortex phases may appear \cite{dim07,agt08,mat08,hia08,hia09}. \section{Conclusions} \label{sec:4} In this chapter we have examined the role of Lifshitz invariants that appear in the Ginzburg Landau free energy of non-centrosymmetric superconductors. These invariants lead to magnetoelectric effects, novel London physics in the Meissner state, new structure in individual vortices, and a helical phase in which the order parameter develops a periodic spatial variation. Additionally, we have provided an overview of theoretical developments in the microscopic description of this physics. \begin{acknowledgement} The author would like to thank S. Fujimoto, K. Samokhin, and M. Sigrist for useful discussions. This work was supported by NSF grant DMR-0906655. \end{acknowledgement} \input{referenc} \end{document}
\section{Introduction} The Google Landmark Dataset v2 (GLDv2) has become a popular dataset for evaluating the performance of architectures and methods used for solving large-scale instance-level recognition tasks \cite{weyand2020google}. The original dataset consists of over five million images with over 200,000 classes, originating from local experts who uploaded them to Wikimedia Commons. Besides its size, the dataset poses several interesting challenges such as a long-tailed class distribution, intra-class variability and noisy labels. Since 2019, GLDv2 has been used to assess and test state-of-the-art instance-level recognition methods as part of the Google Landmark Competition hosted on kaggle.com. The winning solution of 2019 led to a cleaned version of GLDv2, which is a subset with 1.5 million images containing 81,313 classes and will be denoted by GLDv2c in the following. Furthermore, GLDv2x will be used to denote the subset of GLDv2 which is restricted to the 81,313 classes present in GLDv2c but was not cleaned. The yearly competition is divided into a recognition and a retrieval task. The recognition track is about correctly classifying a landmark for a set of test images, where a significant number of non-landmark images are present as distractors. It is evaluated using Global Average Precision (GAP) \cite{perronnin2009family, weyand2020google} as the metric. For retrieval, the task is to find images similar to a given query image in a database; it is evaluated using mean Average Precision@100 (mAP). In contrast to recent years, the 2021 competition is evaluated with special focus on a more representative test set\footnote{see \cite{kim2021towards} for details}. The competition follows a code competition setup, where participants are asked to upload their solution code rather than raw predictions. The submitted code infers on a hidden test set offline, where resources available in the offsite environment are restricted to a 12h runtime using a two-core CPU and a P100 GPU. During the competition participants are evaluated on the test set, and a part of the predictions is used to calculate the above mentioned metrics and show the best score for each participant on a public leaderboard. After the competition the score with respect to the remaining part is released and used to determine and display the final scoring (private leaderboard). For training our models we used pytorch with mixed precision on 8xV100 NVIDIA GPUs with distributed data parallel (DDP). Moreover, we use several implementations and pretrained weights from timm \cite{rw2019timm}. Our training code will be available online\footnote{\fontsize{7}{9}\selectfont{https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place}}. \section{Methodology} \subsection{Validation strategy} For the recognition track we use the same local validation as in last year's winning solution \cite{henkel2020supporting}, which leverages the 2019 test set and the respective post-competition released test labels. The retrieval task is assessed in a similar way using the 2019 test query and index dataset together with the post-competition released retrieval solution. However, in our local validation we only considered index images that are matches of any query image, to significantly reduce the computation time for evaluation. With this approach we achieved a very good correlation between local validation and leaderboard. For both tasks we tracked the respective competition metric during experiments at the end of every training epoch.
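For reference, the GAP metric we track can be computed with a few lines of Python (a simplified sketch with hypothetical argument names, not taken from the official evaluation code; \texttt{preds} maps a test id to a predicted label and confidence, and ids missing from \texttt{truths} are non-landmark distractors):
\begin{verbatim}
def global_average_precision(preds, truths):
    """preds: dict id -> (label, confidence); truths: dict id -> label."""
    order = sorted(preds.items(), key=lambda kv: -kv[1][1])  # by confidence
    n_landmarks = len(truths)        # queries that actually show a landmark
    correct, gap = 0, 0.0
    for rank, (qid, (label, _)) in enumerate(order, start=1):
        rel = int(truths.get(qid) == label)
        correct += rel
        gap += rel * correct / rank  # precision@rank, counted at relevant hits
    return gap / n_landmarks
\end{verbatim}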
\subsection{Modeling} \label{sec:modeling} For both tracks we developed deep learning based architectures that learn an image descriptor, a high dimensional feature vector, which makes it possible to differentiate between a large number of classes yet allows for intra-class variability. Although historically global and local landmark descriptors are trained separately and predictions are combined in a two-stage fashion, attempts have been made to not only include the training of local descriptors in a single architecture (e.g. \cite{noh2017large}, \cite{cao2020unifying}) but also omit spatial verification and fuse global and local descriptors within a single-stage model (see \cite{yang2021dolg}). Given a tight competition timeline and restricted inference run-time, we focused on single-stage models resulting in a single image descriptor. However, our modeling efforts put local features in focus, as they are especially important for landmark identification. In the following we present two architectures especially suited for large-scale image recognition/retrieval with noisy data and high intra-class variability. Both conceptually share a large part of an EfficientNet \cite{tan2019efficientnet} based convolutional neural network (CNN) encoder and a sub-center arcface classification head with dynamic margins \cite{deng2020sub}, which was shown to be superior to classical arcface by the 3rd place 2020 recognition solution \cite{ha2020google}. For training we resize all images to a square size and apply shift, scale, rotate and cutout augmentation using albumentations \cite{albu_2020}. We use the Adam optimizer with weight decay, with learning rate and batch size varying per model. We follow a cosine annealing learning rate schedule with one warm-up epoch. \begin{figure*}[h!] \begin{center} \includegraphics[width=4.5in]{figs/dolg_arch_v2.png} \end{center} \vspace*{-5mm} \caption{\small Model architecture DOLG-EfficientNet-B5} \label{dolg_arch} \end{figure*} \subsubsection{DOLG-EfficientNet with sub-center arcface} We implemented DOLG \cite{yang2021dolg}, but with some adjustments to improve the performance. Firstly, we used an EfficientNet encoder, which was pretrained on ImageNet. We added the local branch after the third EfficientNet block and extract 1024-dimensional local features using three different dilated convolutions, where the dilation parameters differ per model. The local features of the three dilated convolutions are concatenated in the feature dimension and self-attended using spatial-2d-attention. The local features are then fused orthogonally with the global feature vector, which results from a GeM pooling \cite{radenovic2018fine} of the fourth EfficientNet block output projected to 1024 dimensions.\footnote{see \cite{yang2021dolg} for details of local branch and orthogonal fusion module} The fused features are aggregated using average pooling before they are fed into a neck consisting of a fully connected layer followed by batch-norm and parameterized ReLU activation (FC-BN-PReLU)\footnote{see neck Option-D from \cite{deng2019arcface} for detailed description and rationale} resulting in a single 512-dimensional descriptor. For training, this single descriptor is fed into a sub-center (k=3) arcface head with dynamic margins predicting 81,313 classes. Our DOLG-EfficientNet models are trained following a 3-step procedure. Firstly, the models are trained for ten epochs on GLDv2c using a small image size.
Then training is continued for 30-40 epochs on the more noisy GLDv2x using a medium image size. Finally, the models are finetuned for a few epochs on a large image size, also using GLDv2x. \subsubsection{Hybrid-Swin-Transformer with sub-center arcface} The second architecture leverages recent advances in using transformers for computer vision problems. We appended a vision transformer to a CNN encoder, resulting in a Hybrid-CNN-Transformer model. As such, the CNN part can be interpreted as a local feature extractor, while the transformer part acts as a graph neural net on those local features, aggregating them into a single vector. More precisely, we used a Swin-Transformer \cite{liu2021swin}, as it is especially flexible at various scales. As input for the transformer we use the output of a few blocks of EfficientNet, which is flattened, position embedded and projected to match the transformer dimensions, and extended with a virtual token representing the aggregated information. After passing through the Swin-Transformer we feed the features of the virtual token into a FC-BN-PReLU neck to derive the final descriptor. The hybrid model is trained using a sub-center arcface head with 81,313 classes and dynamic margins. When combining an EfficientNet encoder and a Swin-Transformer which have both been pretrained on ImageNet individually, a careful training recipe is important to avoid overflow and other training issues resulting in NaNs, especially when using mixed precision. We used the following four-step approach. Firstly, initialize the transformer-neck-head part by training on a small image size using GLDv2c for 10 epochs. Next, exchange the transformer's original patch embedding with a 2-block EfficientNet encoder, freeze the previously trained transformer-neck-head part and train the newly added CNN part for one epoch on medium size images. Thirdly, unfreeze and train the whole model for 30-40 epochs using GLDv2x. Finally, insert a further pretrained EfficientNet block between the CNN and the Swin-Transformer and finetune the whole model for a few epochs using large images and GLDv2x. \begin{figure*}[h!] \begin{center} \includegraphics[width=4.5in]{figs/swin_arch_v2.png} \end{center} \vspace*{-5mm} \caption{\small Hybrid-Swin-Transformer, exemplary shown for EfficientNet-B5-Swin-Base224} \label{swin_arch} \end{figure*} \subsection{Submission ensemble} The winning submission for the recognition track is an ensemble of eight models including three DOLG and three Hybrid-Swin-Transformer models with varying EfficientNet backbones and input image sizes. We additionally trained two pure EfficientNet models following code and instructions provided by the 3rd place team of the Google landmark recognition competition 2020 \cite{ha2020google}, where models are trained on the full GLDv2. For the winning submission of the retrieval track we used nearly the same ensemble but exchanged one of the pure EfficientNet models with an EfficientNet trained following the procedure of the 2nd place team of the Google landmark recognition competition 2020 \cite{bestfitting2020}. Table \ref{model_ensemble_overview} gives an overview of backbones, image sizes, data used and resulting scores. Instead of increasing the image size for the last training step, we reduced the stride in the first convolutional layer from (2,2) to (1,1) for some models, which is especially beneficial for small original images.
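Returning to the sub-center arcface head shared by all of the models above, the following PyTorch snippet is our condensed illustration of such a head (a hypothetical simplification; initialization, the margin schedule and numerical details differ in the actual training code):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubCenterArcFace(nn.Module):
    """Sub-center arcface logits with k sub-centers and per-class margins."""
    def __init__(self, dim, n_classes, margins, k=3, s=45.0):
        super().__init__()
        # weight rows are grouped class-major: k sub-centers per class
        self.w = nn.Parameter(torch.randn(n_classes * k, dim) * 0.01)
        self.k, self.s = k, s
        self.margins = margins  # (n_classes,) tensor, larger for rare classes

    def forward(self, feats, labels):
        cos = F.linear(F.normalize(feats), F.normalize(self.w))
        cos = cos.view(cos.size(0), -1, self.k).max(dim=2).values  # best sub-center
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        m = self.margins[labels].unsqueeze(1)     # dynamic margin per sample
        one_hot = F.one_hot(labels, cos.size(1)).float()
        logits = torch.cos(theta + m * one_hot)   # margin on target class only
        return self.s * logits                    # fed into cross-entropy loss
\end{verbatim}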
\begin{table}[h] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline model & \makecell{image\\size} & stride & data & \makecell{private score\\recognition} & \makecell{public score\\recognition} & \makecell{private score\\retrieval} & \makecell{public score\\retrieval} \\ \hline DOLG-EfficientNet-B5 & 768 & 2 & GLDv2x & 0.476 & 0.497 & 0.478 & 0.464 \\ DOLG-EfficientNet-B6 & 768 & 2 & GLDv2x & 0.476 & 0.479 & 0.474 & 0.454 \\ DOLG-EfficientNet-B7 & 448 & 1 & GLDv2x & 0.465 & 0.484 & 0.470 & 0.458\\ EfficientNet-B3-Swin-Base-224 & 896 & 2 & GLDv2x & 0.462 & 0.487 & 0.481 & 0.454\\ EfficientNet-B5-Swin-Base-224 & 448 & 1 & GLDv2x & 0.462 & 0.482 & 0.476 & 0.443\\ EfficientNet-B6-Swin-Base-384 & 384 & 1 & GLDv2x & 0.467 & 0.492 & 0.487 & 0.462\\ EfficientNet-B3 & 768 & 2 & GLDv2 & 0.463 & 0.487 & &\\ EfficientNet-B6 & 512 & 2 & GLDv2 & 0.470 & 0.484 & 0.454 & 0.441\\ EfficientNet-B5 & 704 & 2 & GLDv2x & & & 0.459 & 0.428\\ Ensemble Recognition & & & & 0.513 & 0.534 & &\\ Ensemble Retrieval & & & & & & 0.537 & 0.518\\ \hline \end{tabular} \end{adjustbox} \caption{Overview of model ensemble} \label{model_ensemble_overview} \end{table} \subsection{Inference} For both tracks we used a retrieval approach for prediction, where for each query image the most similar reference images are searched for in a database of index images, using the cosine similarity between L2-normalized image descriptors. For the recognition task the train set is used as the index image database, and the landmark labels of the most similar train images are used as the prediction for a given test image. In contrast, for retrieval a separate database of index images, from which the most similar images are to be retrieved, is pre-defined. For the recognition track, to add more intra-class variety, we expand the offsite train set with the full GLDv2 and landmark images from WIT \cite{srinivasan2021wit}, both filtered to contain only landmarks of the offline train set. For retrieval we further extend with the index set of the 2019 retrieval competition. \subsubsection{Retrieval post-processing} In order to retrieve the most similar index images given a query image using our ensemble, we rank all index images using the cosine similarity of the 512-dimensional descriptor for each model individually, resulting in query-index pair scores. We then re-rank the index images by a discriminative re-ranking procedure derived from the one introduced in the retrieval task of the 2019 Google Landmark competition by the winning team \cite{ozaki2019largescale}, where in a first step we label the query and index images using their top3 cosine similarity to a given train set. However, in contrast to the hard re-ranking procedure illustrated in \cite{ozaki2019largescale}, we used a soft up-ranking procedure by adding the top3 index-train cosine similarity to the query-index scores if the labels of query and index image match. We saw further benefit when additionally performing a soft down-ranking procedure. We implemented the down-ranking by subtracting 0.1 times the top3 index-train cosine similarity if the labels of query and index image do not match. For each model in our ensemble we extract the top750 index image ids and related scores for each query image using the above method and aggregate the resulting 6000 scores by summing per image id before we take the top100 as a final prediction.
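A condensed numpy sketch of this soft up/down-ranking follows (our simplification: each image is labeled by its best train match, whereas the actual procedure aggregates the top3 similarities per landmark; \texttt{alpha} corresponds to the 0.1 factor above):
\begin{verbatim}
import numpy as np

def soft_rerank(qi_sim, qt_sim, it_sim, train_labels, alpha=0.1):
    """qi_sim: (Q,I) query-index sims; qt_sim: (Q,T) query-train sims;
    it_sim: (I,T) index-train sims; train_labels: (T,) landmark ids."""
    q_lab = train_labels[qt_sim.argmax(axis=1)]       # label per query
    i_top = it_sim.argsort(axis=1)[:, -3:]            # top3 train ids per index
    i_lab = train_labels[i_top[:, -1]]                # label per index image
    i_score = np.take_along_axis(it_sim, i_top, axis=1).sum(axis=1)
    match = q_lab[:, None] == i_lab[None, :]          # (Q,I) label agreement
    return qi_sim + match * i_score - alpha * ~match * i_score
\end{verbatim}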
\subsubsection{Recognition post-processing} We use our ensemble to extract eight 512-dimensional vectors for each train and test image. Vectors are then averaged per model type (DOLG-EfficientNet, Hybrid-Swin-Transformer, pure EfficientNet), resulting in three 512-dimensional vectors, which are then concatenated, leading to a 1536-dimensional image descriptor. We use cosine similarity to find the closest training images for each test image and apply the post-processing from \cite{henkel2020supporting} for re-ranking and non-landmark identification, which results in the final predictions. \section{Conclusion} We presented several improvements to previous approaches for large-scale landmark identification, which led to winning both tracks of the 2021 Google landmark competition. We showed how deep orthogonal features or vision transformers help to efficiently leverage local information extracted with a CNN backbone, and stressed the superiority of sub-center arcface when confronted with long-tailed class distributions and intra-class variability. We confirmed the applicability of the re-ranking and non-landmark identification of \cite{henkel2020supporting} to the more balanced 2021 test set and explained a novel soft discriminative up- and down-ranking procedure for the retrieval task. \newpage \bibliographystyle{abbrv}
\section{Introduction} Circumstellar material holds clues about the mass-loss history of massive stars. Indeed, as the winds interact with the interstellar medium (wind-blown bubbles, bow shocks), they leave a characteristic signature that depends on the wind properties. Moreover, the material ejected during short eruptive phases is visible as nebulae around massive stars. The analysis of these features reveals which material was ejected and in what quantity. With the recent downward revision of mass-loss rates, these episodes of enhanced mass loss have gained more attention, as they seem more crucial than ever in the evolution of massive stars. Another reason to study the close environment of massive stars is to better understand the evolution of supernova remnants (SNRs). Indeed, the famous rings of SN1987A may only be understood if one considers the previous mass-loss episodes of the progenitor. Morphology is not the only SNR parameter that is affected: the SNR dynamics are not identical in a homogeneous medium and in winds and circumstellar ejecta. For such studies, the IR provides several key diagnostics. Continuum emission in this range is provided by heated dust, which may have a range of temperatures depending on the configuration (very close hot features; large, old, and cool bubbles). In addition, IR lines probe the many phases of the material: molecules (e.g. PAHs) for the neutral material, ionized metals for HII regions, etc. This summary of SpS5 - part III examines each case of circumstellar environment in turn, and concludes with the potential offered by current and future facilities. \section{Blue supergiants} Circumstellar structures around BSGs have been predominantly identified as bow shocks around runaway stars. Originally discovered with IRAS (e.g. Van Buren \& McCray, 1988, ApJ, 329, L93), such structures have also been seen with MSX and WISE (Peri et al. 2012). A more general survey of BSGs, i.e. not targeting runaway stars, with objects selected from Crowther et al. (2006) and Przybilla et al. (2010), reveals IR material around six of the 45 targets at 22$\mu$m with WISE, also mostly in the form of bow shocks (Wachter, in prep). Several examples of bipolar nebulae around BSGs are also known (e.g. Sher 25, Smartt et al. 2002; HD 168625, Smith 2007). However, this material could also have been ejected during an LBV phase, since LBVs can exhibit BSG spectra, and we will therefore concentrate on the bow shocks. \begin{figure} \centering \includegraphics[width=6cm]{velax1.png} \includegraphics[width=6cm]{bd43.png} \caption{{\it Left:} H$\alpha$ emission (greyscale) of Vela X-1 with PACS 70$\mu$m emission contours shown on top. {\it Right:} Colour composite image of the bow shock of BD+43$^{\circ}$3654 (WISE 12$\mu$m in blue, PACS 70$\mu$m in green, and PACS 160$\mu$m in red). The direction of proper motion is indicated by the arrow in both cases. From Cox et al. (in prep.).} \label{Cox} \end{figure} Runaway stars have large stellar velocities (above 30\,km\,s$^{-1}$) resulting from dynamical interactions in (dense) clusters or from a supernova explosion in a binary system. These stars can thus travel at supersonic speeds through the local medium, giving rise to ``bow shocks'' as their stellar winds interact with the surrounding medium, which has been previously ionised by stellar photons from the hot star (Weaver 1977). The occurrence of such bow shocks has been shown to depend primarily on the ISM conditions (Huthoff \& Kaper 2002).
For example, even a runaway star may travel at subsonic speeds in the tenuous interior of a superbubble, where the sound speed can be as much as 100\,km\,s$^{-1}$; hence no (detectable) bow shock will be produced in that case. The filling factor of ISM with $v_\mathrm{sound} \leq 10$\,km\,s$^{-1}$ is 20\% and 75\% of O-stars have velocities $\geq$10\,km\,s$^{-1}$, so the expected fraction of O-stars with bow shocks is $\sim$15\% (i.e. $0.20 \times 0.75$). This is remarkably similar to the values derived from IRAS and WISE observations (Noriega-Crespo et al. 1997, Peri et al. 2012). Once formed, the size, shape and morphology of a bow shock depend on both stellar (wind kinetic energy and stellar velocity) and interstellar parameters (density and temperature). In particular, the ratio $v_\star/v_\mathrm{wind}$ indicates whether or not instabilities are likely to develop (Dgani et al. 1996), and the stand-off distance between the star and the apex of the shock is determined from the pressure balance between the stellar wind and the ISM, as recalled at the end of this section (see analytical results by Wilkin 1996 and simulations by e.g. Comeron \& Kaper 1998, Blondin \& Koerwer 1998). Independent estimates of the wind parameters can thus be inferred from bow shocks, which serves as a useful check for atmosphere models, but the values are sensitive to the ISM properties, which are not always known with precision. \begin{figure} \includegraphics[width=14cm]{Betelgeuse_PK.pdf} \caption{{\it Left:} Interferometric image of the photosphere of Betelgeuse obtained by Haubois et al. (2009), showing its inhomogeneous surface brightness. {\it Center:} VLT/NACO adaptive optics tricolor composite image (RGB=KHJ) obtained by Kervella et al. (2009), showing the emission from a compact molecular envelope. {\it Right:} VLT/VISIR image at $10.49\,\mu$m of the dust thermal emission obtained by Kervella et al. (2011). North is up, East to the left, and the field of view is given in the upper right corner of each image. } \label{Betel} \end{figure} Currently, a small survey with Herschel-PACS of 5 runaways with known bow shocks is ongoing: $\alpha$\,Cam, $\zeta$\,Oph, $\tau$\,CMa, Vela X-1 and BD+43$^{\circ}$3654 (Cox et al., in preparation). For Vela X-1, the peak of the dust emission is co-spatial with the most prominent H$\alpha$ arc seen in the supposed direction of space motion (Fig. \ref{Cox}): it is concluded that the outer shock is radiative, but the inner shock is adiabatic, though some H$\alpha$ emission possibly related to (part of) the inner termination shock is also detected. From the analysis of its ``puffed-up'' bow shock (Fig. \ref{Cox}), the mass-loss rate of BD+43$^{\circ}$3654 (O4If) was found to be 10$^{-4}$\,M$_{\odot}$\,yr$^{-1}$: this is very high (by 2 orders of magnitude) in view of current mass-loss rate estimates for such stars, but the exact value strongly depends on the ISM density, which needs to be refined. The dust temperature, $\sim$~45~K, is compatible with heating by stellar photons only, suggesting there is no additional shock-heating of grains. The thickness of the bow shock ($\sim$~1~pc) suggests a Mach number close to unity, implying an ISM temperature of 10$^3$ -- 10$^4$~K.
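For reference, the stand-off distance mentioned above follows from ram-pressure balance; in the analytical solution of Wilkin (1996) it reads
\begin{equation}
R_{0}=\sqrt{\frac{\dot{M}\,v_{\mathrm{wind}}}{4\pi\,\rho_{\mathrm{ISM}}\,v_{\star}^{2}}},
\end{equation}
which makes explicit why a mass-loss rate $\dot{M}$ inferred from an observed $R_{0}$, as for BD+43$^{\circ}$3654 above, scales linearly with the assumed ISM density $\rho_{\mathrm{ISM}}$.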
\section{Red supergiants} Circumstellar structures on scales of a few arcseconds or less around RSGs have been revealed through interferometric techniques (e.g. Monnier et al. 2004). Stencel et al. (1988, 1989) reported the IRAS detection of resolved shells with typical radii of a few arcminutes around RSGs for a significant fraction (25\%) of their sample. However, higher resolution Spitzer images fail to confirm several of these extended structures (Wachter, in prep), indicating that a systematic survey is needed to ascertain the occurrence of large-scale circumstellar shells around RSGs. \begin{figure} \begin{center} \includegraphics[width=12cm]{spire_vycma2.pdf} \caption{Continuum-subtracted Herschel SPIRE spectrum of VY CMa from 294$\mu$m to 192$\mu$m. A multitude of molecular lines has been detected. From Matsuura et al. (in prep.).} \label{Mik} \end{center} \end{figure} A few (famous) cases have however been studied in depth. One of these is Betelgeuse, a cool (3600K), large (700 R$_{\odot}$), rather massive (10--15 M$_{\odot}$), luminous ($>10^5$ L$_{\odot}$), and nearby (150 pc) star. Because of its distance, Betelgeuse can be probed on almost all scales, providing a unique panorama of stellar surroundings (Fig. \ref{Betel}). Space-based and interferometric instruments (e.g. HST, IOTA/Ionic and VLTI/Pionier) revealed the photosphere, notably the expected non-uniformities due to large convection cells. Adaptive optics imaging in the near-IR (e.g. NACO) and (radio or IR) interferometers unveiled the properties of the internal, compact molecular envelope (1--10 R$_*$). Precursors of dust have been found there, as well as an extended ``plume'' reaching 6R$_*$ and possibly linked to a hot spot on the photosphere. High-resolution imaging (e.g. VLT/VISIR) shows the envelope at intermediate scales (10--100 R$_*$), where the dust forms (a possible signature of silicates has been found). At these small and intermediate scales, Betelgeuse presents a complex circumstellar envelope (with knots and filaments) at all wavelengths, which implies an inhomogeneous spatial distribution of the material lost by the star. Finally, at the largest scale, IR imagers such as Herschel unveil the cool external envelope (100-10000 R$_*$), where a bow shock with the ISM is detected (Cox et al. 2012). \begin{figure} \begin{center} \includegraphics[width=10cm]{Guy-WS1-Figure.png} \caption{{\it Top left:} A WISE color composite of 12 $\mu$m (blue; blue contours) and 22 $\mu$m (red; yellow contours) of WS1, discovered and initially characterized by Gvaramadze et al. (2012). The contours for each band help illuminate the morphology of the nebular material, which has an overall SE-NW elongation, reminiscent of a bipolar structure. {\it Bottom left:} Optical photometric monitoring since discovery shows that both the R and I light curves have brightened by about 1 magnitude over the last year. Arrows indicate times when same-night spectroscopy was secured. {\it Right panel:} Optical spectroscopic monitoring indicates evolution to a cooler temperature, with the near disappearance of the He I 5876 $\rm \AA$ and 6678 $\rm \AA$ lines and changes in the $\rm H_\alpha$ line profile. Figures from Stringfellow et al. (in preparation).} \label{Guy} \end{center} \end{figure} Herschel has also probed the envelopes of other red supergiants (Groenewegen et al. 2011). Turning in particular to the case of VY CMa (Matsuura et al., in prep.), the potential of IR spectroscopy is obvious. Herschel-SPIRE reveals a rich spectrum, with a dust continuum and hundreds of lines due to molecules (one third linked to water, others to CO, CS, SiO, ...), which constrain the envelope's properties. For example, the isotopic ratio $^{12}$C/$^{13}$C is found to be 6.5, in agreement with observations of other RSGs but at odds with theoretical predictions, which are at least four times higher.
Very strong emission of submm molecular lines can be explained if a temperature gradient is present in the envelope, e.g. because of dust formation at a certain radius. \section{Luminous Blue Variables} Because of their spectacular eruptions, LBVs are the best-known cases of massive stars with ejecta. It is not yet certain, however, at what stage (BSG? after a RSG phase or not?) this material is ejected, and how (multiple events?). LBVs are rare: in the list of Clark et al. (2005), there are only 12 confirmed and 23 candidate LBVs. The IR has played a key role in recent years. The search, through surveys like MIPSGAL, for round-shaped nebulae with luminous central stars resulted in the discovery of many new nebulae: 62 shells in Wachter et al. (2010), 115 shells and bipolar nebulae in Gvaramadze et al. (2010), 416 structures in Mizuno et al. (2010). Many of these nebulae are preferentially detected with the Spitzer 24$\mu$m band, indicating relatively cold material. Identifying shell-like structures is only the first step. To ascertain a cLBV status, the central object needs to be studied spectroscopically. This was done for many of these new detections (cf. Gvaramadze et al. 2010; Wachter et al. 2010; Stringfellow et al. 2012a,b, and in preparation). The classification does not rely on the presence of a particular line, but rather on the morphological resemblance of the spectra to those of known LBVs - while not 100\% perfect (some peculiar O and WR stars display similar features), this method has the advantage of being simple and rather robust. A more definitive answer can be provided through photometric and/or spectroscopic monitoring. Indeed, as their name indicates, LBVs should be {\it variable}. Near-simultaneous photometric and spectroscopic monitoring in the optical (and IR) of about a dozen newly identified candidate LBVs has revealed that WS1 (discovered by Gvaramadze et al. 2012) is indeed a bona fide LBV, presently displaying what appears to be S Dor type variability, as shown in Fig. \ref{Guy} (Stringfellow et al. 2012, in preparation). \begin{figure} \begin{center} \includegraphics[width=12cm]{wra_spec_22.jpg} \caption{PACS spectrum (central spaxel) of the WRAY 15-751 nebula, showing the lines from the ionized and neutral regions (from Vamvatira-Nakou et al., submitted).} \label{Chloi} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=4cm]{WR16wise.pdf} \includegraphics[width=4.5cm]{WR8wise.pdf} \includegraphics[width=3.5cm]{WR35bwise.pdf} \caption{Examples of the three stages in WR nebula morphologies: from left to right - bubble (WR16), clumpy phase (WR8), and mixed phase (WR35b). From Toal\'a et al. (in prep.).} \label{Martin} \end{center} \end{figure} The IR is also useful in revealing details of particular objects. For example, a Herschel survey of LBVs undertaken at Li\`ege yielded as its first result a characterization of the surroundings of WRAY 15-751 (Vamvatira-Nakou et al., submitted). The IR photometry can only be explained if the star evolves at constant luminosity and the dust grains are Fe-rich. Images also revealed the presence of a second shell, about 4 times larger than the previously known one, which most probably results from an older eruptive event. Considering both structures, there is about 0.075 M$_{\odot}$ of dust in the system. Ionized gas is responsible for several forbidden lines observed in the Herschel-PACS range (Fig.
\ref{Chloi}), which allow an N/O abundance of about 7 times solar and a mass of ionized gas of 1--2 M$_{\odot}$ (20 times that of dust) to be derived. Dust can be well studied in the IR, so this range may provide clues on where dust comes from in galaxies. Two examples of such feedback were presented in the session: $\eta$\,Car and SN1987A. The latter was observed with Herschel at 100-500$\mu$m wavelengths, and 0.4-0.7 M$_{\odot}$ of dust was detected - mostly silicates and amorphous carbon (Matsuura et al. 2011). It is thought that this dust comes from the explosion, but the role played by previous mass-loss episodes, in particular the LBV phase, is not yet clear. For example, about 0.12 M$_{\odot}$ of dust was detected, thanks to 30$\mu$m MiniTAO observations, in the famous LBV $\eta$\,Car. Up to 80\% of that dust belonged to the torus, and hence may not be related to the big 1843 event. \section{Wolf-Rayet stars} Only a small percentage (4-6\%) of Wolf-Rayet stars display surrounding nebulosities in the WISE survey, and most are found around WN stars (Wachter, in prep). The morphological classification scheme of WR nebulae proposed by Chu (1981) has been revised at this meeting by Guerrero et al. (Toal\'a et al., in prep.) using \emph{WISE} IR images and SDSS or SuperCOSMOS sky survey H$\alpha$ images for 35 nebulae associated with WRs. Two \emph{WISE} bands were used in particular: the one at 12$\mu$m, which encompasses PAH lines and lines of low excitation ions, and that at 22$\mu$m, to which thermal emission from dust and lines of He~{\sc i} as well as high excitation ions contribute. Three phases are defined. In the first one, WR nebulae appear as complete shells or bubbles. This corresponds to the star just entering the WR stage, when its powerful wind sweeps up the previous slow and dense winds (from e.g. the LBV or RSG stages). The second phase is the clumpy phase. At that point, the nebulae display knots of gas and dust connected by partial shells and arcs. This corresponds to an age of a few 10$^4$\,yr, when instabilities break down the swept-up shell. The stellar motion through the ISM has an impact on the morphology; for example, a one-sided arc may sometimes be seen. Finally, the mixed nebular phase ends the cycle, with no definite morphology and not always a one-to-one correspondence between optical and IR images. This corresponds to the last stage, when the circumstellar nebula begins to dissolve into the ISM. \section{Studying the close environment of massive stars} The close environment of massive stars is the ``missing link'' between the star itself and the large circumstellar features. It plays a key role in understanding mass loss, but it is also difficult to probe directly. Emission lines arising in the wind and circumstellar material, as well as the near-IR excess linked to disk-like features, are classical ways to study this region. In this context, Be and B[e] stars are targets of choice, and surprises are frequent. For example, Graus et al. (2012) found three new sgB[e] stars in the SMC: they display typical spectra, with forbidden lines, but the line strengths as well as the IR excess appear reduced compared to usual objects of this class. This suggests that either the disks have less material or less dust than usual, or maybe that these stars are transitional objects. Another case intriguingly shows the opposite situation: CD$-$49$^{\circ}$3441 displays forbidden lines and an appreciable IR excess, but is a main-sequence Be star away from any star-forming region (Lee and Chen 2009).
A possibility may be that this star is in fact a weak B[e], rather than a classical Be star. \begin{figure} \centering \includegraphics[width=12cm]{groh_fig1.png} \caption{ Visibility of HD 316285 as a function of wavelength for the three different telescope baselines measured with VLTI/AMBER. Notice the drop in visibility within the Brackett gamma line, indicating that the line is more extended than the neighboring K-band continuum. From Groh et al. (in prep.).} \label{groh} \end{figure} The environment close to the star can also be studied, directly, by means of interferometry, which is usually performed at long wavelengths. Most optical/IR interferometric measurements rely on the measurement of ``visibilities'', which are directly linked to the size of the object. Recently, several massive stars, including nine LBVs, were observed with the VLTI (Groh et al., in prep.). Amongst these was HD 316285 (Fig. \ref{groh}): the recorded visibilities implied a size of 0.002'' for the source of continuum radiation, and 0.004'' for the source of the Br$\gamma$ line. A CMFGEN fit to the spectrum yields a stellar model with which one can estimate the wind size in the IR, and it agrees well with the VLTI observations. The asymmetric shape of the measured differential phases (red vs blue side) favors a prolate shape for the rotating stellar wind, but it could also be explained by clumps or binarity. Since the latter imply time variability, monitoring will be needed to ascertain the exact nature of the asymmetry. \begin{figure} \begin{center} \includegraphics[width=10cm]{rapidrotators.pdf} \caption{Images of rapid rotators derived from MIRC interferometric measurements. Each star shows a bright pole and a dark equator, which is caused by the gravity-darkening effect. Modeling of Regulus gives a gravity-darkening coefficient $\beta$ = 0.19, rather than the 0.25 that von Zeipel predicted in 1924.} \label{che} \end{center} \end{figure} While visibilities provide valuable data, ``real'' images are always more impressive. Interferometric instruments such as the Michigan Infrared Combiner (MIRC) and PIONIER are beginning to provide such data. MIRC was the first to image Altair (Monnier et al. 2007) and several other rapidly rotating stars, as shown in Fig. \ref{che}. It has also imaged circumstellar disks and multi-object systems. For example, the disk contribution of $\delta$\,Sco was shown to remain stable during the periastron passage in 2011 (Che et al. 2012), and the mass exchange in the $\beta$\,Lyr system can be clearly imaged (Zhao et al. 2008), as well as the three components of Algol (Baron et al. 2012) or the disk of the eclipsing companion of $\epsilon$\,Aur (Kloppenborg et al., 2010). \section{Conclusion} This session has demonstrated the usefulness of the IR in studying the environment of massive stars. Recent advances in this domain are notably provided by surveys, as they enable the discovery of new objects to study, thereby improving the census of nebular features associated with hot stars. Furthermore, IR diagnostics unveil the properties of these neighbouring nebulosities: morphology, temperature, composition, and density are the necessary keys paving the way to a better understanding of mass loss in massive stars. \begin{acknowledgments} YN acknowledges comments from Augusto Daminelli and support from FNRS and Prodex Herschel/XMM-Integral contracts. NLJC thanks FWO and Prodex-Herschel for financial support. JHG is supported by an Ambizione Fellowship of the Swiss National Science Foundation.
CDL has been supported financially by grant NSC-101-2922-I-008-120 of the National Science Council of Taiwan. \end{acknowledgments}
\section{Introduction} \label{sec:intro} Transferring the knowledge that machine learning models learn from a source domain to a target domain, which is known as transfer learning (Figure \ref{fig:transfer}) \cite{10.1007/978-3-030-01424-7_27, zhuang2020comprehensive}, has shown tremendous success in Natural Language Processing (NLP) \cite{Alyafeai2020ASO, pennington2014glove,NIPS2013_5021, devlin-etal-2019-bert, Raffel2019ExploringTL}. One of the most prominent advantages of transfer learning is manifested in low data regimes. As models become increasingly complex, in most cases this complexity comes with requirements for larger training data, which makes transferring the learning from a high-data domain to a low-data domain very impactful. In this work we focus on the type of transfer learning in which the target domain is first mapped to the source domain: a model is trained on the source domain, and the knowledge is then transferred by fine-tuning this model on the mapping of the target domain (to the source domain), as shown in Figure \ref{fig:target_map}. As an example of this transfer learning paradigm in NLP, decaNLP \cite{DBLP:journals/corr/abs-1806-08730} maps 10 NLP tasks to the Question Answering (QA) problem, in which given a context the model should find the answer to a question. \begin{figure*}[t] \centering \begin{tabular}{c|c|c} \centering \begin{subfigure}{0.29\textwidth} \includegraphics[width=\textwidth]{transfer_learning.png} \caption{\footnotesize Transfer Learning from source domain to target domain} \label{fig:transfer} \end{subfigure} & \centering \begin{subfigure}{0.29\textwidth} \includegraphics[width=\textwidth]{target_map.png} \caption{\footnotesize Transfer learning through mapping a target domain to source domain. In this work we map NLU to QA tasks.} \label{fig:target_map} \end{subfigure}& \centering \begin{subfigure}{0.36\textwidth} \includegraphics[width=\textwidth]{atis_to_r8k.png} \caption{\footnotesize Sequential transfer learning for QANLU.} \label{fig:atis_to_r8k} \end{subfigure} \end{tabular} \caption{\vspace{-6mm}} \label{fig:transfer_learning} \end{figure*} In this work, we map Natural Language Understanding (NLU) problems to the QA problem. Here NLU refers to determining the intent and the value of slots in an utterance \cite{DBLP:journals/corr/abs-1902-10909}. For instance, in ``show cheap Italian restaurants'' the intent could be \textit{inform}, the value for slot \textit{cuisine} is ``Italian'', and the value for slot \textit{price range} is ``cheap''. More specifically, in our approach, which we refer to as QANLU, we build slot and intent detection questions and answers based on the NLU-annotated data. QA models are first trained on QA corpora and then fine-tuned on questions and answers created from the NLU-annotated data. In this approach, transfer learning happens by transferring the knowledge of finding the answer to a question given a context, acquired during the training of the QA model, to finding the value of an intent or a slot in the input text. Through our computational results we show that QANLU in low data regimes and few-shot settings significantly outperforms the sentence classification and token tagging approaches for intent and slot detection tasks, as well as the newly introduced ``IC/SF few-shot'' approach \cite{krone-etal-2020-learning} for NLU.
We also show that QANLU sets a new state-of-the-art performance for slot detection on the Restaurants-8k dataset \cite{coope2020spanconvert}. Furthermore, we show that augmenting QA corpora with questions and answers created from NLU-annotated data improves the performance of QA models. Throughout this work we use span-selection-based QA models built on top of transformer-based language models \cite{devlin-etal-2019-bert}. That being said, our approach is quite generic and could be extended to any type of QA system. \section{Related Works} Framing NLP tasks as QA has been studied in the past. For instance, \cite{DBLP:journals/corr/abs-1806-08730} maps 10 NLP tasks into QA and trains a single model for all of them; however, that work does not explore the task of intent and slot classification. In a similar line of reasoning, \cite{gao2019dialog} poses the Dialogue State Tracking (DST) task as machine reading comprehension (MRC), formulated as QA. \cite{gao2020machine} builds on that work, achieving competitive DST results with full data and in few-shot settings. \cite{zhou2019multi} also explores DST as QA, using candidate values for each slot in the question (similar to the Multiple-Choice setting of \cite{gao2020machine}), achieving slightly better results than \cite{gao2020machine}. We propose a method that is conceptually similar but focuses on low-resource applications and does not require designing and training a new model architecture or extensive data pre-processing, achieving strong results in slot and intent detection with an order of magnitude less data. Here we do not discuss all intent or slot detection methods. However, some notable few-shot NLU works include \cite{bapna2017towards,bhathiya2020meta,shah2019robust,coope2020spanconvert}, and we compare against their results when appropriate. Other interesting approaches that do not require training include priming pre-trained language models, e.g. \cite{madotto2020language}. \section{Question Answering for Natural Language Understanding (QANLU)} \label{sec:map} \subsection{Slot Detection} \label{sec:data_prep} Consider a set of text records $T = \{t_1, t_2, ..., t_n\}$ in which each record is annotated for the set of slots $S = \{s_1, s_2, ..., s_m\}$. Also, for each slot $s_j$ consider a set of questions $Q_{s_j}=\{q_{s_j1}, q_{s_j2}, ..., q_{s_jk_j}\}$ that could be asked about $s_j$ given any text record $t_i$.
The following is an example of such a setting: \begin{equation*} \footnotesize \begin{aligned} &S:\{\mbox{\texttt{\scriptsize{cuisine}}}, \mbox{\texttt{\scriptsize{price range}}}, \mbox{\texttt{\scriptsize{area}}}\}, t_i: \mbox{\textit{``Show cheap Italian restaurants''}} \\ &\hspace{1mm} \mbox{\texttt{\footnotesize{cuisine}}: ``Italian''}, \hspace{1mm} \mbox{\texttt{\footnotesize{price range}}: ``cheap''}, \hspace{1mm}\mbox{\texttt{\footnotesize{area}}: ``''} \\ &Q:\{Q_{\mbox{\texttt{\scriptsize{cuisine}}}}, Q_{\mbox{\texttt{\scriptsize{price range}}}}, Q_{\mbox{\texttt{\scriptsize{area}}}}\} \end{aligned} \end{equation*} where \begin{equation*} \footnotesize \begin{aligned} & Q_{\mbox{\texttt{\scriptsize{cuisine}}}}: \hspace{0mm}\{\mbox{``what cuisine was mentioned?''}, \\ & \hspace{17mm}\mbox{``what type of food was specified?''}\} \\ & Q_{\mbox{\texttt{\scriptsize{price range}}}}: \{\mbox{``what price range?''}\} \\ & Q_{\mbox{\texttt{\scriptsize{area}}}}: \hspace{0mm}\{\mbox{``what part of town was mentioned?''}, \mbox{``what area?''}\} \end{aligned} \end{equation*} Given $T$, $S$, and $Q$ it is straightforward to create the set of all the possible questions and their corresponding answers for each $t_i$ as the context for the questions: \vspace{-2mm} \begin{align*} \footnotesize \mbox{\bf{Context:}}\hspace{4mm}\mbox{\textit{``Show cheap Italian restaurants''}} \\ \mbox{what cuisine was mentioned?}& \hspace{4mm}\mbox{``Italian''} \\ \mbox{what type of food was specified?}& \hspace{4mm} \mbox{``Italian''} \\ \mbox{what price range?}& \hspace{4mm} \mbox{``cheap''} \\ \mbox{what part of town was mentioned?}& \hspace{4mm} \mbox{``''} \\ \mbox{what area?}& \hspace{4mm} \mbox{``''} \end{align*} We experiment with different ways of creating the set $Q$. This set could be handcrafted, i.e. for each slot we create a set of questions separately, or created using templates such as ``what \underline{\hspace{10mm}} was mentioned?'', where the blank is filled with either the slot name or a short description of the slot, if available. \vspace{-3mm} \subsection{Intent Detection} For intent detection we add ``yes. no.'' at the beginning of the context, and for each intent we create a question like ``is the intent asking about \underline{\hspace{10mm}}?'', where the blank is filled with the intent. The answer to these questions is ``yes'' or ``no'', taken from the segment added to the beginning of the context, depending on whether the intent is present in the context or not. \vspace{-3mm} \subsection{Question Answering Model} In this work we use span-detection-based QA models that are built on top of transformers \cite{NIPS2017_7181}, as described in \cite{devlin-etal-2019-bert}. We also use the SQuAD2.0 \cite{rajpurkar-etal-2018-know} data format for creating questions and answers, as well as its corpus for the source domain (QA). Note that in converting annotated NLU data to questions and answers in QANLU, since for each text record we ask all the questions for all the slots (whether they appear in the text or not), many of the questions are not answerable. As was discussed earlier, we use pre-trained QA models that are trained on SQuAD2.0 (the green box in Figure \ref{fig:target_map}) and fine-tune them with the questions and answers that are created from the NLU tasks; a minimal sketch of this conversion is given below. We also study how, in a sequential transfer learning style, we can improve the performance of NLU through QANLU (Figure \ref{fig:atis_to_r8k}).
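To make this conversion concrete, the following minimal sketch (hypothetical names; an illustration rather than our released code) turns one annotated record into SQuAD2.0-style examples, with slots absent from the text yielding unanswerable questions:

\begin{verbatim}
def nlu_to_qa(text, slot_values, slot_questions):
    # slot_values: dict slot -> value ("" if absent)
    # slot_questions: dict slot -> list of question strings
    examples = []
    for slot, questions in slot_questions.items():
        value = slot_values.get(slot, "")
        for q in questions:
            examples.append({
                "context": text,
                "question": q,
                "answers": ([{"text": value,
                              "answer_start": text.find(value)}]
                            if value else []),
                "is_impossible": not value,  # SQuAD2.0 unanswerable flag
            })
    return examples

qa = nlu_to_qa("Show cheap Italian restaurants",
               {"cuisine": "Italian", "price range": "cheap", "area": ""},
               {"cuisine": ["what cuisine was mentioned?"],
                "price range": ["what price range?"],
                "area": ["what area?"]})
\end{verbatim}

For this utterance the call yields three QA pairs, one of which (the \textit{area} question) is unanswerable.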
\section{Computational Results} In this section we present our computational results for QANLU. Our experiments are done on the ATIS \cite{hemphill-etal-1990-atis, 5700816} and Restaurants-8k \cite{coope2020spanconvert} datasets. All of the experiments are implemented using Huggingface \cite{Wolf2019HuggingFacesTS}, and we use pre-trained language models and QA models provided by Huggingface and fine-tune them on our QA data. We base our experiments mainly on pre-trained DistilBERT \cite{sanh2020distilbert} and ALBERT \cite{Lan2020ALBERTAL} models. \subsection{ATIS} \vspace{-2mm} The ATIS dataset is an NLU benchmark that provides manual annotations for utterances querying a flight booking system. Since the original ATIS dataset does not have a validation set, we use the split of the original training set into training and validation that is proposed in \cite{zhang2019joint}. For each slot in ATIS we create a question set, and for each record in ATIS we create the set of questions and answers based on all the question sets and the slot and intent annotations of the record, according to the approach described in Section \ref{sec:map}. In the first set of experiments we study how our QANLU approach compares to the widely used joint token and sentence classification \cite{DBLP:journals/corr/abs-1902-10909} in few-shot settings, using different stratifications in sampling the training records. Table \ref{tbl:fewshot} summarizes the results. In this table we report F1 scores for both the slot and intent detection tasks. The reason why we use F1 scores for intent detection is that in the ATIS dataset each record can have more than one intent. Each value in Table \ref{tbl:fewshot} is an average over 5 runs with different random seeds. Each row in this table represents one sample of the ATIS training data. The set of rows titled ``\(\mathcal{N}\) uniform samples'' are sampled uniformly with sample sizes of 10, 20, 50, 100, 200, and 500 ATIS records. The set of rows titled ``\(\mathcal{N}\) samples per slot'' are sampled such that each sample includes at least \(\mathcal{N}\) instances of every slot, where \(\mathcal{N}\) is 1, 2, 5, or 10. The set of rows titled ``\(\mathcal{N}\) samples per intent'' are sampled such that each intent appears in at least \(\mathcal{N}\) instances, where \(\mathcal{N}\) is 1, 2, 5, or 10. The numbers in parentheses after \(\mathcal{N}\) represent the number of ATIS records in the sample. For each ATIS record we have 179 questions and answers for intents and slots. In Table \ref{tbl:fewshot} we report the performance of models based on both DistilBERT and ALBERT. For QANLU we fine-tune a QA model trained on SQuAD2.0 data (``distilbert--base--uncased--distilled--squad''\footnote[2]{Model acquired from \url{www.huggingface.co/models}\label{hf}} for DistilBERT and ``twmkn9/albert--base--v2--squad2''\footref{hf} for ALBERT) on our questions and answers for the ATIS samples; this step is sketched below. We also train joint intent and token classification models on the ATIS training samples based on pre-trained DistilBERT and ALBERT models (``distilbert--base--uncased''\footref{hf} and ``albert--base--v2''\footref{hf})\footnote[3]{We also tried these models fine-tuned on SQuAD2.0, but they didn't perform as well on the intent and token classification tasks}. We compare the results of QANLU models with the classification-based models (noted as QANLU and Cls in the table, respectively).
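As an illustration of the fine-tuning step, the minimal sketch below (illustrative only; in practice we used Huggingface's standard QA training utilities) continues training a SQuAD-trained model on one NLU-derived example:

\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "distilbert-base-uncased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

question = "what price range?"
context = "Show cheap Italian restaurants"
enc = tokenizer(question, context, return_tensors="pt")
# Token index of the answer span ("cheap"); in practice computed
# from the character offsets stored in the SQuAD-format data.
start = torch.tensor([7])
end = torch.tensor([7])

model.train()
optimizer.zero_grad()
out = model(**enc, start_positions=start, end_positions=end)
out.loss.backward()          # span-extraction cross-entropy loss
optimizer.step()
\end{verbatim}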
It is clear that the QANLU models outperform the classification-based models, often by a wide margin. For instance, for the ALBERT-based model, in the case where there is at least 1 sample per slot, the QA-based model outperforms the classification-based model by 26\% (86.37 vs 68.26). It is notable that the gap between the two approaches narrows as the number of samples increases, with the exception of intent detection for the uniform sample with only 10 samples. In a closer look at this sample, the intent for all the records is the same (``atis\_flight'', which is the intent for 74\% of the ATIS training set), and that could explain why the models almost always predict the same value for the intent. The fact that for both DistilBERT- and ALBERT-based models QANLU significantly outperforms the intent and slot classification models in few-shot settings indicates that the performance improvements likely stem from transferring the reading comprehension that is learned in the QA task. In this set of experiments we used handcrafted questions for each slot. One could argue that creating questions for slots is as difficult as, or perhaps more difficult than, getting data annotated specifically for intents and slots. To see if we can bypass the manual question-creation process, we also experimented with questions that were created using frames based on a brief description of each slot, as well as using the tokenized slot names. These frame-based questions can easily be created for free by running some simple scripts. The experimental results show no significant degradation in the performance of QANLU models trained on frame-based questions. In another set of experiments we compare QANLU with another few-shot approach (few-shot IC/SF) proposed in \cite{krone-etal-2020-learning}. We use the exact same split of the ATIS dataset that was created in that paper. Results are in Table \ref{tbl:ic/sf}. \begin{table}[ht!] \footnotesize \centering \begin{tabular}{|c|c|c|} \hline & Few-shot IC/SF & QANLU\\ \hline F1 score & 43.10 & 68.69 \\ \hline \end{tabular} \caption{\footnotesize QANLU vs Few-shot IC/SF \cite{krone-etal-2020-learning} slot detection F1. 43.10 is reported in Table 5 of \cite{krone-etal-2020-learning}. \vspace{-3mm}} \label{tbl:ic/sf} \end{table} The few-shot IC/SF results (43.10) are the average of multiple runs of a BERT model first pre-trained on the training set, then fine-tuned on a ``support'' set sampled from the test set, and then evaluated on a ``query'' set also sampled from the test set. We used the exact same training set that was used in that work and trained a BERT (base size) based QANLU model on it. We then directly evaluated that model on the exact same test set created in \cite{krone-etal-2020-learning}, without any fine-tuning on a support set. The resulting F1 score (68.69) is 60\% higher than what is reported in \cite{krone-etal-2020-learning}. \begingroup \setlength{\tabcolsep}{4mm} \renewcommand{\arraystretch}{1} \begin{table*}[ht!]
\scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{4}{c|}{\bf{Intent}} & \multicolumn{4}{c|}{\bf{Slot}} \\ \hline & & \multicolumn{2}{c|}{DistilBERT} & \multicolumn{2}{c|}{ALBERT} & \multicolumn{2}{c|}{DistilBERT} & \multicolumn{2}{c|}{ALBERT}\\ \hline & \( \mathcal{N} \) & QANLU & Cls & QANLU & Cls & QANLU & Cls & QANLU & Cls \\ \hline \multirow{6}{*}{\( \mathcal{N} \) uniform} &10&71.80 &71.78 &72.18 &71.78 &\bf{67.23} &61.60 &\bf{64.24} &54.78\\ \multirow{6}{*}{samples} & 20&\bf{83.95} &77.80 &\bf{83.28} &75.36 &\bf{78.53} &56.70 &\bf{74.53} &51.67\\ &50 &\bf{86.07} &78.93 &\bf{86.32} &73.90 &\bf{83.84} &76.61 &\bf{80.26} &74.04\\ &100 &\bf{93.08} &87.91 &\bf{92.14} &80.20 &\bf{85.69} &80.34 &\bf{83.13} &77.50\\ &200 &\bf{94.30} &90.97 &\bf{96.78} &85.02 &\bf{91.24} &85.32 &\bf{89.57} &83.63\\ &500 &\bf{96.40} &95.45 &\bf{96.77} &90.62 &\bf{92.31} &91.15 &\bf{91.18} &86.69\\ \hline \multirow{4}{*}{\( \mathcal{N} \) samples per} &1 (75) & \bf{88.72} &86.47 &\bf{90.91} &84.93 &\bf{88.47} &76.24 &\bf{86.37} &68.26 \\ \multirow{4}{*}{slot (Total)} &2 (136) & \bf{91.68} &84.91 &\bf{92.11} &82.42 &\bf{90.77} &84.42 &\bf{90.17} &79.49 \\ &5 (302) & \bf{94.34} &93.74 &\bf{95.52} &87.47 &\bf{93.11} &91.38 &\bf{87.82} &86.50 \\ &10 (549) & \bf{97.10} &96.19 &\bf{94.21} &92.73 &\bf{94.11} &93.93 &\bf{92.27} &91.68 \\ \hline \multirow{4}{*}{\( \mathcal{N} \) samples per} &1 (17) &\bf{40.32} &27.91 &\bf{54.49} &25.73 &\bf{62.57} &55.38 &\bf{62.22} &51.05\\ \multirow{4}{*}{intent (Total)}&2 (33) &\bf{78.24} &47.20 &\bf{62.22} &23.52 &\bf{75.39} &65.09 &\bf{74.99} &61.01\\ &5 (81) &\bf{86.49} &74.08 &\bf{89.36} &41.28 &\bf{84.40} &80.25 &\bf{82.70} &71.83\\ &10 (152) &91.23 &91.16 &\bf{90.13} &68.93 &\bf{88.37} &83.40 &\bf{86.32} &78.25\\ \hline All & N/A (4478) & 98.23 & 98.37 & 97.59 & 97.90 & 95.70 & 95.80 & 94.48 & 95.37\\ \hline \end{tabular} \caption{\footnotesize QANLU vs. intent and token classification (Cls) \cite{DBLP:journals/corr/abs-1902-10909} for ATIS in few-shot settings. Each row is associated with a different sampling size and strategy of ATIS data. Values in bold represent a statistically significant difference at p-value 0.05. Note that QANLU performs significantly better (in some cases by more than 20\%) compared to joint intent and slot classification.\vspace{-3mm}} \label{tbl:fewshot} \end{table*} \endgroup \subsection{Restaurants-8k} \subsubsection{QANLU for Restaurants-8k} The Restaurants-8k dataset \cite{coope2020spanconvert} is a set of annotated utterances coming from actual conversations in the restaurant booking domain. The dataset only contains the user-side utterances and slot (5 in total) annotations. The system side of the conversations is missing, but given the set of slots that are annotated at every user turn, using simple frames we can build a full context for the token classification and QANLU approaches. The rest of the data preparation process is identical to what we described in Section \ref{sec:data_prep}. We take both uniform and stratified samples of the training data to create few-shot settings for training QANLU models, and compare the results with token classification models. The QANLU model is again a QA model trained on SQuAD2.0 (``distilbert--base--uncased--distilled--squad''\footref{hf}) that we fine-tune on the sampled training sets. The token classification model is built on top of ``distilbert--base--uncased''\footref{hf}.
The results are captured in the curves ``QANLU (SQ\(\rightarrow\)R8k)'' (SQ stands for SQuAD2.0 and R8k stands for Restaurants-8k) and ``Cls'' (which stands for token classification and, similar to the ATIS case, is based on \cite{DBLP:journals/corr/abs-1902-10909} without the sentence classification head) in Figure \ref{fig:seq_trans}. We discuss the results in the next subsection. \subsubsection{Sequential Transfer Learning from ATIS to Restaurants-8k} In another set of experiments we study whether QANLU would enable transfer learning from one NLU domain to another. This is referred to as sequential transfer learning in the literature. For this purpose we fine-tune a QANLU model that was trained on the entire ATIS training set on samples of the Restaurants-8k dataset. We compare the performance of the resulting model with QANLU first trained on SQuAD2.0 and then fine-tuned on Restaurants-8k samples, as well as with the token classification model. \subsubsection{Restaurants-8k Results} In Figure \ref{fig:seq_trans} the curve QANLU (SQ\(\rightarrow\)ATIS\(\rightarrow\)R8k) is the sequential transfer learning model based on the ``distilbert--base--uncased--distilled--squad''\footref{hf} model (DistilBERT base model trained on SQuAD2.0). From the figure we can see that, except for 10 and 20 uniform samples, fine-tuning SQuAD2.0 QA models on Restaurants-8k results in significantly higher F1 scores than the token classification approach for all the samples. For uniform samples of size 10 and 20 the QANLU model (trained on SQuAD2.0 and fine-tuned on Restaurants-8k samples) performs poorly. Our intuition on the reason behind this poor performance is the small number of questions and answers for these samples (15 per record), which is most likely not sufficient for the model to learn how to handle NLU-style questions. On the other hand, for the sequential transfer learning QANLU model (the SQ\(\rightarrow\)ATIS\(\rightarrow\)R8k curve of Figure \ref{fig:seq_trans}) we see that the model outperforms both the token classification model and the QANLU model trained on SQuAD2.0 and fine-tuned on Restaurants-8k samples by a wide margin (in some cases by over 50\%). These numbers are also shown in Figure \ref{fig:seq_trans}. This suggests that using QA as the canonical problem to which NLU problems from different domains are mapped could facilitate transfer learning across these NLU problems, especially in few-shot settings. Also note that when the entire data is used for training, the performance difference vanishes (96.98 for SQ\(\rightarrow\)R8k, 96.43 for SQ\(\rightarrow\)ATIS\(\rightarrow\)R8k, and 95.94 for Cls), which suggests that the QANLU approach is as strong as the state of the art outside of few-shot settings. Figure \ref{fig:span-convert} also shows a comparison between QANLU and Span-ConveRT \cite{coope2020spanconvert} in few-shot settings. The few-shot F1 scores of Span-ConveRT on Restaurants-8k are borrowed from Table 3 of \cite{coope2020spanconvert}. In these experiments, in order to match the settings of Span-ConveRT, we do not create the previous turn for the context, hence the difference between the QANLU numbers in Figure \ref{fig:span-convert} and those in Figure \ref{fig:seq_trans}. From this figure it is notable that with 20 data points QANLU reaches a higher performance than Span-ConveRT achieves with 256 data points, which translates to a 10x reduction in the amount of data needed. Also, with the entire training set QANLU performs within less than 1\% of the state of the art.
\begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{lc_w_lbl.png} \caption{\footnotesize Slot detection with QANLU vs token classification. SQ\(\rightarrow\)R8k indicates QANLU first trained on SQuAD2.0 and then fine-tuned on samples of Restaurants-8k. SQ\(\rightarrow\)ATIS\(\rightarrow\)R8k is QANLU first trained on SQuAD2.0, then fine-tuned on the entire ATIS, and then fine-tuned on samples of Restaurants-8k (sequential transfer learning). Cls stands for the token classification approach. Numbers associated with each point are F1 scores.\vspace{-4mm}} \label{fig:seq_trans} \end{figure} \vspace{-1mm} \begin{figure}[t!] \centering \includegraphics[width=0.45\textwidth]{span_convert_no_prev.png} \caption{\footnotesize QANLU compared to Span-ConveRT \cite{coope2020spanconvert} in few-shot settings. The numbers associated with each point are the sample size and F1, respectively. \vspace{-7mm}} \label{fig:span-convert} \end{figure} \section{Discussion} The customary approach of feeding the token embeddings of a sentence into a network and mapping the output of the network for each token onto a certain number of classes seems somewhat far from our intuition of how humans understand natural language. The main research question that we try to answer is whether all NLP problems can be efficiently and effectively mapped to one canonical problem. If the answer is yes, could that canonical problem be QA? In this work we scratch the surface of these questions, in that we showcase the strength of the transfer learning that happens in this paradigm in learning from few examples for intent and slot detection. But our experiments were limited to the span detection QA problem and SQuAD2.0 QA data. Future work will include going beyond this configuration and also expanding across different NLP problems. It would be interesting to measure how much transfer of knowledge can be achieved across different NLP tasks. Another future direction could be studying how the questions for QANLU could be generated automatically based on the context. One interesting side product of QANLU is that the questions and answers created for NLU tasks could augment the questions and answers of the QA task (SQuAD2.0 in this work) in order to improve the QA model performance. To study this idea we used the exact training script that Huggingface provides for training QA models, on SQuAD2.0 and also on SQuAD2.0 augmented with the questions and answers that we created for ATIS QANLU. The training scripts specify 2 training epochs. It could be argued that this comparison would not be fair, since 2 passes over the augmented data mean many more optimization steps, as there are many more questions and answers in the augmented data. To account for this we also ran the training on the original SQuAD2.0 data for the same number of optimization steps as it takes to run 2 epochs on the augmented data (9000 steps). The results (QA F1 on the validation set) are shown in Table \ref{tbl:data_aug}. As the numbers show, training the same models on the augmented data significantly improves the performance of the final QA model on the development set of SQuAD2.0. We believe this result could be an indication that we can not only transfer from QA to other NLU tasks, but also improve QA through data augmentation by mapping NLU problems to QA. \begin{table}[ht!]
\scriptsize \centering \begin{tabular}{|c|c|c|c|} \hline & SQuAD2.0 & SQuAD2.0 + ATIS & SQuAD2.0\\ & (2 epochs) & (2 epochs = 9k steps) & (9k steps)\\ \hline ``bert-base-cased'' & 70.07 & 74.29 & 65.42 \\ ``distilbert-base-uncased'' & 55.58 & 60.26 & 57.03\\ ``albert-base-v2'' & 78.05 & 79.26 & 76.44\\ \hline \end{tabular} \caption{\footnotesize F1 scores of QA models on the original SQuAD2.0 and on SQuAD2.0 augmented with ATIS QANLU data. Data augmentation improves the performance of QA models.} \label{tbl:data_aug} \end{table} \bibliographystyle{IEEEbib}
\section{Introduction} Protoplanetary disks are expected to undergo dramatic morphological changes concurrent with the processes of planet formation. Transition disks have been suggested to be observational signatures of this evolution, with their heavily depleted inner dust cavities possibly cleared by nascent gas-giant planets \citep[e.g.][]{Papaloizou_2007, Zhu_2011}. Observations have uncovered disks with both fully and partially depleted cavities, often with remnants of an inner disk near the host star \citep[e.g.][]{Calvet_2002, Calvet_2005, DAlessio_2005, Najita_2007, Espaillat_2010, Espaillat_2011, Andrews_2011}. Recent observations, however, have revealed more complex disk dust structures. Multiple dust rings have been imaged in the disks around HL Tau, TW Hya, HD 163296, and HD 169142 \citep{ALMA_2015, Andrews_2016, Isella_2016, Fedele_2017}, and visibility modeling has suggested one other candidate system, DM Tau \citep{Zhang_2016}. These rings, and the gaps between them, may trace planet formation at its earliest stages \citep[e.g.][]{Flock_2015, Ruge_2016}, although other explanations have also been proposed \citep[e.g.][]{Zhang_2015a, Okuzumi_2016}. At small scales, the disk around the low mass (0.85 M$_{\odot}$) T Tauri star AA Tau exhibits compelling hints of substructures analogous to those being found in other disks \citep{Bouvier_1999}. AA Tau is considered the archetype of a class of young stars with a peculiar form of inner disk driven photometric variability \citep[e.g.][]{McGinnis_2015, Sousa_2016}. While investigating the effects of magnetic fields on inner disk accretion flows, \cite{Bouvier_1999} discovered that AA Tau has photometric variations with an 8.5 day period, similar to the stellar rotation period. As the AA Tau system was thought to be viewed at a high inclination \citep{Basri_1989, Shevchenko_1991, Kwan_1997}, Bouvier et al. suggested that the light curve could be explained by periodic occultation of the star by a warped inner disk. This odd light curve has since motivated intense multi-wavelength scrutiny of AA Tau \citep[e.g.][]{Menard_2003, Andrews_2007, Schmidtt_2007, Oberg_2010, Cox_2013, Zhang_2015b}. In 2011, a substantial dimming ($\sim$2 mag, \textit{V}-band) of the system was also observed, accompanied by significant reddening in the near-IR \citep[$\sim$3-4 mags of visual extinction,][]{Bouvier_2013}. The system has not emerged from this state since \citep{Rodriguez_2015}. A possible interpretation of these optical light variations is an extended non-axisymmetric feature (such as a disk warp or protoplanet) passing in front of the star at a distance of \textgreater8 AU, assuming a distance of 145~pc to AA Tau \citep{Bouvier_2013, Rodriguez_2015}. This naturally suggests an investigation of the interplay between fine-scale structures in the outer disk and the inner disk morphology. High resolution ALMA observations of the mm dust disk are therefore an important first step in directly observing such phenomena. In this paper, we present ALMA observations of the disk around AA Tau that identify it for the first time as multi-ringed, with its mm dust inclination differing substantially from previously inferred inner disk and scattered light inclinations. \S2 describes the details of the observations and the data reduction. \S3 presents the imaged data and an analysis of the visibilities. 
In \S4, we investigate the discrepancy between our derived inclination and previous measurements, and speculate on the interplay between the outer disk observations and possible inner disk structures. \section{Observations} AA Tau was observed on 2015 July 25 in Band 7 and on 2015 Sept. 29 in Band 6 as part of the ALMA cycle 2 project 2013.1.01070.S. Band 7 observations included 35 antennas with projected baseline lengths between 12 and 1455~m (11-1410~k$\lambda$). The total on-source integration time was 56 minutes. The correlator setup included a Time Division Mode (TDM) continuum window centered at 278.0~GHz with a bandwidth of 2~GHz, as well as two continuum chunks in Frequency Division Mode (FDM) spectral windows, centered at 279.5~GHz and 288.2~GHz with bandwidths of 234 and 469~MHz, respectively. The total continuum bandwidth was 2.7~GHz. Band 6 observations included 30 antennas with projected baseline lengths between 42 and 2065~m (36-1850~k$\lambda$). The total on-source integration time was 61 minutes. The correlator setup included two TDM mode continuum windows centered at 253.3 and 269.3~GHz, each with a bandwidth of 2~GHz, and an FDM spectral window centered at 255.5~GHz with a bandwidth of 469~MHz. The total continuum bandwidth was 4.47~GHz. HCO$^+$ 3--2 was targeted in an FDM spectral window centered at 267.6~GHz, with a channel spacing of 61~kHz (0.068~km s$^{-1}$). Separate spectral line Band 7 observations targeting $^{13}$CO in AA Tau were taken on 2016 July 21 as part of the ALMA cycle 3 project 2015.1.01017.S. These observations included 39 antennas, with projected baseline lengths between 12 and 1028~m (13-1170~k$\lambda$). The total on-source integration time was 45 minutes. An FDM spectral window was centered on $^{13}$CO 3--2 at 330.6~GHz. For the Band 7 continuum observations, the quasar J0423-0120 was used for both phase and bandpass calibration and the quasar J0510+1800 was used for flux calibration. For the Band 6 observations, J0510+1800 was used for phase and bandpass calibration, and J0423-0120 was used for flux calibration. While analyzing the delivered calibrated data, we discovered that the two flux calibrations are in conflict, suggesting an unphysical spectral index of the AA Tau disk ($\alpha\sim$0). The raw Band 7 measurements of J0510+1800 and J0423-0120 (2.3 and 1.0~Jy, respectively) are in good agreement with ALMA calibrator catalog flux values (2.2 and 0.9~Jy, respectively). In contrast, the ratio of the raw Band 6 measurements of J0510+1800 and J0423-0120 (2.1 and 1.0~Jy, respectively) conflicts with the ratio of the catalog values (1.7 and 1.0~Jy, respectively). Due to the narrow spectral windows used for molecular line observations in the correlator setup, phase calibration of the data required mapping of solutions between spectral windows, and an error may have been introduced in this step. We correct for this conflict by using J0510+1800 as the flux calibrator for both the Band 6 and Band 7 observations, adjusting the flux scaling of the Band 6 observations to correct the J0510+1800 discrepancy (2.1~Jy derived vs 1.7~Jy catalog). The applied flux correction results in a physical spectral index of $\sim$2, but also introduces a substantial uncertainty in the absolute continuum flux of the Band 6 data. While this uncertainty is not problematic when addressing the radial structure of the dust, it does preclude an analysis of the spectral index and thus the grain size distribution across the AA Tau disk. 
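For reference, the spectral index used here is defined through $F_\nu \propto \nu^{\alpha}$, so that for two continuum measurements
\begin{equation}
\alpha = \frac{\ln\left(F_{\nu_1}/F_{\nu_2}\right)}{\ln\left(\nu_1/\nu_2\right)},
\end{equation}
and a multiplicative error in one band's flux scale propagates directly into $\alpha$: the uncorrected Band 6 calibration implied $\alpha\sim0$, while rescaling the Band 6 fluxes by the catalog-to-measured flux ratio of J0510+1800 ($1.7/2.1\approx0.81$) recovers the physically expected $\alpha\sim2$.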
After fixing the flux calibration, we used the disk continuum emission to perform two rounds of phase-only self-calibration in CASA version 4.3. \section{Results} \subsection{Continuum Observations} The Band 6 and 7 continuum observations were concatenated and imaged using multi-frequency synthesis \texttt{CLEAN} with a reference frequency of 271.6~GHz. A high sensitivity image of the AA Tau dust continuum (Fig. \ref{Figure 1}, panel a) was created using Briggs `robust' weighting \citep{Briggs_1995}, with a robust parameter of 0, yielding a synthesized beam of 0\farcs18$\times$0\farcs12 (26$\times$17~AU at 145~pc) at a PA of $\sim$27$\degree$ and an rms noise of 44~$\mu$Jy bm$^{-1}$. As seen in the figure, AA Tau hosts a system of nested dust rings with an apparent inclination of $\sim$59$\degree$, which we confirm through visibility modeling (see \S3.2). This is substantially lower than the 71$\pm$1$\degree$ previously fit to infrared scattered light emission \citep{Cox_2013}, and we discuss this discrepancy in \S4.1. Fig. \ref{Figure 1}, panel (b) highlights the location of the three rings in a deprojected and azimuthally averaged radial intensity profile, calculated from the image in panel (a) using the model-constrained inclination of 59$\degree$ and PA of 93$\degree$. The third ring is marginally detected in this radial profile, but confirmed through our visibility modeling (see \S3.2). The three rings are nearly evenly-spaced, peaking at 0\farcs34, 0\farcs66, and 0\farcs99 (49, 95, and 143 AU). To better display structure in the inner two rings, we additionally imaged the data using super-uniform weighting (Fig. \ref{Figure 1}, panel c), yielding a synthesized beam of 0\farcs15$\times$0\farcs11 (22$\times$16~AU at 145~pc) at a PA of $\sim$33$\degree$ and a slightly higher rms noise of 60~$\mu$Jy bm$^{-1}$. A deprojected and azimuthally averaged radial profile is shown in Fig. \ref{Figure 1}, panel (d). The higher resolution image shows that the inner ring has apparently symmetric azimuthal variations, and there is a `bridge' of emission across the central clearing. At the resolution of the current observations, it is not immediately clear whether this `bridge' is caused by an unresolved inner disk, spiral arms \citep[e.g.][]{Muto_2012, Perez_2016}, or dust streamers similar to those hinted at in HD 142527 \citep{Casassus_2013}. We discuss these possibilities further in \S4.2. \begin{figure*}[ht!] \centering \includegraphics[width=0.8\textwidth]{Figure_1.pdf} \caption{\textit{Panel a:} Synthesized image of the AA Tau dust continuum using combined Band 6 and Band 7 data with a Briggs weighting of robust=0. The beam is 0\farcs18$\times$0\farcs12 and the rms noise is $\sim$44~$\mu$Jy bm$^{-1}$. Contours are $[5,10,20,40,60,...]\times\sigma$. To highlight weaker emission, a power-law stretch has been applied to the color scheme ($\gamma$=0.55). \textit{Panel b:} Deprojected and azimuthally averaged radial profile of the image in panel (a). Data are shown in black and standard deviation of the mean in each radial bin in shaded blue. Black dashed lines denote ring locations derived from our model fit (see \S3.2) and the gray curve represents the synthesized beam. \textit{Panel c:} Same as panel (a), but imaged using a super-uniform weighting to yield a higher spatial resolution. The beam is 0\farcs15$\times$0\farcs11 and the rms noise is $\sim$60~$\mu$Jy bm$^{-1}$. The color scheme is linear. \textit{Panel d:} Same as panel (b), but for the image shown in panel (c). 
\label{Figure 1}} \end{figure*}

\subsection{Continuum modeling}

We have interpreted the continuum emission with a simple parametric model composed of three Gaussian rings and a Gaussian inner disk. For each ring, the peak radius, width (FWHM), inclination, position angle, integrated flux in each band, and central offset were allowed to vary. As the observations do not have sufficient spatial resolution to constrain the PA and inclination of the inner disk, we fix the PA to 93$^{\degree}$, constrained by the PA of the observed jet \citep{Cox_2013}, and the inclination to 75$^{\degree}$, constrained by photopolarimetry modeling \citep{OSullivan_2005}. The FWHM and flux of the inner disk were allowed to vary. Two nuisance parameters were added to constrain the central position of the inner disk, which was then treated as the reference point for the outer ring offsets. This 29-parameter model was fit to the observed visibilities using the MCMC routine \texttt{emcee} \citep{Mackey_2013} and the visibility sampling routine \texttt{vis\_sample}\footnote{\texttt{vis\_sample} is publicly available at \url{https://github.com/AstroChem/vis_sample} or in the Anaconda Cloud at \url{https://anaconda.org/rloomis/vis_sample}}, yielding a final best-fit model with a reduced $\chi^{2}$ value of 1.02.

\begin{table*} \begin{center} \small \begin{threeparttable}[b] \caption{Fit model parameters}
\begin{tabular}{c|cccc}
\toprule \midrule[\heavyrulewidth]
& Inner Disk & Ring 1 & Ring 2 & Ring 3 \\ \midrule[\heavyrulewidth]
r (AU) & -- & 48.7$\pm$0.1 & 94.9$\pm$0.2 & 142.6$\pm$0.6 \\
FWHM (AU) & 5.4$\pm$1.1 & 22.6$\pm$0.2 & 28.7$\pm$0.7 & 26.4$\pm$1.5 \\
i ($\degree$) & 75$^{a}$ & 58.8$\pm$0.1 & 59.0$\pm$0.1 & 59.6$\pm$0.3 \\
PA ($\degree$) & 93$^{a}$ & 94.1$\pm$0.1 & 92.2$\pm$0.1 & 93.3$\pm$0.3 \\
Flux Band 6 (mJy) & 2.1$\pm$0.04 & 45.3$\pm$0.2 & 29.0$\pm$0.4 & 10.2$\pm$0.3 \\
Flux Band 7 (mJy) & 2.2$\pm$0.05 & 61.0$\pm$0.2 & 34.4$\pm$0.4 & 8.5$\pm$0.3 \\
$\Delta\alpha$ (AU) & -- & 0.0$\pm$0.1 & -0.3$\pm$0.2 & -1.4$\pm$0.4 \\
$\Delta\delta$ (AU) & -- & -1.1$\pm$0.2 & -2.4$\pm$0.2 & -4.3$\pm$0.4 \\
\bottomrule
\end{tabular}
\begin{tablenotes} \item[a] Fixed during fit \end{tablenotes} \end{threeparttable} \end{center} \end{table*}

The best-fit values and 68\% confidence intervals (1$\sigma$) for all model parameters are presented in Table 1. The retrieved ring radii are 48.7$\pm$0.1, 94.9$\pm$0.2, and 142.6$\pm$0.6~AU, nearly evenly spaced. The FWHM widths of the rings range from 22 to 29~AU, not much larger than the beam along the major axis of the disk, suggesting that the individual rings are, at best, marginally resolved and may contain further sub-structure. In addition to ring locations and widths, we constrain the ring inclinations to an average of 59.1$\degree\pm$0.3$\degree$, significantly lower than the 71$\degree\pm$1$\degree$ that \cite{Cox_2013} fit to their scattered light observations. In contrast, the fit PAs of the rings are 92-94$\degree$, in good agreement with \cite{Cox_2013} and the predictions of \cite{Menard_2003} from AA Tau's polarization curve. We also find small (\textless~5~AU) mutual offsets between the ring centers, which may be indicative \begin{figure*}[ht!] \centering \includegraphics[width=0.8\textwidth]{Figure_2.pdf} \caption{Comparison of dust continuum observations and best-fit model. \textit{Left:} Deprojected and azimuthally averaged radial profile of the imaged continuum in Fig. \ref{Figure 1}, panel (a).
Data are shown in black and the standard deviation of the mean in each radial bin in shaded blue. Black dashed lines denote ring locations derived from the model fit and the gray curve represents the synthesized beam. Our best-fit model is overlaid in dashed red. \textit{Right:} Deprojected real visibilities from the Band 6 and Band 7 continuum observations, binned at 10~k$\lambda$ intervals. Data are shown in black, the standard deviation of the mean in each radial bin in shaded blue, and the best-fit model in dashed red. \label{Figure 2}} \end{figure*} \noindent{of geometries not considered by our simple model (e.g. eccentricity in the rings).}

This parametric model replicates the observed visibilities and the azimuthally averaged radial intensity profile well (Fig. \ref{Figure 2}). When comparing the imaged data and simulated observations (using the CASA task \texttt{simobserve}) of our best-fit model, however, it becomes clear that an axisymmetric model does not fit the data perfectly (Fig. \ref{Figure 3}). After subtracting the model visibilities from the data and imaging the residuals, we find structured residuals with a peak of $\sim$12~$\sigma$ ($\sigma$ = 44~$\mu$Jy bm$^{-1}$), suggesting that there is azimuthal structure which is not captured by our models. This is consistent with the symmetric azimuthal variations seen in the imaged data (Fig. \ref{Figure 3}, left). Beam convolution often creates artificial bright spots on either side of an inclined ring, but the simulated observations show that the opposing `tails' of emission cannot be explained by this effect.

\begin{figure*}[ht!] \centering \includegraphics[width=\textwidth]{Figure_3.pdf} \caption{\textit{Left:} Synthesized image of the AA Tau dust continuum using combined Band 6 and Band 7 data with a Briggs weighting of robust=0. The beam is 0\farcs18$\times$0\farcs12 and the rms noise is $\sim$44~$\mu$Jy bm$^{-1}$. Contours for all panels are $[5,10,20,40,60,...]\times\sigma$. \textit{Middle:} Sim-observed model. \textit{Right:} Imaged residuals with ring locations overlaid in dashed red. \label{Figure 3}} \end{figure*}

To address the origin of these residuals, we tested several variations of our simple concentric ring model. First, we fit a model with independent ring parameters for each of the Band 6 and Band 7 datasets. We found small differences between the bands for all parameters, but they remained broadly consistent with the values in Table 1, and the residuals were not improved over the previously described model. As both models produced structured residuals, it is unclear whether the differences between the Band 6 and Band 7 parameters are real. Given this structure in the residuals, we also investigated whether a combination of two point sources embedded in the disk near the innermost ring could replicate the observations. We added to the model two variable-strength point sources fixed at the innermost ring radius with variable azimuthal locations, and fit it identically to the first model. Residuals improved ($\sim$4$\sigma$) but remained structured, suggesting that the responsible feature is resolved and not point-like. Finally, we tested whether eccentricity could be responsible by adding two model parameters to the innermost ring: an eccentricity and an angle of perihelion. An ellipse with a Gaussian cross-section was used to describe the ring, with a 1/r$^2$ term added to approximately account for pericenter glow \citep[e.g.][]{Wyatt_1999}.
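For concreteness, one minimal parameterization consistent with this description (our notation, not necessarily the exact form used in the fit) places the star at one focus of the ellipse, so that the ring radius varies with azimuth, and modulates a Gaussian radial cross-section by a $1/r^{2}$ pericenter-glow factor:
\begin{equation*}
I(r,\theta) \propto \exp\!\left[-\frac{\left(r - r_{\rm ring}(\theta)\right)^{2}}{2\sigma^{2}}\right]\left(\frac{a}{r_{\rm ring}(\theta)}\right)^{2}, \qquad r_{\rm ring}(\theta) = \frac{a\,(1-e^{2})}{1 + e\cos(\theta - \omega)},
\end{equation*}
where $a$ is the semi-major axis, $e$ the eccentricity, $\omega$ the angle of perihelion, and $\sigma$ the Gaussian width (FWHM$/2\sqrt{2\ln 2}$).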
After fitting the model using the procedure previously described, we found that a slight amount of eccentricity was preferred for the inner ring, but the residuals barely improved ($\sim$1$\sigma$) and remained structured. We therefore conclude that eccentricity alone cannot explain the observed azimuthal structure. Several other possible explanations are discussed in \S4.2.

\subsection{Spectral line emission}

\begin{figure*}[ht!] \centering \includegraphics[width=0.8\textwidth]{Figure_4.pdf} \caption{\textit{Panel a:} Moment-0 map showing total integrated HCO$^+$ 3--2 emission. The beam is 0\farcs23$\times$0\farcs15 at a PA of $\sim$25$\degree$ and the rms noise is $\sim$5~mJy bm$^{-1}$ km s$^{-1}$. Locations of the three rings are overlaid in dashed red. \textit{Panel b:} Deprojected and azimuthally averaged radial profile of the HCO$^+$ emission shown in panel (a). Data are shown in black and the standard deviation of the mean in each radial bin in shaded blue. Red dashed lines denote ring locations and the gray curve represents the synthesized beam. \textit{Panel c:} Moment-0 map showing total integrated $^{13}$CO 3--2 emission. The beam is 0\farcs29$\times$0\farcs23 at a PA of $\sim$40$\degree$ and the rms noise is $\sim$6~mJy bm$^{-1}$ km s$^{-1}$. Locations of the three rings are overlaid in dashed red. \textit{Panel d:} Same as panel (b), but for the $^{13}$CO emission shown in panel (c). \label{Figure 4}} \end{figure*}

To trace molecular gas kinematics in the disk, we analyzed HCO$^+$ 3--2 emission. The observations were continuum subtracted using \texttt{uvcontsub} and then imaged with \texttt{CLEAN} using natural weighting at a channel resolution of 0.4~km s$^{-1}$. A custom \texttt{CLEAN} mask was created for each channel to match the emission, and \texttt{CLEAN}ing was terminated when residuals reached 3$\sigma$. An integrated emission map (Fig. \ref{Figure 4}, panel a) shows the HCO$^+$ emission to be quite bright interior to the innermost ring, compared to the outer disk emission. This is especially clear in a deprojected and azimuthally averaged HCO$^+$ radial profile (Fig. \ref{Figure 4}, panel b), where the emission intensity increases by more than a factor of three interior to the innermost ring. In contrast, $^{13}$CO 3--2 emission (Fig. \ref{Figure 4}, panels c~\&~d, imaged identically to the HCO$^+$) is not centrally peaked, and its radial profile is essentially flat within the inner dust ring. The position angle and inclination of the dust continuum were used for deprojection. The radial profiles show kinks near the locations of the inner two rings, but it is unclear whether this reflects gas density variations, dust opacity effects, or artifacts of deprojecting flared emission with a single inclination. In any case, the presence of sharply centrally peaked HCO$^+$ emission and flat $^{13}$CO emission interior to the inner dust ring is interesting, as it implies that a high bulk gas density (traced by $^{13}$CO) interior to the ring is not responsible for the HCO$^+$ emission intensity. The bright inner disk emission must therefore result from peculiar excitation effects or enhanced HCO$^+$ formation chemistry (e.g. from a high-ionization environment). The HCO$^+$ moment-1 map (Fig. \ref{Figure 5}) is similarly intriguing, showing a twist interior to the inner dust ring, with the emission misaligned by $\sim$10$\degree$ with respect to the continuum orientation.
This feature is similar to previous observations of CO in HD 142527 and HD 100546 \citep{Rosenfeld_2014, Pineda_2014} and HCO$^+$ in HD 97048 \citep{vanderPlas_2016, Walsh_2016}, and has been suggested to indicate either a disk warp or a radial flow.

\begin{figure}[ht!] \centering \includegraphics[width=0.47\textwidth]{Figure_5.pdf} \caption{Moment-1 map of HCO$^+$. The dust continuum orientation is shown in dashed black.\label{Figure 5}} \end{figure}

\begin{figure}[ht!] \centering \includegraphics[width=0.47\textwidth]{Figure_6.pdf} \caption{Position-velocity diagram of HCO$^+$ 3--2 emission around AA Tau, modeled after \cite{Pineda_2014}. Contours are $[3,6,9,...]\times\sigma$, $\sigma$~=~6.5~mJy bm$^{-1}$. The expected Keplerian velocity profiles for two disk inclinations (59$\degree$ and 40$\degree$) and a stellar mass of 0.85 M$_{\odot}$ are shown in red and orange, respectively. The location of the first dust ring is denoted by the gray dashed line. \label{Figure 6}} \end{figure}

A position-velocity diagram of the HCO$^+$ emission (Fig. \ref{Figure 6}) provides additional evidence for kinematics that cannot arise from a single Keplerian velocity field. The expected Keplerian velocity profiles for two inclinations (59$\degree$ and 40$\degree$) and a stellar mass of 0.85 M$_{\odot}$ are overlaid in red and orange, respectively. Although the higher inclination of 59$\degree$ describes the HCO$^+$ emission well in the outer regions of the disk, it is not consistent with the emission inside of the innermost ring, which is better fit by a lower inclination of 40$\degree$. Similar to the signature in the moment-1 map, this could be indicative of either a disk warp or a non-Keplerian radial flow \citep{Pineda_2014}. We discuss possible interpretations of these kinematic signatures in \S4.2.

\section{Discussion}
\subsection{Dust rings and inclination}

We have found that AA Tau hosts three nearly evenly spaced mm dust rings at an inclination of $\sim$59$^{\circ}$, adding it to a growing list of substructured disks with rings and gaps. In contrast to HL Tau and TW Hya \citep{ALMA_2015, Andrews_2016}, however, which host power-law dust disks with numerous narrow gaps, the dust in AA Tau is distributed in rings with broad gaps (Fig. \ref{Figure 1}, panel c), more similar to HD 163296 and HD 169142 \citep{Isella_2016, Fedele_2017}. If the dust gaps in AA Tau result from a planet-disk interaction, as suggested for HD 163296 and HD 169142, multiple massive planets might be involved \citep{Pinilla_2012, Picogna_2015}, although \cite{Gonzalez_2015} have shown that a single massive planet can also create multiple outer dust rings. More detailed modeling is necessary to interpret the observations in this vein, however, as our simple parametric model describes only the continuum surface brightness, rather than the disk surface density. The inclination (59.1$\pm$0.3$\degree$) of the outer disk dust rings we observe deviates significantly from the inclinations of both the scattered light disk \cite[71$\pm$1$\degree$,][]{Cox_2013} and the inner disk \cite[$\sim$75$\degree$, ][]{OSullivan_2005}. Understanding the true disk inclination is imperative, as a close-to-edge-on viewing geometry underpins the warped inner disk explanation of AA Tau's short-term variability. This discrepancy suggests that the previous inclinations were either skewed or carried larger uncertainties than reported, or that the inner and outer disks are misaligned \citep[e.g.][]{Marino_2015}. \begin{figure*}[ht!]
\centering \includegraphics[width=0.8\textwidth]{Figure_7.pdf} \caption{Schematic of possible AA Tau disk and streamer geometry. \textit{Bottom left:} Model modified to include streamers. \textit{Bottom middle:} Observations of the streamer model simulated at the current resolution. \textit{Bottom right:} Simulated high resolution (0\farcs05) ALMA observations of the streamer model. \label{Figure 7}} \end{figure*}

Supporting the third possibility, we find an opposite absolute disk orientation compared to \cite{Cox_2013} (i.e. they report an inclined disk with a northern near side, while we find a southern near side), with both orientations being fairly secure. Their orientation is derived both from the scattered light and from the observed jet, which is presumably aligned with the stellar axis and inner disk. In contrast, our orientation is derived from the geometry of the HCO$^+$ and $^{13}$CO emission in their channel maps (not shown). For moderately inclined disks, \cite{Rosenfeld_2013} have shown that emission arising from a vertically flared $\tau$=1 surface directly traces the absolute disk orientation. Misaligned inner and outer disks therefore appear to be the simplest explanation of all datasets, illustrated in a schematic diagram in Fig. \ref{Figure 7}. Such an orientation would remain consistent with the warped inner disk explanation of AA Tau's periodic photometric variability \citep{Bouvier_1999}, and the deviation between the mm and scattered light inclinations could be explained by shadowing from the inner disk \citep{Dong_2015}.

\subsection{Non-axisymmetric disk substructure}

The continuum observations show azimuthal variations in the innermost ring which are resolved and not explained by eccentricity alone. Several non-mutually exclusive possibilities could explain these observations. First, shadowing from a misaligned inner disk could affect the dust temperature, and therefore the emission, of the inner ring. Second, spiral arms may be present in the disk \citep[e.g.][]{Muto_2012, Perez_2016}. Third, gap-crossing streamers could be present in both dust and gas around AA Tau. Dust streamers have previously been suggested in two disks \citep{Casassus_2013, Dutrey_2014}, although the former was not substantiated in further observations \citep{Fukagawa_2013, Muto_2015}. Beam convolution could cause these streamers to manifest as a non-axisymmetric contribution to the inner ring. Due to the suggestive HCO$^+$ gas kinematics observed, we consider here the observable effects of this third scenario. In general, the need for gap-crossing flows is observationally motivated, as accretion onto the central star continues to be observed even when gaps are present in disks. The small circumstellar disk will be rapidly depleted by accretion unless it is replenished \citep[e.g.][]{Verhoeff_2011}, implying that material must be crossing the inner gap. Models suggest that planets can drive dynamical instabilities which allow material to funnel into gap-bridging filaments \citep[e.g.][]{DodsonRobinson_2011}. Observational evidence for these radial flows is mostly indirect \citep{Beck_2012, Rosenfeld_2014, Zhang_2015b, vanderPlas_2016, Walsh_2016}, but ALMA has begun to allow direct imaging \citep{Casassus_2013, Dutrey_2014}. Radial gas flows in the AA Tau disk have previously been suggested as an interpretation of infrared CO absorption measurements \citep{Zhang_2015b}. Our HCO$^+$ observations may provide further indirect evidence for such a flow.
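As context for the warp versus radial-flow interpretations discussed here, the Keplerian curves overlaid in Fig. \ref{Figure 6} follow from the line-of-sight projection of the Keplerian speed (our evaluation, using the \S3.3 values at the radius of the first ring):
\begin{equation*}
v_{\rm LOS}(r) = \sqrt{\frac{G M_{*}}{r}}\,\sin i \approx 3.9~{\rm km\,s^{-1}}\times\sin i \quad {\rm at}~r = 49~{\rm AU},~M_{*} = 0.85~{\rm M_{\odot}},
\end{equation*}
i.e. $\sim$3.4~km s$^{-1}$ for $i = 59\degree$ versus $\sim$2.5~km s$^{-1}$ for $i = 40\degree$, a difference readily distinguishable in the P-V diagram.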
The `kink' in the HCO$^+$ moment-1 map and the shape of the P-V diagram have both been previously invoked as signatures of radial gas flows and disk warps \citep{Rosenfeld_2014, Pineda_2014}. Furthermore, the extreme brightness and broad line-width of the emission within the innermost ring suggest enhanced HCO$^+$ formation, possibly in an ionizing environment (e.g. an accretion shock). If AA Tau does host a radial flow, the central `bridge' and the azimuthal asymmetry in the inner ring might then be explained by the presence of dust streamers. The addition of dust streamers to our best-fit model, shown in the bottom panels of Fig. \ref{Figure 7}, is consistent with the observed continuum emission within the inner cavity. Such streamers, which shear off from the walls of the inner ring and spiral into the inner disk, are additionally able to replicate the twisted azimuthal variations of the inner ring. Future higher resolution ($\sim$0\farcs05) ALMA Cycle 4 observations (Fig. \ref{Figure 7}, bottom right) will be able to distinguish between this scenario and a cavity with only an inner disk and no streamers.

\subsection{Relationship between disk structure and photometric variability}

As previously noted, AA Tau is the archetypal source for a class of stars with similar photometric variability. Recent observations have shown that a number of such `dipper stars' \citep{Ansdell_2016a} host millimeter dust disks at a wide range of inclinations \citep{Ansdell_2016b}, at odds with the warped edge-on inner disk explanation of their variability. As suggested in \cite{Ansdell_2016b}, misaligned inner and outer disks might explain this apparent dilemma, and our observations provide the first evidence for such a geometry in one of these systems. Furthermore, misaligned inner and outer disks in AA Tau would present the interesting possibility of gap-crossing material periodically intersecting our line of sight (Fig. \ref{Figure 7}). This provides a physical motivation for the non-axisymmetric over-density suggested by \cite{Bouvier_2013} and \cite{Rodriguez_2015} to explain the long-duration dimming of AA Tau. If an inner disk radius of several AU is assumed (consistent with our observations), then a direct path for a streamer between the inner ring and inner disk would suggest a LOS crossing distance between 5 and 10~AU, consistent with previous estimates of \textgreater8 AU \citep{Rodriguez_2015}. Higher resolution observations of AA Tau and similar objects will be needed to test both of these hypotheses.

\acknowledgments We would like to thank Ilse Cleeves, Joey Rodriguez, and Andrew Vanderburg for productive discussions. We also thank the anonymous referee for providing comments that greatly improved the quality of the manuscript. RAL and MAM gratefully acknowledge funding from National Science Foundation Graduate Research Fellowships and ALMA Student Observing Support. KIO acknowledges funding from the David and Lucile Packard Foundation. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2013.1.01070.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
\section{Introduction} \label{sec:intro}

\textit{Scale variation}, the phenomenon that detection quality varies dramatically from one scale to another, originates from the imbalanced distribution of objects across different scales and remains an unsolved challenge in object detection. In natural photographs, a balanced distribution of object patterns over different scales can hardly be guaranteed. Training a model without handling this issue will not only depress the capability of detecting objects of minority scales but also hinder the overall performance.

\begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Total_iters_performance.pdf} \caption{Performance of the baseline, multi-scale training, and our method as training proceeds. Experiments are conducted on Faster R-CNN~\cite{faster-rcnn} with ResNet-50~\cite{he2016deep} FPN~\cite{fpn}. Our method consistently boosts the performance even for much longer training periods, whereas the baseline and the multi-scale variant encounter severe over-fitting. Please refer to Table~\ref{tab:longerperiod} for more details.} \label{fig:total_iters_performance} \end{figure}

Generally, existing methods alleviate scale variation by means of \textit{data preparation} or \textit{model optimization}. For instance, in the data preparation literature, image pyramids~\cite{imagepyramid} and multi-scale training augment inputs with multiple resolutions. In model optimization, feature pyramids~\cite{fpn,panet,nas-fpn} enhance representations at different receptive levels. TridentNet and POD~\cite{tridentnet,peng2019pod} propose scale-invariant architectures for assembling dilated information. Ming~\textit{et al.}~\cite{ming2019group} design architectures with balanced loss penalization.

However, the above methods omit the collaboration between data preparation and model optimization. On the one hand, the preparation strategies do not fully exploit the information from model optimization, and are likely to produce static augmented data blind to the dynamic optimization requirements. As also shown in Figure~\ref{fig:total_iters_performance}, strategies like multi-scale training might encounter over-fitting if they persistently learn static data patterns\footnote{Unlike \cite{he2019rethinking}, which uses SyncBN~\cite{megdet} and GN~\cite{wu2018group}, we fix BN~\cite{ioffe2015batch} across all experiments for common settings.}. On the other hand, model optimization tends to be sub-optimal if no training data with the desired scale information is prepared. AutoAugment~\cite{autoaug,autoaugfordet} considers this collaboration during searching, but its preparation strategy is static in the re-training stage. Besides, existing dynamic training methods focus on collaboration with label assignment, sample mining, or feature aggregation, without considering data preparation.

In this paper, we propose a simple yet effective \textit{Dynamic Scale Training (DST)} paradigm to mitigate the scale variation issue. This is accomplished by designing a feedback-driven, dynamic data preparation paradigm that meets the optimization requirement. To capture this requirement, we track the penalization intensities, instantiated as loss proportions over different scales. For convenience, we adopt the loss proportion owing to the minority scale of objects as feedback, since this statistic reflects the scale-variation information of the most underrepresented samples under imbalanced optimization.
As is commonly acknowledged, we deem the small scale to be the minority. In general, the issues of concern are (1) how to devise a sufficiently handy data preparation strategy with the potential to handle scale variation, and (2) how to dynamically guide this strategy given the loss proportion of small objects as feedback. For the first issue, we introduce a collage fashion of down-scaled images\footnote{Unlike the direct re-scaling in multi-scale training, which might cause extra overhead due to the potentially large resolution of the augmented data.} (see Figure~\ref{fig:image_collage}). This augmentation potentially introduces objects of smaller sizes, which helps rectify the optimization bias towards the majority scales (medium and large objects). Crucially, for the second issue, we devise a feedback-driven decision paradigm to dynamically determine the exploitation of the collage data according to the loss statistics of the minority scales.

We experiment with our proposed DST method in various settings (backbones, training periods, and datasets). Results demonstrate that our method consistently enhances performance by handling scale variation. We also observe its versatility across tasks, improving the performance on instance segmentation. In summary, our contributions are two-fold: \begin{itemize} \item We propose a feedback-driven, dynamic data preparation paradigm for handling scale variation. \item We introduce a handy collage fashion of data augmentation, which is then guided by the feedback at runtime. \end{itemize}

\section{Related Works} \label{sec:relatedworks}

In this section, we give a brief retrospective of previous works on scale variation handling and then survey the literature on dynamic training in object detection.

\subsection{Scale Variation Handling} Current works for handling scale variation can be categorized into data preparation and model optimization.

\paragraph{Handling by Data Preparation} Resampling is an intuitive method to handle scale variation, equivalent to amplifying the loss magnitudes of certain scales. However, the improvement can be limited and might hurt the performance of the other scales (see Table~\ref{tab:multismallloss}). Image pyramids~\cite{imagepyramid} have been popular for remedying scale variation since the era of hand-crafted descriptor learning. In recent years, multi-scale training has become common for object detection. Features learned in this way are more robust to scale variation. However, both of the above strategies require additional overhead and storage consumption owing to transformed data with large resolutions. Moreover, since the target resolution is randomly chosen, an undesired data scale might be sub-optimal for handling scale variation. SNIP and SNIPER~\cite{SNIP,SNIPER} are advanced versions of image pyramids. SNIP~\cite{SNIP} is proposed to normalize the object scales under multi-scale training. SNIPER~\cite{SNIPER} samples patches, instead of regular inputs, for training; it meticulously crops chips around the foregrounds and backgrounds to obtain training samples. However, the above methods rely on multi-scale testing, which suffers from an inference burden. Also, their strategies are fixed as training proceeds, overlooking the merits of dynamics. Unlike the above specialized methods, customized augmentations like AutoAugment~\cite{autoaug,autoaugfordet} plausibly relieve the variation problem to some extent. These methods involve thousands of GPU days for optimizing the policy controller before actual re-training.
Moreover, the searched policy is also fixed during re-training, without adapting to the optimization. YOLOv4~\cite{yolov4} and Zhou~\textit{et al.}~\cite{cheap-pretrain} involve image processing similar to our collage fashion; we note that these are works concurrent to ours. YOLOv4 uses Mosaic as data augmentation, while Zhou~\textit{et al.} crop foreground patches to construct a jigsaw assembly for upstream classification. In contrast, our method focuses on utilizing collage images guided by dynamic feedback to handle scale variation.

\paragraph{Handling by Model Optimization} Another line of effort for handling scale variation mainly exists in scale-invariant model optimization. This usually falls into two categories: feature pyramids or dilation-based methods. Feature pyramid methods aggregate information from multi-resolution levels. For instance, SSD~\cite{ssd} detects objects by taking as input feature maps from different scales. Further, FPN~\cite{fpn} and its variants, e.g., PANet and NAS-FPN~\cite{panet,nas-fpn}, fully explore path aggregation to obtain high-level semantics across all scales. However, the aggregation manner is fixed during model learning, without adjustment for better training. On the other hand, dilation-based methods adaptively enlarge the receptive fields for scale robustness. Deformable Convolutional Networks~(DCN)~\cite{deformable} generalize dilated convolution with flexible receptive regions. TridentNet~\cite{tridentnet} and POD~\cite{peng2019pod} combine multiple branches with various dilation rates to extract scale-sensitive representations. However, dilation-based methods are not storage-friendly due to their high-resolution intermediate feature maps.

\subsection{Dynamic Training for Object Detection} Dynamic training in object detection currently appears mainly in online sample mining, feature aggregation, and label assignment. For sample mining, OHEM~\cite{ohem} exploits regions of interest (\textit{RoIs}) for hard example mining according to cost penalization. LapNet~\cite{lapnet} introduces dynamic loss weights to indirectly conduct sample mining. For feature aggregation, FSAF~\cite{FSAF} adaptively selects the most suitable features guided by the detection loss. ASFF~\cite{ASFF} automatically learns the aggregation manner by dynamic masking. For label assignment, Liu~\textit{et al.} propose HAMBox~\cite{hambox} with dynamic compensation for mismatched ground-truths. FreeAnchor~\cite{freeanchor} seeks adaptive anchor-target matching during optimization. In MAL~\cite{MAL}, the number of anchors shrinks progressively as training proceeds. ATSS~\cite{atss} proposes target-dependent training sample selection. Zhang~\textit{et al.} propose Dynamic R-CNN~\cite{dynamic-rcnn} for two-stage detectors, which progressively increases the Intersection-over-Union (\textit{IoU}) threshold for better label assignment. However, none of the above methods address data preparation, which is also critical to model training. In this paper, we propose an effective feedback-driven data preparation paradigm for scale variation handling.

\section{Methodology} \label{sec:Approach}

In this section, we first briefly discuss the scale variation issue. Subsequently, we introduce the feedback-driven data preparation paradigm, followed by the collage fashion of data augmentation. The overall pipeline of the proposed dynamic scale training framework is shown in Figure~\ref{fig:pipeline}.
\subsection{A Brief Discussion about Scale Variation} \label{sec:analysis}

\textit{Scale variation} refers to the phenomenon where models perform unfairly over different scales, featuring poor detection quality for objects of minority scales. This commonly results from imbalanced frequencies of occurrence for instances of different scales in the input images. Such an imbalanced distribution would probably lead to biased network optimization. In many cases, the minority scales are the small scales. Without loss of generality, we collect statistics on the MS COCO~\cite{coco} dataset and make the two observations below:
\begin{enumerate}[(a)]
\item \textit{Imbalance across the Dataset Is Not the Cause}: Small\footnote{We follow the scale protocol in MS COCO~\cite{coco}, referred to in Sec.~\ref{sec:implementation_details}. For fair annotation usage, we use the box area instead of the mask area as the size metric.} objects account for over 41\% of the instances in the dataset, contradicting the stereotype that they are rare. Nevertheless, they still suffer from low-quality detection.
\item \textit{Imbalance over Images Is What Matters}: Medium and large objects exist in 71\% and 83\% of the images, respectively. In contrast, only around 52\% of the images contain small objects.
\end{enumerate}
Based on these observations, we believe that it is the imbalance over the image distribution that leads to optimization biased across scales. This suggests that the minority scales deserve attention that has so far been overlooked.

\begin{figure}[t] \centering \includegraphics[width=\linewidth]{Pipeline_new.pdf} \caption{The pipeline of Dynamic Scale Training.} \label{fig:pipeline} \end{figure}

\subsection{Our Approach} \label{sec:method}

\subsubsection{Feedback-Driven Data Preparation Paradigm} \label{sec:method_train_level}

We propose a feedback-driven data preparation paradigm. In each training iteration, we fetch the loss proportion owing to small objects as feedback; it can be calculated after each forward propagation during model training. Subsequently, if this loss proportion is below a certain threshold in the current iteration $t$, we deem it time to relieve the imbalanced network optimization by latent compensation. In detail, we construct collage images as input data, instead of employing regular images, in the next iteration $t+1$. Otherwise, if this statistic is above the threshold, regular images serve as the input data in the coming iteration, as in the default data preparation setting. The above binary deterministic paradigm is summarized in Eq.~(\ref{equation:2}), where $\mathrm{I}^{t+1}$ denotes the mini-batch data fed into the network at iteration $t+1$, and $\mathrm{I}$ and $\mathrm{I}^c$ respectively represent the regular and collage images in the coming iteration. $r_s^t$ denotes the loss proportion accounting for small-scale objects in iteration $t$, and $\tau$ is the decision threshold controlling the data preparation.
\begin{equation} \label{equation:2} \mathrm{I}^{t+1}=\left\{ \begin{aligned} &\mathrm{I}^c, &&\text{if } r_s^t \leq \tau,\\ &\mathrm{I}, &&\text{otherwise}. \end{aligned}\right. \end{equation}
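To make the paradigm concrete, the sketch below outlines the decision rule of Eq.~(\ref{equation:2}) together with the collage construction elaborated in the next subsection. It is a minimal illustration under assumed PyTorch-style tensors; the helper names, loader interface, and loss layout are our assumptions, not a verbatim excerpt of the released implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

TAU = 0.1          # decision threshold (ablated in Sec. 4)
SMALL_MAX = 32**2  # COCO area threshold separating small objects (px^2)

def small_loss_proportion(per_box_loss, box_areas):
    """r_s: fraction of the per-box regression loss owing to small objects."""
    small = box_areas < SMALL_MAX
    total = per_box_loss.sum().clamp(min=1e-12)
    return (per_box_loss[small].sum() / total).item()

def make_collage(images, boxes, k=4):
    """Stitch k equally sized images into a sqrt(k) x sqrt(k) grid collage.

    images: list of k [3, h, w] tensors; boxes: list of k [n_i, 4] tensors
    in (x1, y1, x2, y2) pixels. k must be a perfect square. Annotations
    are rescaled and translated to stay consistent with the components.
    """
    g = round(k ** 0.5)
    _, h, w = images[0].shape
    ch, cw = h // g, w // g
    canvas = images[0].new_zeros(3, h, w)
    out = []
    for idx, (img, bx) in enumerate(zip(images, boxes)):
        r, c = divmod(idx, g)
        # nearest-neighbor down-scaling keeps the extra cost negligible
        comp = F.interpolate(img[None], size=(ch, cw), mode="nearest")[0]
        canvas[:, r * ch:(r + 1) * ch, c * cw:(c + 1) * cw] = comp
        scale = bx.new_tensor([cw / w, ch / h, cw / w, ch / h])
        shift = bx.new_tensor([c * cw, r * ch, c * cw, r * ch])
        out.append(bx * scale + shift)
    return canvas, torch.cat(out)

def prepare_next_batch(loader, r_s_prev, k=4):
    """Eq. (2): the r_s observed at iteration t selects the data for t+1."""
    imgs, boxes = next(loader)  # lists of tensors (hypothetical loader API)
    if r_s_prev <= TAU:
        return make_collage(imgs[:k], boxes[:k], k)
    return imgs[0], boxes[0]
\end{verbatim}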
From another perspective, the proposed feedback-driven paradigm can be viewed as agent optimization in the spirit of policy learning in reinforcement learning. Specifically, in the environment of object detector training, given the loss proportion observed at each training iteration, a non-parametric controller applies the aforementioned deterministic policy (specified in Eq.~(\ref{equation:2})) to choose from a binary action space, composed of the regular or collage fashion of image processing, for the next iteration of data preparation.

\subsubsection{Collage Fashion of Data Augmentation} \label{sec:method_image_level}

\begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{ImageCollage.pdf} \caption{Regular images and collage images.} \label{fig:image_collage} \end{figure}

As stated in Sec.~\ref{sec:intro}, we propose a collage fashion of scalable data augmentation for the purpose of convenient manipulation during dynamic training. For simplicity, and to keep aspect ratios so that object shape priors are retained, we form a collage by down-scaling and stitching $k$ regular images arranged in an equal number of rows and columns. Hence $k$ equals the square of the row/column number, \textit{e.g.}, $1, 2^2, 3^2$, and so on. The spatial resolution of each component image is $(\frac{h}{\sqrt{k}}, \frac{w}{\sqrt{k}})$. Aside from the image data, the box annotations of each source component are properly rescaled and translated for consistency. When $k$ equals 1, a collage image degenerates to a regular image. Figure~\ref{fig:image_collage} shows a collage (right) with $k=4$ compared to a regular image (left). As can be seen, the collage fashion of image processing provides a minimal means of scale variation handling by explicitly manufacturing object patterns of smaller scales. Since collage images retain the same size as regular images, no additional overhead is involved in network propagation.

\section{Experiments} \label{sec:experiments}

In this section, we begin by briefly describing the implementation details. Thereafter, we analyze the efficacy of our proposed method in comparison with previous works. Next, we elaborate on the ablation studies, and a quantitative analysis of the scale variation issue is also given. We end the experiment section by discussing extra merits and corner cases brought by the proposed method.

\subsection{Implementation Details} \label{sec:implementation_details}

Experiments are mainly conducted on the challenging MS COCO~\cite{coco} dataset, which contains 80 categories. Following the common practice in \cite{R-CNN}, the union of the primitive training set (80k images) and the {\fontfamily{qcr}\selectfont trainval35k} subset (35k images) of the primitive validation set is used for training. The evaluation is conducted on the {\fontfamily{qcr}\selectfont minival} subset with 5k images. We follow the scale protocol in COCO to distinguish small, medium, and large objects by 32$^2$ and 96$^2$ pixel areas. Input images are resized such that their shorter side is 800 pixels and the longer side is no more than 1,333. Throughout all experiments, the initial learning rate is set to 0.02, using Stochastic Gradient Descent (SGD) with momentum 0.9 and weight decay 1e-4. The mini-batch size is set to 16 (2 images per GPU). The network is trained for 90k iterations, with the learning rate decayed by a factor of 10 at 60k and 80k iterations, respectively. For longer training periods, we adopt a proportional extension of the milestones; for example, a 2$\times$ setting uses 180k iterations with milestones at 120k and 160k, respectively.
Besides MS COCO, we also examine the efficacy of our method in handling scale variation on the PASCAL VOC~\cite{PascalVOC} dataset. Moreover, extra studies on the challenging instance segmentation task verify the versatility of our proposed method.

\subsection{Comparison to Previous Methods}

\subsubsection{Comparison to Resampling} \label{sec:compare_resampling}

Following the spirit of the resampling strategy for more balanced training, we apply a careful re-weighting scheme to assist the minority scales in each iteration. In detail, we amplify the loss magnitudes of small objects to match those of the medium and large objects. However, as shown in Table~\ref{tab:multismallloss}, the overall performance and the performance of the other scales deteriorate, with only a slight improvement for the small scale (AP$_s$ +0.3\%).

\begin{table}[htbp] \centering \caption{Impact brought by resampling.} \label{tab:multismallloss} \resizebox{0.75\linewidth}{!}{ \begin{tabular}{l|cccc} \toprule & AP & AP$_s$ & AP$_m$ & AP$_l$ \\ \midrule Baseline & 36.7 & 21.1 & 39.9 & 48.1 \\ + Resampling & 36.4 & 21.4 & 39.3 & 47.4 \\ \bottomrule \end{tabular}} \end{table}

\begin{table*}[h] \caption{Comparison with common baselines and multi-scale training on Faster R-CNN.} \label{tab:faster-rcnn-1x} \centering \resizebox{0.7\linewidth}{!}{ \begin{tabular}{l|l|c|ll|ll|ll|ll} \toprule & \multicolumn{1}{c|}{Backbone} & Hours & \multicolumn{2}{c|}{AP} & \multicolumn{2}{c|}{AP$_s$} & \multicolumn{2}{c|}{AP$_m$} & \multicolumn{2}{c}{AP$_l$} \\ \midrule Baseline & \multirow{5}{*}{ResNet-50 FPN} & 8.7 & 36.7 & & 21.1 & & 39.9 & & 48.1 & \\ MS-train$^{s}$ & & 8.1 & 36.3 & & 23.7 & \textcolor{blue}{(+2.6)} & 39.9 & & 45.9 & \textcolor[rgb]{0,0.4,0.2}{(-2.2)} \\ MS-train$^{m}$ & & 10.8 & \textbf{37.5} & & 22.0 & & 40.7 & & 48.8 & \\ MS-train$^{l}$ & & 14.4 & 37.1 & & 20.7 & \textcolor[rgb]{0,0.4,0.2}{(-0.4)} & 40.3 & & 49.8 & \textcolor{blue}{(+1.7)} \\ \textbf{Ours} & & 9.0 & \textbf{38.6} & \textbf{(+1.9)} & \textbf{24.4} & \textbf{(+3.3)} & 41.9 & (+2.0) & 49.3 & (+1.2) \\ \midrule Baseline & \multirow{5}{*}{ResNet-101 FPN} & 11.5 & 39.1 & & 22.6 & & 42.9 & & 51.4 & \\ MS-train$^{s}$ & & 10.8 & 38.9 & & 24.2 & \textcolor{blue}{(+1.6)} & 42.7 & & 49.0 & \textcolor[rgb]{0,0.4,0.2}{(-2.4)} \\ MS-train$^{m}$ & & 14.2 & \textbf{39.7} & & 23.6 & & 43.3 & & 51.3 & \\ MS-train$^{l}$ & & 21.3 & 39.3 & & 22.3 & \textcolor[rgb]{0,0.4,0.2}{(-0.3)} & 43.0 & & 51.9 & \textcolor{blue}{(+0.5)} \\ \textbf{Ours} & & 11.7 & \textbf{40.8} & \textbf{(+1.7)} & \textbf{25.8} & \textbf{(+3.2)} & 44.1 & (+1.2) & 51.9 & (+0.5) \\ \bottomrule \end{tabular}} \end{table*}

\begin{table*}[h] \caption{Comparison with common baselines and multi-scale training on Faster R-CNN for 2$\times$ training periods.} \label{tab:faster-rcnn-2x} \centering \resizebox{0.72\linewidth}{!}{ \begin{tabular}{l|l|c|ll|ll|ll|ll} \toprule & \multicolumn{1}{c|}{Backbone} & Hours & \multicolumn{2}{c|}{AP} & \multicolumn{2}{c|}{AP$_s$} & \multicolumn{2}{c|}{AP$_m$} & \multicolumn{2}{c}{AP$_l$} \\ \midrule Baseline & \multirow{3}{*}{ResNet-50 FPN} & 17.2 & 37.7 & & 21.6 & & 40.6 & & 49.6 & \\ MS-train$^{m}$ & & 20.5 & 39.1 & & 23.5 & & 42.2 & & 50.8 & \\ \textbf{Ours} & & 17.5 & \textbf{39.9} & \textbf{(+2.2)} & \textbf{25.1} & \textbf{(+3.5)} & 43.1 & (+2.5) & 51.0 & (+1.4) \\ \midrule Baseline & \multirow{3}{*}{ResNet-101 FPN} & 23.4 & 39.8 & & 22.9 & & 43.3 & & 52.6 & \\ MS-train$^{m}$ & & 28.5 & 41.6 & & 25.5 & & 45.3 & & 54.1 & \\ \textbf{Ours} & & 23.5 & \textbf{42.1} & \textbf{(+2.3)} &
\textbf{26.9} & \textbf{(+4.0)} & 45.5 & (+2.2) & 54.1 & (+1.5) \\ \bottomrule \end{tabular}} \end{table*}

\subsubsection{Comparison to Common Baselines} \label{sec:compare_baselines}

As shown in Table~\ref{tab:faster-rcnn-1x}, the improvement against the baseline is highlighted in parentheses. We observe decent improvement overall ($1.7\%$+ AP), and more significant results for the minority scales, \textit{i.e.}, the small scales ($\mathbf{3.2}$\%+ AP$_s$). Table~\ref{tab:faster-rcnn-2x} shows the comparison under 2$\times$ training periods, presenting even higher gains ($2.2$\%+ AP and up to $\mathbf{4.0}$\% AP$_s$). We also conduct counterpart experiments on single-stage detectors, \textit{e.g.}, RetinaNet~\cite{retinanet} and FCOS~\cite{tian2019fcos}, as shown in Table~\ref{tab:retinanet_fcos}. These results demonstrate that dynamic scale training is effective not only for general detection enhancement but also for scale variation handling, especially for the minority scales.

\subsubsection{Comparison to Multi-scale Training} \label{sec:compare_multiscale}

\paragraph{(a) Different settings of multi-scale training} \textbf{} \\ We carefully compare our method with multi-scale training under various scale settings, as exhibited in Table~\ref{tab:faster-rcnn-1x}. Here, MS-train$^{s}$, MS-train$^{m}$ and MS-train$^{l}$ correspond to sampling intervals for the shorter side length, denoted as \underline{[400, 800]}, \underline{[600, 1000]}, and \underline{[800, 1200]} respectively, with stride 100. They indicate settings that favor the small, medium, and large scales, respectively. Among them, MS-train$^{m}$ achieves the best trade-off, as the other two settings acquire improvement on their favored scale at the price of greatly harming the opposite scale (highlighted in blue and green in Table~\ref{tab:faster-rcnn-1x}). Hence, we adopt the MS-train$^{m}$ setting for multi-scale training in the following experiments. Yet, our method still outperforms this strategy across all scales.

\paragraph{(b) Time efficiency} \textbf{} \\ The proposed dynamic scale training method brings negligible overhead compared to the baselines, mainly from the collage augmentation, which involves nearest-neighbor interpolation for down-scaling the component images. Empirically, a collage operation costs about 0.02 seconds in a single training iteration. Since the frequency of collage operations depends on the dynamic preparation paradigm and is unavailable in advance, we measure the time consumption over the complete training period. All measurements are benchmarked on 8 RTX 2080Ti GPU cards with a mini-batch size of 16. As shown in Table~\ref{tab:faster-rcnn-1x}, it takes 8.7 hours to train the baseline with ResNet-50 FPN in a 1$\times$ period, whereas multi-scale training requires an extra 2 hours (10.8 in total). The gap enlarges when experimenting with a larger backbone (ResNet-101 FPN) or a longer training period (2$\times$). In contrast, our method takes only a bit longer than the baseline (9 hours, an extra 0.3 hours), and this gap is invariant to the training period (nearly the same in both the 1$\times$ and 2$\times$ settings). Moreover, the gap shrinks when taking larger backbones (ResNet-101 FPN) for experiments. Please refer to Table~\ref{tab:faster-rcnn-1x} and Table~\ref{tab:faster-rcnn-2x} for details. Therefore, our proposed method is much more efficient than multi-scale training.
\begin{table}[htbp] \caption{Evaluation on the effect of multi-scale testing.} \label{tab:compare_multiscaletesting} \centering \resizebox{0.7\linewidth}{!}{ \begin{tabular}{l|ll|l|l|l} \toprule \multicolumn{1}{l|}{} & \multicolumn{2}{c|}{AP} & \multicolumn{1}{c|}{AP$_s$} & \multicolumn{1}{c|}{AP$_m$} & \multicolumn{1}{c}{AP$_l$} \\ \midrule MS-train$^{m}$ & 37.5 & & 22.0 & 40.7 & 48.8 \\ + MS-test$^{m}$ & 38.8 & (+1.3) & 23.7 & 41.6 & 49.8 \\ \midrule Ours & 38.6 & & 24.4 & 41.9 & 49.3 \\ + MS-test$^{m}$ & \textbf{39.9} & (+1.3) & 26.5 & 42.7 & 51.0 \\ \bottomrule \end{tabular} } \end{table}

\paragraph{(c) Compatibility with multi-scale testing} \textbf{}\\ It is acknowledged that models trained with multi-scale training can further enhance performance with matching multi-scale testing. Thus, without loss of generality, we conduct a comparison by applying MS-test$^{m}$ to MS-train$^{m}$ and to our proposed method, respectively. As shown in Table~\ref{tab:compare_multiscaletesting}, our proposed method shares exactly the same merit (+1.3\%), revealing good compatibility.

\begin{table}[htbp] \caption{Evaluation on longer training periods.} \label{tab:longerperiod} \centering \resizebox{0.85\linewidth}{!}{ \begin{tabular}{c|c|l|ccc} \toprule & Iterations & AP & AP$_s$ & AP$_m$ & AP$_l$ \\ \midrule \multirow{4}{*}{Baseline} & 90k & 36.7 & 21.1 & 39.8 & 48.1 \\ & 180k & 37.7 & 21.6 & 40.6 & 49.6 \\ & 360k & 37.3 $\downarrow$ & 20.3 & 39.6 & 50.1 \\ & 540k & 35.6 $\downarrow$ & 19.8 & 37.7 & 47.6 \\ \midrule \multirow{4}{*}{MS-train} & 90k & 37.5 & 22.0 & 40.7 & 48.8 \\ & 180k & 39.1 & 23.5 & 42.2 & 50.8 \\ & 360k & 40.1 & 24.3 & 43.3 & 52.4 \\ & 540k & 39.8 $\downarrow$ & 24.1 & 43.0 & 52.0 \\ \midrule \multirow{4}{*}{Ours} & 90k & 38.6 & 24.4 & 41.9 & 49.3 \\ & 180k & 39.9 & 25.1 & 43.1 & 51.0 \\ & 360k & 40.4 & 25.2 & 43.6 & 51.9 \\ & 540k & \textbf{40.5} $\uparrow$ & 26.1 & 43.2 & 51.6 \\ \bottomrule \end{tabular}} \end{table}

\begin{table*}[htbp] \caption{Comparison on RetinaNet and FCOS with ResNet-50 and ResNet-101 backbones for 2$\times$ training periods.} \label{tab:retinanet_fcos} \centering \resizebox{0.75\linewidth}{!}{ \begin{tabular}{l|c|l|ll|ll|ll|ll} \toprule & \multicolumn{1}{c|}{Model} & \multicolumn{1}{c|}{Backbone} & \multicolumn{2}{c|}{AP} & \multicolumn{2}{c|}{AP$_s$} & \multicolumn{2}{c|}{AP$_m$} & \multicolumn{2}{c}{AP$_l$} \\ \midrule Baseline & \multirow{4}{*}{RetinaNet} & \multirow{2}{*}{ResNet-50 FPN} & 36.8 & & 20.2 & & 40.0 & & 49.7 & \\ Ours & & & \textbf{39.0} & \textbf{(+2.2)} & \textbf{23.4} & \textbf{(+3.2)} & 42.9 & (+2.9) & 51.0 & (+1.2) \\ \cline{1-1} \cline{3-11} Baseline & & \multirow{2}{*}{ResNet-101 FPN} & 38.8 & & 21.1 & & 42.1 & & 52.4 & \\ Ours & & & \textbf{41.3} & \textbf{(+2.5)} & \textbf{25.4} & \textbf{(+4.3)} & 45.1 & (+3.0) & 54.0 & (+1.6) \\ \midrule Baseline & \multirow{4}{*}{FCOS} & \multirow{2}{*}{ResNet-50 FPN} & 37.1 & & 21.6 & & 41.0 & & 47.3 & \\ Ours & & & \textbf{39.8} & \textbf{(+2.7)} & \textbf{25.4} & \textbf{(+3.8)} & 43.9 & (+2.9) & 50.2 & (+2.9) \\ \cline{1-1} \cline{3-11} Baseline & & \multirow{2}{*}{ResNet-101 FPN} & 39.1 & & 22.2 & & 43.4 & & 50.6 & \\ Ours & & & \textbf{41.6} & \textbf{(+2.5)} & \textbf{26.1} & \textbf{(+3.9)} & 45.5 & (+2.1) & 53.3 & (+2.7) \\ \bottomrule \end{tabular}} \end{table*}

\paragraph{(d) Longer training periods} \textbf{}\\ Recalling the proposed collage augmentation, one association with multi-scale training is that both create scalable instance patterns to some extent.
However, one may wonder whether multi-scale training can close the performance gap to ours if sufficiently long training periods are allowed. To resolve this, we conduct experiments on Faster R-CNN with ResNet-50 and FPN over various training periods, as shown in Table~\ref{tab:longerperiod}. We find that the gap starts shrinking when the training process reaches sufficiently long periods (3$\times$ to 4$\times$). Interestingly, however, for the longest 6$\times$ training period (540k iterations), the performance of multi-scale training (and also of the baseline) degrades. In contrast, our method further enhances the performance. One reasonable explanation is that the feedback-driven preparation paradigm consistently provides data of the desired scale, effectively avoiding {\it over-fitting}.

\begin{table}[htbp] \caption{Comparison with SNIP / SNIPER.} \label{tab:compare_sniper} \centering \resizebox{0.9\linewidth}{!}{ \begin{tabular}{l|l|l|ccc} \toprule & \multicolumn{1}{c|}{Backbone} & \multicolumn{1}{c|}{AP} & AP$_s$ & AP$_m$ & AP$_l$ \\ \midrule SNIP & \multirow{3}{*}{ResNet-50 C4} & 43.6 & 26.4 & 46.5 & 55.8 \\ SNIPER & & 43.5 & 26.1 & 46.3 & 56.0 \\ Ours & & \textbf{44.2} & \textbf{28.7} & \textbf{47.2} & \textbf{58.3} \\ \midrule SNIP & \multirow{3}{*}{ResNet-101 C4} & 44.4 & 27.3 & 47.4 & 56.9 \\ SNIPER & & 46.1 & 29.6 & 48.9 & 58.1 \\ Ours & & \textbf{46.9} & \textbf{30.9} & \textbf{50.5} & \textbf{60.9} \\ \bottomrule \end{tabular}} \end{table}

\subsubsection{Comparison to SNIP and SNIPER}\label{sec:compare_sniper}

As shown in Table~\ref{tab:compare_sniper}, we compare our method to SNIP~\cite{SNIP} and SNIPER~\cite{SNIPER}\footnote{For fair comparisons, we use the same augmentations (deformable convolution, multi-scale testing, and soft-NMS~\cite{soft-nms}) as SNIP and SNIPER.} on various backbones. As a result, our method performs better. This might be because SNIP and SNIPER operate in a static manner during training, rendering them unable to provide the scale-sensitive data that the network desires. In contrast, our method benefits from the dynamic data preparation paradigm, which meets the requirements as training proceeds. Moreover, our method is simpler to use, while SNIPER involves extended label assignment and a chip sampling procedure.

\begin{table}[htbp] \caption{Evaluation on Large Backbones.} \label{tab:largebackbone} \centering \resizebox{0.9\linewidth}{!}{ \begin{tabular}{l|c|l|ccc} \toprule & Backbone & \multicolumn{1}{c|}{AP} & AP$_s$ & AP$_m$ & AP$_l$ \\ \midrule Baseline & \multirow{2}{*}{ResNext 101} & 41.6 & 24.8 & 45.1 & 53.3 \\ Ours & & \textbf{43.1} & \textbf{28.0} & \textbf{46.7} & \textbf{54.2} \\ \midrule Baseline & \multirow{2}{*}{ResNet 101 + DCN} & 42.3 & 24.8 & 46.1 & 55.7 \\ Ours & & \textbf{43.3} & \textbf{27.1} & \textbf{47.0} & \textbf{56.0} \\ \midrule Baseline & \multirow{2}{*}{ResNext 101 + DCN} & 44.1 & 26.8 & 47.5 & 57.8 \\ Ours & & \textbf{45.4} & \textbf{29.4} & \textbf{48.8} & \textbf{58.5} \\ \bottomrule \end{tabular}} \end{table}

\subsubsection{Evaluation on Large Backbones}

Table~\ref{tab:largebackbone} shows the improvement from our method on large backbones, \textit{i.e.}, ResNext 101~\cite{resnext}, ResNet-101 with DCN~\cite{deformable}, and ResNext-32$\times$8d-101 with DCN~\cite{deformable}. On top of these strong baselines, our method still enhances the performance by 1.0\% to 1.5\% AP.
\begin{table}[htbp] \caption{Evaluation on Instance Segmentation.} \label{tab:instanceseg} \centering \resizebox{0.9\linewidth}{!}{ \begin{tabular}{l|c|l|l|l|l} \toprule & Backbone & \multicolumn{1}{c|}{AP} & \multicolumn{1}{c|}{AP$_s$} & \multicolumn{1}{c|}{AP$_m$} & \multicolumn{1}{c}{AP$_l$} \\ \midrule Baseline & \multirow{2}{*}{ResNet-50 FPN} & 34.3 & 15.8 & 36.7 & 50.5 \\ Ours & & \textbf{35.1} & \textbf{17.0} & \textbf{37.8} & \textbf{51.4} \\ \midrule Baseline & \multirow{2}{*}{ResNet-101 FPN} & 35.9 & 15.9 & 38.9 & 53.2 \\ Ours & & \textbf{37.2} & \textbf{19.0} & \textbf{40.3} & \textbf{53.7} \\ \bottomrule \end{tabular}} \end{table}

\subsubsection{Evaluation on Instance Segmentation}

Beyond object detection, we also apply our method to the instance segmentation task. Experiments are conducted on the COCO instance segmentation track~\cite{coco}, and we report COCO mask AP on the {\fontfamily{qcr}\selectfont minival} split. Models are trained for 90k iterations, with the learning rate divided by 10 at 60k and 80k iterations. We train Mask R-CNN~\cite{maskrcnn} models with Stochastic Gradient Descent (SGD), momentum 0.9, weight decay 1e-4, and a batch size of 16 (2 images per GPU). As shown in Table~\ref{tab:instanceseg}, our method improves AP by 0.9\% on ResNet-50 and by 1.3\% on ResNet-101.

\begin{table*}[htbp] \caption{Evaluation on PASCAL VOC dataset on Faster R-CNN.} \label{tab:pascal_voc} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{l|c|cccccccccccccccccccc} \toprule & mAP & plane & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & train & tv \\ \midrule Baseline & 80.3 & 86.9 & 86.7 & 80.1 & 72.5 & 71.9 & 86.9 & 88.4 & 88.7 & 63.3 & 87.0 & 75.3 & 88.5 & 88.4 & 80.1 & 85.5 & 56.7 & 78.2 & 78.8 & 85.0 & 77.6 \\ \midrule Ours & 82.6 & 89.0 & 86.7 & 80.2 & 73.0 & 72.7 & 87.0 & 89.3 & 89.0 & 68.6 & 86.8 & 79.7 & 88.8 & 88.5 & 88.1 & 87.3 & 59.8 & 86.7 & 80.2 & 88.1 & 84.0 \\ \bottomrule \end{tabular}} \end{table*}

\subsubsection{Evaluation on PASCAL VOC}

Besides MS COCO, we also generalize our proposed dynamic scale training method to the PASCAL VOC~\cite{PascalVOC} dataset. Following the protocol in~\cite{fast-r-cnn}, the union of {\fontfamily{qcr}\selectfont 2007 trainval} and {\fontfamily{qcr}\selectfont 2012 trainval} is used for training. Models are trained for 24k iterations, with the learning rate set to 0.01 for the first two-thirds of the iterations and 0.001 for the remaining one-third. Evaluation is performed on {\fontfamily{qcr}\selectfont 2007 test}. As shown in Table~\ref{tab:pascal_voc}, our method obtains a gain of 2.3\% mAP overall. In addition, the detection quality of small-scale categories like bottle, chair, and tv is significantly improved.

\subsection{Ablation Studies}

In this section, we analyze the best practice for the feedback choice and the deterministic threshold $\tau$ in the feedback-driven data preparation paradigm. Besides, we conduct a simple ablation on the number of component images $k$ in the collage fashion. We use Faster R-CNN with ResNet-50 and FPN for these studies.

\paragraph{Feedback choice.} To explore the preparation paradigm, we set up the control experiments below, as shown in Table~\ref{tab:ablation1}.
\noindent $\centerdot$ \textit{All collage}: collage images all the time; \\ $\centerdot$ \textit{All regular}: regular images all the time (baseline); \\ $\centerdot$ \textit{Random sampling}: collage or regular images chosen randomly; \\ $\centerdot$ \textit{Input feedback}: occurrence frequency of small instances in the input as feedback; \\ $\centerdot$ \textit{Classification/Regression/Joint loss feedback}: loss proportion of small objects as feedback.

\begin{table}[h] \caption{Ablation study on feedback choice.} \label{tab:ablation1} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{ll|c|ccc} \toprule \multicolumn{2}{c|}{feedback strategy (if any)} & AP & AP$_s$ & AP$_m$ & AP$_l$ \\ \midrule \multicolumn{1}{l|}{\multirow{3}{*}{No}} & All collage & 32.1 & 21.9 & 36.4 & 36.8 \\ \multicolumn{1}{l|}{} & All regular & 36.7 & 21.1 & 39.8 & 48.1 \\ \multicolumn{1}{l|}{} & Random sampling & 37.8 & 23.6 & 40.7 & 46.7 \\ \midrule \multicolumn{1}{l|}{\multirow{4}{*}{Yes}} & Input Ratio & 38.1 & 23.1 & 41.3 & 49.1 \\ \multicolumn{1}{l|}{} & Classification Loss & 38.5 & 23.9 & 41.6 & 48.8 \\ \multicolumn{1}{l|}{} & Regression Loss & 38.6 & 24.4 & 41.9 & 49.3 \\ \multicolumn{1}{l|}{} & Joint Loss & 38.5 & 23.7 & 41.6 & 49.3 \\ \bottomrule \end{tabular}} \end{table}

As shown in Table~\ref{tab:ablation1}, static usage of collage images leads to bad performance; it might run into the opposite extreme, where learning is biased towards small scales. Besides, random sampling performs better than the common baseline, but it is still static. The dynamic feedback strategies, \textit{e.g.}, the input feedback, result in better performance. However, such input-guided feedback is inferior to the loss-guided variants, since it does not consider the optimization process. Results with the different loss-guided feedback strategies are comparable and robust to the specific supervision task. By default, we use the regression-loss-guided feedback for convenience. (A sketch of these feedback variants is given at the end of this subsection.)

\begin{figure}[htbp] \centering \includegraphics[width=0.85\linewidth]{ap_curve_forPZ.pdf} \caption{Ablation study on the threshold $\tau$.} \label{fig:threshold} \end{figure}

\paragraph{Deterministic threshold.} In the proposed method, only one hyper-parameter, $\tau$, requires tuning. We apply grid search and study its impact as shown in Figure~\ref{fig:threshold}. The performance decreases dramatically as $\tau$ exceeds 0.2. Empirically, we set $\tau$ to 0.1 and apply it across all experiments without loss of generality. Notably, this value coincides with the loss-ratio observation covering half of the training iterations, as described in Figure~\ref{fig:loss_iterations}. This provides a promising heuristic for convenient tuning: first calculate the statistics during baseline training on a minimal subset.

\begin{table}[htbp] \caption{Ablation study on number of collage components.} \label{tab:collage_component_k} \centering \begin{tabular}{c|c|ccccc} \toprule $k$ & AP & AP$_{50}$ & AP$_{75}$ & AP$_s$ & AP$_m$ & AP$_l$ \\ \hline $1^2$ & 36.7 & 58.4 & 39.6 & 21.1 & 39.8 & 48.1 \\ $2^2$ & 38.6 & 60.5 & 41.8 & 24.4 & 41.9 & 49.3 \\ $3^2$ & 38.4 & 60.5 & 41.5 & 24.2 & 41.7 & 48.8\\ \bottomrule \end{tabular} \end{table}

\paragraph{Number of collage components.} We conduct a simple ablation on the number $k$ of component images used in the collage. Since we mainly focus on the dynamic preparation paradigm, we simply adopt $k=4$ as a good trade-off, as shown in Table~\ref{tab:collage_component_k}.
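To make the ablated feedback signals precise, the sketch below summarizes how each variant maps per-iteration statistics to the scalar fed to Eq.~(\ref{equation:2}). The accumulator schema (\texttt{stats}) is an illustrative assumption, not our exact interface.
\begin{verbatim}
def feedback(stats, mode="regression"):
    """Ablated feedback choices (cf. Table 8).

    stats: per-iteration accumulators, e.g. stats["reg_loss"]["small"];
    this dictionary schema is illustrative, not our exact interface.
    """
    if mode == "input":  # occurrence frequency of small instances
        n = stats["num_boxes"]
        return n["small"] / max(sum(n.values()), 1)
    if mode == "classification":
        losses = stats["cls_loss"]
    elif mode == "regression":
        losses = stats["reg_loss"]
    else:  # "joint": classification + regression
        losses = {s: stats["cls_loss"][s] + stats["reg_loss"][s]
                  for s in stats["reg_loss"]}
    total = sum(losses.values())
    return losses["small"] / total if total > 0 else 0.0
\end{verbatim}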
\subsection{Analysis of Scale Variation} Besides reflecting the improvement of scale variation handling by performance gains, we also investigate in the view of optimization preference. We measure this by loss proportions occupied by different scales over iterations. These statistics are collected from the training process of Faster R-CNN with ResNet-50 and FPN. As a result, we draw the curves of model training w/ and w/o our proposed method in Figure~\ref{fig:loss_iterations}. It can be observed that the scale variation gets much alleviated. \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{loss_iterations_compare.pdf} \caption{Loss proportion across different scales before and after.} \label{fig:loss_iterations} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1.0\linewidth]{loss_distribution2_forPZ.pdf} \caption{Loss proportion of minority scales before and after.} \label{fig:loss_distribution2} \end{figure} Beyond the overall observation, we also investigate into loss proportions of the small scales. As shown in the Figure~\ref{fig:loss_distribution2} left, more than half of the training iterations undergo an extremely low loss proportion of small objects, dubbed 0.1. By adopting our proposed method, the scale variation from the perspective of loss proportion distribution get much alleviated (see Figure~\ref{fig:loss_distribution2} right). \subsection{Other Merits} Beyond the performance enhancement, there are also extra merits brought by the proposed dynamic scale training during scale variation handling. \subsubsection{Speed-Accuracy Trade-off} We find an improvement upon speed-accuracy trade-off as shown in Table~\ref{tab:speed-accuracy-trade-off}. It could be observed that our method runs on par with the baseline (AP: 37.0 \textit{vs.} 36.7) given inputs of much smaller sizes (resolution: (512, 853) \textit{vs.} (800, 1333)) and meanwhile is 1.6$\times$ faster. \begin{table}[htbp] \caption{Speed-accuracy trade-off merit brought by dynamic scale training. The baseline is Faster R-CNN with ResNet-50 and FPN.} \label{tab:speed-accuracy-trade-off} \centering \resizebox{0.8\linewidth}{!}{ \begin{tabular}{l|c|c|l} \toprule & Resolution & Inference time & AP \\ \midrule Baseline & (800, 1333) & 56 ms / img & 36.7 \\ Baseline & (512, 853) & 35 ms / img & 33.5 \\ Ours & (800, 1333) & 56 ms / img & 38.6 \\ Ours & (512, 853) & 35 ms / img & 37.0 \\ \bottomrule \end{tabular}} \end{table} \subsubsection{Fast Convergence} We discover the fast convergence capacity of our proposed DST method. Referring to Table~\ref{tab:intro_table}, after applying DST, it nearly halves (iters: 50k vs. 90k) the training iterations to achieve the same accuracy to the baseline. \begin{table}[t] \caption{Fast convergence merit brought dynamic scale training. The baseline is Faster R-CNN with ResNet-50 and FPN.} \label{tab:intro_table} \centering \resizebox{0.8\linewidth}{!}{ \begin{tabular}{l|c|l|lll} \toprule & Iterations & AP & AP$_s$ & AP$_m$ & AP$_l$ \\ \midrule Baseline & 90k & 36.7 & 21.1 & 39.9 & 48.1 \\ Ours & \textbf{50k} & 36.7 & 22.9 & 39.9 & 46.6 \\ Ours & 90k & 38.6 & 24.4 & 41.9 & 49.3 \\ \bottomrule \end{tabular} } \end{table} \subsection{Corner cases of collage} \label{subsec:corner_cases} Recalling the collage procedure, regular images are down-scaled before being stitched to form the collage components. This might produce extremely tiny objects more likely to be a noisy pattern (from existing small objects). 
To investigate the impact, we discard tiny samples whose box areas less than 100 pixels. Before removal, the results are AP: 38.6, AP$_s$: 24.4, AP$_m$: 41.9, AP$_l$: 49.3. After removal, we obtain AP: 38.6, AP$_s$: 24.7, AP$_m$: 41.8, AP$_l$: 49.1. This demonstrates that tiny patterns do not affect the overall performance but might hamper the quality on small scales. \section{Conclusion} In this paper, we propose a simple yet effective \textit{dynamic scale training method (DST)} for object detection. By relieving the scale variation issue in virtue of feedback information from the optimization process, we observe significant gains in detection performance. Moreover, it introduces efficient convergence during training and does not affect the inference time as a free lunch. Abundant experiments have been conducted to verify its efficacy on various backbones, training periods, datasets, and different tasks. DST could be easily incorporated into modern detectors and steadily enhances the detection quality. We expect it could serve as a common configuration in the future, facilitating further dynamic training research for object detection. \newpage{} {\small \bibliographystyle{ieee_fullname}
2,869,038,155,767
arxiv
\section{Introduction and Preliminaries} The aim of this paper is to characterize the parameter domain of non-central Wishart distributions (with shape, scale and non-centrality parameters) and that of Wishart processes, a class of positive semi-definite diffusion processes (with drift parameter). Denote by $\mathcal{S}_p$ the space of symmetric $p\times p$ matrices and let $\mathcal{S}_p^+$ be the open cone of positive definite matrices, with topological closure $\bar {\mathcal S}^+_p$, the positive semi-definite matrices. The classical Gindikin\footnote{The name of this set originates from Gindikin's \cite{bib:Gin} work in a general multivariate setting.} set $W_{0}$ is defined as the set of admissible $\beta\in\R$ such that there exists a random matrix $X$ with values in $\bar {\mathcal S}^+_p$ (equivalently a measure with support in $\bar {\mathcal S}^+_p$) such that its Laplace transform is of the form $$ \ex e^{-\tr(uX)}=(\det(I+\Sigma u))^{-\beta},\ \ u\in \bar {\mathcal S}^+_p, $$ where $\Sigma\in {\mathcal S}_p^+$. It is well-known (cf.~\cite{bib:farautKOR}, pp. 137, 349) that $$ W_0=\frac12 B \cup \left[\frac{p-1}2,\infty\right) \/, $$ where $B=\{0,1,\cdots,p-2\}$. A more intricate question concerns the existence of non-central Wishart distributions, which in addition involves a parameter of non-centrality: \begin{definition}\label{def wish} The general non-central Wishart distribution $\Gamma_p(\beta,\omega;\Sigma)$ on $\bar{\mathcal{S}}_p^+$ is defined (whenever it exists) by its Laplace transform \begin{equation}\label{FLT Mayerhofer Wishart} \mathcal L (\Gamma_p(\beta,\omega;\Sigma))(u)= \left(\det(I+\Sigma u)\right)^{-\beta}e^{-\tr(u(I+\Sigma u)^{-1}\omega)},\quad u\in \bar{\mathcal S}_p^+ \end{equation} where $\beta>0$ denotes its shape parameter, $\Sigma\in \mathcal S_p^{+}$ is the scale parameter and the parameter of non-centrality equals $\omega\in \bar{\mathcal{S}}_p^+$. \end{definition} Random matrices $X$ verifying \eqref{FLT Mayerhofer Wishart} arise in statistics as estimators of the covariance matrix parameter $\Sigma$ of a normal population. In fact, for the random matrix $$ X = \xi_1\xi_1^T+\ldots+\xi_n\xi_n^T=:q(\xi_1,\dots,\xi_n)\/, $$ where for $i=1,\dots,n$, $\xi_{i}\sim \mathcal N_p(m_i,\Sigma/2)$ are independent, normally distributed column vectors in $\R^p$, the Laplace transform of $X$ is given by the right-hand side of \eqref{FLT Mayerhofer Wishart} with $\beta =n/2$ and $\omega=q(m_1,\ldots,m_n)$, {see Johnson and Kotz \cite[Chap.38 (47), p.175]{john-kotz}.} Accordingly, the pair $(\omega,\beta)$ is said to belong to the non-central Gindikin set $W$ if there exists a random matrix $X$ with values in $\bar {\mathcal S}_p^+$ having the Laplace transform \eqref{FLT Mayerhofer Wishart} for a matrix $\Sigma\in {\mathcal S}_p^+$ \footnote{If $\Sigma$ is of maximal rank, this definition is indeed independent of $\Sigma$, see Lemma \ref{mayx}.}. \newpage Note the following: \begin{itemize} \item If $(\omega,\beta)\in W$ then $\beta\geq 0$, otherwise $\mathbb E[e^{-\tr(uX)} ]$ would be unbounded; and clearly, $(0,\beta)\in W$ if and only if $\beta \in W_0$. \item In the case, where $\rank(\omega)=1$ and $\beta\not=0$, the characterization of the non-central Gindikin set $W$ is given in \cite{bib:PR}: then $(\omega,\beta)\in W$ if and only if $\beta\in W_0$. \item For $\beta>\frac{p-1}{2}$, Bru \cite{bib:b91} shows that Wishart processes have Laplace transform given by \eqref{FLT Mayerhofer Wishart}. 
\end{itemize} The general problem of existence and non-existence of non-central Wishart distributions is studied by Letac and Massam \cite{bib:LetMassFalse}\footnote{However, the statement and proof in \cite{bib:LetMassFalse} are incomplete, as pointed out by \cite{bib:mayer} and \cite{bib:mayerJMA}.}. In a more recent work Mayerhofer \cite{bib:mayerJMA} reveals that there is an interplay between the rank of the non-centrality parameter $\omega$ and the magnitude of $\beta$ in the discrete part of the classical Gindikin ensemble: if $(\omega,\beta)\in W$ and $2\beta\in B$, then $\rank(\omega)\le 2\beta +1$. {The Laplace transform formulas in Johnson and Kotz \cite{john-kotz} and Bru \cite{bib:b91} and }the results in \cite{bib:mayerJMA} allow to conjecture\footnote{In \cite{bib:LetMass} and a previous version of this paper, the name {\it Mayerhofer Conjecture} is used. The conjecture was first presented at the CIMPA Workshop in Hammamet in 2011.} the following: \medskip {\bf NCGS Conjecture}. {\it The non-central Gindikin set is characterized by } $$ (\omega,\beta )\in W \ \ \Leftrightarrow \ \ (2\beta\in [p-1,\infty), \;\omega\in \bar {\mathcal S}_p^+) \ {\it or}\ (2\beta\in B, \rank(\omega)\le 2\beta). $$ A proof of the NCGS Conjecture has been put forward by the preprint \cite{bib:LetMass}. The proof of \cite{bib:LetMass} is technical\footnote{It requires a detailed analysis of the singular and continuous part of certain non-central distributions. Besides, the present version of \cite{bib:LetMass} does not prove that $(w, p) \in W$ implies $(0, p) \in W$.} and does not provide an intuitive explanation for the particular parametric restrictions of shape and non-centrality parameter. The present paper gives a first complete proof of the NCGS conjecture, which reveals and builds on the intimate connection between non-central Wishart distributions and Wishart processes (\cite{bib:b91}, see also \cite[Theorem 1.1]{donati2004some}). The latter constitute positive semi-definite solutions $(X_t)_{t\geq0}$ of stochastic differential equations of the form \begin{eqnarray}\label{eq:Wishart:SDePLUS} dX_t = \sqrt{X_t}dW_t+dW^T_t\sqrt{X_t}+\alpha Idt\/,\quad X_t\in \bar {\mathcal S}_p^+\/,\quad X_0=x_0\in \bar {\mathcal S}_p^+, \end{eqnarray} where $\sqrt{X_t}$ is the unique positive square root of $X_t$, $W$ is a $p\times p$ matrix of standard Brownian motions, and $\alpha\geq0$ is the drift parameter. Wishart processes are natural generalizations of squared Bessel Processes \cite{YorEnc}. It is demonstrated in the present paper that the existence of Wishart processes depends crucially on the drift parameter. The paper proves a necessary and sufficient condition for the existence of Wishart processes, and how this existence issue is related to the one of Wishart distributions. Already Bru \cite{bib:b91}, who introduces Wishart processes for the first time, realizes the explicit formula for the Laplace transform of $X_t$: \begin{proposition}Bru(\cite[Theorem 3] {bib:b91})\label{stochWallach} If the stochastic differential equation (\ref{eq:Wishart:SDePLUS}) with $x_0\in \bar {\mathcal S}_p^+$ has a global weak solution in $\bar {\mathcal S}_p^+$, then $X_t$ is Wishart distributed for each $t\geq 0$. In particular, \begin{equation} \label{Lap_Wish} \ex^{x_0}[\exp(- \tr(uX_t)]=(\det(I+2tu))^{-\alpha/2} \exp[- \tr(x_0(I+2tu)^{-1}u))],\quad u\in \bar{\mathcal S}^+_p\/. 
\end{equation} \end{proposition} In the present paper, it is also shown how to construct full-fledged Wishart processes from individual Wishart distributions. The main result is thus a three-fold characterization: \begin{theorem}\label{th super} Let $x_0\in\bar{\mathcal S}_p^+$ and $\alpha\geq 0$. The following are equivalent: \begin{enumerate} \item \label{super a} The SDE \eqref{eq:Wishart:SDePLUS} has a global weak solution with $X_0=x_0$. \item \label{super b} Either $\alpha\geq p-1$, or $\alpha\in B$ and $\rank(x_0)\leq \alpha$. \item \label{super c} $(x_0,\alpha/2)\in W$. \end{enumerate} \end{theorem} Our proof of the NCGS Conjecture (that is, Theorem \ref{th super} \ref{super b} $\Leftrightarrow$ \ref{super c}) is based on an analysis of affine Wishart semigroups. As a new tool, the action of a class of polynomials on Wishart processes is used, which arise as coefficients of the characteristic polynomial of a symmetric matrix. A full characterization of Wishart processes is provided by (Theorem \ref{th super} \ref{super a} $\Leftrightarrow$ \ref{super b}). For convenience of the reader, but at the expense of proving an additional implication, Theorem \ref{th super} is split into two independent theorems in the following two chapters. They require different mathematical tools and therefore can be read independently. Chapter \ref{sec2} is concerned with the existence of solutions to Wishart stochastic differential equations using elementary stochastic analysis with symmetric polynomials (Theorem \ref{char sdes} comprises the equivalence \ref{super a} $\Leftrightarrow$ \ref{super b} of Theorem \ref{th super}). Chapter \ref{sec em} concerns the existence of Wishart distributions (the NCGS conjecture, which comprises \ref{super b} $\Leftrightarrow$ \ref{super c} of Theorem \ref{th super} ). Here the Markovian viewpoint is used, in particular the fact that Wishart semigroups are affine Feller semigroups. Finally, in Section \ref{char low rank} a conjecture by Damir Filipovi\'c \cite{bib:filcon} on the existence of such semigroups on the cones of lower rank matrices is proved. \section{Gindikin sets for Wishart Processes}\label{sec2} This section studies the question of solutions in $\bar{\mathcal{S}}_p^{+}$ of the Wishart SDE \eqref{eq:Wishart:SDePLUS}, using the dynamics of some polynomial functionals of these solutions. For a symmetric $p\times p$ matrix $X$, define the elementary symmetric polynomials \begin{equation}\label{bijection} e_n(X) = \sum_{i_1<\ldots<i_n}\lambda_{i_1}(X)\lambda_{i_2}(X)\ldots \lambda_{i_n}(X)\/,\ \ \ \quad n=1,\ldots,p, \end{equation} in the eigenvalues $\lambda_1(X) \le \ldots\le\lambda_p(X)$ of $X$. Moreover, the convention $e_0(X)\equiv 1$ is used. Up to the sign change, the polynomials $e_n$ are the coefficients of the characteristic polynomial of $X$, i.e. $$ \det(X-uI)=(-1)^p u^p + (-1)^{p-1} e_1(X)u^{p-1}+\ldots -e_{p-1}(X)u+e_p(X) $$ and are polynomial functions of the entries of the matrix $X$. In particular, $e_p(X)=\det X$. In \cite{bib:gm2}, symmetric polynomials related to general class of non-colliding particle systems were studied in details. Here similar results are presented, adapted to the matrix SDE \begin{equation}\label{MatrixSDE} dX_t = g(X_t)dW_th(X_t)+h(X_t)dW_t^Tg(X_t)+b(X_t)dt\/, \end{equation} where the continuous functions $g,h,b$ act spectrally\footnote{ Recall that if $g:\R\mapsto\R$ then $g(X)$ is defined spectrally, i.e. 
$g(U \diag(\lambda_i) U^T)=U \diag(g(\lambda_i)) U^T$, where $U\in SO(p)$.} on $\mathcal S_p$ and $W_t$ is a Brownian $p\times p$ matrix. Henceforth, abbreviate $\sigma=2gh$ and $G(x,y) = g^2(x)h^2(y)+g^2(y)h^2(x)$. Furthermore, the natural bijection \eqref{bijection} between the eigenvalues $\Lambda=(\lambda_1\ldots \lambda_p)$ and the polynomials $e=(e_1,\ldots,e_p)$ is used, extended to the closed Weyl chamber $\bar C_+= \{(x_1,\ldots,x_p)\in\R^p: x_1\le x_2<\ldots\le x_p\}$, see \cite[p.6]{bib:gm2}. Furthermore, write $\Lambda=\Lambda(e)$ for the inverse bijection on the set $\overline{e(C_+)}$. The notation $e_{n}^{\overline i}$ for the incomplete polynomial of order $n$, not containing the variable $\lambda_i(e)$, is used; the notation $e_{n}^{\overline i,\overline j}$ is analogous. {Moreover, set $e_{0}^{\overline i}\equiv 1$ and $e_{-1}^{\overline i,\overline j}\equiv 0$.} \medskip \begin{proposition}\label{Matrix_to_Polynomial} Let $X=(X_t)_{t\geq 0}$ be a weak solution of (\ref{MatrixSDE}) (with possible finite time blow-up). Then the symmetric polynomials $e_n=e_n(X)$, $n=1,\ldots, p$, are continuous semimartingales described by the system of SDEs ($n=1,\ldots,p$) \begin{eqnarray} \label{eq:en:SDE1} de_n = \left(\sum_{i=1}^p\sigma^2(\lambda_i(e))(e_{n-1}^{\overline i})^2\right)^{\frac{1}{2}}dV_n +\left(\sum_{i=1}^pb(\lambda_i(e))e_{n-1}^{\overline i}-\sum_{i<j}G(\lambda_i(e),\lambda_j(e))e_{n-2}^{{\overline i},{\overline j}}\right)dt\/, \end{eqnarray} where $V_n$ are Brownian motions on $\R$ such that $ d\left<e_n,e_m\right> = \sum_{i=1}^p \sigma^2(\lambda_i(e) )e_{n-1}^{\overline i}e_{m-1}^{\overline i}dt\/. $ \end{proposition} \begin{proof} Note that here the equation is considered on $\mathcal{S}_p$ which does not require solutions to live in $\bar{\mathcal{S}}_p^+$ (as is required in reference to Wishart processes). Since the coefficients of the equation (\ref{MatrixSDE}) are continuous, a local weak solution exists. This solution, before its possible blow-up, is considered. The symmetric polynomials $(e_1,\ldots, e_p)$ are given by an analytic function \[ F: \mathcal{S}_p\rightarrow \mathbb R^p,\quad X\to (e_1(X),\ldots,e_p(X)), \] since each elementary symmetric polynomial is given in terms of the coefficients of the matrix $X$. {Thus It\^o's formula implies that $(e_1,\ldots, e_n)$ are continuous semimartingales (for every starting point $x_0$ and even when the eigenvalues collide).} The SDEs describing $(e_1,\ldots, {e_p})$ can be determined similarly as in Propositions 3.1 and 3.2 in \cite{bib:gm2}, which generalize the proof of (4.1) in \cite{bib:b91}. One uses the SDEs for the eigenvalues \begin{eqnarray} \label{eq:eigenvalues} d\lambda_i = 2g(\lambda_i)h(\lambda_i)dB_i+\left({b}(\lambda_i)+\sum_{j\neq i}\frac{G(\lambda_i,\lambda_i)}{\lambda_i-\lambda_i}\right)dt\/,\quad i=1,\ldots,p\/, \end{eqnarray} which are available, according to Theorem 3 from \cite{bib:gm11}, when eigenvalues $\lambda_i(0)$ of $x_0$ are all distinct and before their eventual collision. {However, the It\^o formula states that the martingale part and the bounded variation part of $(e_1,\ldots, e_p)$ are given in terms of derivatives of the smooth function $F$ and those derivatives have just be determined on the open set $U=\{X\in \mathcal{S}_p: \lambda_i(X)\neq \lambda_j(X) \text{ for all } i\neq j, \text{ with } 1\leq i,j\leq p\/\}$. 
Since the derivatives of $F$ are continuous on $\mathcal{S}_p$ as well as the coefficients in \eqref{eq:en:SDE1} (the singular expressions $(\lambda_i-\lambda_j)^{-1}$ appearing in (\ref{eq:eigenvalues}) are no longer present in \eqref{eq:en:SDE1}), one can conclude, by continuity, that the equalities hold on $\bar{U}=\mathcal{S}_p$, i.e. one can drop the conditions that eigenvalues of the initial point are all different and that they are non-colliding for $t>0$.} \end{proof} Using Proposition \ref{Matrix_to_Polynomial} the following characterization of the symmetric polynomials related to Wishart processes is obtained: \medskip \begin{proposition} \label{prop:Poly} Let $X_t$ be a Wishart process, i.e. a solution of the matrix SDE \eqref{eq:Wishart:SDePLUS}. Then the symmetric polynomials $e_n=e_n(X)$, $n=1,\ldots, p$ are semimartingales satisfying the following system of SDEs \begin{eqnarray} d e_1&=&{2 \sqrt{e_1}dV_1+ p\alpha dt,}\label{eq:e1:SDEs}\\ de_n &=& M_n(e_1,\ldots,e_p)dV_n +(p-n+1)(\alpha-n+1)e_{n-1}dt\/,\quad n=2,\ldots,p-1\/, \label{eq:polynom_first:SDEs} \\ de_p &=& 2\sqrt{e_{p-1}e_p}dV_p +(\alpha-p+1)e_{p-1}dt, \label{eq:polynom_last:SDEs} \end{eqnarray} where $V_n$, $n=1,\ldots, p$ are one-dimensional Brownian motions and the functions $M_n$ are continuous on $\R^p$. Furthermore, for $n=1,\ldots, p$, the processes ${\mathcal M}_n(t):=\int_0^t M_ndV_n$ are martingales satisfying \begin{equation}\label{finite qvar} \mathbb E[\int_0^t\langle {\mathcal M}_n, {\mathcal M}_n \rangle_s ds]<\infty, \quad \text{for each} \ t>0\ { {\it and} \ n=1,\dots, p.} \end{equation} \end{proposition} \medskip \begin{remark} Note that by Proposition \ref{Matrix_to_Polynomial}, the explicit forms of the martingale parts $ {d{\mathcal M}_n}=M_n(e_1,\ldots,e_p)dV_n$ as well as their brackets $d\left<e_n,e_{m}\right>$ are known for every $n,m=1,\ldots,p$. \\ {Equation \eqref{eq:e1:SDEs} is given by Bru \cite{bib:b91} and is used in the proof of \eqref{finite qvar}}. Equation \eqref{eq:polynom_last:SDEs} is just kept for informative reasons. They are both covered by \eqref{eq:polynom_first:SDEs}, by setting $n=1$ and $n=p$. \end{remark} \begin{proof} Applying Proposition \ref{Matrix_to_Polynomial} to the SDE \eqref{eq:Wishart:SDePLUS}, one finds that \begin{equation}\label{eq: Mn} M_n=2\left(\sum_{i=1}^p\lambda_i(e_{n-1}^{\overline i})^2\right)^{1/2}. \end{equation} Moreover, the drift coefficients of $de_n$ satisfy $$ \sum_{i=1}^p \alpha e_{n-1}^{\overline i}-\sum_{i<j}(\lambda_i+\lambda_j)e_{n-2}^{\overline{i},\overline j}=(p-n+1)(\alpha-n+1)e_{n-1}. $$ It remains to show \eqref{finite qvar}, for each $n=1,\dots,p$. For $n=1$, {by \eqref{eq:e1:SDEs}}, $e_1(t)$ is a squared Bessel process. Furthermore, since $e_1(t)$ is non-centrally chi-squared distributed, for each $t>0$, and for each $m\geq 1$ \begin{equation}\label{eq M1} \int_0^t \mathbb E[\vert e_1(s)\vert^m ds]<\infty, \end{equation} hence by Fubini \begin{equation*} \mathbb E[\int_0^t\vert e_1(s)\vert^m ds]<\infty. \end{equation*} For $m=1$, this estimate implies \begin{equation} \mathbb E[\int_0^t\langle \mathcal M_1, \mathcal M_1 \rangle_s ds]<\infty, \end{equation} for each $t>0$. For $1<n\leq p$ one can use \eqref{eq: Mn} to obtain the estimate \[ \langle \mathcal M_n,\mathcal M_n\rangle= 4\sum_{i=1}^p\lambda_i(t) (e_{n-1}^{\bar i})^2\leq 4 e_{n-1}(t)\leq 4 e_1 ^{2n-2}(t), \] and thus, by \eqref{eq M1}, one obtains \eqref{finite qvar}. 
\end{proof} Since a Wishart process is $\bar{\mathcal{S}}_p^+$ valued by definition, so $e_n\geq 0$, for all $n=1,\dots,p$. The idea of the proof of the next Theorem is to show that for $(x_0,\beta)\not \in W$, some of the symmetric polynomials $e_n$ become strictly negative. \subsection{Solving the Wishart stochastic differential equations} This section gives a full characterization of the existence of solutions to Wishart SDEs \eqref{eq:Wishart:SDePLUS}. \begin{theorem}\label{char sdes} Let $\alpha\geq 0$, and $x_0\in \bar{\mathcal S}_p^+$. The following are equivalent. \begin{enumerate} \item \label{ax} The SDE \eqref{eq:Wishart:SDePLUS} has a global weak solution with $X_0=x_0$. \item \label{bx} $\alpha\geq p-1$, or $\alpha\in \{0,1,\dots, p-2\}$ and $\rank(x_0)\leq \alpha$. \end{enumerate} \end{theorem} \begin{proof} Assume first \ref{ax}. If $\alpha\geq p-1$, nothing has to be shown. Suppose, therefore, $\alpha<p-1$. Recall {formulas \eqref{eq:e1:SDEs}--\eqref{finite qvar}} from Proposition \ref{prop:Poly}. One can compute explicitly the expected value of the polynomials starting from the first one, \begin{eqnarray*} \ex e_1(t) = e_1(0)+p\alpha \int_0^t ds = e_1(0)+p\alpha t. \end{eqnarray*} Therefore \begin{eqnarray*} \ex e_2(t) &=& e_2(0)+(p-1)(\alpha-1)\int_0^t \ex e_1(s)ds\\ &=& e_2(0)+(p-1)(\alpha-1)e_1(0)t+p(p-1)\alpha(\alpha-1)\frac{t^2}{2}, \end{eqnarray*} and so on. Consequently $\ex e_n(t)$ is a polynomial of degree not greater than $n$. In particular, the coefficient of $t^n$ is \begin{eqnarray*} \frac{p(p-1)\cdot\ldots\cdot(p-n+1)\cdot\alpha(\alpha-1)\cdot\ldots\cdot (\alpha-n+1)}{n!}. \end{eqnarray*} If $\alpha\notin B$ and $n$ is the first integer greater than or equal to $\alpha+1$, then $\ex e_n(t)$ is a polynomial of degree $n$ such that the leading coefficient is negative. Consequently, it cannot stay positive for every $t>0$, which is an impossibility. \medskip If $\alpha=m\in B$, consider $\ex e_n(t)$ where $n=m+1$. Then \begin{eqnarray*} \ex e_n(t) = e_n(0)+(p-n+1)(\alpha-n+1)\int_0^t \ex e_{n-1}(s)ds = e_n(0). \end{eqnarray*} If $e_n(0)> 0$, then \begin{eqnarray*} \ex e_{n+1}(t) = e_{n+1}(0)+(p-n)(\alpha-n)e_n(0)t, \end{eqnarray*} i.e. the leading term is negative and thus $\ex e_{n+1}(t)<0$ for large $t$. It implies $e_n(0)=0$, i.e. $\rank(x_0)\leq n-1=m=\alpha$. Proof of \ref{bx} $\Rightarrow$ \ref{ax}: The existence of global weak solutions for $\alpha\geq p-1$ is { proved by Bru \cite{bib:b91} (Bru's proof for $\alpha > p-1$ can be easily extended to $\alpha \ge p-1$) and in \cite[Theorem 2.6]{bib:CFMT}.} Therefore, only the cases $\alpha\in \{0,1,\dots,p-2\}$ need to be considered. If $\alpha=0$, then $X=0$ is the global weak solution of \eqref{eq:Wishart:SDePLUS}, for initial value $x_0=0$. Let therefore $1\leq \alpha\leq p-2$, and $\rank(x_0)\leq \alpha$. Let $B_1,B_2,\dots, B_\alpha$ be a sequence of independent, $p$--dimensional standard Brownian motions, and let $y_1,\dots, y_\alpha\in\mathbb R^p$ {be} such that $x_0= y_1 y_1^\top+\dots y_\alpha y_\alpha^\top$. Then the process \[ X_t:=\sum_{i=1}^\alpha(y_i+B_i)(y_i+B_i)^\top \] is a continuous semimartingale, by construction, and $X_0=x_0$. Furthermore $dX_t=dM_t+\alpha I dt$, where $I$ is the $p\times p$ unit matrix, and $(M_t)_t$ is a continuous martingale having quadratic variation \eqref{eq: qvar}. Therefore, by Proposition \ref{prop wish semi}, the Wishart SDE \eqref{eq:Wishart:SDePLUS} has a global weak solution. 
\end{proof} \begin{remark} Necessity of {\rm (ii)} can be also proved, if the validity of the NCGS Conjecture is assumed (a fact that is proven in Section \ref{sec em}, and which has not been used above to keep the section self-contained). Suppose the existence of a weak solution. Then by Proposition \ref{stochWallach}, the solution is Wishart distributed, that is, for each $t\geq 0$, $X_t\sim\Gamma_p(\alpha/2,x_0;2t I)$. By the NCGS Conjecture, $\alpha/2\in W_0$ and, in addition, if $\alpha<p-1$ then $\rank(x_0)\leq \alpha$. \end{remark} \section{The NCGS Conjecture and Wishart Semigroups}\label{sec em} In this section Wishart semigroups are introduced, which are the main tool for the proof of the NCGS Conjecture in Section \ref{proof NCGS} below. In Section \ref{char low rank} all Wishart semigroups on lower rank matrices are characterized. \subsection{Wishart semigroups} For $p\geq 1$, let $D_p(k)\subset \bar{\mathcal S}_p^+$ be the sub-cones of rank $\leq k$ matrices, $0\leq k\leq p$, where clearly $D_p(0)=\{0\}$ and $D_p(p)=\bar{\mathcal S}_p^+$. Denote by $f_u(x)=\exp(\tr(-ux))$, where $u,x\in \bar{\mathcal S}_p^+$. \begin{definition} Let $D\subset \bar{\mathcal S}_p^+$ be a closed set. A Wishart semigroup $(P_t)_{t\geq 0}$ on $D$ is a positive, strongly continuous $C_0(D)$ contraction semigroup which for any $u\in\mathcal S_p^+$ acts on $f_u\mid_D$ as \begin{equation}\label{eq:action P} P_t f_u(x)=\det(I+2tu)^{-\alpha/2} e^{-\tr(x (u^{-1}+2 tI)^{-1})}, \quad x\in D. \end{equation} Here $\alpha\geq 0$ is called the drift parameter of $(P_t)_{t\geq 0}$. \end{definition} Note: A Wishart semigroup may or may not exist, depending on the choice of $\alpha$ and $D$. In Theorem \ref{nude semigroups} below, the existence of Wishart semigroups for $D=D_p(k)$ is characterized. The following remark summarizes several essential properties of Wishart semigroups: \begin{remark}\label{rem1x} Let $(P_t)_{t\geq 0}$ be a Wishart semigroup with drift parameter $\alpha$. \begin{enumerate} \item \label{rem1x1} (Markovian representation) In view of the Riesz representation theorem for positive functionals \cite[Chapter 2.14]{Rudin}, for each $t>0$, $x\in D$ there exists a positive measure $p_t(x,d\xi)$ on $D$ such that \begin{equation}\label{eq: action P} P_t f(x)=\int _D f(\xi)p_t(x,d\xi). \end{equation} Furthermore, the semigroup property of $(P_t)_{t\geq 0}$ implies, that $p_t(x,d\xi)$ satisfies the Chapman-Kolmogorov equations, thus $p_t(x,d\xi)$ is a Markov transition function. Hence, the semigroup has a stochastic representation as a Markov process $(\mathbb P^x)_{x\in D}$, where for each $x\in D$, $\mathbb P^x$ denotes the resulting probability on the canonical path space $D^{\mathbb R_+}$ with initial law $\delta_x$, and $X_t(\omega):=\omega(t)$, where $\omega\in D^{\mathbb R_+}$. { The Markov process $(X, \mathbb P^x)$ is called the {canonical representation} of the semigroup $ (P_t)_{t\geq 0}$ .} \item (C\`adl\`ag Paths) It is a well-established fact, that any Feller process (that is, a Markov process with strongly continuous $C_0$ semigroup), has a c\`adl\`ag version. \item (Affine Property) By definition, Wishart semigroups are affine semigroups (see \cite{bib:CFMT}), that is, the Laplace transform of their transition function is of the form \begin{equation}\label{ap} \mathbb E[e^{-\tr(u X_t)}\mid X_0=x]=e^{-\phi(t,u)-\tr(\psi(t,u) x)}, \end{equation} where \[ \phi(t,u)=\frac{\alpha}{2} \log(\det(I+2t u)),\quad \psi(t,u)=(u^{-1}+2tI)^{-1}. 
\] \item (Wishart transition function) By definition, the Markovian transition function of a Wishart semigroup $p_t(x,d\xi)$ is $\Gamma_p(\alpha/2,x ; 2tI)$ distributed, for each $t\geq 0$ and for all $x\in D$. Furthermore, the support of $\Gamma_p(\alpha/2,x ; 2tI)$ is contained in $D$. \item (Non-Explosion) $(P_t)_{t\geq 0}$ is conservative: Let $u_n\in\mathcal {S}_p^+$ such that $u_n\rightarrow 0$ as $n\rightarrow \infty$. By \eqref{eq: action P} and Lebesgue's dominated convergence theorem one thus has \[ P_t 1=\lim _{n\rightarrow\infty}P_t f_{u_n}(x)=1. \] \item (Semimartingales and Continuity) If, in addition, one assumes that the linear span of $D$ has non-empty interior, $(X,\mathbb P_x)$ for each $x$ is an affine semimartingale, that is, a semimartingale with differential characteristics which are affine functions in the state variable. {The continuity of the sample paths of $X$ follows}. For more details, see Appendix \ref{appendix a}. \item (Strong Maximum Principle) For a strongly continuous $C_0$ semigroup $(P_t)_{t\geq 0}$ with infinitesimal generator $\mathcal A$, the following are equivalent \begin{enumerate} \item \label{first im} $\mathcal A$ satisfies the strong maximum principle, that is, $\mathcal Af(x_0)\geq 0$, for any $f\in C_0$ that satisfies $f(x)\geq f(x_0)$. \item \label{se im} $(P_t)_{t\geq0}$ is positive (hence a Feller semigroup). \end{enumerate} The proof of {(a) $\Rightarrow$ (b)} is simple. A proof of the non-trivial implication (b) $\Rightarrow$ (a) employs the positivity of the Yoshida approximations of $\mathcal A$ (\cite[Corollary 2.8]{Kurtz}). \end{enumerate} \end{remark} Wishart semigroups on $D=\bar{\mathcal S}_p^+$ are well understood; they are the semigroups associated with affine diffusion processes on $D$. By \cite[Theorem 2.4]{bib:CFMT} the following are equivalent: \begin{itemize} \item The Wishart semigroup with drift parameter $\alpha$ exists with state space $D=\bar{\mathcal S}_p^+$. \item $\alpha\geq p-1$. \end{itemize} However, for strict subsets $D\subset\bar{\mathcal S}_p^+$, less is known about Wishart semigroups. In Theorem \ref{nude semigroups} below a new result for the sets of rank $k\leq p-1$ matrices is given. Let $\mathcal S^*_p$ be the space of rapidly decreasing smooth functions on $\mathcal S_p$, and for a subset $D\subseteq \bar{\mathcal S}_p^+$, let $S^*_p(D)=\{f\mid_{D}\mid f\in \mathcal S_p^*\}$. For any $f\in \mathcal S^*_p(D)$, the action of the following differential operator is well-defined, \begin{equation}\label{A sharp} \mathcal A^\sharp f(x)=2\tr(x\nabla^2 )f(x)+\alpha \tr(\nabla f(x)), \end{equation} where the notation of Bru \cite{bib:b91} \[ x\nabla^2:=x\cdot \nabla\cdot \nabla \] is used, with $\cdot$ denoting the matrix multiplication, and $\nabla$ being the matrix of partial differential operators $\nabla=(\nabla_{ij})_{ij}$, where $\nabla_{ij}=\frac{\partial}{\partial x_{ij}}$. This expression reads in canonical coordinates, (cf. the notation of \cite[Theorem 2.4]{bib:CFMT}) \[ 2\tr(x\nabla^2)=\sum_{i,j,k,l} A(x)_{i,j,k,l}\frac{\partial ^2}{\partial x_{ij}\partial x_{kl}}, \] where $A(x)$ is a quadratic form on $(\bar{\mathcal S}_p^+)^2$, defined in coordinates as \[ A(x)_{i,j,k,l}=x_{ik}\delta_{jl}+x_{il}\delta_{jk}+x_{jk}\delta_{il}+x_{jl}\delta_{ik}. \] \begin{proposition}\label{prop genx} Suppose $D_p(1)\subseteq D$, and let $(P_t)_{t\geq 0}$ be a Wishart semigroup on $D$ with infinitesimal generator $\mathcal A$. 
Then $S^*_p(D)\subset \mathcal D(\mathcal A)$ and $\mathcal Af=\mathcal A^\sharp f$ in \eqref{A sharp} for any $f\in S^*_p(D)$. \end{proposition} \begin{proof} It is first proved that \begin{equation}\label{action fu} \mathcal Af^D_u=(\mathcal A^\sharp f_u)\mid_D, \end{equation} for any exponential $f_u^D(\cdot):=e^{-\tr(u\cdot)}\mid_{D}$. Here the right hand side involves differentiation on the open domain $\mathcal S_p$, and later restriction to $D$, whereas on the left hand side $\mathcal A$ acts directly on {$f_u^D$}. By the definition of the affine property \eqref{ap}, \begin{equation}\label{gen form 2} \mathcal A f_u(x)=(F(u)+\text{tr}(R(u)x)f_u(x), \quad x\in D, \end{equation} for $f_u(x)=\exp(-\text{tr}(ux))$ and $u\in \bar{\mathcal S}_p^+$, and thus ${f_u^D}\in\mathcal D(\mathcal A)$. Here \begin{equation}\label{eq F} F(u)=\frac{\partial\phi(t,u)}{\partial t}|_{t=0}=\alpha\tr(u) \end{equation} and \begin{equation}\label{eq R} R(u)=\frac{\partial \psi(t,u)}{\partial t}|_{t=0}=-2 u^2, \end{equation} where the differentiation rules for inverse map and determinant (\cite[Proposition III.4.2 (ii) and Proposition II.3.3 (i)]{bib:farautKOR}) have been used. The assumption that $D$ contains rank one matrices implies that the convex hull of $D$ equals $\bar{\mathcal S} _p^+$, and thus $F$ and $R$ are uniquely determined, as the coefficients of the affine (in the state variable $x$) function \[ x\mapsto F(u)+\tr(x R(u)). \] A straightforward computation reveals that the action of $\mathcal A^\sharp$ on $f_u^D$ coincides with \eqref{gen form 2}, hence \eqref{action fu} holds. According to the density argument \cite[Theorem B.3]{bib:CFMT}, the linear hull of such exponentials for strictly positive definite $u$ is dense in the space of rapidly decreasing functions on $\bar{\mathcal S}_p^+$ and thus equality in \eqref{action fu} extends, by convergence properties in the Schwarz class and the closedness of $\mathcal A$, to rapidly decreasing functons. \end{proof} Recall that a time-homogenous Markov process is polynomial if the action of its semigroup can be extended to polynomials of any order (\cite[Definition 2.1]{Cuchiero11}). \begin{proposition}\label{prop GM1} Suppose $(P_t)_{t\geq 0}$ is a Wishart semigroup supported on $D\subset \bar{\mathcal S}_p^+$ with drift $\alpha\geq 0$. $(P_t)_{t\geq 0}$ is polynomial and its infinitesimal generator acts on symmetric polynomials as follows \begin{equation}\label{gm eq1} \mathcal Ae_n(x)=(p-n+1)(\alpha-n+1)e_{n-1}(x),\quad x\in D,\quad 1\leq n\leq p. \end{equation} \end{proposition} \begin{proof} By Proposition \ref{prop a22}, there is a version $(\widetilde X_t)_{t\geq 0}$ of $(X_t)_{t\geq 0}$ which is a Wishart semimartingale, and thus by Proposition \ref{prop wish semi} there exists a $d\times d$ dimensional Brownian motion $W$ such that the pair ($(\widetilde X_t)_{t\geq 0}, W$) constitutes a global weak solution of the Wishart SDE. Hence Proposition \ref{prop:Poly} may be applied, that yields the SDE dynamics \eqref{eq:e1:SDEs}--\eqref{eq:polynom_last:SDEs}. By \eqref{finite qvar}, {$\int_0^t M_n dV_{n}$} are true martingales, hence \[ \mathbb E^x[e_n(t)]=e_n(x)+(p-n+1)(\alpha-p+1)\mathbb E^x[\int_0^t e_{n-1}(s)ds], \] thus by Lebesgue's dominated convergence theorem, \[ \mathcal A e_n(x)=\lim_{t\downarrow 0}\frac{P_t e_n(x)-e_n(x)}{t}=(p-n+1)(\alpha-p+1)e_{n-1}(x). 
\] \end{proof} An equivalence relation $\simeq$ on the space of random variables with values in $\bar{\mathcal S}_p^+$ is introduced by defining $X\simeq Y$ if and only if for all $0\leq r\leq p$ \begin{center} $\mathbb P[\rank(X)=r]>0$ if and only if $\mathbb P[\rank(Y)=r]>0$ . \end{center} Three technical lemmas are useful: \begin{lemma}\label{mayx} Let $\beta\geq 0,\omega,\in \bar{\mathcal S}_p^+$ and $\Sigma\in\mathcal S_p^+$. \begin{enumerate} \item \label{x1} (linear automorphism) Let $\Sigma=q q^\top$, where $q$ is a real $p\times p$ matrix. If $X\sim\Gamma_p(\beta,\omega; I)$, then $Y=qXq^\top\sim \Gamma_p(\beta,q\omega q^\top; \Sigma)$ and $Y\simeq X$. Conversely, $Y\sim\Gamma_p(\beta,q\omega q^\top; \Sigma)$ implies $X=q^{-1}Y (q^{-1})^\top\sim\Gamma_p(\beta,\omega;I)$. \item \label{x2} (exponential family) If $X\sim\mu(d\xi)\sim\Gamma_p(\beta,\omega;I)$, then for $v:=\Sigma^{-1}-I$ there exists a random variable $Y$ distributed as \[ Y\sim \frac{\exp(\tr(v\xi))\mu(d\xi)}{\mathbb E[\exp(\tr(vX))]} \sim \Gamma_p(\beta,\Sigma\omega\Sigma;\Sigma) \]\ and $Y\simeq X$. Conversely, $Y\sim\Gamma_p(\beta,\Sigma\omega\Sigma;\Sigma)$ implies that $X\sim\Gamma_p(\beta,\omega; I)$ \item \label{x3} If $X\sim\Gamma_p(\beta,\omega;\Sigma)$ then $\Gamma_p(\beta,{\tilde \omega; \tilde\Sigma})$ exists for any ${\tilde\omega}$ satisfying $\rank({\tilde\omega})\leq \rank(\omega)$ and for any ${\tilde\Sigma}\in\bar{\mathcal S}_p^+$. \end{enumerate} \end{lemma} \begin{proof} The equivalence relation in \ref{x1} holds, since any linear automorphism maintains the rank of matrices. The remaining claims in \ref{x1} follow from the following chain of identities, using the very definition of the Wishart distribution in terms of its Laplace transform (using multiplicativity of the determinant and the cyclic property of the trace): \begin{align*} \mathbb E[e^{-\tr(u Y)}]&=\mathbb E[e^{-tr(u q X q^\top)}]=\mathbb E[e^{-\tr((q^\top u q)X)}]=(\det(I+ q^\top u q))^{-\beta}e^{-\tr(q^\top u q (I+q^\top u q)^{-1}\omega)}\\ &=(\det(I+\Sigma u))^{-\beta}e^{u (I+\Sigma u)^{-1} q\omega q^\top}, \end{align*} i.e. $Y\sim\Gamma_p(\beta,q \omega q^\top; \Sigma)$. Proof of \ref{x2}: Note that due to Proposition \ref{FLT maximal}, $v=-I+\Sigma^{-1}\in D(\mu)$ {\and} and \eqref{FLT Mayerhofer Wishart} holds for $v$. Hence the first part of the proof of \ref{x2} follows the lines of the proof of \cite[Proposition 3.1 (ii)]{bib:mayerJMA}. Conversely, let $Y\sim\mu_1=\Gamma_p(\beta,\Sigma\omega\Sigma; \Sigma)$. Then $v_1=-\Sigma^{-1}+I\in D(\mu_1)$ and, after a few computations, one obtains \[ \int e^{-\tr((u+v_1)\xi)}\mu_1(d\xi)=\left((\det(\Sigma))^{-\beta}e^{-\tr((\Sigma-I)\omega)}\right)(\det(I+u))^{-\beta} e^{-\tr(u(I+u)^{-1}\omega)}, \] where the pre-factor is recognized as \[ (\det(\Sigma))^{-\beta}e^{-\tr((\Sigma-I)\omega)}=\mathbb E[e^{-\tr(v_1 Y)}], \] and the second factor equals \[ (\det(I+u))^{-\beta} e^{-\tr(u(I+u)^{-1}\omega)}=\mathbb E[e^{-\tr(uX)}] \] for $X\sim\Gamma_p(\beta,\omega;I)$. Finally, for any ${ u}\in -\Sigma^{-1}+\mathcal S_p^+$, let \[ \nu(d\xi):=\frac{\exp(-\tr({ u}\xi))\mu(d\xi)}{\mathbb E[\exp(-\tr(({ u}X))]}. \] Then $\nu(B)>0$ if and only if $\mu(B)>0$, for any Borel set $B\subset\bar{\mathcal S}_p^+$. Hence $Y\simeq X$ in \ref{x2}. Proof of \ref{x3}: Let $\rank(\omega)=r$ with $0\leq r\leq p$. The following outlines the transformations that map $\Gamma_p(\beta,\omega;\Sigma)$ onto $\Gamma_p(\beta,\omega_1;\Sigma_1)$. 
Suppose first $\rank(\omega_1)=r$ and that $\Sigma_1=q_1q_1^\top$ is of full rank. By properties of the Natural Exponential Family \ref{x2}, one obtains $\Gamma_p(\beta,\Sigma^{-1}\omega \Sigma^{-1}; I)$. By \ref{x1} the transformation $\xi\mapsto q_a\xi q_a^{\top}$, where $q_a$ is an invertible but not necessarily symmetric matrix, yields $\Gamma_p(\beta,q_a\Sigma^{-1}\omega \Sigma^{-1} q_a^\top; \Sigma_a)$, where $\Sigma_a:=q_a q_a^\top$. Again using \ref{x2} yields \[ \Gamma_p(\beta,\Sigma_a^{-1}q_a\Sigma^{-1}\omega \Sigma^{-1} q_a^\top\Sigma_a^{-1}; I)=\Gamma_p(\beta,(q_a^{-1})^\top\Sigma^{-1}\omega \Sigma^{-1} q_a^{-1}; I) \] Finally, by \ref{x1}, the linear transformation $\xi\mapsto q_1 \xi q_1^{\top}$ yields \[ \Gamma_p(\beta,q_1(q_a^{-1})^\top\Sigma^{-1}\omega \Sigma^{-1} q_a^{-1}q_1^\top; \Sigma_1) \] Note that $q_a$ has not been specified yet. Since the linear automorphism group acts transitively on $\bar{\mathcal S}_p^+$ and maintains ranks, there exists $q_a$ such that \[ q_1(q_a^{-1})^\top\Sigma^{-1}\omega \Sigma^{-1} q_a^{-1}q_1^\top=\omega_1, \] and thus one obtains the existence of $\Gamma_p(\beta,\omega_1;\Sigma_1)$ for any invertible $\Sigma_1$ and any $\omega_1$ with $\rank(\omega_1)=r$. Finally, let $\rank(\widetilde\omega)\leq \rank(\omega)=r$ and let $\widetilde\Sigma$ be not necessarily invertible. Let $(\omega_n)_n$ be a sequence of non-centrality parameters $\omega_n$ such that $\lim_{n\rightarrow\infty}\omega_n=\widetilde\omega$, where $\rank(\omega_n)=r$ for each $n$, and let $(\Sigma_n)_n$ be a sequence of non-singular matrices $\Sigma_n$ such that $\lim_{n\rightarrow\infty }\Sigma_n= \widetilde\Sigma$. By the previous arguments, \[ \Gamma_p(\beta,\omega_n;\Sigma_n) \] exists for any $n\in\mathbb N$. By Proposition \ref{FLT maximal}, for each $n$, the characteristic functions are of the same form, and converge for any $u\in i\mathcal S_p$ as $n\rightarrow \infty$ to \[ \left(\det(I+\widetilde\Sigma u)\right)^{-\beta}e^{-\tr(u(I+\widetilde\Sigma u)^{-1}\widetilde\omega)} \] Hence, by L\'evy's continuity theorem, the limit is the characteristic function of a positive measure on $\bar{\mathcal S}_p^+$, namely $\Gamma_p(\beta,\widetilde\omega;\widetilde\Sigma)$. \end{proof} \begin{lemma}\label{extra supp} Let $\Xi$ be a positive semi-definite random matrix supported on $D_p(r-1)$ and $\rank(\Xi)=r-1$ with nonzero probability, where $1\leq r\leq { p}$. Let further $\eta\sim \mathcal N(\mu,\Sigma)$ with $\mu\in \mathbb R^p$ and with covariance matrix $\Sigma\in \mathcal S_p^+$. If $\Xi$ and $\eta$ are independent, then $\rank(\Xi+\eta \eta^\top)=r$ with nonzero probability. \end{lemma} \begin{proof} Assume first the constant case $\Xi=\Xi_0\in \bar{\mathcal S}_p^+$. Without loss of generality, one may assume $\Xi_0=\diag(I_{r-1},0)$, where $I_k$ is the $k\times k$ unit matrix. Define \[ V=\left(\begin{array}{ll} I_{r-1} & -\Omega\\ 0& I_{p-r+1}\end{array}\right) \] with a $(r-1)\times (p-r+1)$ matrix $\Omega_{ij}=\delta_{ij}\frac{\eta_i}{\eta_{r-1+j}}$. Then \[ V (\Xi_0 +\eta \eta^\top) V^\top=\diag(I_{r-1}, (\eta\eta^\top)_{r\leq i,j\leq p}) \] and since $(\eta_k)_{r\leq k\leq p}\sim\mathcal N((\mu_k)_{r\leq k\leq p}, (\Sigma_{ij})_{r\leq i,j\leq p})$, it follows that $\eta \eta^\top$ has rank $1$ almost surely. Thus $\rank(V(\Xi_0+\eta\eta^\top)V^\top)=r-1+1=r$ almost surely. Now consider a random matrix $\Xi$. Clearly, $\rank(\Xi+\eta\eta^\top)\leq r$. 
The set $A_\Xi:=\{\rank(\Xi(\omega))=r-1\}$ is Borel, since for $r=1$ it is precisely the set $\{\tr(\Xi)=0\}$, and for $r>1$ one has $A_\Xi=\{e_{r-1}(\Xi)=0\}^c \cap \{ e_r(\Xi) =0\}$. By assumption $\mathbb P[A_\Xi]>0$, thus the first part of the proof implies \[ \mathbb E[\rank(\Xi+\eta\eta^\top)\mid \rank(\Xi)=r-1]=r \] and thus $\rank(\Xi+\eta\eta^\top)=r$ almost surely on $A_\Xi$. \end{proof} \begin{lemma}\label{lem 1} Suppose $\Xi_0\in \bar{\mathcal S}_p^+$ with $\rank(\Xi_0)= p-1$, and let $\Xi\sim\Gamma_p((p-1)/2,\Xi_0;\Sigma)$, where $\Sigma$ is non-degenerate. Then $\rank(\Xi)=p-1$ almost surely. \end{lemma} \begin{proof} By Lemma \ref{mayx} \ref{x1}, the automorphism $\xi\rightarrow q^{-1}\xi q^{-1}$ with $q=\sqrt{\Sigma}$ yields $q^{-1}\Xi q^{-1}\sim\Gamma_p((p-1)/2,q^{-1}\Xi_0 q^{-1};I)$, and since $\rank(\Xi_0)=\rank(q^{-1}\Xi_0 q^{-1})$, and $\Xi\simeq q^{-1}\Xi q^{-1}$, one may without loss of generality assume $\Sigma=2I$. Let $\mu_i \in\mathbb R^p$ for $i=1,\dots,p-1$ such that $\mu_1 \mu_1^\top+\dots \mu_{p-1}\mu_{p-1}^\top=\Xi_0$. Let $x_{ij}$, $1\leq i \leq p$, $1\leq j \leq p-1$ be a sequence of independent standard normally distributed random variables, and set $x_j=(x_{ij})_{1\leq i \leq p}$ and $y_j=x_j+\mu_j$. Then (see \cite[Section 1]{bib:mayer}) the random variable \[ X=\sum_{j=1}^{p-1} y_j y_j^\top \] is $\Gamma_p(\frac{p-1}{2},\Xi_0; 2 I)$ distributed. Furthermore, $x:=(x_{ij})_{ij}$ has rank $p-1$ almost surely, hence $X$ has rank $p-1$ almost surely, and thus also $\Xi$. \end{proof} The following statement concerns the support of Wishart distributions with general shape parameter. \begin{proposition}\label{prop may} Suppose $\beta\in \{0,1/2,\dots,(p-2)/2\}$ and $\Sigma \in\mathcal S_p^+$. Suppose $\rank(\omega)= 2\beta+k$, where $1\leq k\leq p-(2\beta+1)$. Then $\Gamma_p(\beta,\omega;\Sigma)$, if exists, is supported in $D_{p}({2\beta})$. In other words, almost surely, \begin{equation}\label{eq: support} \rank(\Xi)\leq {2\beta} \end{equation} for any $\Xi\sim \Gamma_p(\beta,\omega;\Sigma)$. \end{proposition} \begin{proof} Suppose first $\beta=0$ and $\rank(\omega)\geq 1$. Then, also $\Gamma_p(0,\widetilde\omega;2t I)$ exists, with $\rank (\widetilde\omega)=1$, see Lemma \ref{mayx} \ref{x3}. Let $x\in \bar{\mathcal S}_p^+$, then one can write \[ x=\sum_{i=1}^p\mu_i\mu_i^\top,\quad \mu_i\in \mathbb R^p \] Let $t>0$ be fixed. By Lemma \ref{mayx} \ref{x3}, there exist independent random variables $\Xi_i\sim \Gamma_p(\beta=0,\mu_i\mu_i^\top;2tI)$, for $i=1,\dots,p$, and therefore \[ \Xi=\Xi_1+\dots+\Xi_p\sim \Gamma_p(0,x;2tI), \] and thus a transition function of a Wishart semigroup with zero drift is constructed, violating the drift condition for affine Markov processes on $\bar{\mathcal S}_p^+$ \cite[Theorem 2.4 and Definition 2.3, equation (2.4)]{bib:CFMT} (which rules out drifts strictly below $(p-1)/2$). Thus $\Gamma_p(\beta,\omega;\Sigma)$ does not exist. Let now $\beta\in \{1/2,\dots,(p-2)/2\}$, then, since $2\beta+k\geq 2\beta+1\geq 2$, there is nothing to show when $p\leq 2$. Set therefore $p\geq 3$. Then, \begin{itemize} \item $\beta':=(p-1)/2-\beta$ satisfies $1/2\leq \beta'\leq (p-2)/2$. \item Since \[ 2\leq \rank(\omega)= 2\beta+k\leq 2\beta +(p-(2\beta+1))=p-1 \] there exists $\omega'\in \bar{\mathcal S}_p^+$ with $\rank(\omega')=(p-1)-\rank(\omega)=(p-1)-(2\beta+k)$ and such that $\omega_*:=\omega+\omega'$ satisfies $\rank(\omega_*)= p-1$. 
Furthermore, since \[ \rank(\omega')= p-1-(2\beta+k)=2\beta'-k\leq 2\beta' \] a random variable $Y\sim \Gamma_p(\beta',\omega';\Sigma)$ exists, independent of $\Xi$: Let $m_i\in\mathbb R^p$ $(i=1,\dots,{n:=2\beta'})$ such that \[ m_1m_1^\top+\dots+ m_n m_n^\top=\omega' \] and $\xi_j$ $(j=1,\dots,n$) be a sequence of independent, normally distributed random variables with mean $m_j$, and variance $\Sigma/2$, and independent of $\Xi$. Then $Y:=\xi_1\xi_1^\top+\dots+\xi_n\xi_n^\top \sim \Gamma_p(\beta',\omega';\Sigma)$, see the remark following Definition \ref{def wish}. \end{itemize} The sum $\Xi'=\Xi+Y$ is $\Gamma_p((p-1)/2,\omega_*,\Sigma)$ distributed. Since $\rank(\omega_*)=p-1$, Lemma \ref{lem 1} applies and yields $\rank(\Xi')= p-1$ almost surely. Thus, by Lemma \ref{extra supp} (applied exactly $2\beta'$ times, since $Y$ is constructed by a sum of $2\beta'$ squares of independent, normally distributed vectors) one must have $\rank(\Xi)\leq 2\beta$ almost surely, as otherwise $\rank(\Xi')>p-1$ with non-zero probability. \end{proof} \subsection{Proof of the NCGS Conjecture.}\label{proof NCGS} Proof of $\Leftarrow$: { Sufficiency of conditions in NCGS Conjecture was shown for $2\beta\in B$ in \cite[Chap.38 (47), p.175]{john-kotz} and for $2\beta>p-1$ in \cite{bib:b91}. The case $2\beta=p-1$ follows from the case $2\beta>p-1$ by L\'evy continuity theorem arguments \cite{bib:mayer,bib:mayerJMA}.} Proof of $\Rightarrow$: Conversely, suppose the existence of a single distribution $\Gamma_p(\beta,\omega;I)$. Then by Lemma \ref{mayx} \ref{x3}, also $\Gamma_p(\beta,0;I)$ exists. Since the latter is a classical Wishart distribution with non-degenerate scale parameter, $\beta\in W_0$, the classical Gindikin set. \newline Let $\beta\in \{0,1/2,\dots,(p-2)/2\}$ and assume, for a contradiction, $\rank(\omega)=2\beta+l$, where $1\leq l\leq p-2\beta$. By Lemma \ref{mayx} \ref{x3} one can obtain non-central Wishart distributions for $\Gamma_p(\beta,\omega';\Sigma)$ with any $\rank(\omega')\leq 2\beta+l$ and any invertible $\Sigma$. Using, in addition, the support information of Proposition \ref{prop may}, one thus obtains a Wishart semigroup $(P_t)_{t\geq 0}$ with state space $D_{p}(2\beta+l)$ and with drift $2\beta$, by creating $\Gamma_p(\beta,x;2t I)$, for each $t>0$, and for each $x$ with $\rank(x)\leq 2\beta+l$. Denote by $\mathcal A$ the infinitesimal generator of $(P_t)_{t\geq 0}$. Distinguish the following two cases. \begin{enumerate} \item $l<p-2\beta$. Since for all $x\in D_p(2\beta+l)$, $e_{2\beta+l+1}{(x)=} 0$, \begin{align*} 0&=\lim_{t\rightarrow 0}\frac{P_t e_{2\beta+l+1}(x)-e_{2\beta+l+1}(x)}{t}=\mathcal A e_{2\beta+l+1}(x)=\\ &= (p-(2\beta+l))(-\beta-l)e_{2\beta+l}(x)\neq 0,\quad \text{for all } x \text{ with }\rank(x)=2\beta+l, \end{align*} which is a contradiction. Here, for the last identity Proposition \ref{prop GM1} has been used. \item $l=p-2\beta$. Then $\rank(\omega)=p$ and the semigroup $(P_t)_{t\geq 0}$ acts on $C_0(\bar{\mathcal S}_p^+)$. The positivity of the Feller semigroup implies that its infinitesimal generator $\mathcal A$ satisfies the positive maximum principle. Applied to $e_p(x)=\det(x)$ this implies that \[ \mathcal A \det(x_0)\geq 0 \] for any $x_0$ with $\rank(x_0)<p$. Choose $x_0$ with $\rank(x_0)=p-1$, then $e_{p-1}(x_0)>0$, and therefore by Proposition \ref{prop GM1} (setting $n=p$ and recalling {that} $e_p=\det$) \[ \mathcal A \det(x_0)=(2\beta-p+1)e_{p-1}(x_0)<0 \] because $\beta\in\{0,1\dots,\frac{p-2}{2}\}$, by assumption. 
This violates the positive maximum principle. \end{enumerate} These two contradictions imply that indeed $\rank(\omega)\leq 2\beta$, whenever $\beta\in\{0,\dots,\frac{p-2}{2}\}$, and thus the proof of the NCGS conjecture is finished. { \begin{remark} Let us mention another proof of the necessity in the NCGS. As above, the existence of a single distribution $\Gamma_p(\beta,\omega;I)$ implies the existence of a Wishart semigroup $(P_t)_{t\geq 0}$ with state space $D_{p}(2\beta+l)$ and with drift $2\beta$. By Proposition A.2(ii), the Wishart SDE \eqref{eq:Wishart:SDePLUS} has a global weak solution with $X_0=\omega$. The proof is completed by using Theorem \ref{char sdes}. \end{remark} } \subsection{A Characterization of Wishart Semigroups}\label{char low rank} The paper concludes with the following characterization of Wishart semigroups with state spaces {$D_p(k)$, the $p\times p$ symmetric positive semi-definite matrices of rank $\leq k$.\footnote{Note that $D_p(k)$ are non-convex domains for $k<p$, but, {by Theorem \ref{th super}, } Wishart semigroups on $D_p(k)$ cannot be extended to their convex hull $\overline{\mathcal S}_p^+$.} The statement has been conjectured by Damir Filipovi\'c \cite{bib:filcon} in 2009. \begin{theorem}\label{nude semigroups} Let $k\in\{1,\dots,p\}$ and let $\alpha \geq 0$. The following are equivalent: \begin{enumerate} \item \label{state2} The Wishart semigroup with state-space $D=D_p(k)$ exists. \item \label{state1} If $k\in\{1,\dots,p-1\}$, then $\alpha= k$, and if $k=p$, then $\alpha\geq p-1$. \end{enumerate} \end{theorem} \begin{proof} If $k=p$, that is $D=\bar{\mathcal S}_p^+$, then $\alpha\geq p-1$ due to \cite{bib:CFMT}, which also includes a proof of existence. Therefore, only the cases $k<p$ require a proof: Proof of \ref{state1} $\Rightarrow$ \ref{state2}: The existence is shown by construction, using squares. See, for instance, the proof of Theorem \ref{char sdes}, or \cite[Examples {III.1 and III.2}]{bib:mayer}. Proof of \ref{state2} $\Rightarrow$ \ref{state1}: Assume the existence of a Wishart semigroup on $D_p(k)$ \footnote{Using the NCGS conjecture, the following, weaker, conclusion can be made. Assume the existence of a Wishart semigroup on $D_p(k)$. Then $\Gamma_p(\alpha,x_0,I)$ exists with $\rank(x_0)=k$. By the NCGS Conjecture, $\alpha/2 \in W_0$ and, if $\alpha<p-1$, then $\rank(x_0)\leq \alpha$. This implies $\alpha \geq k$.}. Since $e_{k+1}$ vanishes on $D_p(k)$, one obtains by using Proposition \ref{prop GM1} that \[ 0=(\mathcal A e_{k+1})(x)=(p-k)(\alpha-k) e_k(x). \] Since $k<p$, and $e_k(x)>0$ for $\rank(x)=k$ matrices, $\alpha$ must be equal to $k$. \end{proof} \begin{appendix} \section{Wishart Semimartingales}\label{appendix a} \begin{proposition}\label{prop wish semi} Let $(\Omega,\mathcal F,(\mathcal F_t)_{t\geq 0}\mathbb P)$ be a {standard} filtered probability space. Let $(X_t)_{t\geq 0}$ be a continuous, $\bar{\mathcal S}_p^+$ valued semimartingale of the form \begin{equation}\label{eq: wish semi} dX_t=dM_t+\alpha { I dt} , \end{equation} where $\alpha\geq 0$, and the continuous martingale $M_t$ has quadratic variation \begin{equation}\label{eq: qvar} d{\langle}M_{t,ij}, M_{t,kl}{\rangle} =\left((X_t)_{ik}\delta_{jl}+(X_t)_{il}\delta_{jk}+(X_t)_{jk}\delta_{il}+(X_t)_{jl}\delta_{ik}\right)dt. 
\end{equation} Then there exists an extension $(\widetilde{\Omega},\widetilde{\mathcal F},(\widetilde{\mathcal F_t})_{t\geq 0}, \widetilde{\mathbb P})$ of $(\Omega,\mathcal F,(\mathcal F_t)_{t\geq 0}, \mathbb P)$ which supports a $d\times d$ standard Brownian motion $W$ such that \begin{equation}\label{eq: matrix sde} dX_t=\sqrt{X_t}dW_t+dW_t^\top \sqrt{X_t}+\alpha I dt. \end{equation} \end{proposition} \begin{proof} This is an application of \cite[Theorem V.20.1]{rogerswilliams2}, where one interprets the SDE \eqref{eq: matrix sde} in vector form, and thus $W$ as a vector of $p^2$ independent, standard Brownian motions. The details of the proof are the same as those found in \cite[p. 53, Proof of Theorem 2.6]{bib:CFMT}. \end{proof} \begin{proposition}\label{prop a22} Let $D\subset\mathcal S_p^+$ such that $D_p(1)\subset D$, and let $(P_t)_{t\geq 0}$ be a Wishart semigroup on $D$ with parameter $\alpha$. The following hold: \begin{enumerate} \item \label{crux 1} For $x\in D$ let $(X,\mathbb P^x)$ be the canonical representation of the Markov semigroup with the initial law ${\delta_x}$ (cf. Remark \ref{rem1x}(i)). There exists a version $\widetilde X$ of $X$ that is a continuous semimartingale of the form \eqref{eq: wish semi} with quadratic variation \eqref{eq: qvar}. \item \label{crux 2} For any $x\in D$, the Wishart SDE \eqref{eq:Wishart:SDePLUS} has a global weak solution with $X_0=x$. \end{enumerate} \end{proposition} \begin{proof} Proof of \ref{crux 1}: The canonical representation $(X, (\mathbb P_x)_{x\in D}$ constitutes a time homogeneous Markov process in the sense of \cite[Definition 1]{Cpaths} and an affine processes in the sense of \cite[Definition 2]{Cpaths}. Since $D_p(1)\subset D$, $D$ contains {$p\times (p+1)/2+1$} affinely independent elements, and thus $D$ satisfies \cite[Assumption 1]{Cpaths}.\\ Let $\mathcal F_t^0=\sigma(X_s,s\leq t)$ be the filtration generated by the canonical process $X_t(\omega):=\omega(t)$, and let $\mathcal F^0:=\vee _{t\geq 0}\mathcal F_t$. Then by \cite[Theorem 2]{Cpaths}, there exists a version $\widetilde X$ of $X$ which is c\`adl\`ag. Since $(P_t)_{t\geq 0}$ is conservative, \cite[Theorem 6]{Cpaths} implies that $\widetilde X$ is a semimartingale with characteristics $(B,C,\nu)$, where \begin{align*} B_{t,i}&=\int_0^t b_i(\widetilde X_{s_-})ds,\\ C_{t,ij}&=\int_0^tc_{ij}(\widetilde X_{s-})ds,\\ \nu(\omega;dt,d\xi)&=K(\widetilde X_t,d\xi) dt. \end{align*} Here $b: D\rightarrow \mathcal S_p$ and $c: D\rightarrow \text{Sym}_+(\mathcal S_p)$ are measurable functions, and $K(x,d\xi)$ is a positive kernel ($\text{Sym}_+(V)$ denotes positive semidefinite matrices on a vector space $V$). From the computations in the proof of Proposition \ref{prop genx} it follows that $(X,\mathbb P_x)$ is regular in the sense of \cite[Definition 7]{Cpaths}, that is, the coefficients $\phi,\psi$ are differentiable at $t=0$, with derivatives $F(u), R(u)$ given by \eqref{eq F} and \eqref{eq R}. On the other hand, by \cite[Theorem 7]{Cpaths}, the functions $F(u), R(u)$ uniquely determine the differential characteristics $b_i(x), c_{ij}(x)$ and $K(x,d\xi)$. A comparison of \eqref{eq F}--\eqref{eq R} with the expressions of $F$ and $R$ in \cite[Theorem 7]{Cpaths} finally reveals that $\nu=0$, i.e., the process $\widetilde X$ is continuous $\mathbb P_x$-almost surely, because by the semimartingale decomposition \[ X_t=X_0+\int_0^t dM_s+\int _0^tb(X_s)ds, \] where $M$ is the continuous martingale part of $X$. 
Proof of \ref{crux 2}: Follows from \ref{crux 1} by applying Proposition \ref{prop wish semi}. \end{proof} \section{Fourier-Laplace Transform of Wishart distributions} This section shows that the Laplace transform \eqref{FLT Mayerhofer Wishart} can be extended to its maximal domain, which is dictated by the blow up of the right side. The right side of \eqref{FLT Mayerhofer Wishart} is a real analytic function, which is finite on the domain \[ D(\mu):=-\Sigma^{-1}+\mathcal S_p^+ \] but blows up as the argument $u$ approaches the boundary $\partial D(\mu)$, since then the determinant vanishes. Furthermore, the right side of \eqref{FLT Mayerhofer Wishart} can be extended to a complex analytic function on the complex strip $D(\mu)+i\mathcal S_p$ (by just replacing $u$ by $u+iv$, where $v\in\mathcal S_p$) and it agrees, by definition, with the left side of \eqref{FLT Mayerhofer Wishart}, on a set of uniqueness, namely the open domain $\mathcal S_p^+$. Hence, by \cite[(9.4.4)]{dieudonne}, equality holds in \eqref{FLT Mayerhofer Wishart} for $u\in \mathcal S_p^++i\mathcal S_p$. The following extends the validity of \eqref{FLT Mayerhofer Wishart} to its maximal domain $D(\mu)+i\mathcal S_p$: \begin{proposition}\label{FLT maximal} Let $\mu=\Gamma_p(\beta,\omega;\Sigma)$. Then its Fourier-Laplace transform can be extended to the complex strip $D(\mu)+i\mathcal S_p$, and \eqref{FLT Mayerhofer Wishart} holds for any $u\in D(\mu)+i\mathcal S_p$. \end{proposition} For the proof, the following fundamental technical statement concerning extension of the Laplace transform of a measure on the non-negative real line is used. It is a refinement of \cite[Lemma A.4]{ADPTA}: \begin{lemma}\label{lem adpta} Let $\mu$ be a probability measure on $\mathbb R_+$, and $h$ an analytic function on $(-\infty,s_1)$, where $s_1>s_0\geq 0$ such that \begin{equation}\label{eq A1} \int _{\mathbb R_+}e^{sx} \mu(dx)=h(s) \end{equation} for $s\in (-\infty, s_0)$. Then \eqref{eq A1} also holds for $s\in (-\infty, s_1)$. \end{lemma} \begin{proof} If $s_0>0$, the statement follows from \cite[Lemma A.4]{ADPTA}. Let therefore $s_0=0$. Denote, {for $s\le 0$}, $f(s)=\int_{\mathbb R_+} e^{sx}\mu(dx)$. Since $h(s)$ is real analytic at $0$, there exists $0<\varepsilon<s_1$ such that for any $s\in (-\varepsilon,\varepsilon)$ \[ h(s)=\sum_{k\geq 0} \frac{c_k}{k!} s^k. \] Furthermore, by dominated convergence, one obtains iteratively for the left derivatives \[ \int_{\mathbb R_+} x^k e^{sx} \mu(dx)= \lim_{t\uparrow 0} \int _{\mathbb R_+}x^{k-1}e^{sx} \frac{e^{-tx}-1}{-t}\mu(dx)=f^{(k)}(s)=h^{(k)}(s), \quad s\leq 0, \] hence \[ c_k=\int_{\mathbb R_+}x^k \mu(dx). \] Hence, by monotone convergence, for any $s\in (0,\varepsilon)$ \[ h(s)=\sum_{k\geq 0} \int_{\mathbb R_+}\frac{s^k {x^k}}{k!} \mu(dx)=\int_{\mathbb R_+}\sum_{k\geq 0}\frac{s^k {x^k}}{k!} \mu(dx)=\int_{\mathbb R_+} e^{sx}\mu(dx). \] Thus $h(s)$ {verifies \eqref{eq A1}} on all of $(-\infty, \varepsilon)$. Now the assumptions of \cite[Lemma A.4]{ADPTA} are verified (setting $s_0=\varepsilon$), that shows the extension to the maximal domain $(-\infty, s_1)$. \end{proof} {\it Proof of Proposition \ref{FLT maximal}.} For $u=\Sigma^{-1}$, define $\mu^*$ {as} the pushforward of $\mu=\Gamma_p(\beta,\omega;\Sigma)$ under $\xi\mapsto \tr(u \xi)=\tr(\Sigma^{-1}\xi)$. 
Then $\mu^*$ is a probability measure on $\mathbb R_+$ with Laplace transform \begin{align}\label{eq: one dim} f(t):&=\int e^{t x}\mu^*(dx)=\int e^{-\tr{((-tu)\xi)}}\mu(d\xi)\\\nonumber&=(\det\Sigma)^{-\beta} \det(\Sigma^{-1}(1-t))^{-\beta}e^{t(1-t)^{-1}\tr (\Sigma^{-1}\omega)},\quad\quad\quad t\leq 0, \end{align} and the right side is real analytic for $t<1$. Hence, by Lemma \ref{lem adpta}, the left side is also finite for $t<1$ and equality holds in \eqref{eq: one dim}. This shows that formula \eqref{FLT Mayerhofer Wishart} extends to $u=-t\Sigma^{-1}$ for any $t<1$. Since any $u>-\Sigma^{-1}$ satisfies $u\geq -t\Sigma^{-1}$ for some $t<1$, one has \[ \int e^{-\tr(u\xi)}\mu(d\xi)\leq \int e^{t \tr(\Sigma^{-1}\xi)}\mu(d\xi)<\infty, \] and therefore the left side of \eqref{FLT Mayerhofer Wishart} exists for any $u>-\Sigma^{-1}$; thus the Fourier-Laplace transform also exists for any $u+iv$, where $u>-\Sigma^{-1}$ and $v\in\mathcal S_p$. Since the Fourier-Laplace transform is complex analytic on the strip $-\Sigma^{-1}+ \mathcal S_p^++i\mathcal S_p$ and agrees with the right side of \eqref{FLT Mayerhofer Wishart} on the domain $\mathcal S_p^+$ (which is a set of uniqueness), equality in \eqref{FLT Mayerhofer Wishart} holds by \cite[(9.4.4)]{dieudonne}. This concludes the proof of Proposition \ref{FLT maximal}. \end{appendix}
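For readers who wish to experiment with the dynamics of Proposition \ref{prop a22}, the matrix SDE \eqref{eq: matrix sde} is straightforward to discretize. The following Python sketch is an illustrative Euler scheme only (not the construction used in the proofs); the symmetrization step and the eigenvalue clipping inside the matrix square root are ad-hoc numerical safeguards:

\begin{verbatim}
# Euler scheme for dX = sqrt(X) dW + dW^T sqrt(X) + alpha*I dt.
import numpy as np

def matrix_sqrt(X):
    # symmetric square root via eigendecomposition, clipping tiny
    # negative eigenvalues produced by round-off
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def euler_wishart(x0, alpha, T=1.0, n_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    p, dt, X = x0.shape[0], T / n_steps, x0.copy()
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=(p, p))
        R = matrix_sqrt(X)
        X = X + R @ dW + dW.T @ R + alpha * np.eye(p) * dt
        X = 0.5 * (X + X.T)            # enforce symmetry
    return X

p, alpha = 3, 5.0                      # alpha: the Wishart parameter
XT = euler_wishart(np.eye(p), alpha)
# smallest eigenvalue stays (numerically) nonnegative for fine steps
print(np.linalg.eigvalsh(XT).min())
\end{verbatim}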
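Similarly, the scalar identity \eqref{eq: one dim} and its extension to $t<1$ can be probed by Monte Carlo. The sketch below treats the central case $\omega=0$, where the right side reduces to $(1-t)^{-p\beta}$; the identification of $\Gamma_p(\beta,0;\Sigma)$ with the law of $\sum_{k\leq n} g_k g_k^\top$ for $g_k\sim N(0,\Sigma/2)$ and $\beta=n/2$ is an assumption about the normalization of the Wishart family used here:

\begin{verbatim}
# Monte Carlo check of the pushforward Laplace transform, omega = 0.
import numpy as np
rng = np.random.default_rng(2)
p, n = 2, 6                            # beta = n/2 = 3
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)        # a generic positive definite Sigma
L = np.linalg.cholesky(Sigma / 2)
Sinv = np.linalg.inv(Sigma)
g = rng.normal(size=(10**5, n, p)) @ L.T
xi = np.einsum('mki,mkj->mij', g, g)   # Wishart samples
y = np.einsum('ij,mji->m', Sinv, xi)   # tr(Sigma^{-1} xi), a measure on R_+
for t in (-1.0, 0.2, 0.4):             # t > 0 lies beyond the obvious domain
    print(t, np.exp(t * y).mean(), (1 - t)**(-p * n / 2))
\end{verbatim}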
\section{Introduction.} The possibility of novel groundstates has been motivating the study of two dimensional frustrated quantum antiferromagnets for quite some time now. In classical unfrustrated antiferromagnets, the ground state is the well known Neel state. The SO(3) spin symmetry of the system is broken down to SO(2). The low energy excitations are the two branches of gapless spin waves which are the Goldstone modes. A commonly observed effect of frustration is that the ground state becomes a spiral state. The spin arrangement remains periodic and is characterized by a spiral vector, {\bf q}. The SO(3) symmetry is now completely broken and there are three gapless Goldstone modes. If the parameters of the system are such that the effects of quantum fluctuations are not very large then this basic picture remains true in the quantum system with changed values of physical quantities like staggered magnetization, spin wave velocities etc. However strong quantum fluctuations can destroy the long range spin order and the system could go to a paramagnetic phase. In frustrated systems several alternate novel effects of quantum fluctuations have been proposed. One possibility \cite{pwa} is the spin liquid groundstate that has the full symmetry of the hamiltonian. Closely related states are the chiral spin liquids or flux phases\cite{frad}. More recently, magnetic states characterized by long range order of higher tensor operators have been proposed \cite {cc}. Numerical and analytical studies \cite {huse,rrps} indicate that the triangular lattice antiferromagnet (TLAF) has a Neel ordered spiral ground state with a spiral angle of $2 \pi / 3$ . This is the so called $\sqrt{3}\times \sqrt{3} $ state. However, work on the spin 1/2 Kagome lattice antiferromagnet (KLAF) indicates the absence of any kind of long range Neel order \cite {zeng,chalk,leung}. The KLAF is therefore a potential candidate for novel groundstates. The KLAF is experimentally realized in the magnetoplumbite type compound Sr Cr$_{8}$ Ga$_{4}$ O$_{19} $ \cite {obr}. This is a layered compound containing planes of Cr$^{3+}$ ions that form a $S={3 \over 2}$ KLAF. About $80\%$ of the KLAF sites are occupied by the chromium ions. The inter Cr spacing is $2.9~~ A$. Susceptibility measurements show a Curie -Weiss behaviour at high temperature with a Curie-Weiss temperature $\theta _ {CW} \sim 400 K$. There is a spin glass like cusp at $T_{g} \sim 5 K$. The specific heat shows a $T^{2}$ behaviour below $T_{g}$ \cite {ram}. Neutron scattering however shows no Bragg peaks down to 1.5K \cite {broh}. There exists short range $\sqrt 3 \times \sqrt 3$ order with a correlation length of about $7 A$ at $1.5 K$. This has led to the speculation that the groundstate is characterized by the long range order of some order parameter that is invisible to the neutrons. Recent $\mu\mbox{sr}$ studies on the compound lends support to a spin liquid type of ground state \cite { musr}. Recently another system where the KLAF is experimentally realized has been reported \cite{jaro}. This is the deuteronium jarosite, (D$_3$O)Fe$_3$(SO$_4$)(OD)$_6$. The Fe$^{3+}$ atoms in this compound form layers of $S={5 \over 2}$ KLAF. About $97\%$ of the KLAF sites are occupied by the iron ions. The inter Fe spacing is $3.67~~ A$. The Curie-Weiss temperature is $\sim 1500~ K$. There is a spin glass type cusp at $13.8~ K$. The specific heat goes as $T^2$ below this temperature. Neutron scattering sees no long range order. 
There is short range order corresponding to the $\sqrt 3 \times \sqrt 3$ spin structure with a correlation length of about $19~$\AA\ at $1.9~ K$. The $T^2$ behaviour of the low temperature specific heat, the absence of long range spin order and the presence of short range ${\sqrt 3} \times {\sqrt 3}$ order are common properties of both these systems, indicating that these are universal properties of a KLAF. The specific heat behaviour indicates the presence of a gapless boson in the low temperature phase. However the neutron scattering shows absence of long range spiral order. Further, as mentioned above, numerical work indicates that all the symmetries of the hamiltonian are intact. What is the mechanism in these systems that produces a gapless boson while keeping the symmetries of the hamiltonian intact? In this paper, we address this puzzle and propose a possible solution for it. We work within the framework of the large $S$ semiclassical expansion. The fairly high value of the spin in the experimental systems indicates that these properties should be seen in this approximation. The classical KLAF has infinitely many (apart from symmetry operations) degenerate groundstates including many with non-coplanar spin configurations. It exhibits the order from disorder phenomenon, i.e., the spin wave modes around the planar groundstates are softer, hence the fluctuations partially lift the groundstate degeneracy. However, there still remain infinitely many distinct planar ground state spin configurations \cite{shend,chub}. This property of the KLAF makes it difficult to study analytically. In this paper we consider instead a one parameter family of models that interpolate between the TLAF and the KLAF. Such a model has also been considered by Zeng and Elser in \cite{zeng}, where they do a spin wave analysis of the model. We will refer to these models as the deformed triangular lattice antiferromagnet (DTLAF). The model is defined by the Hamiltonian, \beq H = J~\big ( \sum_{<i,j>\epsilon K_{B}} \vec{S_{i}}.\vec{S_{j}} + \chi \sum_{<i,j>\not \epsilon K_{B}} \vec{S_i}. \vec{S_{j}} ~~\big ) \eeq Here $ <i,j>$ label the nearest neighbour sites on a triangular lattice. $K_{B}$ denotes the set of nearest neighbour bonds that belong to the Kagome lattice (which is a subset of the triangular lattice). When $\chi =1$, the model is the TLAF, whereas if $\chi = 0$, it is the KLAF. It is also interesting to note that the structure of the Cr atoms in SCGO is made up of two layers. The atoms in one plane lie on a Kagome lattice, while those on the upper layer lie on a triangular lattice whose lattice points lie over the centres of the hexagons in the Kagome structure \cite{obr}. Therefore the DTLAF could be of direct relevance to SCGO. An important property of the DTLAF, which we will show in the next section, is that the ground state is unique (up to symmetry operations) for all nonzero values of \ki. For $0< \chi \leq 2 $, the ground state is the $\sqrt{3} \times \sqrt{3} $ state. Our strategy is then to study the model at $\chi \neq 0$ and analyse the quantum groundstate as a function of $\chi$. As mentioned earlier, short range $\sqrt{3} \times \sqrt{3}$ order has been experimentally observed both in SrCr$_8$Ga$_4$O$_{19}$ and in (D$_3$O)Fe$_3$(SO$_4$)(OD)$_6$. Theoretically too, in a large-$N$ formalism, the fluctuations pick out the $\sqrt{3} \times \sqrt{3}$ state \cite{sach}. This indicates that it should be meaningful to look upon the KLAF as the $\chi \rightarrow 0$ limit of the DTLAF.
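As a concrete bookkeeping aid, the couplings of the Hamiltonian above are easy to generate programmatically. The short Python sketch below uses one standard depletion convention for carving the Kagome lattice out of the triangular lattice (removing the sites with both coordinates even), which is an illustrative choice and need not match the labelling of fig. 1:

\begin{verbatim}
# DTLAF couplings: J on Kagome-Kagome bonds, J*chi on the rest.
def is_kagome(i, j):
    # deplete one site of every 2x2 supercell of the triangular lattice
    return not (i % 2 == 0 and j % 2 == 0)

def J_bond(site1, site2, chi, J=1.0):
    # nearest-neighbour bond coupling of the Hamiltonian above
    if is_kagome(*site1) and is_kagome(*site2):
        return J
    return J * chi

sites = [(i, j) for i in range(4) for j in range(4)]
print(sum(is_kagome(*s) for s in sites) / len(sites))  # 3/4 survive
\end{verbatim}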
The analysis of a spin system near a transition requires consideration of large amplitude fluctuations. Further, since the correlation length near the transition is large, the lattice model can be approximated by a continuum field theory. Thus the physics is expected to be well described by a field theory of the soft modes of the system. This expectation has been well verified experimentally for the unfrustrated square lattice antiferromagnet, where the physics is described well by the nonlinear sigma model \cite{nel}. The order parameter for this model is a unit vector field. The soft modes are the two Goldstone modes. Field theories for frustrated systems, in particular for the TLAF, have been derived \cite{domb} and analysed using momentum space renormalization group techniques \cite{fried,az1,az2}. The order parameter here is an SO(3) group element. Physically a rotation group element can be looked upon as describing the orientation of a rigid body. In the spin system this orientation is specified by the sublattice magnetization and the chiral order parameter. The internal symmetry group of these field theories is $SO(3)\times SO(2)$, which is larger than the $SO(3)$ symmetry of the spin system. The extra $SO(2)$ symmetry corresponds to rotations in the body fixed frame of the rigid bodies. The renormalization group analysis of these models \cite{fried,az1,az2} shows no novel phases. The system is either in the Neel ordered phase or in the usual paramagnetic phase with exponentially decaying correlation functions and gapped spin one magnon excitations. Thus the DTLAF can also be expected not to show any novel behaviour near \ki = 1. However, near \ki = 0 we expect some modes other than the Goldstone modes to soften, reflecting the infinite degeneracy that sets in at \ki = 0. The field theory that includes these modes would be appropriate to study the physics near \ki = 0. In this paper we motivate a field theory to describe the Kagome end of the model and study its phase structure. We start by finding the classical groundstates for different values of $\chi$ in section 2. We do the spin wave calculation in section 3, systematically parametrise the hard and soft fluctuations about the classical ground state in the region $0<\chi<1$ and identify the modes that soften when $\chi \rightarrow 0$. The field theory describing the system near $\chi =1$ is derived in section 4. In section 5, we motivate the form of the field theory near $\chi =0$ that includes large amplitude fluctuations of the modes that soften in this region. In section 6, we integrate out the Goldstone modes and obtain the effective theory of the new modes. The phases of this effective theory are analysed in section 7. We summarise our results in section 8. \section{Classical ground states.} In this section, we will analyse the ground state of the classical model. We will show that there are three different types of ground states corresponding to three ranges of the parameter $\chi$. The energy of the classical model can be written as, \beq E = J \sum_{<i,j> \epsilon K_{B}} \vec {S}_{i}. \vec {S}_{j} + J~ \chi \sum_{<i,j> \not \epsilon K_{B}} \vec {S}_{i}. \vec {S}_{j} \label{en} \eeq Here, $\vec {S_{i}}$ are vectors satisfying the constraint $\vec{ S_{i}}.\vec{ S_{i}} = S^{2}$. $K_{B}$ denotes the set of bonds that belong to the Kagome lattice. We begin with the parameter range $0 < \chi < 2$.
The energy in equation (\ref{en}) can be rewritten as, \beq \frac {E}{J}= \frac {1}{2}\left (1- \frac{\chi }{2} \right) \sum_ {\Delta \epsilon K _{\Delta}}\left (\sum_{i} \vec{S}_{i}\right ) ^2 + \frac{\chi }{4}\sum_ {\Delta \not \epsilon K_{\Delta}}\left (\sum_{i} \vec {S}_{i} \right ) ^2 - \frac {3S^{2} N}{2}\left (\frac{1+\chi}{2}\right )\label{en2}\eeq where the first sum is over the triangles that belong to the Kagome lattice and the second over the remaining triangles. N is the total number of sites. In the range of $\chi$ under consideration, the coefficients of the first two terms in equation (\ref{en2}) are positive. Thus the energy is minimized by spin configurations that satisfy the condition that the net magnetization of every triangle is zero. It is well known that the unique (up to symmetry operations) solution of this constraint is the spiral state with spiral angle equal to $2\pi /3$. Thus, this so-called $ \sqrt{3}\times \sqrt{3}$ state is the unique, stable groundstate of the model when $0 < \chi <2 $. At \ki = 0, of course, there are infinitely many other solutions to the constraint and the ground state is highly degenerate. The ground state energy in this range of \ki is given by, \beq E_{G.S} = -\frac {3JNS^{2}}{4} \left (\frac {1+\chi} {2} \right ) \eeq Next we look at the range $\chi \geq 2 $. We rewrite the energy as, \beq \frac{E}{J} = \sum_{\Delta \not \epsilon K_{\Delta}}\frac{1}{2}\left ( \vec {S}_{1K} +\vec {S}_{2K} + \frac{\chi }{2} \vec{S}_{NK} \right)^{2} - \frac {3S^{2}N}{2} \left ( 1+ \frac{\chi ^{2}}{8} \right ) \eeq Here the sum is over all the triangles that do not belong to the Kagome lattice. $ \vec {S}_{1K} $ and $ \vec {S}_{2K} $ are the spins at the two Kagome sites and $ \vec {S}_{NK} $ is the spin at the non-Kagome site in the centre of every hexagon. In the range $ 2 \leq \ki \leq 4 $, the quantity ($ \vec {S}_{1K} + \vec {S}_{2K} + \frac {\chi} { 2} \vec {S}_{NK} $) can be made to be equal to zero on every triangle by the non-coplanar spin configuration described below. Consider any non-Kagome site and let $ \vec {S_{a}} $ and $ \vec {S_{b}} $ be the spins of the $ \sqrt{3}\times \sqrt{3}$ spiral state on the sites that surround it. Choose, \beqar \vec{S}_{1K} &=& \cos \theta \vec{S}_{a} + S \sin \theta \hat{z} \nonumber \\ \vec{S}_{2K} &=& \cos \theta \vec{S}_{b} + S \sin \theta \hat{z} \label{conf1}\eeqar If $\theta $ satisfies the equation \beq \sin ^{2} \theta = \frac{1}{3}\left ( \frac{\chi ^{2}}{4} -1 \right)\label{sin} \eeq then we have $ \mid \vec {S}_{1K} +\vec {S}_{2K} \mid = \frac {\chi}{2} S $. So if we choose \beqar \vec{S}_{NK} &=& - \frac{2}{\chi } \left ( \vec {S}_{1K} + \vec {S}_{2K} \right ) \nonumber \\ &=& \frac{2}{\chi } \left ( \cos \theta \vec {S}_{c} - 2S \sin \theta \hat{z} \right ) \label{conf2} \eeqar where $\vec{S}_{c} = -(\vec{S}_{a}+\vec{S}_{b})$ is the third spiral direction, then the condition $ \vec {S}_{1K} +\vec {S}_{2K} + \frac {\chi} {2}\vec {S}_{NK} = 0 $ is satisfied in every triangle under consideration. Equation (\ref{sin}) always has a solution in the parameter range $2 \leq \chi \leq 4$. Thus the non-coplanar configuration described in equations (\ref{conf1}) and (\ref{conf2}) is the stable ground state in this range of \ki (a numerical check of this construction is sketched below). The ground state energy in this range is given by, \beq E_{G.S} = - \frac{3S^2 JN}{2} \left ( 1 + \frac{\chi^{2}}{8} \right )\label{en3} \eeq At \ki = 4, we have $\theta = \pi / 2 $. All the spins are then collinear. The spins on the Kagome lattice point up and the others point down.
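The algebra behind equations (\ref{conf1})--(\ref{conf2}) is easily verified numerically. The following Python sketch is illustrative only: $\vec{S}_a$ and $\vec{S}_b$ are taken $120^\circ$ apart in the plane, as in the $\sqrt{3}\times\sqrt{3}$ state, and the choice (\ref{sin}) is checked to give spins of length $S$ with $\mid\vec{S}_{1K}+\vec{S}_{2K}\mid = \frac{\chi}{2}S$:

\begin{verbatim}
# Check of the non-coplanar construction for 2 <= chi <= 4.
import numpy as np
S, chi = 1.0, 3.0
Sa = S * np.array([1.0, 0.0, 0.0])
Sb = S * np.array([-0.5, np.sqrt(3) / 2, 0.0])   # 120 degrees from Sa
z = np.array([0.0, 0.0, 1.0])
th = np.arcsin(np.sqrt((chi**2 / 4 - 1) / 3))    # Eq. (sin)
S1 = np.cos(th) * Sa + S * np.sin(th) * z
S2 = np.cos(th) * Sb + S * np.sin(th) * z
SNK = -(2 / chi) * (S1 + S2)
print(np.linalg.norm(S1 + S2), chi * S / 2)      # agree
print(np.linalg.norm(S1), np.linalg.norm(SNK))   # both equal S
\end{verbatim}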
Examining the energy as written in equation (\ref{en3}), it is clear that this state ($ \theta =\pi /2$) will minimize the energy in the range $ \chi \geq 4 $, the ground state energy in this range being, \beq E_{G.S} = - \frac{3S^2 JN}{2} \left ( \chi -1 \right ) \eeq In the range $\chi > 2 $, the system has non-zero magnetization. The average magnetization per site is given by, \beqar \vec{M}&=& S \sin \theta \left( \frac{3}{4}-\frac {1}{ \chi } \right ) \hat{z} ~~~~~ 2 \leq \chi \leq 4 \nonumber \\ &=& \frac{S}{2} \hat{z} ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \chi \geq 4 \eeqar To summarize, at \ki = 0 the model is exactly the Kagome lattice model and the ground state is infinitely degenerate. As soon as we turn on \ki, this infinite degeneracy is lifted and we have the $ \sqrt{3}\times \sqrt{3}$ spiral state as the unique (up to symmetry operations) ground state. This state remains the ground state until \ki = 2. The spins then start lifting off the plane, the spins on the Kagome sites having a $\hat {z}$ component which is antiparallel to the $\hat {z} $ component of the spins on the non-Kagome sites. This state is thus a combination of a spiral and a ferrimagnetic state. At \ki = 4 all the spins are collinear and the transition to the ferrimagnetic state is complete. The ferrimagnetic state persists for all values of $ \chi \geq 4$. This completes our analysis of the classical ground states. For the rest of the paper we will be focussing our attention on the region $ 0 < \chi \leq 2 $ and will be analyzing the fluctuations about the $ \sqrt{3}\times \sqrt{3}$ ground state. \section{Spin wave theory} In this section we do the spin wave analysis of our model hamiltonian in the region $0 < \chi < 2$. The calculation has been done earlier in reference \cite{zeng}. Our aim here is to compute the gaps as a function of $\chi$ and explicitly identify the modes which soften as $\chi \rightarrow 0$. The hamiltonian is, \\ \beq H =\sum_{<i,j>}J_{ij}\vec{S_i}.\vec{S_j}\eeq where $ J_{ij}=\chi$ if $i$ or $j$ does not belong to the Kagome lattice and $ J_{ij}=1$ when $i$ and $j$ both lie in the Kagome lattice. The unit cell, as shown in fig. 1, is a set of 12 points. This is commensurate with the periodicity of the DTLAF and the $\sqrt{3} \times \sqrt{3} $ structure of the classical groundstate. Adapting our notation to what is suggested by the unit cell structure, we rewrite the hamiltonian as, \beq H=\sum_{Ii\alpha,Jj\beta}\frac{1}{2}J_{Ii\alpha,Jj\beta}Tr[S_{Ii\alpha}S_{Jj\beta}]\eeq where we have used the notation $S_{Ii\alpha} = {1 \over 2}\vec{S}_{Ii\alpha}.\vec{\tau}$, $\tau^a$ being the Pauli spin matrices. The index I labels the unit cell. The set $(i,\alpha)$ label the spins in each unit cell. $\alpha = 0,1,2 $ is the sublattice index. $i=0,\dots,3$ label the four different spins of each sublattice in the unit cell. The convention we are using to label the twelve spins in each unit cell is shown in fig. 1. We then write the spins as, \beq S_{Ii\alpha}={\tilde s}\{n_{\alpha} -\frac{i}{\sqrt{\tilde s}} [w_{Ii\alpha},n_{\alpha}] -\frac{1}{2\tilde s}[w_{Ii\alpha},[w_{Ii\alpha},n_{\alpha}]]\} \eeq This is the usual Holstein-Primakoff transformation, and $n_{\alpha}={1 \over 2}\vec{n_{\alpha}} .\vec{\tau},~ \vec{n_{\alpha}}$ being the classical groundstate spin configuration. ${\tilde s} = \sqrt{S(S+1)}$, so that the magnitudes of the spins are normalised to $\vec{S}.\vec{S} = S(S+1)$.
Here, $w_{Ii\alpha}=\frac{1}{2}{\vec w_{Ii\alpha}}.{\vec \tau}$, with ${\vec w_{Ii\alpha}}$ being perpendicular to the ground state spin orientation, $\vec n_\alpha$. Hence $\vec{\wIi} = \tilde P_{Ii\alpha}\hat{\epsilon}_{\alpha}^{1}~+~\tilde Q_{Ii\alpha} \hat{\epsilon}_{\alpha}^{2} $, with $\hat \epsilon_{\alpha}^1 ,\hat \epsilon_{\alpha}^2,\hat n_{\alpha} $ forming an orthogonal set of basis vectors for each $\alpha$ and $[\tilde Q_{Ii\alpha},\tilde P_{Jj\beta}]=i\delta_{IJ}\delta_{ij}\delta_{\alpha \beta}$. The hamiltonian expanded to quadratic order in the fluctuations, \wIi, is given by, \beq H=\frac{\tilde s^2}{2}\sum_{r=0,3}J_{Ii\alpha,Jj\beta}^rTr[-\frac{1}{2\tilde s} \nal\nbet(\wIi^2+\wJj^2)-\frac{1}{ \tilde s}\wIi\nal\wJj\nbet]\eeq We define the Fourier transform as follows, \beq \wki=\sum_{I}\wIi\exp{(-i\vec{K}.\vec{I})}\eeq where $I$ is the unit cell index and $\vec{K}$ takes values in the Brillouin zone. Now we can write H in the form, \beq H=J\tilde s\sum_{K,i\alpha,j\beta}[\tilde P_{Ki\alpha} M^{-1}_{i\alpha, j\beta}\tilde P_{-Kj\beta}+\tilde Q_{Ki\alpha} K_{i\alpha,j\beta}\tilde Q_{-Kj\beta}]\eeq For arbitrary $\chi$ the matrices $M^{-1}$ and K do not commute, hence it is not possible to directly diagonalise H. We can define the matrix $M^{-1}K ~=~ \Omega^2$; the left and right eigenvectors of this matrix, $\Psi_L^{n,r}$ and $\Psi_R^{n,r}$, are the normal modes of H, and the eigenvalues $\omega^2_{n,r}$ give the squares of the corresponding energy gaps. The old variables $\tilde P$ and $\tilde Q$ are written in terms of the new canonical variables $P $ and $Q$ as follows, \beqar \tilde P_{K i\alpha}&=& 2\Psi_{R,i \alpha}^{nr}\sqrt{\frac{c'_{n r}}{\omega_{n r}}} P_{Knr}\\ \tilde Q_{K i\alpha}&=& 2\Psi_{L,i \alpha}^{nr}\sqrt{\frac{c_{n r}}{\omega_{n r}}} Q_{Knr}\eeqar The explicit form of $M^{-1}$ and K and our derivation of the normal modes of H are given in Appendix A. In terms of the new canonical variables $P_{nr}$ and $Q_{nr}$, the Hamiltonian is, \beq H = \frac{1}{2}\sum_{n r} \omega_{n r}(P_{Kn r}P_{-Kn r}+ Q_{Kn r}Q_{-Kn r}) \eeq Explicit expressions for the gaps and the left and right eigenvectors of the matrix $\Omega ^2$ are given in Appendix B. The modes (0,0), (0,1), (0,2) are the soft modes, gapless for all $\chi$, which we shall refer to as the S-S modes. The modes (1,0), (1,1), (3,1), (1,2), (3,2), which are hard for non-zero $\chi$ but become gapless for $\chi=0$, will be referred to as the H-S modes. The modes (2,0), (3,0), (2,1), (2,2), which remain hard for all $\chi$, will be referred to as the H-H modes. Among the H-S modes, the modes labelled (1,0), (1,1) and (1,2) become gapless at $\chi=0$ simply because 3 points from each unit cell decouple from their neighbours at $\chi=0$. This can be seen by looking at the expressions for the corresponding eigenvectors at $\chi = 0$, as given in Appendix B. The modes (3,1) and (3,2), on the other hand, are the ones which truly soften and become gapless at $\chi =0$. A look at the contribution from these different modes to the reduction of the staggered magnetization gives an idea about how these modes affect the physics close to the Kagome end.
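Before turning to the staggered magnetization, we note that the diagonalization step above is easy to reproduce numerically. The toy Python sketch below (random $3\times 3$ matrices standing in for the $12\times 12$ blocks of Appendix A) extracts the squared gaps as the eigenvalues of $\Omega^2=M^{-1}K$ and checks them against the equivalent symmetric problem built from a Cholesky factor of $M^{-1}$:

\begin{verbatim}
# Normal-mode gaps of H = P M^{-1} P + Q K Q from Omega^2 = M^{-1} K.
import numpy as np
rng = np.random.default_rng(3)

def spd(n):                        # random symmetric positive definite
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

Minv, K = spd(3), spd(3)
w2, Psi_R = np.linalg.eig(Minv @ K)      # omega^2 and right eigenvectors
L = np.linalg.cholesky(Minv)
w2_sym = np.linalg.eigvalsh(L.T @ K @ L) # same spectrum, symmetric form
print(np.sort(w2.real))
print(w2_sym)                            # identical up to ordering
\end{verbatim}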
The staggered magnetization $M_I$ is given by, \beq M_{I}=\frac{\tilde{s}}{12}\sum_{i,\alpha}U_{Ii\alpha}^\dagger \frac{\tau^{1}}{2} U_{Ii\alpha}\eeq where $ U_{Ii\alpha}= \exp({i \over {\sqrt {\tilde s}}}w_{Ii\alpha})$. \\ Expanding $M_I$ up to terms quadratic in $w_{Ii\alpha}$ we have, \beq M_{I}=\frac{\tilde s}{12}\sum_{i\alpha}\frac{1}{2}[ \tau^{1}-\frac{2i}{\sqrt{\tilde s}} \wIi\tau^{1}-\frac{2}{\tilde s}\wIi^{2}\tau^{1}]\eeq The average staggered magnetization is, \beq< M_{I}>=M_{I}^{cl}(1-\bigtriangleup M_I)\eeq where $\bigtriangleup M_I$ is given by, \beq \bigtriangleup M_{I} = \frac{1}{24}\sum_{a,i,\alpha}[<\wIi^{a}\wIi^{a}>-1] \eeq The contributions to $\bigtriangleup M_I$ coming from the hard and soft modes, plotted as a function of $\chi$, are shown in fig. 2. The contribution of the hard modes is seen to dominate close to \ki = 0. This is because two of the H-S modes start softening in this region and give a large contribution to $\bigtriangleup M_I$. This indicates that while deriving the low energy effective field theory of this model we should allow for large fluctuations of the modes (3,1) and (3,2) near $\chi = 0$. Hence, near the KLAF end, the theory should be described by 5 parameters which include three corresponding to the S-S modes and two corresponding to the H-S modes. But before we look at this theory, we take a brief look at the physics close to the TLAF. \noindent \section{The Field Theory near $\chi = 1$} As mentioned in the introduction, we expect the lattice spin system to be well described by a field theory near phase transitions, where the physics is dominated by the low energy, long wavelength modes. This field theory has previously been derived in reference \cite{domb} for the TLAF. Near $\chi=1$, the low energy modes are the three Goldstone modes, the S-S modes. So we must take into consideration the large amplitude fluctuations of these modes. To do this, we write the spins as, \beq S_{Ii\alpha}={\tilde s}U_{Ii\alpha}^\dagger n_{\alpha}U_{Ii\alpha} \label{uspin} \eeq We separate the hard and the soft modes by rewriting the $U_{Ii\alpha}$ as, \begin{equation} U_{Ii\alpha} = \expw W_I \label{param1} \end{equation} Here $ w_{Ii\alpha}$ contains only the H-S and the H-H modes. In the derivation of the field theory, $w_{Ii\alpha}$ are assumed to be small. There is no assumption about $W_I$ and they can take any value. The $W_{I}$ correspond to rigid rotations of all the spins in the unit cell. These are therefore exactly the Goldstone modes. Therefore, if $w_{Ii\alpha}$ is assumed to be small, we have a parameterization of the spins which allows for large fluctuations of the soft (S-S) modes and small fluctuations of the hard (H-S and H-H) modes. The effective action in the long wavelength, low energy approximation is obtained by keeping only the terms quadratic in the hard fluctuations, and then integrating them out. This leaves us with the effective field theory of the soft modes. The details of this method of deriving the field theory will be described elsewhere.
The final expression for the action that we get is, \beq S=\int d^{3}x \frac{1}{2}\sum_{\mu,a} \rho_{\mu}^{a}L_{\mu}^aL_{\mu}^{a} \label{chi1action} \eeq where $L_{\mu}^{a}~=~ \frac{1}{2} Tr[\tau^{a}\partial_{\mu}W(x)W(x)^{\dagger}]$, $ \rho_{0}^{0}~=~\frac{1}{J}\frac{4}{9\sqrt{3}}\frac{(3-\chi)}{\chi(2-\chi)}~, ~ \rho_{0}^{1,2}~=~ \frac{1}{J}\frac{4}{9\sqrt{3}}\frac{(3+7\chi)}{\chi(4+\chi)}~,~ \\ \rho_{i}^{0}~=~JS^2\sqrt{3}~(1+\chi)~$ and $ \rho_{i}^{1,2}~=~JS^2\sqrt{3}~\frac{\chi(5-\chi)}{(3+\chi)}$, for $i=1,2$. Our expressions for the parameters, evaluated at $\chi=1$, coincide with the values given in reference \cite{domb}. Before ending this section we describe the symmetries of this model. The original spin hamiltonian is invariant under the $SO(3)$ spin rotations. This corresponds to the spins $S_{Ii \alpha}$ transforming as follows, \beq S^a_{Ii \alpha} \rightarrow (\Omega_R)^a_b S^b_{Ii \alpha} \nonumber \eeq where $\Omega_R$ is an $SO(3)$ matrix. In terms of the matrices $S_{I i \alpha}$, \beq S_{Ii\alpha} \rightarrow X^\dagger S_{Ii\alpha} X \label{srt} \eeq where X is the SU(2) representative of the matrix $\Omega_R$. From equations (\ref{uspin}) and (\ref{param1}), we see that this corresponds to the transformation, \beq W(x) \rightarrow W(x)X \label{urt} \eeq $L_{\mu}$ and hence the action in equation (\ref{chi1action}) are invariant under this transformation. We refer to this symmetry as the $SO(3)_R$ symmetry. In addition the action is also invariant under the transformation, \beq W(x) \rightarrow Y W(x) \label{ult} \eeq where $Y~ \epsilon~ SO(2)$ and consists of matrices of the form $ \exp{i \theta \tau^3}$. We refer to this symmetry as the $SO(2)_L$ symmetry. It acts on the spins as follows, \beq S^a_{Ii\alpha} \rightarrow (\Omega_L)_{\alpha}^{\beta} S^a_{Ii\beta} \label{slt} \eeq where $\Omega_L$ is the $SO(2)$ matrix corresponding to $Y$. This transformation is not a symmetry of the lattice model. It only becomes so in the continuum field theory. The rudiment of this symmetry is observed in the lattice spin wave hamiltonian as a discrete $Z_3$ symmetry. This comes from the way the translation symmetry of the original spin hamiltonian is realised and is discussed in more detail in reference \cite{az1}. The full internal symmetry group of the model is therefore $SO(3)_R \times SO(2)_L$. \noindent \section{ The field theory near $\chi=0$ } We now turn to the low energy physics near $ \chi = 0 $. To include the large amplitude fluctuations of the H-S modes, we look for a parametrization of the spins as in equation (\ref{param1}), in which both the S-S and the H-S modes are allowed to have large fluctuations. We also want the parameterisation to be in terms of quantities defined over the whole unit cell (i.e. independent of the indices $i$ and $\alpha$) just as the S-S modes were represented by $W_I$. The small fluctuations of the classical ground state configuration due to the H-S modes can be written using equation (\ref{uspin}) as, \beq S_{Ii\alpha}= {\tilde s}n_{\alpha}+i{\sqrt {\tilde s}}[n_{\alpha},w_{Ii\alpha}] \label{sfluchs} \eeq where $w_{Ii\alpha} = [P^{+} \Psi^{3,1}_{i\alpha} + P^{-} \Psi^{3,2}_{i\alpha}]\epsilon^1 _{\alpha}$ with $P^+ =(P^-)^{*} = P_1 - iP_2 $.
We find that if we parameterise the $U_{Ii\alpha}$ matrices in terms of a unit vector $\hat m_I$ as follows, \beq U_{Ii\alpha} = \exp{i\frac{(i-1)\phi \tau ^3} {2}} \exp{i \frac{\phi m_I}{2}} \exp{i \frac{(1-2i)\phi \tau ^3}{4}} \label{paramhs} \eeq where $\phi=2\pi / 3 $ and $m_{I}=\hat {m}_{I}.\vec {\tau}$, then the small fluctuations in equation (\ref{sfluchs}) are exactly reproduced when $\hat m_I$ is taken to be a small deviation from the $z$ axis. Namely, writing $m_I = \tau^3 + \pi ^1 \tau ^1 + \pi ^2 \tau^2 $ and keeping terms up to linear order in $ \pi^a$, where $\pi ^1 = \frac {1}{3} P_2 $ and $\pi^2 = \frac{1}{3} P_1 $. Equation (\ref{paramhs}) thus gives a parameterisation, in terms of a unit vector field, of the large amplitude fluctuations caused by the H-S modes. The complete expression for $U_{Ii\alpha}$ including the effects of the H-S, the H-H and the S-S modes can be written as, \beq U_{Ii\alpha} = \expwd ~ V_{Ii} ~W_I \label {param2}\eeq where $w_{Ii\alpha}$ is expanded in terms of the H-H modes alone, $ V_{Ii}$ is given by the right-hand side of equation (\ref{paramhs}), and the SU(2) matrix $ W_I$ contains the S-S modes. The expression (\ref{paramhs}) shows that the H-S modes cause a deformation of the spin arrangements within the unit cell. The fluctuations corresponding to $W_I$ cause rigid rotations of the spins within a unit cell as before and are the Goldstone modes. We now examine the transformation properties of the new fields under the symmetries of the theory. First we consider the $SO(3)_R$ spin rotation symmetry of the hamiltonian. The transformation of the spins under this symmetry is given in equation (\ref{srt}). This transformation of the spins is obtained if $W_I$ and $m_I$ transform as follows, \beq W_I \rightarrow W_I X \label{wrt} \eeq and \beq m_I \rightarrow m_I \label{mrt} \eeq $\hat{m} $ is therefore a spin singlet. Next we consider the $SO(2)_L$ symmetry described in section 4. As mentioned there, this is not a symmetry of the spin system but is however a symmetry of the low energy, long wavelength field theory near $\chi=1$. We assume that this symmetry persists near $\chi=0$ also. The transformation of the spins in equation (\ref{slt}) is obtained if we have, \beq W\rightarrow Y W \label{wlt} \eeq and \beq m_I\rightarrow Y m_IY^\dagger \label{mlt} \eeq Equations (\ref{wrt})--(\ref{mlt}) then specify the transformation properties of the fields under the $SO(3)_R \times SO(2)_L$ symmetry of the low energy theory. We now motivate the form of the action that will effectively describe the phases of the DTLAF for small $\chi$. We split up the action as, \beq S = S_W[W] +S_{int}[W,m] +S_m[m] \label{chi0action} \eeq As stated above, we take the full symmetry of the model to be $SO(3)_R \times SO(2)_L$ in the continuum limit. Retaining terms quadratic in the derivatives, the most general form of $S_m$ is, \beq S_m ~=~\int d^3x \frac{1}{g_2} \partial_\mu m^a \partial_\mu m^a+V(m^3) \label{seffm} \eeq This action is trivially invariant under the $SO(3)_R$ symmetry since $\hat m$ is a singlet under this symmetry. We have taken the derivative terms to be fully $SO(3)_L$ symmetric. We could have introduced an XY anisotropy but it does not make any qualitative difference in the one-loop approximation we will be working with. $V(m^3)$, however, is symmetric only under $SO(2)_L$. At the classical level, a model defined by $S_m$ has two phases.
The disordered, $SO(2)_L$ symmetric phase occurs when $V(m^3)$ is minimised at $m^3= \pm 1$, and the ordered, $SO(2)_L$ broken phase occurs when it is minimised at $m^3 \ne \pm 1$. For definiteness, we take the potential to be, \beq V(m^3)~=~\frac{\lambda_0}{2} (m^3-\eta_0)^2 \label{mpot} \eeq Thus for $\eta_0 > 1$ we have the symmetric phase (classically); there are two modes with equal gaps, $g_2\lambda_0(\eta_0-1)/2$. For $\eta_0 < 1$, the $SO(2)_L$ symmetry is broken. There is one gapless Goldstone mode and the other mode has a gap equal to $g_2\lambda_0 \sqrt {(1-\eta^2_0)}/2$. The spin wave analysis in section 3 showed that the two H-S modes had equal gaps which went to zero as $\chi \rightarrow 0$. We therefore take the unrenormalised value of $\eta_0$ to be equal to 1. The general form of $S_W$ that retains terms quadratic in the derivatives and is consistent with the symmetries of the theory is given by equation (\ref {chi1action}). To motivate the form of the interaction term, we note that the deviation of $m^3$ from $\pm 1$ implies that the spin configuration is nonplanar. To see this, we define vectors $\hat C_{Ii}$ as, \beq C_{Ii}= -{2i \over 3 \sqrt 3} \sum_\alpha [S_{Ii\alpha},S_{Ii\alpha+1}] \label{chivec} \eeq where as usual $C_{Ii}=\hat C_{Ii}.\vec \tau$. $\hat C_{Ii}$ is the normal to the plane on which the 3 spins labelled by a particular value of $i$ lie. Using equation (\ref{paramhs}) we have \beq C_{Ii}= e^{i \frac{(i-1)\phi \tau ^3} {2}} e^{i \frac{\phi m_I}{2}} ~\tau^3~ e^{-i \frac{\phi m_I}{2}} e^{-i \frac{(i-1)\phi \tau ^3} {2}} \label{chiform} \eeq It is clear that when $\hat m$ deviates from $\hat z$, the vectors $\hat C_{Ii}$ are non-coplanar. It is known that nonplanarity of the background spin configuration makes the gapless spin waves stiffer \cite{shend}. We therefore write down an interaction term of the form, \beq S_{int}= \int d^3x f(m^3) L_{\mu}^a L_{\mu}^a \label{sint} \eeq where $f(m^3)$ increases as $|m^3|$ decreases. For simplicity, we take $f(m^3)~=~-\alpha (m^3)^2$ with $\alpha > 0$. \section{Integrating out the W fields} We now investigate the phases of the field theory that has been proposed in the previous section. In particular we are interested in seeing if there is a phase in which the $SO(2)_L$ symmetry is broken and the $SO(3)_R$ spin symmetry is unbroken. At values of $\chi$ where the system is effectively described by a field theory of the form given in equation (\ref{chi1action}), it is known that this does not happen \cite{fried,az1,az2}. However, as mentioned in the previous section, this does occur in the field theory given in equation (\ref{chi0action}) at the classical level if $\eta_0 < 1$. We have also argued that the unrenormalised value of $\eta_0$ is equal to 1. The potential $V(m^3)$ in equation (\ref{mpot}) will get modified by the fluctuations of both the $W$ and the $\hat m$ fields. In this section we will integrate out the $W$ fields and compute the above mentioned change. We then investigate the effect of the $\hat m$ fluctuations by a renormalization group analysis of $S_m$ in the next section. If $\Delta V(m^3)$ is the change in the bare potential due to the $W$ fluctuations, then we have, \beq e^{-\int_x \Delta V(m^3)}~=~\int_W e^{-(S_W[W]+S_{int}[W,m^3])} \label{delpotdef} \eeq $S_W$, as stated earlier, is of the form given in equation (\ref{chi1action}). It is known \cite{az1} that the two renormalised spin wave velocities tend to become equal.
So we make the simplifying assumption of space-time isotropy and work with $S_W$ of the form, \beq S_W~=~ \int d^3x \frac{1}{g_1} \sum_{a=1}^2L^a_\mu L^a_\mu ~+~ \frac{1}{g_3} L^3_\mu L^3_\mu \label{waction} \eeq We first consider the weak coupling regime, $g_1,g_3~<<~1$. In this regime the $W$ fields are ordered and the $SO(3)_R$ symmetry is broken. The $W$ integration can be done semiclassically and we get, \beq \Delta V(m^3) = -(g_1+g_2)\alpha (m^3)^2 \label{delpotw} \eeq Thus in the weak coupling regime, where the $W$ field is ordered, we have $\eta_0 \to \eta_0 /\big(1-(g_1+g_2)\alpha/\lambda_0\big)$. Therefore, in the ordered phase, the $W$ field fluctuations increase the value of $\eta_0$. Next we consider the strong coupling regime, $g_1,g_3 >>1$, where the $W$ fields are disordered. In this regime, the $SO(3)_R$ symmetry is unbroken. We first rewrite the theory in terms of a set of three orthogonal vectors defined below, \beq \phi_r^a~=~{1 \over 2\gamma_a}tr (\tau_a W^{\dagger} \tau_r W) \label{phidef} \eeq where $\tau_a$ are the Pauli matrices, the indices $a,r~=~1,2,3$, and ${1 \over \gamma_1}={1 \over \gamma_2}~=~{1 \over g_1}-\alpha(m^3)^2,~{1 \over \gamma_3} ~=~{1 \over g_3}-\alpha(m^3)^2$. From the definition, the $\phi_r^a$ satisfy the orthogonality conditions, \beq \sum_{r=1,3}\phi_r^a \phi_r^b ~=~{1 \over \gamma_a} \delta^{ab} \label{orthrel} \eeq The action in equation (\ref{delpotdef}) can be rewritten in terms of these fields as, \beq S_W+S_{int}~=~\int d^3x \sum_{a=1,3}\sum_{r=1,3} [ \partial_{\mu} \phi_r^a \partial_{\mu} \phi_r^a + i \Lambda^{ab} (\phi_r^a \phi_r^b -{1 \over \gamma_a}\delta^{ab})] \label{phiaction} \eeq $\Lambda^{ab}$ are Lagrange multiplier fields that impose the constraint in equation (\ref{orthrel}). The action is quadratic in the $\phi$ fields and they can be integrated out. We are then left with the integral over the $\Lambda$ fields with an effective action given by, \beq S_{eff}= \frac{1}{2} {\rm Tr} \ln (-(\partial_\mu^2)\delta^{ab} + i\Lambda^{ab}) -i\int d^3x {1 \over \gamma_a}\Lambda^{aa} \label{lamaction} \eeq We now do the integration over the $\Lambda$ fields in the saddle point approximation. This is a well-known technique that is exact in the large-$N$ limit, where the index $r$ runs from 1 to $N$ and the coupling constants are suitably rescaled. The saddle point equations are, \beq \int {d^3k \over (2\pi)^3}({1 \over k^2+i\Lambda})^{ab}= {2 \over \gamma_a}\delta^{ab} \label{speq} \eeq In the strong coupling regime, the solution is $i\Lambda^{ab}=M^2_a\delta^{ab}$, where $M^2_a$ are non-zero. The $W$ fields are thus disordered with correlation lengths $\xi_a=M^{-1}_a$. In the saddle point approximation then, $\Delta V(m^3)$ is given by, \beq \Delta V(m^3)~=~ \frac{1}{2} {\rm Tr} \ln \big((-\partial_\mu^2+M^2_a)\delta^{ab}\big) -\int d^3x {1 \over \gamma_a}M^2_a \label{delpots} \eeq Here $M_a$ are the solutions of the saddle point equations. Thus both $\gamma_a$ and $M_a$ in equation (\ref{delpots}) are functions of $m^3$. To see the form of the dependence of $\Delta V(m^3)$ on $m^3$ in equation (\ref{delpots}), we differentiate it with respect to $(m^3)^2$. Using the saddle point equation (\ref{speq}), we obtain, \beq {\partial \Delta V \over \partial (m^3)^2}~=~\sum_a M^2_a \alpha \label{delpotdiff} \eeq Thus $\Delta V$ is a monotonically increasing function of $(m^3)^2$ and is minimised at $m^3=0$. Therefore, in the strong coupling regime, the $W$ fluctuations decrease the value of $\eta_0$.
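To make the strong coupling statement concrete, the gap equation (\ref{speq}) can be solved numerically once a regularization is chosen. The Python sketch below uses a hard UV cutoff $k<\Lambda$ (the cutoff scheme, and hence the $O(1)$ factors, are illustrative assumptions); it shows the gap $M$ growing with the coupling $\gamma$, i.e. the correlation length shrinking deep in the disordered regime:

\begin{verbatim}
# Gap equation with a hard cutoff Lam:
# (1/(2 pi^2)) (Lam - M arctan(Lam/M)) = 2/gamma.
import numpy as np
from scipy.optimize import brentq

Lam = 1.0
def gap(gamma):
    # a solution exists for gamma > 4 pi^2 / Lam in this scheme
    f = lambda M: (Lam - M * np.arctan(Lam / M)) / (2 * np.pi**2) \
                  - 2.0 / gamma
    return brentq(f, 1e-9, 1e4)

for gamma in (50.0, 100.0, 500.0):
    print(gamma, gap(gamma))       # M increases with gamma
\end{verbatim}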
The important conclusion that we draw from the above results is that in the weak coupling regime, where the $W$ fields are ordered and the $SO(3)_R$ symmetry is broken, the $W$ field fluctuations increase $\eta_0$ and therefore tend to restore the $SO(2)_L$ symmetry. On the other hand, in the strong coupling regime, when the $W$ fields are disordered and the $SO(3)_R$ symmetry is unbroken, the fluctuations decrease the value of $\eta_0$ and hence tend to break the $SO(2)_L$ symmetry. \section{The $\hat m$ field fluctuations} In this section, we investigate the effects of the $\hat m$ field fluctuations by a renormalization group analysis of $S_m$. The theory has three coupling constants, $g_2, \lambda$ and $\eta$. The one loop renormalization group equations that govern their flow can be computed using standard techniques. They turn out to be, \beqar \frac{\partial g_2}{\partial l}~&=&~ -g_2 +g_2^2 \\ \frac{\partial\lambda}{\partial l}~&=&~ 3\lambda (1-g_2)\\ \frac{\partial \eta}{\partial l}~&=&~ 2g_2(1+\eta) \label{rgeq} \eeqar These equations can be explicitly solved to get, \beqar g_2&=&\frac{g_{20}\exp(-l)}{1-g_{20}(1-\exp(-l))}\\ \lambda&=&\lambda_0~(1-g_{20}(1-\exp(-l)))^3~\exp(3l)\\ 1+\eta&=&(1+\eta_0)(1-g_{20}(1-\exp(-l)))^{-2} \label{rgsol} \eeqar When $g_{20} < 0$, $g_2$ flows to 0 and $\lambda$ flows to $\infty$. Therefore, in this range of $g_{20}$, the $SO(2)_L$ symmetry will be broken if $\eta(\infty) < 1$ and will be intact otherwise. Setting $\eta(\infty)=1$ in the solution \eqref{rgsol}, the phase boundary is then given by the equation, \beq (1+\eta_0)~=~2(1-g_{20})^2 \label{phbndry} \eeq Thus the $\hat m$ field fluctuations do not succeed in restoring the $SO(2)_L$ symmetry everywhere. There is a region of the couplings $g_{20}$ and $\eta_0$, shown in figure 3, for which the $SO(2)_L$ symmetry remains broken. We can use the vectors $\hat C_{Ii}$ defined in equation (\ref{chivec}) to define an order parameter for this transition in terms of the spins. We define \beq \Psi_I=1-{\hat C}_{Ii}.{\hat C}_{Ii+1} \label{lopdef} \eeq $\Psi_I$ can be expressed in terms of $W_I$ and $\hat m_I$. It is independent of $W_I$ (since it is a spin singlet) and is equal to \beq \Psi_I={9 \over 8}\sin^2\theta \,(3\cos^2\theta+1) \label{lopval} \eeq where $\theta$ is the polar angle of $\hat m_I$. So $\Psi_I$ is 0 in the $SO(2)_L$ unbroken phase and is $\ne 0$ in the broken phase. \section{Summary} To summarize, we have studied the DTLAF, which interpolates between the TLAF and the KLAF. The classical ground state, in the region $0<\chi\le 2$, is the $\sqrt 3 \times \sqrt 3$ Neel ordered state. We have computed the spin wave spectrum of the DTLAF in the above mentioned regime. There are 3 gapless Goldstone modes which we have called the S-S modes. Five have gaps which go to zero as $\chi \rightarrow 0$, the H-S modes. The remaining four have a gap throughout the region and we have called them the H-H modes. The S-S and the H-S modes are important for the field theory that would describe the low energy long wavelength physics of the system in the small $\chi$ region. There are 3+5=8 such modes. In the $\chi \rightarrow 0$ limit, the system decouples into the KLAF and a bunch of decoupled individual spins (3 per unit cell) sitting on the triangular lattice sites that do not belong to the Kagome lattice. If we are interested only in the spins of the KLAF, then only 5 of these 8 modes are left. We then allowed for large fluctuations of these modes and found that they can be thought of as fluctuations of an order parameter that takes values in $SO(3) \times S_2$, namely an $SO(3)$ matrix $W$ and a unit vector $\hat m$.
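As a consistency check on the closed-form solutions (\ref{rgsol}) quoted above, the flow equations (\ref{rgeq}) can be integrated directly. The short Python sketch below, with illustrative initial couplings, confirms the analytic expressions for $g_2(l)$, $\lambda(l)$ and $\eta(l)$:

\begin{verbatim}
# One-loop RG flow versus the closed-form solutions.
import numpy as np
from scipy.integrate import solve_ivp

g20, lam0, eta0 = -0.3, 0.5, 0.8
rhs = lambda l, y: [-y[0] + y[0]**2,
                    3 * y[1] * (1 - y[0]),
                    2 * y[0] * (1 + y[2])]
sol = solve_ivp(rhs, (0, 8), [g20, lam0, eta0], rtol=1e-10, atol=1e-12)
l = sol.t[-1]
D = 1 - g20 * (1 - np.exp(-l))
print(sol.y[0, -1], g20 * np.exp(-l) / D)        # g_2(l)
print(sol.y[1, -1], lam0 * D**3 * np.exp(3 * l)) # lambda(l)
print(sol.y[2, -1], (1 + eta0) / D**2 - 1)       # eta(l)
\end{verbatim}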
Based on this, we have written down an effective action in terms of the fluctuations of $W$ and $\hat m$. We assume that the symmetry of the theory is enhanced to $SO(3)_R \times SO(2)_L$ in the continuum limit, as happens at the $\chi=1$ end. We also allow for a simple interaction between these fields that is consistent with symmetry requirements and other known facts about the system. We have then integrated out the $W$ fields in the weak and strong coupling regimes, and have analysed the resulting effective theory of the $\hat m$ fields by a one loop renormalization group calculation. We find that in the region where $g_1$ is small and the $W$ field is ordered, the $SO(2)_L$ symmetry remains unbroken and the gap of the $\hat m$ field is increased due to quantum fluctuations. In the regime $g_1 >1$, where the spins are quantum disordered and the SO(3)$_R$ spin symmetry is unbroken, the $W$ field fluctuations drive the $\hat m$ system to a phase where the $SO(2)_L$ symmetry is broken and there exists one gapless Goldstone mode in the spectrum. This is our proposal for the mechanism that produces a gapless excitation while keeping the symmetries of the hamiltonian intact. While we have shown the existence of this phase in the continuum field theory, we cannot say if the spin system is actually realised in this phase. To answer this question within the framework we are working in, we have to derive the values of the coupling constants in the field theory from the spin system, as we have done at the $\chi=1$ end. This work is in progress. \vskip 2cm \centerline{\bf APPENDICES} \vskip 1cm
\section{Introduction} Living systems maintain their physiological equilibrium for survival, a property called homeostasis~\cite{homeo1, homeo2}. The term literally means `staying the same', and it is also an important concept for controllers such as thermostats~\cite{thermostat} in engineering. Under a fluctuating environment with uncertainty, it is crucial to keep dynamical equilibrium for the proper functioning of living systems. The regulation of blood glucose levels is one of the most primitive examples of homeostasis, keeping the energy balance of living systems. The islets of Langerhans in the pancreas respond to varying glucose levels, and produce hormones in an oscillatory manner to regulate the glucose homeostasis~\cite{Langerhans}. The phase of the hormone oscillation is modulated by the glucose stimulus depending on glucose levels. As the glucose level increases, the ratio of active to silent phases of the oscillation increases, while its period changes minimally~\cite{hormoscill}. Here oscillatory hormone secretion from physically separated islets can be synchronized by the common stimulus of glucose. The coordination of hormone secretion from multiple islets originates from the phase modulation responding to the common environment of glucose concentration~\cite{synch_horm, synch1,gluins}. The entrainment through the interaction between systems and environment is an important mechanism for biological systems. Cells or organs secrete hormones with different patterns depending on the environment, and then these hormone messengers regulate the physiological environment~\cite{hormone}. The synchronization of Gonadotropin-releasing hormone (GnRH) neurons in the hypothalamus is another example, in which the GnRH pulses secreted by multiple GnRH neurons act as a common feedback stimulator~\cite{khadra2006}. \begin{figure}[h] \includegraphics[width=0.82\linewidth]{fig1.png} \caption{\textbf{Biological homeostasis and electric circuit}. (a) The multiple elements (denoted as A, B, C) secrete messengers (small circles) responding to their surrounding environment $h$. The secretion patterns depend on the state of the environment. Here an external stimulus $s$ and the messengers change the state of the environment. The interaction between elements and environment is controlled by a coupling strength $K(h)$. (b) An equivalent analog electric circuit of the model system. The red dashed boxes represent each element. In particular, the last box shows an explicit circuit with electric devices. } \label{fig1} \end{figure} Figure~\ref{fig1}(a) summarizes this mechanism of synchronization. The environment stimulates multiple components in a system (A, B, and C in Fig.~\ref{fig1}(a)), and then they secrete messengers (small circles in Fig.~\ref{fig1}(a)) that regulate the state of the environment. We consider an active rotator as a building unit of each component that generates non-sinusoidal oscillations whose phases are modulated by the state of the environment. The active rotator is a well-known model of limit cycle oscillators in excitable systems~\cite{AR}, which has been adopted to describe Josephson junction arrays, chemical reactions, charge density waves, and neuronal firing~\cite{AR_jj, AR_cr, AR_cdw, AR_nf}. In electric engineering, the active rotator model is also known as Adler's equation~\cite{Adler}, approximating second-order LC oscillators, and it is widely used for describing injection-locked oscillators~\cite{ilo}.
Recently, the active rotator model has also been adopted to describe the phase modulation of biological hormone secretion~\cite{ho-ar}. The synchronization of interacting oscillators has been intensively studied, particularly for globally coupled oscillators through mean field approaches~\cite{syn1, syn2, syn3}. In the absence of direct coupling between oscillators, even common noise can induce synchronization between uncoupled oscillators~\cite{syn4, syn5}. Similarly, a dynamic common environment can also induce synchronization between uncoupled periodic oscillators~\cite{syn6} and between uncoupled chaotic oscillators~\cite{syn7}. In this study, we propose a minimal model for the synchronization induced by a common dynamic environment. Unlike previous studies considering general dynamical systems~\cite{syn7} or explicitly considering amplitude and phase dynamics of oscillators~\cite{syn6}, we focus on the phase of oscillations for active rotators. Their phases are modulated by the state of the environment, and the environment is regulated by the phases of the oscillators. Then we demonstrate this synchronization mechanism by realizing an analog electric circuit with microelectronic components, such as UA741, as shown in Fig.~\ref{fig1}(b). The realization of the mechanism can suggest a bio-mimetic device for coordinating multiple components to regulate environmental states. This paper is organized as follows. In Sec.~\ref{sec2}, we introduce our model system, and then present the environment-dependent synchronization with the boundary of parameter space for synchrony using the Ott-Antonsen ansatz~\cite{OAansatz}, which is our primary finding. In Sec.~\ref{sec3}, we experimentally demonstrate the synchronization mechanism. Here we design an analog electric circuit to realize active rotators. Finally, in Sec.~\ref{sec4}, we summarize our results and discuss their potential applications. \section{Active rotators interacting with environment} \label{sec2} We consider a system with multiple rotators whose phases are perturbed by an environment. The phase of the $n$-th rotator $\theta_n$ and the environment $h$ evolve with time $t$ as follows: \begin{eqnarray} \frac{d \theta_n}{dt} &=& \omega_n - K(h)\cos\theta_n, \label{thdot} \\ \frac{dh}{dt} &=& F( \{\theta_n \}, h, s), \label{hdot} \end{eqnarray} where $\omega_n$ is an intrinsic angular velocity of the $n$-th rotator, and $K(h)$ represents the interaction between phase $\theta_n$ and environment $h$. The interaction strength controls the degree of phase modulation. The first equation represents the response of the rotators to the environment, while the second equation describes the regulation of the environment by the rotators. The regulation rate $F( \{\theta_n \}, h, s)$ could in general depend on the phases $\{ \theta_n \} \equiv (\theta_1, \theta_2, \dots, \theta_N)$ of every rotator, the present status $h$ of the environment, and the external stimulus $s$. As a simple but reasonable choice, we consider $F =a [s - \sum_n (1+ \cos \theta_n)]$, where the external stimulus $s$ increases the environmental variable $h$, whereas the active phases $\theta_n \in (-\pi, \pi)$ of the rotators decrease $h$. The regulation term $(1+ \cos \theta_n)$ corresponds to the instantaneous area under the curve (AUC) for the phase oscillator $r_n \exp(i\theta_n)$ with a fixed amplitude ($r_n=1$). The instantaneous AUC includes a shift with the value of 1 to represent negligible (instead of negative) regulation at the silent phases $\theta_n = \pm \pi$.
Since the scale of the stimulus $s$ and the amplitude of the regulation rate $F$ are arbitrary, we set \begin{equation} F( \{\theta_n \}, h, s)= s- \frac{1}{N} \sum_{n=1}^N \cos \theta_n, \label{hdotp} \end{equation} with the reparameterizations $s/N-1 \rightarrow s$ and $F/(aN) \rightarrow F$. Note that the phase rotators cannot bound the increase of $h$ when the external stimulus $s$ is too large. We numerically solve the coupled differential equations of Eqs.~(\ref{thdot}) and (\ref{hdot}) using the fourth order Runge-Kutta method~\cite{RK4} with a sufficiently small time step, $\Delta t=0.001$. We then demonstrate that the system-environment interaction can entrain non-interacting rotators to be synchronized. This feedback-induced entrainment is markedly different from the unidirectional entrainment by an external oscillatory driving with a characteristic frequency $\omega$ that does not interact with the system: $d \theta_n /dt = \omega_n + K \sin(\omega t - \theta_n)$. \subsection{Environment-dependent synchronization} The interaction between rotators and environment is mediated by the phase modulation function $K(h)$, which is a monotonic and smoothly saturating function of $h$, e.g., $K(h)=K_0 \tanh h$. Depending on the strength of the phase modulation, the active rotator has two regimes of distinct dynamic behavior: phase-locked and oscillatory regimes~\cite{AR}. Since we are interested in biological oscillation, we consider the oscillatory regime guaranteed by the constraining condition $|K(h)| \leq K_0 \leq \omega_n$. Given this condition, the sign of $K(h)$ determines the oscillation pattern and the ratio of active to silent phases. For example, given constant $K(h)=K_0$ with a quenched environment ($dh/dt=0$), active rotators showed distinct oscillation patterns depending on $K_0$ (Fig.~\ref{fig2}(a)). The positive and negative plateaus indicate active and silent phases, respectively. \begin{figure}[t] \includegraphics[width=0.82\linewidth]{fig2_revised.pdf} \caption{\textbf{Environment-dependent synchronization of active rotators}. (a) Phase modulation of rotators depending on the modulation factor $K(h)=0.8$ (upper) and $K(h)=-0.8$ (lower). Phase dynamics of randomly selected $20$ rotators (gray lines and one black line), their degree of synchronization ($|\rho(t)|$, green line), and the state of the environment ($h(t)$, green line) for (b) a synchronizing condition ($s=0.4$, $K_0=0.8$) and (c) a non-synchronizing condition ($s=0.4$, $K_0=0.4$). For the plot, we used $N=200$ identical oscillators ($\omega_n=\omega_0=1$) with a phase modulation function, $K(h)=K_0 \tanh h$. } \label{fig2} \end{figure} Once we turned on the dynamics of the environment $h$, the active rotators showed either complete synchronization or desynchronization depending on the interaction parameter $K_0$ and stimulus $s$. We numerically examined the two distinct regimes for the rotators' synchrony. Given $N = 200$ identical rotators ($\omega_n=\omega_0=1$), we explored the synchronization boundary for $K(h)=K_0 \tanh h$ and $F(\{\theta_n\},h,s)$ in Eq.~(\ref{hdotp}) by controlling the parameters $K_0$ and $s$. Figure~\ref{fig2}(b) shows the phase traces of 20 rotators randomly selected from the total of 200 rotators. The initial states of the rotators are all different. However, as the rotators interact with the common environment, they are modulated to be synchronized. Here, to probe the degree of synchronization, we used the absolute value of the complex Kuramoto order parameter, $\rho(t) \equiv \frac{1}{N}\sum_{n=1}^N\exp(i\theta_n)$~\cite{AR_cr}. A minimal script reproducing this setup is sketched below.
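The following Python sketch is a minimal reimplementation of the setup just described, with our own choice of initial conditions and integration time; it integrates Eqs.~(\ref{thdot})--(\ref{hdotp}) with the fourth order Runge-Kutta method, and the commented threshold values anticipate Eq.~(\ref{gam}) below:

\begin{verbatim}
# RK4 integration of the rotator-environment model with K(h)=K0*tanh(h).
import numpy as np

N, w0, K0, s = 200, 1.0, 0.8, 0.4     # Fig. 2(b): s < s_b = 0.5 -> sync
                                      # (K0 = 0.4 gives s_b ~ 0.21 -> drift)
def rhs(y):
    th, h = y[:N], y[N]
    dth = w0 - K0 * np.tanh(h) * np.cos(th)
    dh = s - np.cos(th).mean()
    return np.append(dth, dh)

rng = np.random.default_rng(4)
y = np.append(rng.uniform(-np.pi, np.pi, N), 0.0)   # random phases, h = 0
dt = 0.001
for _ in range(int(200 / dt)):        # integrate to t = 200
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
print(np.abs(np.exp(1j * y[:N]).mean()))            # |rho| at t = 200
\end{verbatim}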
In Fig.~\ref{fig2}(b), $| \rho(t) |$ initially fluctuates, continuously increases, and finally saturates at unity, representing complete synchronization. \subsection{Synchronization boundary} Unless the absolute level of the stimulus $|s|$ is too large, the rotators always become synchronized through the dynamic feedback between rotators and environment. In other words, if the phase modulation of the rotators can manage to regulate the stimulus, the rotators are synchronized. However, if the stimulus is too large, beyond the manageable capacity of the phase rotators, the environmental variable blows up, and the rotators have drifting phases without synchronization (Fig.~\ref{fig2}(c)). Here we obtained the threshold external stimulus $s_b$ determining the boundary for complete synchronization by using a linear stability analysis based on the Ott-Antonsen ansatz~\cite{OAansatz}: \begin{equation} s_{b} = \frac{\omega_0}{K_0} -\sqrt{\left(\frac{\omega_0}{K_0} \right)^2-1}, \label{gam} \end{equation} whose detailed derivation is given in Appendix \ref{sec:aa}. The synchronized region of the numerical results is denoted by the green area in Fig.~\ref{fig3}, and the theoretical boundary for synchronization is denoted by red solid lines. Note that the time trajectory of $|\rho(t)|$ depends on the specific shape of $K(h)$, whereas the synchronization boundary does not depend on the shape but only on the saturation value $K_0$. As shown in Fig.~\ref{fig3}, we confirmed that the synchronization boundary did not change in the presence of small heterogeneity of the intrinsic frequencies $\omega_n$ and under different numbers of rotators. \begin{figure}[] \includegraphics[width=\linewidth]{fig3_revised.pdf} \caption{\textbf{Boundary for complete synchronization}. Heat maps of the degree of synchronization $|\bar{\rho}|$ for the maximum coupling strength $K_0$ and stimulus $s$. The red line represents the theoretical synchronization boundary in Eq.~(\ref{gam}). The boundary is robust for varying total number $N$ of oscillators and heterogeneity of their intrinsic frequencies $\omega_n$. We sampled $\omega_n$ from a normal distribution with a mean $\omega_0=1$ and a standard deviation $\Delta \omega$. The plots are obtained from averages of 100 ensembles for (a) ($N, \Delta \omega$)=(200, 0), (b) (500, 0), (c) (200, 0.05), (d) (100, 0), (e) (200, 0.1), and (f) (10, 0). We numerically computed $|\bar{\rho}| \equiv\frac{\omega_0}{2\pi} \int_{T-\tau}^T |\rho(t)| dt$ with a burn-in period $T=1000$ and a sufficiently long period $\tau =20\pi/\omega_0$. } \label{fig3} \end{figure} \section{Experimental realization}\label{sec3} Now we build an analog electric circuit to realize the theoretical model, as shown in Fig.~\ref{fig1}(b). The circuit mapping is straightforward by introducing new variables: $V_{xn} \equiv\cos\theta_n$ and $V_{yn} \equiv\sin\theta_n$. Then, the dynamics of $V_{xn}$ can be obtained by multiplying Eq.~\eqref{thdot} by $-V_{yn}$, and that of $V_{yn}$ can be similarly obtained by multiplying Eq.~\eqref{thdot} by $V_{xn}$. Since the functional shape of $K(h)$ does not affect the stationary responses of the rotators, we choose a simple modulation function for experimental convenience: $K(h)=K_0 h^3$ for $h \in [-1, 1]$, $K(h)=-K_0$ for $h<-1$, and $K(h)=K_0$ for $h>1$.
After changing variables with a fixed frequency $\omega_0$, Equations~(\ref{thdot}) and (\ref{hdot}) can be rewritten as follows: \begin{eqnarray} \dot{V}_{xn}&=&- \big[ \omega_0-K(V_h) V_{xn} \big] V_{yn}, \label{vxdot} \\ \dot{V}_{yn}&=& \big[ \omega_0-K(V_h) V_{xn} \big] V_{xn}, \label{vydot} \\ \dot{V}_{h}&=&V_s - s_N \sum_{n=1}^N V_{xn}, \label{vhdot} \end{eqnarray} where $V_h$ and $V_s$ correspond to the variables $h$ and $s$, respectively, and $s_N$ is introduced for proper normalization. This allows us to implement the theoretical model as an electric circuit. \begin{figure} \includegraphics[width=0.45\textwidth]{sfig1.png} \caption{{\bf Schematic diagram of the electric circuit for single active rotators.} The frequency $\omega_{0}$ and the feedback signal $K(V_{h})$ from the environment are treated as input parameters. The $\otimes$ denotes a multiplier implemented by the analog multiplier AD633. The red and blue dots indicate reference nodes for $V_{x}$ and $V_{y}$, respectively. } \label{sfig1} \end{figure} Equations~(\ref{vxdot}) and~(\ref{vydot}) were realized on an electric circuit by using an operational amplifier (op-amp) and an analog multiplier (Fig.~\ref{sfig1}). The op-amps (UA741CP) were the basic building blocks in our circuit design. Except for the multiplication with an analog multiplier (AD633JN), all other analog computations were implemented by op-amp circuits: integrator, summing adder, voltage follower, and inverting amplifier~\cite{aoelec}. In particular, integration was performed with a lossy integrator with a ten-millisecond RC time, whose shunt resistor prevents charge accumulation in the integrator capacitor. We monitored the signals from the circuit using an Agilent oscilloscope (DSO-X 2012A) and a function generator (Agilent 33220A). Furthermore, we used green LEDs to visualize the activities of the electric rotators. \begin{figure}[] \includegraphics[width=\linewidth]{fig3.pdf} \caption{\textbf{Controllable synchronization of electric elements}. Depending on the external signal $V_s$, the degree of synchronization $|\rho_V|$ changes with time (green lines). For clear demonstration, its moving average (red line), $|\bar{\rho}_V (t)|\equiv \frac{1}{\Delta T}\int_{t-\Delta T}^{t} |\rho_V (t')| dt'$ with $\Delta T=2\pi/\omega_0\approx 0.43~\rm{s}$, is also plotted. The zoomed-in plot of the $130\leq t \leq 145$ time window shows the details of the transition from desynchronization to synchronization. } \label{fig5} \end{figure} Setting system parameters such as $\omega_0$ through proper values of the circuit components, we monitored the voltages $V_{xn}$ (red filled dot in Fig.~\ref{sfig1}) and $V_{yn}$ (blue filled dot). Depending on the amplitude of the external stimulus $V_s$, the four electric rotators showed either synchronized or desynchronized behavior, measured by the order parameter $|\rho_V|$ (Fig.~\ref{fig5}). We directly measured the outputs of the trigonometric functions $V_{xn}$ and $V_{yn}$ (see Appendix \ref{sec:ac} for the recording set-up), and computed the order parameter $|\rho_V|=\sqrt{(\sum_{n=1}^N V_{xn})^2+(\sum_{n=1}^N V_{yn})^2}/N$. Since $|\rho_V(t)|$ fluctuates strongly for the small number $N=4$ of rotators, we used a moving average. For this particular demonstration, we used a fixed $K_0=0.75\omega_0$ and two values of stimulus ($s=0$ for synchronization and $s=15\omega_0$ for desynchronization), and set the natural frequency to $\omega_0 / 2\pi = 2.33~\rm{Hz}$.
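For reference, the post-processing of the recorded traces can be sketched as follows (an illustration only; the array names \texttt{Vx}, \texttt{Vy} and the sampling rate \texttt{fs} are ours, not part of the measurement setup):
\begin{verbatim}
import numpy as np

# Sketch of the post-processing applied to the recorded voltage traces (our
# illustration, not the acquisition code). Vx and Vy are hypothetical arrays
# of shape (N, T) holding the sampled V_xn(t) and V_yn(t), at sampling rate
# fs in Hz.

def order_parameter(Vx, Vy):
    N = Vx.shape[0]
    return np.sqrt(Vx.sum(axis=0)**2 + Vy.sum(axis=0)**2) / N

def moving_average(rho, fs, f0=2.33):
    window = max(1, int(fs / f0))   # Delta T = one natural period ~ 0.43 s
    kernel = np.ones(window) / window
    return np.convolve(rho, kernel, mode="same")

# Example: rho_bar = moving_average(order_parameter(Vx, Vy), fs=1000.0)
\end{verbatim}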
\section{Discussion} \label{sec4} Synchronization of oscillators has been extensively studied in various contexts including biological~\cite{bio_synch} and engineering systems~\cite{grid_synch}. The control of synchronization has mainly been achieved by changing the coupling strength between oscillators~\cite{AR}. In this study, however, we considered interactions between the system and its environment as a natural way to induce synchronization between non-interacting elements in a system. Active system-environment feedback has proved useful for the adaptation of robot locomotion~\cite{AR_robot}. To control robot locomotion, Owaki and colleagues considered the interaction between robot legs and the local reaction force from the ground (environment). The motion of the legs was modeled by active rotators, $d \theta_n /dt = \omega - K(h_n) \cos \theta_n$, while the local reaction force $h_n$ for each leg depends on the posture of the four legs with different phases, i.e., $h_n (\theta_1, \theta_2, \theta_3, \theta_4)$. Unlike this heterogeneous local environment $h_n$, our model considers a homogeneous global environment $h$. Bio-mimetic devices have attracted attention for their redundancy, low power consumption, high sensitivity, and versatility~\cite{biomimetics}. The PID controller~\cite{PID} is the state-of-the-art closed-loop technology for maintaining a desired set point. Inspired by biological homeostasis, the mechanism studied here may suggest a bio-mimetic device for set-point control. Unlike the single-unit PID controller, our model suggests that the phase coordination of multiple units could be another mechanism for regulating the environment, in addition to the amplitude modulation of single units. The synchronization response of multiple units could be used as a sensor for monitoring a varying environment, and also as an amplifier of signals for regulating the environment. In summary, we proposed a simple model for describing the phase coordination between multiple rotators influenced by their environment. Based on the closed-loop interaction between the environment and multiple rotators, we found that the dynamic environment can entrain non-interacting rotators if the phase responses of the rotators can manage the external perturbation on the environment. We analyzed the synchronization boundary depending on the environment-system coupling strength $K_0$ and the level of external perturbation $s$, and showed that the synchronization and desynchronization regimes are clearly separated by this boundary. Moreover, we realized the synchronization mechanism using an analog electric circuit. The circuit is potentially applicable for practical purposes as an analog controller, and it can serve as a bio-mimetic platform to further understand the regulation of biological oscillation. \begin{acknowledgments} This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Korea government (MSIT) through NRF-2019R1F1A1052916 (J.J.), and by the Ministry of Education through NRF-2017R1D1A1B03032864 (S.-W.S.), and by the Ministry of Science, ICT $\&$ Future Planning through NRF-2017R1D1A1B03034600 (T.S.). \end{acknowledgments} \setcounter{figure}{0} \renewcommand{\thefigure}{A\arabic{figure}}
\section{Introduction} A complex projective manifold $X$ is \emph{rationally connected} if any two general points can be joined by a chain of rational curves. On a rationally connected manifold one can find (many) rational curves $C \subseteq X$ such that $T_X \vert_C$ is ample and deduce that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) = 0\quad\text{for all } m \geq 1.$$ We refer to \cite{Kol96} for a detailed discussion of rationally connected varieties. A well-known conjecture of Mumford says that the converse is also true \cite[Conjecture IV.3.8.1]{Kol96}. \begin{con} \label{mum} Let $X$ be a projective manifold such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) = 0\quad\text{for all } m \geq 1.$$ Then $X$ is rationally connected. \end{con} This conjecture holds when the dimension of $X$ is at most $3$ \cite{KMM92}, and little is known in higher dimensions. It is, however, well known that this conjecture is equivalent to a weaker statement which says that in the context of Conjecture \ref{mum}, the variety $X$ is \emph{uniruled}, i.e.\ covered by rational curves: \begin{con} \label{mum2} Let $X$ be a projective manifold such that $K_X$ is pseudoeffective. Then there exists a positive integer $m$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) \neq 0. $$ \end{con} The connection to uniruledness comes from the main result of \cite{BDPP}, stating that a projective manifold $X$ is uniruled if and only if $K_X$ is not pseudoeffective. A short proof of the equivalence of these two conjectures is given in Proposition \ref{pro:mumford}. A similar proof yields the following weaker characterization of rational connectedness obtained in \cite{Pet06,CDP15}: \begin{thm} Let $X$ be a projective manifold. Then $X$ is rationally connected if and only if for some ample line bundle $A$ on $X$ there is a constant $C$ depending on $A$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m} \otimes A^{\otimes k}\big) = 0 $$ for all positive integers $k$ and $m$ with $m \geq Ck.$ \end{thm} We also mention a stronger conjecture from \cite{BC15}, stating that $X$ is rationally connected if and only if \begin{equation}\label{eq:11} H^0\big(X, S^k\Omega^p_X \big) = 0 \end{equation} for all positive integers $k$ and $p$. The main result of \cite{BC15} is that condition \eqref{eq:11} implies that $X$ is simply connected. \medskip In this paper, we prove several results towards Conjecture \ref{mum2}. Notice that if $\kappa (X,K_X) \geq 0$ in Conjecture \ref{mum2}, then in particular $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) \neq 0 $$ for some $m$, see Remark \ref{rem:tensor}, but Conjecture \ref{mum2} is much weaker than the nonvanishing statement $\kappa (X,K_X) \geq 0$. An important invariant of a projective manifold $X$ with pseudoeffective canonical bundle $K_X$ is its numerical dimension $ \nu(X,K_X)$. If $Y$ is a minimal model of $X$, so that $K_Y$ is nef, then $\nu(X,K_X)=\nu(Y,K_Y)$ is the largest non-negative integer $d$ such that $K_Y^d \not \equiv 0$; the general definition is in Section \ref{prelim}. It is well known that $\nu(X,K_X) \geq \kappa (X,K_X) $, and one of the equivalent formulations of the abundance conjecture is that $$ \nu(X,K_X) = \kappa (X,K_X).$$ The abundance conjecture is known to hold when $\nu(X,K_X) = 0$ by \cite{Nak04} and when $\nu(X,K_X) = \dim X$ by \cite{Sho85,Kaw85b}, which in particular proves Conjecture \ref{mum2} in these cases.
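Indeed, once one knows $\kappa(X,K_X)\geq0$, the nonvanishing in Conjecture \ref{mum2} follows by a standard argument (cf.\ Remark \ref{rem:tensor} below), which we spell out for convenience: if $H^0\big(X,\mathcal{O}_X(mK_X)\big)\neq0$ for some $m>0$ and $n=\dim X$, then $$ 0 \neq H^0\big(X,\mathcal{O}_X(mK_X)\big) = H^0\big(X,\big(\textstyle\bigwedge\nolimits^n\Omega^1_X\big)^{\otimes m}\big) \hookrightarrow H^0\big(X,(\Omega^1_X)^{\otimes nm}\big), $$ since in characteristic zero $\bigwedge^n\Omega^1_X$ is a direct summand of $(\Omega^1_X)^{\otimes n}$ via the antisymmetrising projector.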
Thus it remains to prove Conjecture \ref{mum2} when $$ 1 \leq \nu(X,K_X) \leq \dim X -1.$$ In this paper we deal with the extremal cases $\nu(X,K_X) = 1$ and $\nu(X,K_X) = \dim X - 1$. Our main result is the following. \begin{thm}\label{main} Conjecture \ref{mum2} holds when $\dim X=4$ and $\nu(X,K_X)\neq2$. \end{thm} The theorem is a consequence of much more general results which work in every dimension. We first prove Conjecture \ref{mum2} when $\nu(X,K_X)=1$ and $X$ has a minimal model, see Theorem \ref{thm:nu11}: \begin{thm}\label{thm:1} Let $X$ be a projective manifold such that $K_X$ is pseudoeffective, and assume that $X$ has a minimal model. If $\nu(X,K_X) =1$, then there exists a positive integer $m$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) \neq 0. $$ \end{thm} The main ingredients are our techniques from \cite{LP16}, where -- among other results -- we proved the abundance conjecture for varieties with $\nu(X,K_X)=1$ and $\chi(X,\mathcal{O}_X)\neq0$. We discuss these ideas in Section \ref{sec:nd1}. When $\nu(X,K_X) = \dim X -1$, we show in Theorem \ref{thm:nu-n}: \begin{thm} Let $X$ be a minimal terminal $n$-fold with $\nu(X,K_X) = n-1$ and $n \geq 4$, and let $\pi\colon Y \to X$ be a resolution which is an isomorphism over the smooth locus of $X$. Assume one of the following: \begin{enumerate} \item[(a)] $(\pi^*K_X)^{n-2} \cdot c_2(Y) \neq 0$; \item[(b)] $(\pi^*K_X)^{n-2} \cdot c_2(Y) = 0$ and $(\pi^*K_X)^{n-3} \cdot K_Y \cdot c_2(Y) \ne 0$. \end{enumerate} Then $K_X$ is semiample. \end{thm} The result is more precise when $n=4$, see Theorem \ref{thm:nu3}, which then implies Theorem \ref{main} when $\nu(X,K_X)=3$. The proof is by a careful analysis of the Hirzebruch-Riemann-Roch formula for two different sets of line bundles, together with a well-known slight refinement of the Kawamata-Viehweg vanishing. Finally, we note that results from \cite{LP16} immediately give the following. \begin{thm} Let $X$ be a projective manifold of dimension $n$ with $K_X$ pseudoeffective. Assume that $K_X$ has a metric with algebraic singularities and semipositive curvature current. \begin{enumerate} \item[(i)] If good minimal models for klt pairs exist in dimensions at most $n-1$, then there is a positive integer $m$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) \neq 0. $$ \item[(ii)] If $n=4$, then there is a positive integer $m$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) \neq 0. $$ \end{enumerate} \end{thm} This is Theorem \ref{thm4} below. It is expected that the assumptions in the theorem always hold, see Remark \ref{rem:metric}. All the results of this paper apply also in the context of the stronger conjecture from \cite{BC15} mentioned above. \section{Preliminaries} \label{prelim} We work over the complex numbers, and all varieties are normal and projective. For the basic notions of the Minimal Model Program we refer to \cite{KM98}. In particular, a normal projective variety is {\it terminal} if it has terminal singularities. We briefly review the definition of the numerical dimension of a pseudoeffective divisor \cite{Nak04,Kaw85}; we are mostly interested in the case when the divisor is $K_X$. \begin{dfn}\label{dfn:kappa} Let $X$ be a smooth projective variety and let $D$ be a pseudoeffective $\mathbb{Q}$-divisor on $X$.
If we denote $$\sigma(D,A)=\sup\big\{k\in\mathbb{N}\mid \liminf_{m\rightarrow\infty}h^0\big(X, \mathcal O_X(\lfloor mD\rfloor+A)\big)/m^k >0\big\}$$ for a Cartier divisor $A$ on $X$, then the {\em numerical dimension\/} of $D$ is $$\nu(X,D)=\sup\{\sigma(D,A)\mid A\textrm{ is ample}\}.$$ Note that this coincides with various other definitions of the numerical dimension by \cite{Leh13,Eck16}. If $X$ is a projective variety and if $D$ is a pseudoeffective $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$, then we set $\nu(X,D)=\nu(Y,f^*D)$ for any birational morphism $f\colon Y\to X$ from a smooth projective variety $Y$. \end{dfn} If $D$ is nef, then $\nu(X,D)$ is the largest non-negative integer $e$ such that $ D^e \not \equiv 0$. Using a refined intersection theory, this can be generalized to pseudoeffective divisors \cite{BDPP}. One of the most important properties we use is that the numerical dimension is preserved by the operations of a Minimal Model Program. The following well-known result \cite[Corollary 10.38(2)]{Kol13} is a consequence of the usual Kawamata-Viehweg vanishing theorem. The proof is analogous to \cite[Corollary]{Kaw82}; we include a short argument for the convenience of the reader. \begin{lem}\label{lem:KVvanishing} Let $(X,\Delta)$ be a $\mathbb{Q}$-factorial projective klt pair of dimension $n$ and let $D$ be a Cartier divisor on $X$ such that $D\sim_\mathbb{Q} K_X+\Delta+L$, where $L$ is a nef $\mathbb{Q}$-divisor with $\nu(X,L)=k$. Then $$H^i\big(X,\mathcal{O}_X(D)\big) =0\quad\text{for all }i>n-k.$$ \end{lem} \begin{proof} The proof is by induction on $n$. If $k=n$, then this is the usual Kawamata-Viehweg vanishing \cite[Theorem 1-2-5 and Remark 1-2-6]{KMM87}. Now, assume that $k<n$ and let $H$ be an irreducible very ample divisor on $X$ which is general in the linear system $|H|$. Consider the exact sequence \begin{equation}\label{eq:seq} 0\to\mathcal{O}_X(D)\to\mathcal{O}_X(D+H)\to\mathcal{O}_H(D+H)\to 0. \end{equation} For $i>n-k$ we have $H^i\big(X,\mathcal{O}_X(D+H)\big)=0$ by Kawamata-Viehweg vanishing. Since $$(D+H)|_H\sim_\mathbb{Q} K_H+\Delta|_H+L|_H$$ by the adjunction formula, see e.g.\ \cite[Proposition 4.5]{Kol13}, since the pair $(H,\Delta|_H)$ is klt by \cite[Lemma 5.17]{KM98} and since $\nu(H,L|_H)=k$, we have $$H^{i-1}\big(H,\mathcal{O}_H(D+H)\big)=0$$ by induction. Then the result follows from the long exact sequence in cohomology associated to \eqref{eq:seq}. \end{proof} We frequently use the following theorem \cite[Theorem 4.4]{Lai11}. \begin{thm} \label{thm:kaw} Assume the existence of good models in dimension $n-q$. Let $X$ be a minimal terminal projective variety of dimension $n$. If $\kappa (X,K_X) = q$, then $K_X$ is semiample. \end{thm} As promised in the introduction, we show the equivalence of Mumford's conjecture and the weaker Conjecture \ref{mum2}. \begin{pro}\label{pro:mumford} Assume that Conjecture \ref{mum2} holds in dimensions at most $n$. Then Conjecture \ref{mum} holds in dimension $n$. \end{pro} \begin{proof} We follow closely the proof of \cite[Proposition IV.5.7]{Kol96}. Let $X$ be a projective manifold of dimension $n$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) = 0\quad\text{for all } m \geq 1.$$ Then $X$ is uniruled by Conjecture \ref{mum2} and by \cite{BDPP}. Let $\pi\colon X \dashrightarrow Z$ be an MRC fibration of $X$, see \cite[\S IV.5]{Kol96}. By blowing up $X$ and $Z$, we may additionally assume that $\pi$ is a morphism. By \cite[Corollary 1.4]{GHS03}, $Z$ is not uniruled.
If $X$ is not rationally connected, then $\dim Z\geq1$ and $K_Z$ is pseudoeffective by \cite{BDPP}, hence $$ H^0\big(Z,(\Omega^1_Z)^{\otimes m_0}\big) \neq 0 $$ for some positive integer $m_0$ by Conjecture \ref{mum2}. Since $$(\pi^*\Omega_Z^1)^{\otimes m_0}\subseteq(\Omega_X^1)^{\otimes m_0},$$ we obtain $H^0\big(X,(\Omega^1_X)^{\otimes m_0}\big) \neq 0 $, a contradiction. \end{proof} \begin{rem}\label{rem:tensor} We often use without explicit mention that any effective tensor representation of a vector bundle $\mathcal E$ on a variety $X$ can be embedded as a submodule in its high tensor power, see \cite[Chapter III, \S6.3 and \S7.4]{Bou98}. In particular, if $H^0\big(X,(\bigwedge^q\mathcal E)^{\otimes p}\big)\neq0$ for some $p,q>0$, then $H^0\big(X,\mathcal E^{\otimes m}\big) \neq 0 $ for some $m>0$. \end{rem} We finish the section by commenting on log and singular cases. \begin{rem} (1) Let $(X,\Delta)$ be a projective klt pair and let $\pi\colon \widehat X \to X$ be a log resolution. Assume that $-(K_X + \Delta)$ is nef and that $$ H^0\big(\widehat X, \big(\Omega^1_{\widehat X}\big)^{\otimes m}\big) = 0 $$ for all positive integers $m$. Then $\widehat X$ and $X$ are rationally connected. Indeed, if $K_{\widehat X}$ is pseudoeffective, then $X$ has canonical singularities and $\Delta = 0$. In this case $K_X\sim_{\mathbb{Q}}0$ and hence $ H^0\big(\widehat X, \big(\Omega^1_{\widehat X}\big)^{\otimes m}\big) \neq0 $ for some $m$, contradicting our assumption. Therefore $K_{\widehat X}$ is not pseudoeffective and $\widehat X$ is uniruled by \cite{BDPP}. Let $\widehat X \dasharrow Z$ be an MRC fibration to a projective manifold $Z$. By \cite[Main Theorem]{Zh05} we have $\kappa (Z,K_Z) = 0$, and we conclude that $\dim Z= 0$ as in the proof of Proposition \ref{pro:mumford}. Therefore $\widehat X$ as well as $X$ are rationally connected. (2) In a singular setting, one might hope to characterize rational connectedness using reflexive differentials. However, \cite[Example 3.7]{GKP14} constructs a rational surface $X$ with only rational double points such that $$ H^0\big(X,((\Omega^1_X)^{\otimes 2})^{**}\big) \neq 0.$$ \end{rem} \section{Numerical dimension 1}\label{sec:nd1} The basis of this section is the following result \cite[Theorem 6.7]{LP16}. \begin{thm}\label{thm:nu1} Let $X$ be a minimal $\mathbb{Q}$-factorial projective terminal variety such that $\nu(X,K_X)=1$. If $\chi(X,\mathcal{O}_X)\neq0$, then $\kappa(X,K_X)\geq0$. \end{thm} We give some comments on the proof of Theorem \ref{thm:nu1} in \cite{LP16}. Assuming for contradiction that $\kappa(X,K_X)=-\infty$, the main step is to show that then for a resolution $\pi\colon Y\to X$, for all $m \neq 0$ sufficiently divisible and for all $p$ we have \begin{equation}\label{eq:20} H^0\big(Y,\Omega^p_Y \otimes \mathcal{O}_Y(m\pi^*K_X)\big)=0. \end{equation} There are two crucial inputs here: the first one is the birational stability of the cotangent bundle \cite{CP11,CP15}; the second is a criterion which says that if a nef Cartier divisor $L$ with $\nu(X,L)=1$ can be written as $L=P+D$, where $P$ is a pseudoeffective divisor and $D\neq0$ is an effective divisor, then $\kappa(X,L)\geq0$. Now we distinguish two cases: if there exists a positive integer $m$ such that $\pi^*\mathcal{O}_X(mK_X)$ has a singular metric $h_m$ such that the multiplier ideal $\mathcal I(h_m)$ does not equal $\mathcal{O}_Y$, then one uses the criterion above to conclude; note that in this case the assumption $\chi(X,\mathcal{O}_X) \neq0$ is not needed.
Otherwise, for each $m$ there is a singular metric $h_m$ as above such that $\mathcal I(h_m)=\mathcal{O}_Y$, and then the Hard Lefschetz theorem from \cite{DPS01} gives $$H^q\big(Y,\mathcal{O}_Y(K_Y+m\pi^*K_X)\big)=0\quad\text{for all }q.$$ An easy argument involving the Hirzebruch-Riemann-Roch allows one to conclude $\chi(X,\mathcal{O}_X) = 0$, which gives a contradiction. \medskip The theorem quickly implies the main result of this section. \begin{thm} \label{thm:nu11} Let $X$ be a projective manifold such that $K_X$ is pseudoeffective, and assume that $X$ has a minimal model. If $\nu(X,K_X) =1$, then there exists a positive integer $m$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) \neq 0. $$ In particular, Conjecture \ref{mum2} holds if $\dim X=4$ and $\nu(X,K_X)=1$. \end{thm} \begin{proof} Assume to the contrary that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) = 0\quad\text{for all } m \geq 1,$$ so that, in particular, $$ H^q(X,\mathcal{O}_X)\simeq H^0(X,\Omega^q_X) = 0\quad\text{for all } q \geq 1.$$ Therefore $ \chi(X,\mathcal{O}_X) = 1$. If $Y$ is a minimal model of $X$, then $\nu(Y,K_Y)=1$ and $\chi(Y,\mathcal{O}_Y) = 1$, hence $\kappa(X,K_X)=\kappa(Y,K_Y)\geq0$ by Theorem \ref{thm:nu1}. This is a contradiction. The second statement follows immediately since minimal models of ca\-no\-ni\-cal fourfolds exist by \cite{BCHM,Fuj05}. \end{proof} \section{Numerical codimension 1} \begin{thm} \label{thm:nu-n} Let $X$ be a minimal terminal $n$-fold with $\nu(X,K_X) = n-1$ and $n \geq 4$, and let $\pi\colon Y \to X$ be a resolution which is an isomorphism over the smooth locus of $X$. Assume one of the following: \begin{enumerate} \item[(a)] $(\pi^*K_X)^{n-2} \cdot c_2(Y) \neq 0$; \item[(b)] $(\pi^*K_X)^{n-2} \cdot c_2(Y) = 0$ and $(\pi^*K_X)^{n-3} \cdot K_Y \cdot c_2(Y) \ne 0$. \end{enumerate} Then $K_X$ is semiample. \end{thm} \begin{proof} Since $X$ has terminal singularities, the singular locus of $X$ is of dimension at most $n-3$ by \cite[Corollary 5.18]{KM98}, hence \begin{equation}\label{eq:dot} (\pi^*K_X)^n=(\pi^*K_X)^{n-1} \cdot K_Y = (\pi^*K_X)^{n-2} \cdot K_Y^2 = 0. \end{equation} Let $m$ be any positive integer such that $mK_X$ is Cartier. Then by Hirze\-bruch-Riemann-Roch, by Serre duality, by \eqref{eq:dot} and since $X$ has rational singularities, we obtain \begin{align} \chi\big(Y,\mathcal{O}_Y(K_Y &+ \pi^*(mK_X))\big) = (-1)^n\chi\big(Y,\mathcal{O}_Y(-\pi^*(mK_X))\big)\label{eq:RR}\\ &= \frac{1}{12(n-2)!} m^{n-2} (\pi^*K_X)^{n-2} \cdot c_2(Y)\notag \\ &+ \frac{1}{24(n-3)!} m^{n-3} (\pi^*K_X)^{n-3} \cdot K_Y \cdot c_2(Y) + O(m^{n-4})\notag \end{align} and \begin{align} \chi(X,\mathcal{O}_X(m&K_X))= \chi\big(Y,\mathcal{O}_Y(\pi^*(mK_X))\big)\label{eq:RR2}\\ &= \frac{1}{12(n-2)!} m^{n-2} (\pi^*K_X)^{n-2} \cdot c_2(Y)\notag \\ &-\frac{1}{24(n-3)!} m^{n-3} (\pi^*K_X)^{n-3} \cdot K_Y \cdot c_2(Y) + O(m^{n-4}).\notag \end{align} By Miyaoka's inequality \cite[\S6]{Miy87}, we have $$ (\pi^*K_X)^{n-2} \cdot c_2(Y)\geq0.$$ Suppose first that $(\pi^*K_X)^{n-2} \cdot c_2(Y) > 0$. Since \begin{equation}\label{eq:vanish} H^i\big(X,\mathcal{O}_X(mK_X)\big) = 0\quad\text{ for }i\geq2 \end{equation} by Lemma \ref{lem:KVvanishing}, by \eqref{eq:RR2} there exists a constant $C_1>0$ such that $$ h^0\big(X,\mathcal{O}_X(mK_X)\big) \geq C_1m^{n-2}. $$ We conclude that $\kappa (X,K_X) \geq n-2$, hence $K_X$ is semiample by Theorem \ref{thm:kaw}.
From now on suppose that $$(\pi^*K_X)^{n-2} \cdot c_2(Y) = 0\quad\text{and}\quad (\pi^*K_X)^{n-3} \cdot K_Y \cdot c_2(Y) \neq 0.$$ If $$ (\pi^*K_X)^{n-3} \cdot K_Y \cdot c_2(Y) < 0, $$ then \eqref{eq:RR2} and \eqref{eq:vanish} imply that there exists a constant $C_2>0$ such that $$ h^0\big(X,\mathcal{O}_X(mK_X)\big) \geq C_2m^{n-3}, $$ and $K_X$ is semiample by Theorem \ref{thm:kaw}. If $$ (\pi^*K_X)^{n-3} \cdot K_Y \cdot c_2(Y) > 0,$$ then since $$ H^i\big(X,\mathcal{O}_Y(K_Y+\pi^*(mK_X))\big) = 0\quad\text{ for }i\geq2 $$ by Lemma \ref{lem:KVvanishing}, by \eqref{eq:RR} there exists a constant $C_3>0$ such that \begin{equation}\label{eq:22} h^0\big(Y,\mathcal{O}_Y(K_Y + \pi^*(mK_X))\big) \geq C_3m^{n-3}. \end{equation} We claim that then $\kappa(X,K_X)\geq n-3$, and therefore that $K_X$ is semiample as before. Indeed, by \eqref{eq:22} there exists a positive integer $m_0$ and an effective divisor $D$ such that $K_Y + \pi^*(m_0K_X)\sim D$, hence $(m_0+1)K_X\sim_\mathbb{Q} \pi_*D$ and $\kappa(X,K_X)\geq0$. Fix a positive integer $p$ such that $pK_X$ is Cartier and $h^0(X,pK_X)>0$. Then \eqref{eq:22} gives \begin{multline*} h^0\big(X,\mathcal{O}_X(2pmK_X)\big)\geq h^0\big(X,\mathcal{O}_X(p(m+1)K_X)\big)\\ =h^0\big(Y,\mathcal{O}_Y(pK_Y + \pi^*(pmK_X))\big)\geq C_3m^{n-3}= C_4(2pm)^{n-3}, \end{multline*} where $C_4=C_3/(2p)^{n-3}$. This finishes the proof. \end{proof} In dimension $n = 4$ we obtain more precisely: \begin{thm} \label{thm:nu3} Let $X$ be a minimal terminal $4$-fold with $\nu(X,K_X) = 3$, and let $\pi\colon Y \to X$ be a resolution which is an isomorphism over the smooth locus of $X$. Assume one of the following: \begin{enumerate} \item[(a)] $(\pi^*K_X)^2 \cdot c_2(Y) \neq 0$; \item[(b)] $(\pi^*K_X)^2 \cdot c_2(Y) = 0$ and $\chi(X,\mathcal{O}_X) > 0$. \end{enumerate} Then $\kappa (X,K_X) \geq 0$. \end{thm} \begin{proof} By Theorem \ref{thm:nu-n}, we may assume that $$(\pi^*K_X)^2 \cdot c_2(Y) = 0\quad\text{and}\quad (\pi^*K_X) \cdot K_Y \cdot c_2(Y) = 0.$$ In this case Hirzebruch-Riemann-Roch gives $$ \chi\big(Y,\mathcal{O}_Y(K_Y + \pi^*(mK_X))\big) = \chi(Y,\mathcal{O}_Y) = \chi(X,\mathcal{O}_X)$$ and $$ \chi\big(X,\mathcal{O}_X(mK_X)\big) = \chi(X,\mathcal{O}_X).$$ Hence, arguing as in the proof of Theorem \ref{thm:nu-n} we obtain $\kappa (X,K_X) \geq 0$. Note that we can no longer apply Theorem \ref{thm:kaw} to deduce semiampleness. \end{proof} \begin{rem} Let $X$ be a minimal terminal $n$-fold with $\nu(X,K_X) = n-1$, and let $\pi\colon Y \to X$ be a resolution which is an isomorphism over the smooth locus of $X$. We argue that if we have the vanishing \begin{equation}\label{eq:21} (\pi^*K_X)^{n-2} \cdot c_2(Y) = 0, \end{equation} then the geometry of $X$ is special. Indeed, assume that $K_X$ is semiample, and let $f\colon X \to Z$ be the associated Iitaka fibration. We claim that $f$ is almost smooth in codimension one. Indeed, there is a positive integer $m$ and a very ample divisor $A$ on $Z$ such that $mK_X = f^*A$. If $D_1, \dots,D_{n-2} \in \vert A \vert $ are general elements, then $C = D_1 \cap \ldots \cap D_{n-2}$ is a smooth curve and $S = f^{-1}(C)$ is a smooth surface whose class is proportional to $K_X^{n-2}$, hence \eqref{eq:21} implies $$ c_2(T_X \vert_S) = c_2(T_S) = 0. $$ Hence the only singular fibres of $f \vert_S$ are multiple elliptic curves (this is a classical fact: use Proposition V.12.2 and Remark before that proposition in \cite{BHPV04} together with Hirzebruch-Riemann-Roch).
Consequently, there is a subset $B \subseteq Z$ of codimension at least $2$ such that for each $b \in Z \setminus B$, the fibre of $f$ over $b$ is a smooth elliptic curve or a multiple of an elliptic curve. \end{rem} Finally, we obtain the proof of Conjecture \ref{mum2} on $4$-folds of numerical dimension $3$. \begin{cor} Let $X$ be a smooth projective $4$-fold with $K_X$ pseudoeffective and $\nu(X,K_X) = 3$. Then there is a positive integer $m$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) \neq 0.$$ \end{cor} \begin{proof} Assume to the contrary that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) = 0\quad\text{for all } m \geq 1,$$ so that $ \chi(X,\mathcal{O}_X) = 1$ as in the proof of Theorem \ref{thm:nu11}. Let $Y$ be a minimal model of $X$, which exists by \cite{BCHM,Fuj05}. Then $\nu(Y,K_Y)=3$ and $\chi(Y,\mathcal{O}_Y) = 1$, hence $\kappa (Y,K_Y) \geq 0$ by Theorem \ref{thm:nu3}. This is a contradiction. \end{proof} \section{Metrics with algebraic singularities} We recall the definition of a singular hermitian metric with algebraic singularities, following \cite{DPS01} and \cite{Dem01}. \begin{dfn} Let $X$ be a normal projective variety and let $D$ be a $\mathbb{Q}$-Cartier divisor. We say that $D$, or $\mathcal O_X(D)$, has a \emph{metric with algebraic singularities and semipositive curvature current}, if there exists a positive integer $m$ such that $mD$ is Cartier and if there exists a resolution of singularities $\pi\colon Y \to X$ such that the line bundle $\pi^*\mathcal{O}_X(D)$ has a singular metric $h$ whose curvature current is semipositive (as a current), and the local plurisubharmonic weights $\varphi$ of $h$ are of the form $$ \varphi = \sum \lambda_j \log \vert g_j \vert + O(1),$$ where $\lambda_j$ are positive rational numbers, $O(1)$ is a bounded term, and the divisors $D_j$ defined locally by $g_j$ form a simple normal crossing divisor on $Y$. \end{dfn} \begin{rem}\label{rem:metric} It is well known that a line bundle $L$ on a normal projective variety is pseudoeffective if and only if it has a singular metric whose curvature current is semipositive. It is a consequence of the Minimal Model Program that on a terminal variety with the pseudoeffective canonical sheaf, the canonical sheaf always has a metric with algebraic singularities and semipositive curvature current. \end{rem} The following is one of the main results of \cite{LP16}. \begin{thm} \label{thm:LP4.3} Assume the existence of good minimal models of klt pairs in dimensions at most $n-1$ and let $X$ be a projective terminal variety of dimension $n$ with $K_X$ pseudoeffective. Suppose that $K_X$ has a metric with algebraic singularities and semipositive curvature current. If $\chi(X,\mathcal{O}_X) \neq 0$, then $\kappa(X,K_X) \geq 0$. \end{thm} Towards Mumford's conjecture, this implies: \begin{thm} \label{thm4} Let $X$ be a projective manifold of dimension $n$ with $K_X$ pseudoeffective. Assume that $K_X$ has a metric with algebraic singularities and semipositive curvature current. \begin{enumerate} \item[(i)] If good minimal models for klt pairs exist in dimensions at most $n-1$, then there is a positive integer $m$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) \neq 0. $$ \item[(ii)] If $n=4$, then there is a positive integer $m$ such that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) \neq 0.
$$ \end{enumerate} \end{thm} \begin{proof} Assume to the contrary that $$ H^0\big(X,(\Omega^1_X)^{\otimes m}\big) = 0\quad\text{for all } m \geq 1,$$ so that $ \chi(X,\mathcal{O}_X) = 1$ as in the proof of Theorem \ref{thm:nu11}. Then $\kappa(X,K_X)\geq0$ by Theorem \ref{thm:LP4.3}, which is a contradiction; this proves (i). The second statement follows immediately since good models of ca\-no\-ni\-cal threefolds exist by \cite{Mor88,Sho85,Miy87,Miy88a,Miy88b,Kaw92}. \end{proof} \bibliographystyle{amsalpha}
\section{Introduction} In this paper, we consider a silicene-based superconductor-normal-superconductor Josephson junction in which the silicene nanoribbon lies along the $k_{x}$ (zigzag) direction, in a two-terminal geometry as shown in Fig.1. The silicene is divided into three parts: the middle part is deposited on a SiO$_{2}$ substrate and thus has the dielectric constant $\epsilon=2.45$ ($\epsilon_{SiO_{2}}=3.9$), while the left- and right-side parts are deposited on conventional superconductor electrodes, with $s$-wave superconductivity realized by the superconducting proximity effect. The perpendicular electric field and off-resonance circularly polarized light are applied to the middle normal region. We set a finite chemical potential $\mu_{n}$ in the middle region by slight doping, while in the superconducting regions the chemical potential $\mu_{s}$, required by the high carrier density, is much larger than the Fermi wave-vector ${\bf k}_{F}$ and the Dirac-mass $m_{D}^{\eta\sigma_{z}\tau_{z}}$, where ${\bf k}_{F}=\sqrt{\mu_{s}^{2}-(m_{D}^{\eta\sigma_{z}\tau_{z}}-U)^{2}}/\hbar v_{F}$ with $U$ the electrostatic potential induced by doping or a gate voltage in the superconducting region; $U$ breaks the electron-hole symmetry and lifts the zero mode (between the lowest conduction band and the highest valence band) above the Fermi level (imagining zero Dirac-mass here)\cite{Rainis D}. The potential $U$ has been experimentally shown to control the phase shift of the $\phi_{0}$-junction as well as the $0-\pi$ transition\cite{Szombati D B}, like a bias voltage. Note that the Dirac-mass here is related to the band gap by $\Delta=2m_{D}^{\eta\sigma_{z}\tau_{z}}$. Thus $\mu_{s}\gtrsim |m_{D}^{\eta\sigma_{z}\tau_{z}}-U|$, and the incident angle is larger than the transmission angle due to the relation ${\bf k}_{F}{\rm sin}\theta_{n}=\mu_{s}{\rm sin}\theta_{s}$\cite{Linder J}, where $\theta_{n}$ is the incident angle from the normal region and $\theta_{s}$ is the transmission angle in the superconducting region. Furthermore, we can estimate the transmission angle as $\theta_{s}\approx 0$, giving zero scattering angle and smooth propagation. The tight-binding Hamiltonian of the monolayer silicene reads \begin{equation} \begin{aligned} H=&\hbar v_{F}(\eta\tau_{x}k_{x}+\tau_{y}k_{y})+\eta\lambda_{{\rm SOC}}\tau_{z}\sigma_{z}+a\lambda_{R_{2}}\eta\tau_{z}(k_{y}\sigma_{x}-k_{x}\sigma_{y})\\ &-\frac{\overline{\Delta}}{2}E_{\perp}\tau_{z}+\frac{\lambda_{R_{1}}}{2}(\eta\sigma_{y}\tau_{x}-\sigma_{x}\tau_{y})+M_{s}s_{z}+M_{c}, \end{aligned} \end{equation} where $E_{\perp}$ is the perpendicularly applied electric field, $a=3.86$~\AA\ is the lattice constant, $\overline{\Delta}$ is the buckled distance in the $z$-direction between the upper sublattice and the lower sublattice, and $\sigma_{z}$ and $\tau_{z}$ are the spin and sublattice (pseudospin) degrees of freedom, respectively. $\eta=\pm 1$ for the K and K' valleys, respectively. $M_{s}$ is the spin-dependent exchange field and $M_{c}$ is the charge-dependent exchange field. $\lambda_{SOC}=3.9$ meV is the strength of the intrinsic spin-orbit coupling (SOC) and $\lambda_{R_{2}}=0.7$ meV is the intrinsic Rashba coupling, which is a next-nearest-neighbor (NNN) hopping term and breaks the lattice inversion symmetry.
$\lambda_{R_{1}}$ is the electric-field-induced nearest-neighbor (NN) Rashba coupling, which has been found to be linear in the applied electric field in our previous works\cite{Wu C H1,Wu C H2,Wu C H3,Wu C H4,Wu C H5}, as $\lambda_{R_{1}}=0.012E_{\perp}$. For circularly polarized light, the electromagnetic vector potential is ${\bf A}(t)=A(\pm{\rm sin}\ \Omega t,{\rm cos}\ \Omega t)$, where $\pm$ denotes right and left polarization, respectively. Under the perpendicular electric field $E_{\perp}$ and the off-resonance circularly polarized light with frequency $\Omega>1000$ THz, the Dirac-mass and the corresponding quasienergy spectrum of the normal region are \begin{equation} \begin{aligned} m_{Dn}^{\eta\sigma_{z}\tau_{z}}=&|\eta\lambda_{{\rm SOC}}s_{z}\tau_{z}-\frac{\overline{\Delta}}{2}E_{\perp}\tau_{z}+M_{s}s_{z}+M_{c}-\eta\hbar v_{F}^{2}\frac{\mathcal{A}}{\Omega}|,\\ \varepsilon_{n}=&s\sqrt{(\sqrt{\hbar^{2}v_{F}^{2}{\bf k}^{2}+(m_{Dn}^{\eta\sigma_{z}\tau_{z}})^{2}}+s\mu_{n})^{2}}, \end{aligned} \end{equation} respectively, where the dimensionless intensity $\mathcal{A}=eAa/\hbar$ is in a form similar to the Bloch frequency, $s=\pm 1$ is the electron/hole index, and the subscripts $e$ and $h$ denote the electron and hole, respectively. The off-resonance circularly polarized light results in asymmetric band gaps in the two valleys (see Ref.\cite{Wu C H5}) and meanwhile breaks the time-reversal symmetry, and thus provides two pairs of different incident electrons that may lead to Josephson current reversal due to the valley polarization. The index "$n$" in the above equations distinguishes them from the Dirac-mass and quasienergy spectrum in the superconducting regions, which are \begin{equation} \begin{aligned} m_{D}^{\eta\sigma_{z}\tau_{z}}=&|\eta\lambda_{{\rm SOC}}s_{z}\tau_{z}+M_{s}s_{z}+M_{c}|,\\ \varepsilon=&s\sqrt{(\sqrt{\hbar^{2}v_{F}^{2}{\bf k}^{2}+(m_{D}^{\eta\sigma_{z}\tau_{z}})^{2}}+s \mu_{s})^{2}+\Delta_{s}^{2}}, \end{aligned} \end{equation} where $\Delta_{s}$ is the superconducting gap (complex pair potential), which obeys the BCS relation and can be estimated as $\Delta_{s}=\Delta_{0}{\rm tanh}(1.74\sqrt{T_{c}/T-1})e^{i\phi/2}$ (here we only consider the right superconducting lead)\cite{Zhou X,Annunziata G}, with $\phi$ the macroscopic phase difference between the left and right superconducting leads, $\Delta_{0}$ the zero-temperature energy gap, estimated here as 0.001 eV\cite{Rainis D}, and $T_{c}$ the superconducting critical temperature, estimated as $5.66\times 10^{-4}$ eV. The superconducting gap is often compared with the excitation gap in the normal region; thus we simply write $\Delta_{s}$ for $\Delta_{s}+m_{D}^{\eta s_{z}\tau_{z}}$ below. Obviously, the quasienergy spectrum here is distinct from the one obtained by the Floquet technique in the low-momentum limit as presented in Refs.\cite{López A,Wu C H5}. Note that, due to the exchange fields considered here ($M_{s}=M_{c}=0.0039$ eV), the critical electric field (for zero Dirac-mass) is no longer 0.017 eV/\AA, but $\lesssim 0.051$ eV/\AA\ for small light intensity. \section{Andreev bound state} The Andreev reflection (AR) happens in the middle normal region (or insulating barrier) for bias voltages smaller than $\Delta_{s}$ and $U$ (gate voltage) applied across the junction\cite{Rainis D}. We consider the quasi-normal incidence of the electrons from the normal region, i.e., with constant $k_{y}$ and non-constant $k_{x}$.
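As a numerical illustration of Eq.~(2) (a sketch of ours, not part of the derivation), the valley asymmetry of the gap $\Delta=2m_{Dn}^{\eta\sigma_{z}\tau_{z}}$ can be evaluated directly; the light-induced term $\hbar v_{F}^{2}\mathcal{A}/\Omega$ is passed in as a single number, since its numerical value depends on the unit conventions:
\begin{verbatim}
# Sketch evaluating the Dirac mass of Eq. (2) (our illustration). Energies
# in eV; the light-induced term hbar*vF^2*A/Omega enters as delta_light,
# and the field term (bar_Delta/2)*E_perp enters as field_term. The sample
# values of field_term and delta_light below are chosen only for display.

lam_soc = 0.0039             # lambda_SOC = 3.9 meV
Ms = Mc = 0.0039             # exchange fields used in the text, 3.9 meV

def dirac_mass(eta, sz, tz, field_term, delta_light):
    return abs(eta*lam_soc*sz*tz - field_term*tz
               + Ms*sz + Mc - eta*delta_light)

# Valley-asymmetric gaps Delta = 2*m_D under off-resonance light:
for eta in (+1, -1):
    m = dirac_mass(eta, sz=+1, tz=+1, field_term=0.002, delta_light=0.001)
    print(f"eta = {eta:+d}: gap = {2*m*1e3:.1f} meV")
\end{verbatim}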
The Andreev bound state, which is common in the $d$-wave superconductor\cite{Kashiwaya S}, exists in the middle normal region unless it becomes a band insulator, which can be realized by controlling the electric field and the light field acting on the middle normal region. For the SNS Josephson junction, the normal reflection (NR; analogous to the elastic cotunneling (ECT) process in a ferromagnet (normal)/superconductor/ferromagnet (normal) (NSN) junction) obeys the relation $|r_{e}|^{2}+|t_{e}|^{2}=1$, while the AR process obeys $|r_{h}|^{2}+|t_{e}|^{2}=1$, where $r_{h}$ is the reflection coefficient of the hole reflected from the conduction band (retro-like) or the valence band (specular), and $t_{e}$ is the transmission coefficient of the electron-like quasiparticle. The ECT relies on the coherent superposition of states through, e.g., a quantum-dot orbital in the parallel configuration of the two electrodes. For the AR process, the reflection coefficient here reads \begin{equation} \begin{aligned} r_{h}=\frac{e^{i(\eta\theta_{n}-\phi)}{\bf k}_{F}{\rm cos}\theta_{n}}{i{\bf k}_{F}{\rm sin}\beta{\rm cos}\theta_{n}+\varepsilon_{n}{\rm cos}\beta}, \end{aligned} \end{equation} where $\theta_{n}={\rm asin}\frac{\mu_{s}{\rm sin}\theta_{s}}{{\bf k}_{F}}$, $s=\pm 1$ is the electron(+1)/hole(-1) index, and the Fermi wave vector can be obtained from Eq.~(2): \begin{equation} \begin{aligned} {\bf k}_{F}=\frac{\sqrt{(\varepsilon_{n}-s\mu_{n})^{2}+(m_{D}^{\eta s_{z}\tau_{z}})^{2}}}{\hbar v_{F}}. \end{aligned} \end{equation} We can see that the reflection probability is $|r_{h}|^{2}$, and it equals one for vertical incidence $\theta_{n}=0$ and $\beta=0$ (i.e., $\varepsilon_{n}=\Delta_{s}$). In Fig.~2, we present the Andreev reflection probability $|r_{h}|^{2}$ versus the phase difference for different superconducting gaps $\Delta_{s}$ and electric fields. We find that, in the case of $\varepsilon_{n}>\Delta_{s}$, the Andreev reflection probability decreases with increasing Dirac-mass $m_{Dn}^{\eta s_{z}\tau_{z}}$ or $\Delta_{s}$. The factor $\beta$ here is associated with the relation between the supra-gap or subgap excitation energy and the superconducting gap, \begin{equation} \begin{aligned} \beta=\left\{ \begin{array}{rcl} -i{\rm acosh}\frac{\varepsilon_{n}}{\Delta_{s}},&\ |\varepsilon_{n}|>\Delta_{s},\\ {\rm acos}\frac{\varepsilon_{n}}{\Delta_{s}},&\ |\varepsilon_{n}|<\Delta_{s}, \end{array} \right. \end{aligned} \end{equation} for propagating and evanescent scattering waves, respectively, and we have \begin{equation} \begin{aligned} e^{i\beta}=\frac{\varepsilon_{n}\mp\sqrt{\varepsilon_{n}^{2}-\Delta_{s}^{2}}}{\Delta_{s}}, \end{aligned} \end{equation} where the sign $"\mp"$ takes $"+"$ for $|\varepsilon_{n}|<\Delta_{s}$, and takes $"-"$ for $|\varepsilon_{n}|>\Delta_{s}$. Then the transmission coefficient can be obtained as \begin{equation} \begin{aligned} t_{e}=\sqrt{1-\frac{|{\bf k}_{F}{\rm cos}\theta|^{2}e^{2({\rm Im}\phi-{\rm Im}(\eta\phi))}}{|\varepsilon_{n}{\rm cos}\beta+i{\bf k}_{F}{\rm sin}\beta{\rm cos}\theta|^{2}}}, \end{aligned} \end{equation} where ${\rm Im}$ denotes the imaginary part. Both the reflection and transmission coefficients imply that the AR is complete at the NS interface (as depicted in Fig.1) even when the interface is clean (without impurities) and without Fermi wave-vector mismatch, opposite to the case of the ferromagnet-superconductor interface\cite{De Jong M J M}.
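The behavior of the AR probability can be checked numerically from Eqs.~(4)-(6); the following sketch (ours) sets $\hbar v_{F}=1$ and measures ${\bf k}_{F}$ in the same units as $\varepsilon_{n}$:
\begin{verbatim}
import numpy as np

# Sketch of the Andreev-reflection probability |r_h|^2 from Eqs. (4)-(6)
# (our illustration; energies in units chosen so that hbar*vF = 1 and
# k_F carries the same units as eps_n).

def beta(eps, Delta_s=1.0):
    # Eq. (6): propagating waves above the gap, evanescent below it
    if abs(eps) > Delta_s:
        return -1j * np.arccosh(eps / Delta_s)
    return np.arccos(eps / Delta_s)

def rh(eps, kF, theta_n, eta=+1, phi=0.0):
    # Eq. (4); for theta_n = 0 and beta = 0 (eps_n = Delta_s), |r_h|^2 = 1
    b = beta(eps)
    num = np.exp(1j*(eta*theta_n - phi)) * kF * np.cos(theta_n)
    den = 1j*kF*np.sin(b)*np.cos(theta_n) + eps*np.cos(b)
    return num / den

print(abs(rh(eps=1.0, kF=1.0, theta_n=0.0))**2)  # -> 1.0 (complete AR)
print(abs(rh(eps=2.0, kF=2.0, theta_n=0.0))**2)  # < 1 above the gap
\end{verbatim}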
In the N-S junction, for the AR to occur, the excitation energy in the normal region must be smaller than the band gap in the superconducting regions, which includes the proximity-induced superconducting gap $\Delta_{s}$\cite{Linder J}; i.e., only subgap excitation energies are allowed, which leads to the coherent superposition of the electron and hole excitations at small excitation energy. In this case of $\varepsilon_{n}<\Delta_{s}$, electrons can enter the superconducting lead only by forming a Cooper pair consisting of an up-spin electron in the K valley and an up-spin electron in the K' valley; otherwise they cannot penetrate into the superconducting lead due to the small excitation energy\cite{De Jong M J M}. Further, when $\varepsilon_{n}>\mu_{n}+m_{D}$ the AR is specular (interband), while it is retro-like (intraband) for $\varepsilon_{n}<\mu_{n}+m_{D}$. For the former, the large excitation energy also results in thermal transport between the two superconducting leads through the propagating mode\cite{Beenakker C W J}, related to the Fermi distribution term ${\rm tanh}(\varepsilon_{n}/2k_{B}T)$, while the latter contributes only to the localized mode. We then focus on the dispersion of the Andreev bound level, which is \begin{equation} \begin{aligned} \varepsilon_{A}=s\frac{\Delta_{s}}{\sqrt{2}}\sqrt{1-\frac{A(C-{\rm cos}\phi)+s_{z}\sqrt{B^{2}[A^{2}+B^{2}-(C-{\rm cos}\phi)^{2}]}}{A^{2}+B^{2}}}, \end{aligned} \end{equation} where we have used the definitions \begin{equation} \begin{aligned} A=&C_{1}C_{2}+\frac{(S_{1}S_{2}(\frac{f_{2}}{f_{1}}+1)(\frac{f_{4}}{f_{3}}-1))}{4\sqrt{\hbar^{2}v_{F}^{2}k_{y}^{2}f_{2}/f_{4}+1}\sqrt{\frac{-f_{4}}{f_{3}}}\sqrt{-\hbar^{2}v_{F}^{2}k_{y}^{2}f_{1}/f_{3}+1}\sqrt{\frac{f_{2}}{f_{1}}}},\\ B=&\frac{S_{1}C_{2}(\frac{f_{3}}{2f_{1}}+\frac{1}{2})}{\sqrt{-(\hbar^{2}v_{F}^{2}k_{y}^{2}f_{1})/f_{3}+1}\sqrt{f_{2}/f_{1}}}- \frac{C_{1}S_{2}(\frac{f_{4}}{2f_{2}}-\frac{1}{2})}{\sqrt{(\hbar^{2}v_{F}^{2}k_{y}^{2}f_{2})/f_{4}+1}\sqrt{-f_{4}/f_{3}}},\\ C=&\frac{\hbar^{2}v_{F}^{2}k_{y}^{2}S_{1}S_{2}}{\sqrt{\hbar^{2}v_{F}^{2}k_{y}^{2}f_{2}/f_{4}+1}\sqrt{-f_{4}/f_{3}}\sqrt{-(\hbar^{2}v_{F}^{2}k_{y}^{2}f_{1})/f_{3}+1}\sqrt{f_{2}/f_{1}}}\\ &-[1\cdot\Theta(\varepsilon_{n}-\mu_{n}-m_{D}^{\eta s_{z}\tau_{z}})+(-1)\cdot\Theta(-\varepsilon_{n}+\mu_{n}+m_{D}^{\eta s_{z}\tau_{z}})]\\ &\times\frac{(S_{1}S_{2}(f_{2}/f_{1}-1)(f_{4}/f_{3}+1))}{4\sqrt{\hbar^{2}v_{F}^{2}k_{y}^{2}f_{2}/f_{4}+1}\sqrt{-f_{4}/f_{3}}\sqrt{-(\hbar^{2}v_{F}^{2}k_{y}^{2}f_{1})/f_{3}+1}\sqrt{f_{2}/f_{1}}}, \end{aligned} \end{equation} with the Heaviside step function $\Theta$, which distinguishes the two kinds of AR, retroreflection and specular AR, and thus makes this expression valid for both cases. The wave vectors $f_{1}\sim f_{4}$ and parameters $C_{1},\ C_{2},\ S_{1},\ S_{2}$ are defined as \begin{equation} \begin{aligned} f_{1}=m_{Dn}^{\eta\sigma_{z}\tau_{z}}+\varepsilon_{n}+\mu_{s},\ f_{2}=m_{Dn}^{\eta\sigma_{z}\tau_{z}}+\varepsilon_{n}-\mu_{s},\\ f_{3}=\varepsilon_{n}-m_{Dn}^{\eta\sigma_{z}\tau_{z}}+\mu_{s},\ f_{4}=m_{Dn}^{\eta\sigma_{z}\tau_{z}}-\varepsilon_{n}+\mu_{s},\\ C_{1}={\rm cos}(L\sqrt{f_{1}f_{3}/\hbar^{2}v_{F}^{2}-k_{y}^{2}}),\\ C_{2}={\rm cos}(L\sqrt{-f_{2}f_{4}/\hbar^{2}v_{F}^{2}-k_{y}^{2}}),\\ S_{1}={\rm sin}(L\sqrt{f_{1}f_{3}/\hbar^{2}v_{F}^{2}-k_{y}^{2}}),\\ S_{2}={\rm sin}(L\sqrt{-f_{2}f_{4}/\hbar^{2}v_{F}^{2}-k_{y}^{2}}).
\end{aligned} \end{equation} The $x$-components of the wave vector for the electron and hole channels, $k_{xe}$ and $k_{xh}$, are incorporated in the above wave vectors. In particular, the electron and hole wave vectors here are both complex, which implies the inclusion of the subgap solutions with evanescent scattering waves\cite{Rainis D}; one has $\frac{\Delta_{s}}{2\varepsilon_{n}}2{\rm cos}\beta=1$ for $|\varepsilon_{n}|<\Delta_{s}$. Note that we only consider the dc Josephson effect in the thermodynamic equilibrium state here, rather than the ac Josephson effect, which has a time-dependent phase difference (e.g., under a time-dependent bias voltage). There exists a critical angle for the AR process. When the incident angle of the scattering wave exceeds this critical angle, the wave becomes exponentially decaying. The critical angle for the AR process is \begin{equation} \begin{aligned} \theta_{AR}={\rm asin}\frac{f_{2}(-f_{4})}{f_{1}f_{3}}. \end{aligned} \end{equation} When the quasienergy and chemical potential $\mu_{n}$ are much smaller than the Dirac-mass $m_{Dn}^{\eta s_{z}\tau_{z}}$, the above critical angle can be simplified as $\theta_{AR}={\rm asin}\frac{(-f_{4})}{f_{3}}$, which is consistent with the result of Ref.\cite{Linder J2}. For the critical angle of the ECT in the NSN junction, it is related to the electron/hole index: $s=1$ for the charge channel, which corresponds to the parallel configuration\cite{Linder J2}, and $s=-1$ for the hole channel, which corresponds to the antiparallel configuration. The critical angle of the ECT can be written as \begin{equation} \begin{aligned} {\rm sin}\theta_{ECT}=\frac{f_{3}\delta_{s,1}+f_{1}\delta_{s,-1}}{f_{1}}. \end{aligned} \end{equation} For the case of heavy doping in the middle normal region, i.e., $\mu_{n}\gg \varepsilon_{n}$, the AR is dominated by retroreflection. In this case, when the chemical potential is comparable with the Dirac-mass, the AR process is suppressed by the destructive interband interference\cite{Golubov A A} among the Cooper pairs due to the large Dirac-mass (and thus the large band gap), which brings a large dissipative effect and reduces the AR. The minimum free energy in this SNS planar junction is $\phi$- and temperature-dependent, and is located at the 0- or $\pi$-junction. With the defined incident angle $\theta_{s}$ for the quasiparticle injected from one of the superconducting electrodes to the middle normal region, the minimum free energy with the AR process can be written as \begin{equation} \begin{aligned} E(\phi,T)=-k_{B}T\sum_{\eta s_{z}\tau_{z}}\int^{\pi/2}_{-\pi/2}D{\rm ln}[2{\rm cosh}(\frac{\varepsilon_{A}}{2k_{B}T})]{\rm cos}\theta_{n} d\theta_{n}. \end{aligned} \end{equation} The reversal of the Josephson current can arise from the valley, spin, and pseudospin polarizations, which exhibit dramatic dipole oscillations between the two components induced by the off-resonance circularly polarized light, as we have discussed\cite{Wu C H5}, and result in a nonzero center-of-mass (COM) wave vector, just like the Josephson current realized with a ferromagnetic middle silicene region (in the superconductor/ferromagnet/superconductor ballistic Josephson junction\cite{Annunziata G}). It is found that the oscillation of the valley polarization is related to the carrier-phonon scattering due to the photoexcitation, which has a relaxation time in the picosecond range\cite{Kumar S} (since the frequency of the light is set in the terahertz range), longer than that of the electron-electron scattering.
On the other hand, since the silicene in the normal region is deposited on a SiO$_{2}$ substrate, the relaxation of the valley polarization is also associated with the screened scattering by the charged impurities within the substrate. For a given impurity concentration and vertical distance $d$ to the middle silicene layer, the relaxation time can be obtained in the dimensionless form \begin{equation} \begin{aligned} \tau=\left[\frac{n_{{\rm imp}}\varepsilon_{n}}{\hbar^{3}v_{F}^{2}} \left(\frac{\frac{2\pi e^{2}}{\epsilon_{0}\epsilon {\bf q}}}{1+N_{f}\frac{2\pi e^{2}}{\epsilon_{0}\epsilon {\bf q}}\Pi({\bf q},\omega)}\right)^{2}\right], \end{aligned} \end{equation} where $\epsilon_{0}$ and $\epsilon$ are the vacuum dielectric constant and background dielectric constant, respectively. $N_{f}$ is the degeneracy factor, which can be treated as $N_{f}=g_{s}g_{v}=4$ here. ${\bf q}$ is the scattering wave vector and $\Pi({\bf q},\omega)$ is the dynamical polarization within the random-phase approximation (RPA), which contains the screening effect by the high-energy states with large charge density of states (DOS) $D=W|f_{1}|/(\pi\hbar v_{F})$, where $W$ is the width of the silicene ribbon. For the case where the screening is ignored, the relaxation time has a simple relation with the distance to the impurity, $\tau\propto e^{d/\ell}$\cite{Boross P}, where $\ell=0.47$~\AA\ for silicene. For a ballistic Josephson junction the diffusive effect is usually not considered; however, due to the existence of the screened (or unscreened) charged impurities in the substrate, the diffusive effect (e.g., on the conductivity or the valley polarization) needs to be taken into account. Note that we do not consider the edge states in the wave-vector space; however, if the middle normal region is replaced by a ferromagnetic one, e.g., by applying an out-of-plane ferromagnetic exchange field on both the upper edge and the lower edge of the middle region, the junction will always be a 0($\pi$)-junction (unless the lengths of the upper and lower edges are unequal) in the absence of the electric field or off-resonance light. That is because the out-of-plane ferromagnetic exchange field does not gap out the edge modes (without the Rashba coupling), while the in-plane ferromagnetic exchange field\cite{Rachel S,An X T} can easily gap out the gapless edge modes even in the absence of Rashba coupling. In such a case, the Curie temperature is required to be much larger than the superconducting critical temperature to prevent a drastic variation of the dipole polarization. If we apply an in-plane ferromagnetic exchange field, the gapless edge modes will be gapped out and the time-reversal invariance will be broken, and thus the edge-state-supported Josephson effect disappears. For the out-of-plane antiferromagnetic exchange field, which may lead to a single-valley characteristic, both the edge and bulk states support the Josephson effect in the normal region\cite{Zhou X2}.
In this case, the Andreev levels of the upper and lower edges can easily be obtained just by inverting the related degrees of freedom, due to the opposite phases of the two edges induced by the chiral character; e.g., for the helical edge modes in the quantum spin Hall phase, the up- and down-spin carriers flow in opposite directions along each edge, while for the chiral edge modes in the quantum anomalous Hall phase, the up- and down-spin carriers flow in the same direction along each edge. Further, for a ferromagnetic middle region, the spin-rotation symmetry no longer exists, which leads to anomalous conductance\cite{De Jong M J M}. In the short-junction case ($L\le 50$~\AA), the Andreev level can be approximately obtained as \begin{equation} \begin{aligned} \varepsilon_{AR}^{*}=\Delta_{s}{\rm cos}\left[\frac{1}{2}\left(\phi-L(\frac{f_{3}}{\hbar v_{F}}+\frac{f_{2}}{\hbar v_{F}})\right)\right], \end{aligned} \end{equation} which further simplifies to $\varepsilon_{AR}^{*}=\Delta_{s}{\rm cos}(\phi/2)$ when $\varepsilon_{n}$ is small, consistent with the results of Refs.\cite{Kulik I O,Fu L}; the transmission coefficient equals 1 here due to the partial Klein tunneling of Cooper pairs, which happens at zero Dirac-mass\cite{Missault N} (while the AR does not require zero Dirac-mass). \section{Results and discussion} Due to the presence of the degrees of freedom $\eta,\ s_{z},\ \tau_{z}$ and the electron/hole index $s$ in the above expression of the Andreev bound level, there should be $2^{4}=16$ levels under given electric and light fields; however, there may be fewer levels in degenerate cases. The Josephson current is a nonlocal supercurrent carried by the Cooper pairs in the superconducting leads (and by quasiparticles in the middle normal region); it exists in such an SNS junction, and its slope is affected by the dynamical polarizations of the degrees of freedom mentioned above\cite{Wu C H5}, e.g., in the case $\sqrt{\varepsilon_{n}-m_{Dn}^{+++}}\neq\sqrt{\varepsilon_{n}-m_{Dn}^{---}}$. For simplicity, we only present the result for one of the levels, where all the degrees of freedom take the value 1. For a Cooper pair penetrating into the bulk state, the macroscopic wave function is affected by the scattering off the charged impurities, and thus $\Psi(x)\propto e^{-x/\xi}e^{d/\ell}e^{i\phi/2}$, where $\xi$ is the superconducting coherence length and $x<L$ is the mean free path in the bulk region. The Josephson effect here is considered only in the low-temperature regime ($\sim T_{c}$), which is dominated by elastic scattering; at higher temperature, the increased inelastic scattering may lead to switching of the fermion parity\cite{Fu L}, and the frequency-dependent noise induced by the current-current correlation in a nonequilibrium Josephson setup leads to a $4\pi$ period of the Josephson current due to the existence of Majorana bound states. This also implies that the dissipation (related to the Dirac-mass) plays an important role in the quasiparticle transport. For the simulation, we set the parameters as follows. The frequency of the off-resonance light is set larger than 3000 THz, which is much higher than the critical value; the critical value here is $3t=4.8$ eV$=1200$ THz for silicene. For a frequency of $3000$ THz, the dimensionless intensity is $\mathcal{A}=0.3$, which is much larger than the SOC parameter $\lambda_{SOC}$.
The length of the normal region $L$ is much shorter than the superconducting coherence length, $L\ll\frac{\hbar v_{F}}{\Delta_{0}}$, i.e., $L\ll 5350$ \AA, where we estimate $\hbar v_{F}=\frac{\sqrt{3}}{2}at=5.35$ eV\,\AA\ here. The $y$-component of the momentum is conserved due to the translational invariance and is thus a good quantum number in the computation; we set $k_{y}=2$ meV (for the off-resonance circularly polarized light we use only the right polarization from now on). ${\bf k}$ is of the order of $0.01$ eV here. The detailed parameter settings are labeled in each of the following plots. Fig.3 depicts the dispersion of the Andreev bound level for the AR process in the 0 state (we do not consider the breaking of inversion symmetry by the Rashba coupling here since it is very small). We can easily see that the period of the Andreev level is $4\pi$, which obeys the general relation $\varepsilon_{AR}\propto {\rm cos}(\phi/2)$\cite{Zhou X2}, while the period of $\varepsilon_{AR}/\Delta_{s}$ is $2\pi$ as shown in Fig.4, which is consistent with the results of Ref.\cite{Annunziata G}. We numerically examine the effects of the electric field, the off-resonance circularly polarized light, the chemical potential, and the length of the normal region on the Andreev bound level. We find that, for $L=2000$~\AA, the slope of the Andreev level is always negative in the range [$0,\pi)$ and positive in the range $[\pi,2\pi]$, i.e., the junction is a 0$(\pi)$-junction (which depends on the product of the sign and the slope of the Andreev level). From Fig.3(b), we can see that the amplitude of the Andreev level exhibits a non-monotonic change with increasing length $L$. In this case, when we increase the light-intensity parameter $\mathcal{A}$, the slope of the Andreev level changes, and we find that $\mathcal{A}$ should be $<0.7$ under the conditions $E_{\perp}=0.034$ eV, $\mu_{n}=2$ eV, $L=2000$ \AA\ for an available Andreev level (the slope nearly vanishes for $\mathcal{A}\ge 0.7$). The anomalous Josephson effect as well as the $\phi_{0}$-junction can be found in Fig.3(b). From the blue dash-dot line ($L=1500$ \AA) and the green line ($L=1200$ \AA) in Fig.3(b), we can see that the sign reversal happens even in the range $[0,\pi)$; this phenomenon also emerges in the yellow line in Fig.3(a), which has $E_{\perp}=2$ eV. For the other levels, the sign change can happen only at $\phi=\pi$. In Fig.4(a), we present the Andreev level $\varepsilon_{AR}/\Delta_{s}$, whose period is $2\pi$\cite{Zhou X}. The slope of the blue line, which corresponds to the condition $E_{\perp}=0.01$ eV, $\mu_{n}=0.03$ eV, $\mathcal{A}=0.03$, $L=2000$ \AA, is opposite to that of the green line, which corresponds to the condition $E_{\perp}=0.26$ eV, $\mu_{n}=0.03$ eV, $\mathcal{A}=0.02$, $L=2000$ \AA. That implies the occurrence of the $0-\pi$ transition, with reversed current compared to the usual $0$-junction. Thus we find that changing the electric field or the intensity of the light can generate the $0-\pi$ transition, through the sublattice degree of freedom and the valley degree of freedom considered here, respectively. Thus, if the sublattice degree of freedom of the electric field is not considered, varying the electric field will not support the $0-\pi$ transition or the transition to the $\phi_{0}$-junction, which is consistent with the results in Refs.\cite{Zhou X2,Zhou X}.
In the absence of the electric field, the off-resonance light, and the antiferromagnetic exchange field, the junction is always a 0-junction due to the presence of time-reversal invariance (and chiral symmetry), and the Josephson supercurrent vanishes at $\phi=n\pi$ ($n$ an integer) in this case. However, broken symmetry or chirality leads to a nonzero supercurrent at $\phi=n\pi$ and to the $\phi_{0}$-junction, which can also be implemented by an external magnetic field in a nanowire quantum-dot-based junction (SQUID)\cite{Szombati D B}. Fig.5 shows the free energy versus the phase difference; we can see that changing the electric field or the length of the normal region is also effective for generating the $\phi_{0}$ transition, with a phase shift $\phi_{0}\in(0,\pi)$ (or $\in (\pi,2\pi)$) at the minimum of the free energy. The period of the free energy is $2\pi$, which is consistent with theoretical\cite{Radović Z} and experimental\cite{Baselmans J J A} results, and it is similar to the Josephson current, whose period is $2\pi$ in the thermodynamic equilibrium state\cite{Kwon H J} and becomes $4\pi$ in the nonequilibrium state due to the Majorana bound states\cite{Wu C H6}. Note that the free energy here is related to the width $W$ of the nanoribbon due to the finite-size effect. We find that the 0-$\pi$ transition and the $\phi_{0}$-junction can be realized by changing the length of the normal region or the electric field. The approximate Andreev level for short length $L$ is shown in Fig.6 according to Eq.(16), where the $0-\pi$ transition (see the black line and the gray line) and the $\phi_{0}$-junction can be implemented by changing the electric field, $\mathcal{A}$, and the length $L$. Phenomenologically, through Fig.6, Eq.(16) can be rewritten as $\varepsilon_{AR}^{*}=\Delta_{s}{\rm cos}\left[\frac{1}{2}(\phi+\phi_{0})\right]$ with $\phi_{0}\in[0,2\pi)$, where $\phi_{0}$ is controlled by the variables presented in the plot; this expression is similar to that of the anomalous switching current\cite{Szombati D B}. The usual current-phase relation can be deduced from the derivative of $\varepsilon_{AR}^{*}$ as $J=-\frac{2e}{\hbar}{\rm sin}\left[\frac{1}{2}(\phi+\phi_{0})\right]$ in the low-temperature limit. Note that these results (including the anomalous Josephson effects) are all based on first-order perturbation theory with the perturbation term $V$ which couples the left and right superconducting leads. The free energy of the approximate Andreev level in Fig.6 is presented in Fig.7, where we show the free energy over one period, $2\pi$. The free energy clearly exhibits the characteristic of the $\phi_{0}$-junction: the position of the minimum free energy (i.e., the maximum of $-E(\phi,T)$ in the plot) changes under different parameters. By comparing the black, purple, and yellow lines in Fig.7(a), we can clearly see the phase shift induced by varying the length of the normal region. For temperatures near (but larger than) the critical one $T_{c}$, as shown in Fig.7(b), the temperature does not affect the existence of the 0($\pi$)-junction or the time-reversal invariance, and thus the minimum free energy is always located at $\phi=3.5$. For temperatures lower than the critical one $T_{c}$, as shown in Fig.7(c), the minimum free energy is located at $\phi=5.24$ for temperatures below 0.6$T_{c}$, while it is located at $\phi=5.48$ for temperatures at or above 0.6$T_{c}$, as shown in Fig.7(b). 
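To make the phenomenological form above concrete, the following short Python sketch (an illustration only; $\Delta_{s}$ and $\phi_{0}$ are treated as given inputs with hypothetical values rather than computed from the microscopic model) evaluates the short-junction level $\varepsilon_{AR}^{*}=\Delta_{s}\cos\left[\frac{1}{2}(\phi+\phi_{0})\right]$ and the current-phase relation obtained from its phase derivative; a nonzero $\phi_{0}$ shifts the zeros of the current away from $\phi=n\pi$, which is the signature of the $\phi_{0}$-junction.
\begin{verbatim}
import numpy as np

# Hypothetical inputs: Delta_s in units of the gap; phi0 is set by
# E_perp, the light intensity A and the length L (not computed here).
Delta_s = 1.0
phi0 = 0.6 * np.pi

phi = np.linspace(0.0, 4.0 * np.pi, 2001)
eps = Delta_s * np.cos(0.5 * (phi + phi0))       # short-junction level
J = -0.5 * Delta_s * np.sin(0.5 * (phi + phi0))  # d(eps)/d(phi), in
                                                 # units of 2e/hbar

# Zeros of J sit at phi = -phi0 + 2*n*pi, shifted away from n*pi:
zeros = phi[1:][np.sign(J[1:]) != np.sign(J[:-1])]
print(zeros / np.pi)                  # approx [1.4, 3.4] for phi0 = 0.6*pi
print(phi[np.argmax(eps)] / np.pi)    # level maximum also shifted by phi0
\end{verbatim}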
In contrast to the Rashba-coupling-induced helical $p$-wave spin-triplet superconductor, the Josephson current of a conventional $s$-wave superconductor saturates at low temperature (e.g., at $T<0.6T_{c}$)\cite{Asano Y}, and the free energy does not change gradually with varying temperature. \section{Conclusion} At the interface between the normal nanoribbon (strip) region and the superconducting region, the AR is related to the width $W$ (the number of edge states along the armchair direction) of the ribbon: for even $W$, the AR is suppressed by the opposite pseudospin degrees of freedom of the first and last sublattices in the $k_{y}$ direction, while for odd $W$ the AR is allowed. In this strip scenario, the mode spacing between the bands above the Dirac level depends on $W$ and the NN hopping when the finite-size effect is taken into account; the mode spacing is $\delta\Delta=3\pi t/(3W-2)$ for even $W$ and $\delta\Delta=\pi t/(W-1)$ for odd $W$. The band structure of the zigzag silicene nanoribbon in the strip geometry is shown in Fig.8, where the mode spacing $\delta\Delta$ is indicated in the inset. The finite-size effect here is important over a large energy range, but it can be ignored in the low-energy limit, as in the tight-binding model described in Sec.1. In this paper we investigated the Josephson effect in a superconductor-normal-superconductor junction based on doped silicene, with a dc Josephson current at zero voltage. We found that the dynamical polarizations of the degrees of freedom mentioned above can induce the $0-\pi$ transition and the emergence of the $\phi_{0}$-junction. For example, a change of the electric field or of the off-resonance circularly polarized light may induce the $0-\pi$ transition through the pseudospin degree of freedom or the valley degree of freedom, respectively; the interaction between the antiferromagnetic exchange field and the SOC may induce the $0-\pi$ transition through the valley polarization\cite{Zhou X2}; and the interaction between the internal exchange field and the Josephson superconducting current may induce the $\phi_{0}$-junction when the normal region is noncentrosymmetric\cite{Buzdin A}. \end{large}
\section{Introduction} In this article, we study the following form of the Langevin dynamics equations (``kinetic Langevin dynamics''): \begin{equation}\label{eq:underdamped_langevin} \begin{split} dX_{t} &= V_{t}dt,\\ dV_{t} &= -\nabla U(X_t) dt - \gamma V_{t}dt + \sqrt{2\gamma}dW_t, \end{split} \end{equation} where $U$ is the potential energy, $\gamma > 0$ is a friction parameter, and $W_{t}$ is $d$-dimensional standard Brownian motion. It can be shown under mild conditions that this process has an invariant measure with density proportional to $\exp(-U(X)-||V||^2/2)$ \cite{pavliotis2014stochastic}. Normally, Langevin dynamics is developed in the physical setting with additional parameters representing temperature and mass. However, our primary aim in using (\ref{eq:underdamped_langevin}) is, ultimately, the computation of statistical averages involving only the position $X$, and in such situations both parameters can be neglected without any loss of generality, or alternatively incorporated into our results through suitable rescalings of time and potential energy. In this article we focus on the properties of (\ref{eq:underdamped_langevin}) in relation to numerical discretization and variation of the friction coefficient. Taking the limit as $\gamma \to \infty$ in (\ref{eq:underdamped_langevin}), and introducing a suitable time-rescaling ($t' =\gamma t$), results in the overdamped Langevin dynamics given by (see \cite{pavliotis2014stochastic}[Sec 6.5]) \begin{equation}\label{eq:overdamped} dX_{t} = -\nabla U(X_t) dt + \sqrt{2}dW_t. \end{equation} This equation has, again, a unique invariant measure with density proportional to $\exp(-U(x))$. Under the assumption of a Poincar\'e inequality, convergence rate guarantees can be established for the continuous dynamics \cite{bakry2014analysis}. In the case of kinetic dynamics a more delicate argument is needed to establish exponential convergence, due to the hypoelliptic nature of the SDE (see \cite{cao2019explicit,villani2006hypocoercivity,dolbeault2015hypocoercivity,dolbeault2009hypocoercivity,bakry2006diffusions,baudoin2016wasserstein,baudoin2017bakry}). Langevin dynamics, in its kinetic and overdamped forms, is the basis of many widely used sampling algorithms in machine learning and statistics \cite{cheng2018underdamped,welling2011bayesian,vollmer2016exploration}. In sampling, Langevin dynamics is discretized and the individual timesteps generated by integration are viewed as approximate draws from the target distribution; however, there is an inherent bias due to the finite-difference numerical approximation. This bias is usually addressed by choosing a sufficiently small stepsize, or by correcting the bias using methods such as Metropolis-Hastings adjustment. The choice of the discretization method has a significant effect on the quality of the samples and also on the computational cost of producing accurate samples, through stability properties, convergence rates and asymptotic bias. Overdamped Langevin dynamics has been heavily studied both in the continuous and the discretized settings, with popular integrators being the Euler-Maruyama method and the limit method of the BAOAB scheme \cite{leimkuhler2013rational}. The kinetic Langevin system has been extensively studied in the continuous case, but there are still many open questions around the design of the numerical integrator. 
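As a point of reference for what follows, a minimal Python sketch (an illustration only, for the one-dimensional standard Gaussian $U(x)=x^{2}/2$; the naive Euler-type update used here is the EM scheme analysed below) simulates (\ref{eq:underdamped_langevin}) and checks the moments of the $X$-marginal of the invariant measure.
\begin{verbatim}
import numpy as np

# U(x) = x^2/2, so gradU(x) = x and the X-marginal of
# exp(-U(X) - ||V||^2/2) is the standard normal distribution.
rng = np.random.default_rng(0)
gamma, h, nsteps = 1.0, 0.01, 200000
x, v = 0.0, 0.0
xs = np.empty(nsteps)
for k in range(nsteps):
    x += h * v
    v += -h * x - gamma * h * v \
         + np.sqrt(2 * gamma * h) * rng.standard_normal()
    xs[k] = x

print(xs.mean(), xs.var())   # approximately 0 and 1, up to O(h) bias
\end{verbatim}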
A metric that is typically used to quantify the performance of a sampling scheme is the number of steps required to reach a certain level of accuracy in Wasserstein distance. Non-asymptotic bounds in Wasserstein distance reflect computational complexity, convergence rate and accuracy. Achieving such bounds relies on two steps: (1) determining explicit convergence rates of the process to its invariant measure and (2) proving non-asymptotic bias estimates for the invariant measure. The focus of the current article is the convergence of the time-discrete system to its invariant measure. The approach that we use to obtain convergence rates is based on proving contraction for a synchronous coupling, as in \cite{monmarche2021high,dalalyan2020sampling}. Proving contraction of a coupling has been a popular method for establishing convergence both in the continuous time setting and for discretizations of Langevin dynamics and Hamiltonian Monte Carlo (\cite{eberle2019couplings,bou2020coupling,deligiannidis2021randomized,bou2023mixing,bou2022couplings,riou2022metropolis,schuh2022global}), since a consequence of such a contraction is convergence in Wasserstein distance (viewed as the infimum over all possible couplings with respect to some norm). Synchronous coupling has been a popular means of achieving explicit convergence rates for discretizations \cite{monmarche2020almost,monmarche2022hmc} due to its simplicity. There has been other recent work aimed at providing convergence rates for kinetic Langevin dynamics under explicit restrictions on the parameters (\cite{cheng2018underdamped,dalalyan2020sampling,monmarche2021high,monmarche2022hmc}), but these guarantees are valid only under sharp restrictions on the stepsize. There has also been the work of \cite{sanz2021wasserstein}, which considers a slightly different version of the SDE (\ref{eq:underdamped_langevin}), where time is rescaled depending on $M$ and $m$ to optimize contraction rates and bias. We have included their results in Table \ref{Table:results} after converting them into our framework using \cite{dalalyan2020sampling}[Lemma 1]. The results of \cite{sanz2021wasserstein} rely on a stepsize restriction of $\mathcal{O}(1/\gamma)$, but their analysis does not provide the stepsize threshold \cite{sanz2021wasserstein}[Example 9], and the class of schemes considered is different, with only the stochastic Euler scheme in common. Other works on contraction of kinetic Langevin dynamics and its discretization include \cite{foster2021shifted, dalalyan2022bounding}. In the current article, we apply direct convergence analysis to various popular integration methods, and provide a general framework for establishing convergence rates of kinetic Langevin dynamics with tight explicit stepsize restrictions of $\mathcal{O}\left(1/\gamma\right)$ or $\mathcal{O}(1/\sqrt{M})$ (depending on the scheme). As a consequence we improve the contraction rates significantly for many of the available algorithms (see Table \ref{Table:results}). For a specific class of schemes, we establish explicit bounds on the convergence rate for stepsizes of $\mathcal{O}(1/\sqrt{M})$. In the limit of large friction, we distinguish two types of integrators -- those that converge to overdamped dynamics (``$\gamma$-limit-convergent'') and those that do not. We demonstrate with examples that this property is not universal: some seemingly reasonable methods have the property that the convergence rate falls to zero in the $\gamma\rightarrow \infty$ limit. 
This is verified numerically and analytically for an anisotropic Gaussian target. The remainder of this article is structured as follows. We first introduce overdamped Langevin dynamics, together with the Euler-Maruyama (EM) scheme and the high friction limit of BAOAB (LM), and discuss their convergence guarantees. Next, we introduce kinetic Langevin dynamics, describe various popular discretizations, and give our results on convergence guarantees with mild stepsize assumptions. These schemes include first and second order splittings and the stochastic Euler scheme (SES). Further, we compare the results for overdamped and kinetic Langevin dynamics and show how schemes like BAOAB and OBABO, through the GLC property, exhibit the positive qualities of both cases, whereas schemes like EM and SES do not perform well for a large range of $\gamma$. \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline Algorithm & stepsize restriction & optimal one-step contraction rate \\ \hline EM & $\mathcal{O}(1/\gamma)$ & $\mathcal{O}(m/M)$\\ BAO, OBA, AOB &$\mathcal{O}(1/\sqrt{M})$ & $\mathcal{O}(m/M)$\\ OAB, ABO, BOA &$\mathcal{O}(1/\gamma)$ & $\mathcal{O}(m/M)$\\ BAOAB &$\mathcal{O}(1/\sqrt{M})$ & $\mathcal{O}(m/M)$ \\ OBABO &$\mathcal{O}(1/\sqrt{M})$ & $\mathcal{O}(m/M)$ \\ SES &$\mathcal{O}(1/\gamma)$ & $\mathcal{O}(m/M)$\\ \hline \end{tabular} \vspace{5mm} \begin{tabular}{ |c|c|c| } \hline Algorithm & previous stepsize restriction & previous explicit best rate\\ \hline OBABO &$\mathcal{O}(m/\gamma^{3})$ & $\mathcal{O}(m^{2}/M^{2})$ \cite{monmarche2021high} \\ SES &$\mathcal{O}(1/\gamma)$ & $\mathcal{O}(m/M)$ \cite{sanz2021wasserstein}\\ \hline \end{tabular} \end{center} \caption{The first table provides our stepsize restrictions and optimal contraction rates for the discretized kinetic Langevin dynamics. The second provides the previous best results. Furthermore, there are no previous results regarding the EM scheme, the first order splittings, or BAOAB, to the best of our knowledge.} \label{Table:results} \end{table} \section{Assumptions and definitions} \subsection{Assumptions on $U$} \label{sec:assumptions} We will make the following assumptions on the target measure $\exp{\left(-U\right)}$ to obtain convergence rates. We assume that the potential is $M$-smooth and $m$-convex: \begin{assumption}[$M$-$\nabla$Lipschitz] There exists an $M > 0$ such that for all $X, Y \in \mathbb{R}^{d}$ \[ \left|\nabla U\left(X\right) - \nabla U\left(Y\right)\right| \leq M \left|X-Y\right|. \] \end{assumption} \begin{assumption}[$m$-convexity] There exists an $m > 0$ such that for all $X,Y \in \mathbb{R}^{d}$ \[ \left\langle \nabla U(X) - \nabla U(Y),X-Y \right\rangle \geq m \left|X-Y\right|^{2}. \] \end{assumption} These two assumptions are standard conditions used to obtain explicit convergence rates; see \cite{dalalyan2017theoretical,dalalyan2020sampling} for example. It is worth mentioning that these assumptions also yield explicit convergence rates for gradient descent \cite{boyd2004convex}. \subsection{Modified Euclidean Norms}\label{Sec:Quadratic_Norm} For kinetic Langevin dynamics it is not possible to prove contraction with respect to the standard Euclidean norm, due to the fact that the generator is hypoelliptic. We therefore work with a modified Euclidean norm as in \cite{monmarche2021high}. 
For $z = (x,v) \in \mathbb{R}^{2d}$ we introduce the weighted Euclidean norm \[ \left|\left| z \right|\right|^{2}_{a,b} = \left|\left| x \right|\right|^{2} + 2b \left\langle x,v \right\rangle + a \left|\left| v \right|\right|^{2}, \] for $a,b > 0$; this is an equivalent norm as long as $b^{2}<a$. More precisely, when $2b \leq \sqrt{a}$ (which holds for all the choices made below) we have \[ \frac{1}{2}||z||^{2}_{a,0} \leq ||z||^{2}_{a,b} \leq \frac{3}{2}||z||^{2}_{a,0}.\] \subsection{Wasserstein Distance} \label{sec:wasserstein_def} We define $\mathcal{P}_{p}\left(\mathbb{R}^{2d}\right)$ to be the set of probability measures with finite $p$-th moment; for $p \in \left[1,\infty\right)$ we define the $p$-Wasserstein distance on this space. Let $\mu$ and $\nu$ be two probability measures. We define the $p$-Wasserstein distance between $\mu$ and $\nu$ with respect to the norm $||\cdot||_{a,b}$ (introduced in Sec. \ref{Sec:Quadratic_Norm}) to be \[\mathcal{W}_{p,a,b}\left(\nu,\mu\right) = \left( \inf_{\xi \in \Gamma\left( \nu, \mu \right)}\int_{\mathbb{R}^{2d}}||z_{1} - z_{2}||^{p}_{a,b}d\xi\left(z_{1},z_{2}\right)\right)^{1/p},\] where $\Gamma\left(\mu,\nu\right)$ is the set of measures with marginals $\mu$ and $\nu$ (the set of all couplings between $\mu$ and $\nu$). It is well known that the existence of couplings with a contractive property implies convergence in Wasserstein distance (which can be interpreted as the infimum over all such couplings). The simplest such coupling is to consider simulations driven by common noise; this is known as synchronous coupling. Therefore, if one can show contraction, with an explicit rate, of two simulations which share noise increments, then one has convergence in Wasserstein distance at the same rate. With the constants and conditions derived for the contraction of each scheme, we obtain convergence in Wasserstein distance by the following proposition: \begin{proposition}\label{prop:Wasserstein} Assume a numerical scheme for kinetic Langevin dynamics with an $m$-strongly convex, $M$-$\nabla$Lipschitz potential $U$ and transition kernel $P_{h}$. If any two synchronously coupled chains $\left(x_{n},v_{n}\right)$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n}\right)$ of the numerical scheme have the contraction property \begin{equation}\label{eq:contraction_inequality} ||(x_{n} - \Tilde{x}_{n},v_{n} - \Tilde{v}_{n})||^{2}_{a,b} \leq C(1 - c\left(h\right))^{n}||(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0})||^{2}_{a,b}, \end{equation} for $\gamma^{2} \geq C_{\gamma}M$ and $h \leq C_{h}\left(\gamma,\sqrt{M}\right)$, for some $a,b >0$ such that $b^{2} < a$, then we have that for all $\gamma^{2} \geq C_{\gamma}M$, $h \leq C_{h}\left(\gamma,\sqrt{M}\right)$, $1 \leq p \leq \infty$ and all $\mu,\nu \in \mathcal{P}_{p}(\mathbb{R}^{2d})$, and all $n \in \mathbb{N}$, \[ \mathcal{W}^{2}_{p}\left(\nu P^n_{h} ,\mu P^n_{h} \right) \leq 3C\max{\left\{a,\frac{1}{a}\right\}}\left(1 - c\left(h\right)\right)^{n} \mathcal{W}^{2}_{p}\left(\nu,\mu\right). \] Further to this, $P_{h}$ has a unique invariant measure $\pi_{h}$, which depends on the stepsize, where $\pi_{h} \in \mathcal{P}_{p}(\mathbb{R}^{2d})$ for all $1 \leq p \leq \infty$. \end{proposition} \begin{proof} The proof is given in \cite{monmarche2021high}[Corollary 20], which relies on \cite{villani2009optimal}[Corollary 5.22, Theorem 6.18]. \end{proof} The focus of this article is to prove contractions of the form (\ref{eq:contraction_inequality}), and hence to achieve Wasserstein convergence rates by Prop. \ref{prop:Wasserstein}. 
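For intuition, the norm equivalence above is easy to check numerically; the following Python sketch (an illustration only, with an arbitrary choice of $a$ and the boundary value $2b = \sqrt{a}$) evaluates $||z||^{2}_{a,b}$ on random vectors and confirms the two-sided bounds.
\begin{verbatim}
import numpy as np

def norm_ab_sq(x, v, a, b):
    # ||z||_{a,b}^2 = ||x||^2 + 2b<x,v> + a||v||^2 for z = (x, v)
    return x @ x + 2 * b * (x @ v) + a * (v @ v)

rng = np.random.default_rng(1)
a = 0.25
b = 0.5 * np.sqrt(a)          # boundary case 2b = sqrt(a)
for _ in range(1000):
    x, v = rng.standard_normal(3), rng.standard_normal(3)
    ratio = norm_ab_sq(x, v, a, b) / norm_ab_sq(x, v, a, 0.0)
    assert 0.5 <= ratio <= 1.5
\end{verbatim}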
With convergence to the invariant measure established for the discretizations of kinetic Langevin dynamics considered here, it is possible to combine our results with estimates of the bias of each scheme, as in \cite{dalalyan2020sampling}, \cite{monmarche2021high}, \cite{sanz2021wasserstein} and \cite{cheng2018underdamped}, to obtain non-asymptotic estimates. \section{Overdamped Langevin discretizations and contraction} We first consider two discretizations of the SDE (\ref{eq:overdamped}), namely the Euler-Maruyama discretization and the high friction limit of the popular kinetic Langevin dynamics scheme BAOAB \cite{leimkuhler2013rational}. The simplest discretization of overdamped Langevin dynamics is the Euler-Maruyama (EM) method, which is defined by the update rule \begin{equation} X_{n+1} = X_{n} - h\nabla U\left(X_{n}\right) + \sqrt{2h}\xi_{n+1}. \end{equation} This scheme is combined with Metropolization in the popular MALA algorithm. An alternative method is the BAOAB limit method of Leimkuhler and Matthews (LM) (\cite{leimkuhler2013rational}, \cite{leimkuhler2014long}), which is defined by the update rule \[ X_{n+1} = X_{n} - h\nabla U\left(X_{n}\right) + \sqrt{2h}\frac{\xi_{n+1} + \xi_{n}}{2}. \] The advantage of this method is that it gains a weak order of accuracy asymptotically. \subsection{Convergence guarantees} \label{sec:conv_overdamped} The convergence guarantees of overdamped Langevin dynamics and its discretizations have been extensively studied under the assumptions presented (see \cite{dalalyan2017theoretical,durmus2017nonasymptotic,cheng2018convergence,dalalyan2017further,durmus2019high, durmus2019analysis, dwivedi2018log}). We use synchronous coupling as a proof strategy to obtain convergence rates, as in \cite{dalalyan2017theoretical}. We first consider two chains $x_{n}$ and $y_{n}$ with shared noise such that \begin{align*} x_{n+1} = x_{n} - h\nabla U(x_{n}) + \sqrt{2h}\xi_{n+1}, \quad y_{n+1} = y_{n} - h\nabla U(y_{n}) + \sqrt{2h}\xi_{n+1}. \end{align*} Then we have that \begin{align*} &||x_{n+1} - y_{n+1}||^2 = ||x_{n} - y_{n} - h\left(\nabla U(x_{n}) - \nabla U(y_{n})\right)||^{2} \\ &= ||x_{n} - y_{n}||^{2} - 2h \langle \nabla U(x_{n}) - \nabla U(y_{n}) , x_{n} - y_{n}\rangle + h^{2}||\nabla U(x_{n}) - \nabla U(y_{n})||^{2}\\ &= ||x_{n} - y_{n}||^{2} - 2h \langle x_{n} - y_{n}, Q(x_{n} - y_{n})\rangle + h^{2}\langle x_{n} - y_{n}, Q^{2} (x_{n} - y_{n}) \rangle, \end{align*} where $Q = \int^{1}_{t = 0}\nabla^{2}U(x_{n} + t(y_{n} - x_{n}))dt$. $Q$ has eigenvalues which are bounded between $m$ and $M$, so $Q^{2} \preceq MQ$, and hence \[ h^{2}\langle x_{n} - y_{n}, Q^{2} (x_{n} - y_{n}) \rangle \leq h^{2}M \langle x_{n} - y_{n}, Q(x_{n} - y_{n}) \rangle. \] Therefore \begin{align*} ||x_{n+1} - y_{n+1}||^2 &\leq ||x_{n} - y_{n}||^{2} - h(2 - hM)\langle x_{n} - y_{n},Q(x_{n} - y_{n}) \rangle\\ &\leq ||x_{n} - y_{n}||^{2}(1 - hm(2-hM)), \end{align*} assuming that $h < \frac{2}{M}$. We have a contraction and \[ ||x_{n} - y_{n}|| \leq (1 - hm(2 - hM))^{n/2}||x_{0} - y_{0}||. \] A consequence of this contraction result is that we have convergence in Wasserstein distance to the invariant measure with rate $hm\left(2 - hM\right)$, under the imposed assumptions on $h$ (as discussed in Sec. \ref{sec:wasserstein_def}) \cite{monmarche2021high,villani2009optimal}. Note that this argument is exactly the same for the LM discretization of overdamped Langevin dynamics, as all the noise components are shared. 
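This calculation is easily reproduced numerically; the following Python sketch (an illustration only, for $U(x) = x^{2}/2$, so that $m = M = 1$) runs two synchronously coupled EM chains and compares their distance to the bound above, which is attained for quadratic $U$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
m_ = M_ = 1.0
h, n = 0.5, 20                      # h < 2/M
gradU = lambda x: x
x, y = np.array([5.0]), np.array([-5.0])
d0 = np.linalg.norm(x - y)
for k in range(n):
    xi = rng.standard_normal(1)     # shared noise increment
    x = x - h * gradU(x) + np.sqrt(2 * h) * xi
    y = y - h * gradU(y) + np.sqrt(2 * h) * xi
bound = (1 - h * m_ * (2 - h * M_)) ** (n / 2) * d0
print(np.linalg.norm(x - y), bound)   # both approximately 9.5e-6
\end{verbatim}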
The stepsize assumption for convergence of overdamped Langevin dynamics in this setting is weak, and is the same assumption as is needed to guarantee convergence of gradient descent in optimisation \cite{boyd2004convex}[Eq. (9.18)]. \section{Kinetic Langevin Dynamics} We now consider many discretizations of the SDE (\ref{eq:underdamped_langevin}) using a framework established in Sec. \ref{sec:strategy}, where we construct an alternative Euclidean norm in which we can prove contraction (it is not possible to prove contraction in the standard Euclidean norm). Essentially, we convert the problem of proving contraction into the problem of showing that certain matrices are positive definite. \subsection{Proof Strategy}\label{sec:strategy} We will consider a modified Euclidean norm as defined in Sec. \ref{Sec:Quadratic_Norm} for some choice of $a$ and $b$. Our aim is to construct an equivalent Euclidean norm such that contraction occurs for two Markov chains $z_{n} = (x_{n},v_{n}) \in \mathbb{R}^{2d}$ and $\Tilde{z}_{n} = (\Tilde{x}_{n},\Tilde{v}_{n}) \in \mathbb{R}^{2d}$, simulated by the same discretization and synchronously coupled. That is, for some choice of $a$ and $b$ such that $a,b >0$ and $b^{2} < a$, \begin{equation}\label{eq:cont_1} ||\Tilde{z}_{k+1} - z_{k+1}||^{2}_{a,b} < \left(1 - c\left(h\right)\right)||\Tilde{z}_{k} - z_{k}||^{2}_{a,b}, \end{equation} where $a$ and $b$ are chosen to provide reasonable explicit assumptions on the stepsize $h$ and friction parameter $\gamma$. Our initial choices of $a$ and $b$ for simple schemes are motivated by \cite{monmarche2021high}, and are derived by considering contraction of the continuous dynamics. Let $\overline{z}_{j} = \Tilde{z}_{j} - z_{j}$ for $j \in \mathbb{N}$; then (\ref{eq:cont_1}) is equivalent to showing that \begin{equation}\label{eq:contraction_matrix_form} \overline{z}^{T}_{k}\left(\left(1 - c\left(h\right)\right)M- P^{T}MP\right )\overline{z}_{k} > 0, \quad \textnormal{where} \quad M = \begin{pmatrix} 1 & b \\ b & a \end{pmatrix}, \end{equation} and $\overline{z}_{k+1} = P\overline{z}_{k}$ ($P$ depends on $z_{k}$ and $\Tilde{z}_{k}$, but we omit this in the notation). \begin{example} As an example, for the Euler-Maruyama scheme we have the update rule for $\overline{z}_{k}$ \begin{align*} \overline{x}_{k+1} = \overline{x}_{k} + h \overline{v}_{k}, \qquad \overline{v}_{k+1} = \overline{v}_{k} - \gamma h \overline{v}_{k} -hQ\overline{x}_{k}, \end{align*} where, by the mean value theorem, we can define $Q = \int^{1}_{t = 0}\nabla^{2}U(\Tilde{x}_{k} + t(x_{k} - \Tilde{x}_{k}))dt$, so that $\nabla U(\Tilde{x}_{k}) - \nabla U(x_{k}) = Q\overline{x}_{k}$. One can show that in the notation of equation \eqref{eq:contraction_matrix_form} we have \begin{equation}\label{eq:P_matrix} P = \begin{pmatrix} I & hI\\ -hQ & \left(1- \gamma h\right)I \end{pmatrix}. \end{equation} \end{example} Proving contraction for a general scheme is equivalent to showing that the matrix $\mathcal{H} := \left(1 - c(h)\right)M - P^{T}MP$ is positive definite. The matrix $\mathcal{H}$ is symmetric and hence of the form \begin{equation}\label{eq:contraction_matrix} \mathcal{H} = \begin{pmatrix} A & B \\ B & C \end{pmatrix}; \end{equation} we can show that $\mathcal{H}$ is positive definite by applying the following Prop. \ref{Prop:PD}. \begin{proposition} \label{Prop:PD} Let $\mathcal{H}$ be a symmetric matrix of the form (\ref{eq:contraction_matrix}). Then $\mathcal{H}$ is positive definite if and only if $A \succ 0$ and $C - BA^{-1}B \succ 0$. 
Further, if $A$, $B$ and $C$ commute, then $\mathcal{H}$ is positive definite if and only if $A\succ 0$ and $AC - B^{2} \succ 0$. \end{proposition} \begin{proof} The proof of the first result is given in \cite{horn2005basic}. To establish the second statement, observe from \cite{horn2012matrix} that if two matrices are positive definite and they commute, then their product is positive definite. Also, if $A \succ 0$ then $A^{-1} \succ 0$ (as $A$ is symmetric positive definite). Further, $A, B$ and $C$ commute, and hence $B$, $C$ and $A^{-1}$ commute. Therefore, by applying the first result we have that $A \succ 0$ and \[A^{-1}\left(AC - B^{2}\right) = C - BA^{-1}B \succ 0,\] hence $\mathcal{H}$ is positive definite. Conversely, if $\mathcal{H}$ is positive definite then $A \succ 0$ and $C - BA^{-1}B \succ 0$ by the first result, and since $A$, $B$ and $C$ commute we have $AC - B^2 \succ 0$. \end{proof} \begin{remark} An equivalent condition for a symmetric matrix $\mathcal{H}$ of the form (\ref{eq:contraction_matrix}) to be positive definite is $C\succ 0$ and $AC - B^{2} \succ 0$ when $A$, $B$ and $C$ commute. One could equivalently prove that $C \succ 0$ instead of $A \succ 0$ if it is more convenient. \end{remark} Our general approach to proving contraction for the popular kinetic Langevin dynamics schemes below is to verify that the conditions of Prop. \ref{Prop:PD} are satisfied. We will use the notation laid out in this section in the proofs given in the appendix. \subsection{Euler-Maruyama discretization} We define the EM chain with initial condition $(x_0,v_0)$ by $(x_n,v_n,\xi_n)$, where the $(\xi_n)_{n\in\mathbb{N}}$ are independent standard Gaussian random vectors and $(x_{n},v_n)$ are updated according to: \begin{eqnarray} x_{k+1} & = & x_k + hv_k,\\ v_{k+1} & = & v_k - h\nabla U(x_k) - h\gamma v_k + \sqrt{2\gamma h}\xi_{k+1}. \end{eqnarray} \begin{theorem} \label{Theorem:EM} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $\gamma^{2} \geq 4M$ and $h < \frac{1}{2\gamma}$, we have that, for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the corresponding EM chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k})||_{a,b} \leq \left(1 - c\left(h\right)\right)^{\frac{k}{2}}||(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0})||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{1}{\gamma}$ and \[ c\left(h\right) = \frac{mh}{2\gamma}. \] \end{theorem}
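The positive-definiteness condition behind Theorem \ref{Theorem:EM} can be checked numerically in a few lines. The following Python sketch (an illustration only) assembles $P$ from (\ref{eq:P_matrix}) and $M_{a,b}$ for an anisotropic Gaussian potential (also used in the example below), and verifies that $\mathcal{H} = (1-c(h))M_{a,b} - P^{T}M_{a,b}P$ is positive semidefinite with $a = 1/M$, $b = 1/\gamma$ and $c(h) = mh/(2\gamma)$.
\begin{verbatim}
import numpy as np

m_, M_ = 1.0, 10.0
gamma = 2 * np.sqrt(4 * M_)      # gamma^2 >= 4M
h = 0.9 / (2 * gamma)            # h < 1/(2 gamma)
a, b, c = 1 / M_, 1 / gamma, m_ * h / (2 * gamma)

Q = np.diag([m_, M_])            # Hessian of U(x) = (m x_1^2 + M x_2^2)/2
I = np.eye(2)
P = np.block([[I, h * I], [-h * Q, (1 - gamma * h) * I]])
Mab = np.block([[I, b * I], [b * I, a * I]])
H = (1 - c) * Mab - P.T @ Mab @ P
print(np.linalg.eigvalsh(H).min() >= 0)   # True: the weighted norm contracts
\end{verbatim}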
\begin{example} \textit{An example to illustrate the tightness of the restrictions on the stepsize $h$ and the restriction on the friction parameter $\gamma$.} We consider the anisotropic Gaussian distribution on $\mathbb{R}^{2}$ with potential $U: \mathbb{R}^{2} \mapsto \mathbb{R}$ given by $U(x,y) = \frac{1}{2}mx^{2} + \frac{1}{2}My^{2}$. This potential satisfies the assumptions of Sec. \ref{sec:assumptions} with constants $M$ and $m$. By computing the eigenvalues of the transition matrix $P$ (for contraction) we can see for which values of $h$ contraction occurs. For EM we have that \[ P = \begin{pmatrix} I & hI\\ -hQ & \left(1- \gamma h\right)I \end{pmatrix}\text{, where }Q = \begin{pmatrix} m & 0 \\ 0 & M \end{pmatrix}, \] with eigenvalues \[\frac{1}{2}\left(2 - \gamma h \pm h \sqrt{\gamma^{2} - 4\lambda}\right),\] for $\lambda = m,M$. For stability and contraction we require that \[ \frac{1}{2}\left(2 - \gamma h - h \sqrt{\gamma^{2} - 4m} \right) > 0, \quad \text{and} \quad \frac{1}{2}\left(2 - \gamma h + h \sqrt{\gamma^{2} - 4m} \right) < 1. \] The second condition is equivalent to $\gamma > \sqrt{\gamma^{2} - 4m}$, which trivially holds, and the first condition is equivalent to $h \leq 2/(\gamma + \sqrt{\gamma^{2} - 4m}) \approx 1/\gamma$. \end{example} \section{First order splittings} \label{sec:splittings} A common discretization strategy for kinetic Langevin dynamics is based on splitting up the dynamics into parts which can be integrated exactly, in the weak sense. An increasingly popular splitting choice used in molecular dynamics modelling is to divide the SDE into deterministic parts, corresponding to linear positional drift and an impulse due to the force, and a dissipative-stochastic term corresponding to an Ornstein-Uhlenbeck equation \cite{PhysRevE.75.056707}. These parts are denoted by $\mathcal{B}$, $\mathcal{A}$ and $\mathcal{O}$, with update rules given by \begin{equation}\label{eq:BAO} \begin{split} &\mathcal{B}: v \to v - h\nabla U(x), \\ &\mathcal{A}: x \to x + hv,\\ &\mathcal{O}: v \to \eta v + \sqrt{1 - \eta^{2}}\xi, \end{split} \end{equation} where \[ \eta := \exp{\left(-\gamma h \right)}. \] The reasoning for such a splitting is based on the fact that the infinitesimal generator of the SDE (\ref{eq:underdamped_langevin}) can be split as $\mathcal{L} = \mathcal{L}_{\mathcal{A}} + \mathcal{L}_{\mathcal{B}} + \gamma\mathcal{L}_{\mathcal{O}}$, where \[ \mathcal{L}_{\mathcal{A}} = \left\langle v, \nabla_{x}\right\rangle, \qquad \mathcal{L}_{\mathcal{B}} = -\left\langle\nabla U\left(x\right), \nabla_{v}\right\rangle, \qquad \mathcal{L}_{\mathcal{O}} = -\left \langle v, \nabla_{v} \right\rangle + \Delta_{v}. \] The dynamics associated to $\mathcal{L}_{\mathcal{A}}$ and $\mathcal{L}_\mathcal{B}$ are the deterministic dynamics corresponding to $\mathcal{A}$ and $\mathcal{B}$. The dynamics associated to $\gamma\mathcal{L}_{\mathcal{O}}$ is the Ornstein-Uhlenbeck process, which can be solved exactly, in the sense of distributions; this corresponds to the $\mathcal{O}$ step. We use the convention that one applies the operators left to right: the BAO method would first apply $\mathcal{B}$, then $\mathcal{A}$ and lastly $\mathcal{O}$. For more details on these splittings we refer the reader to \cite{leimkuhler2016computation}. We will now consider contraction for all first order splittings (permutations of the $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{O}$ pieces), which are schemes of weak order $1$. We first consider BAO, where we define a BAO chain with initial condition $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ by $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$, using the update $\mathcal{BAO}$ (\ref{eq:BAO}), where $\left(\xi_{n}\right)_{n \in \mathbb{N}}$ are vectors of independent standard normal random variables. 
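The one-step maps of these splittings are straightforward to implement; the following Python sketch (an illustration only) composes the $\mathcal{B}$, $\mathcal{A}$ and $\mathcal{O}$ updates of (\ref{eq:BAO}) left to right for any first order splitting string (for the second order schemes considered later, the repeated letters take stepsize $h/2$).
\begin{verbatim}
import numpy as np

def make_step(splitting, gradU, h, gamma, rng):
    # One step of a first order splitting; letters applied left to right.
    eta = np.exp(-gamma * h)
    def B(x, v): return x, v - h * gradU(x)
    def A(x, v): return x + h * v, v
    def O(x, v): return x, eta * v + np.sqrt(1 - eta**2) * \
        rng.standard_normal(v.shape)
    maps = {"B": B, "A": A, "O": O}
    def step(x, v):
        for letter in splitting:
            x, v = maps[letter](x, v)
        return x, v
    return step

rng = np.random.default_rng(3)
step = make_step("BAO", lambda x: x, h=0.1, gamma=2.0, rng=rng)
x, v = step(np.zeros(2), np.zeros(2))   # one BAO step for U = ||x||^2/2
\end{verbatim}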
\begin{theorem}[BAO] \label{Theorem:BAO} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $h < \frac{1 - \eta}{\sqrt{6M}}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the BAO chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k})||_{a,b} \leq \left(1 - c\left(h\right)\right)^{\frac{k}{2}}||(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0})||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{h}{(1-\eta)}$ and \[ c\left(h\right) = \frac{h^{2}m}{4\left(1 - \eta\right)}. \] \end{theorem} \begin{remark} The modified Euclidean norm has now been chosen to be stepsize dependent; this is needed to eliminate the corresponding dependence of the stepsize restriction on the strong convexity constant $m$. We note that simply choosing $b = 1/\gamma$ does not result in a norm which guarantees a stepsize restriction independent of $m$, as is clear from the motivation of the construction of our choice of $b$: when $b \neq h/(1-\eta)$ one can always choose $m$ small enough such that $AC - B^{2}$ is not positive definite. We also point out that the stepsize restriction implicitly requires that $\gamma^{2}$ be larger than a constant factor multiplied by $M$. Further, for large $\gamma$ (for example $\gamma \geq 5\sqrt{M}$) we have convergence for stepsizes independent of the size of $\gamma$ (for example $h < 1/(8\sqrt{M})$), which improves on the results of \cite{sanz2021wasserstein}. \end{remark} \begin{example} \textit{An example to illustrate the tightness of the restrictions on the stepsize $h$ and the restriction on the friction parameter $\gamma$.} We consider the anisotropic Gaussian distribution on $\mathbb{R}^{2}$ with potential $U: \mathbb{R}^{2} \mapsto \mathbb{R}$ given by $U(x,y) = \frac{1}{2}mx^{2} + \frac{1}{2}My^{2}$. By computing the eigenvalues of the transition matrix $P$ (for contraction) we can see for which values of $h$ contraction occurs. For BAO we have that \[ P = \begin{pmatrix} I - h^{2}Q & hI\\ -h\eta Q & \eta I \end{pmatrix}\text{, where }Q = \begin{pmatrix} m & 0 \\ 0 & M \end{pmatrix}, \] with eigenvalues \begin{align*} &\frac{1}{2}\left(1 + \eta - h^{2}\lambda \pm \sqrt{-4\eta + \left(-1 - \eta + h^{2}\lambda\right)^{2}}\right), \end{align*} for $\lambda = m,M$. For stability and contraction it is necessary and sufficient that \[ \left(1 + \eta - h^{2}M\right) > 0, \quad \text{and} \quad \frac{1}{2}\left(1 + \eta - h^{2}\lambda + \sqrt{-4\eta + \left(-1 - \eta + h^{2}\lambda\right)^{2}}\right) < 1. \] The first condition requires $h < \sqrt{\frac{1 + \eta}{M}}$, where $\frac{1}{\sqrt{M}}< \sqrt{\frac{1 + \eta}{M}} < \frac{2}{\sqrt{M}}$. The second condition holds when \[1 - \eta + h^{2}\lambda > \sqrt{-4\eta + \left(-1 - \eta + h^{2}\lambda \right)^{2}}, \] which is equivalent to $4h^{2}\lambda > 0$, which trivially holds. Due to these stability conditions the best contraction rate possible is $\mathcal{O}\left(\frac{m}{M}\right)$, which coincides with our results. Further, we have that the contraction rate is precisely $1- \lambda_{max}$, which simplifies to \[ c_{\mathcal{N}} = 1- \eta + h^2m - \sqrt{\left(1 - \eta + h^2 m \right)^2 - 4h^2m}. \]
Moreover, it can be shown that $4 c(h) > c_{\mathcal{N}}$ for $h < 1/\sqrt{22m}$ and $\gamma \geq 4\sqrt{m}$. It is shown in \cite{monmarche2020almost}[Proposition 4] that for the continuous dynamics this condition on $\gamma$ is necessary. \end{example} \begin{theorem}[OAB] \label{Theorem:OAB} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $h < \min{\{\frac{1}{4\gamma},\frac{1-\eta}{\sqrt{6M}}}\}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the OAB chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||\left(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k}\right)||_{a,b} \leq \left(1 - c\left( h\right)\right)^{\frac{k}{2}}||\left(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0}\right)||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{\eta h}{\left(1-\eta\right)}$ and $c\left(h\right) = \frac{\eta h^{2}m}{4\left(1 - \eta\right)}$. \end{theorem} For the other splittings one could use the same techniques as above, or one can use the contraction results for BAO and OAB to obtain contraction results for the remaining permutations by writing \begin{align*} (\mathcal{ABO})^{n} &= \mathcal{AB}(\mathcal{OAB})^{n-1}\mathcal{O}, \quad (\mathcal{BOA})^{n} = \mathcal{B}(\mathcal{OAB})^{n-1}\mathcal{OA},\\ (\mathcal{OBA})^{n} &= \mathcal{O}(\mathcal{BAO})^{n-1}\mathcal{BA}, \quad (\mathcal{AOB})^{n} = \mathcal{AO}(\mathcal{BAO})^{n-1} \mathcal{B}. \end{align*} However, by applying direct arguments as done for OAB and BAO one would achieve better prefactors. Let $\left(\Tilde{x}_{0}, \Tilde{v}_{0} \right) \in \mathbb{R}^{2d}$ and $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ be two initial conditions for a synchronous coupling of sample paths of the ABO splitting, and let $\overline{x}_{0} := \Tilde{x}_{0} - x_{0}$, $\overline{v}_{0} := \Tilde{v}_{0} - v_{0}$. In the following argument we let $Q$ be such that $\nabla U\left(\Tilde{x}_{0} + h \Tilde{v}_{0}\right) - \nabla U\left(x_{0} + h v_{0}\right) = Q\left(\overline{x}_{0} + h\overline{v}_{0}\right)$, by the mean value theorem. 
Using the notation $\Psi_{\rm{ABO}}$ to denote the one-step map of the ABO discretization, we have that for $h < \min{\{\frac{1}{4\gamma},\frac{1-\eta}{\sqrt{6M}}}\}$ \begin{align*} &||\Tilde{z}_{k} - z_{k}||^{2}_{a,b} = ||\left(\Psi_{\rm{ABO}}\right)^{k}\left(\Tilde{z}_{0}\right) - \left(\Psi_{\rm{ABO}}\right)^{k}\left(z_{0}\right)||^{2}_{a,b} \\ &= ||\Psi_{\rm{O}} \circ \left(\Psi_{\rm{OAB}}\right)^{k-1} \circ \Psi_{\rm{AB}} \left(\Tilde{z}_{0}\right) - \Psi_{\rm{O}} \circ \left(\Psi_{\rm{OAB}}\right)^{k-1} \circ \Psi_{\rm{AB}}\left(z_{0}\right)||^{2}_{a,b}\\ &\leq 3\left(1 - c\left(h\right) \right)^{k-1} ||\Psi_{\rm{AB}}\left(\Tilde{x}_{0},\Tilde{v}_{0}\right) - \Psi_{\rm{AB}}\left(x_{0},v_{0}\right)||^{2}_{a,b}\\ &\leq 9\left(1 - c\left(h\right) \right)^{k-1}\left(\left(1 + 2h^{2}M^{2}a\right)||\overline{x}_{0}||^{2} + \left(h^{2} + a + 2h^{4} M^{2}a \right) ||\overline{v}_{0}||^{2} \right)\\ &\leq 27\left(1 - c\left(h\right) \right)^{k-1}||\left(\overline{x}_{0},\overline{v}_{0}\right)||^{2}_{a,b}, \end{align*} where we have used the norm equivalence introduced in Sec. \ref{Sec:Quadratic_Norm}. The same method of argument can be used for the other first order splittings. \section{Higher order splittings} We now consider higher order schemes which are obtained from the splittings introduced in Sec. \ref{sec:splittings}. These schemes are of weak order two and they are symmetric in the order of the operators, with repeated operators corresponding to multiple steps with half the stepsize. We will focus our attention on two popular splittings, BAOAB and ABOBA (or OBABO), as in \cite{leimkuhler2013rational}. Due to the fact that the modified Euclidean norms developed in the previous section differ between the first order splittings, we are not able simply to compose the results of, say, OBA and ABO to obtain contraction of OBABO. First we consider the BAOAB discretization, where we denote a BAOAB chain with initial condition $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ by $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$, defined by the update $\mathcal{BAOAB}$ (\ref{eq:BAO}), where $\left(\xi_{n}\right)_{n \in \mathbb{N}}$ are independent Gaussian random vectors. \begin{theorem}[BAOAB] \label{Theorem:BAOAB} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $h \leq \frac{1-\eta}{2\sqrt{M}}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the BAOAB chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||\left(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k}\right)||_{a,b} \leq 7\left(1 - c\left(h\right)\right)^{\frac{k-1}{2}}||\left(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0}\right)||_{a,b}, \] where $a = \frac{1}{M}$ and $b = \frac{h}{\left(1-\eta\right)}$ and \[ c\left(h\right) = \frac{1}{4}\left(\frac{\eta h^{2}m}{\left(1 - \eta \right)} + h^2 m\right) = \frac{h^{2}m}{4\left(1- \eta\right)}. \] \end{theorem}
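The bound in Theorem \ref{Theorem:BAOAB} can be checked directly by simulation; the following Python sketch (an illustration only, for the standard Gaussian with $m = M = 1$) synchronously couples two BAOAB chains and monitors the weighted norm with $a = 1/M$, $b = h/(1-\eta)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
m_ = M_ = 1.0
gamma, h = 4.0, 0.2                # h <= (1 - eta)/(2 sqrt(M))
eta = np.exp(-gamma * h)
a, b = 1 / M_, h / (1 - eta)
gradU = lambda x: x

def baoab(x, v, xi):
    v = v - 0.5 * h * gradU(x)                 # B (half step)
    x = x + 0.5 * h * v                        # A (half step)
    v = eta * v + np.sqrt(1 - eta**2) * xi     # O (full step)
    x = x + 0.5 * h * v                        # A
    v = v - 0.5 * h * gradU(x)                 # B
    return x, v

def norm_ab_sq(x, v):
    return x @ x + 2 * b * (x @ v) + a * (v @ v)

z1 = (np.array([3.0, -1.0]), np.zeros(2))
z2 = (np.array([-2.0, 2.0]), np.zeros(2))
r0 = norm_ab_sq(z1[0] - z2[0], z1[1] - z2[1])
for k in range(100):
    xi = rng.standard_normal(2)                # shared noise
    z1 = baoab(*z1, xi)
    z2 = baoab(*z2, xi)
c = h**2 * m_ / (4 * (1 - eta))
rk = norm_ab_sq(z1[0] - z2[0], z1[1] - z2[1])
print(rk <= 49 * (1 - c)**(100 - 1) * r0)      # True (squared bound)
\end{verbatim}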
Next we consider the OBABO discretization, which has been studied in the recent work \cite{monmarche2021high}. In \cite{monmarche2022hmc}, Hamiltonian Monte Carlo is analysed as $\mathcal{O}\left(\mathcal{ABA}\right)^{L}\mathcal{O}$ for $L$ leapfrog steps, using a similar norm; however, the stepsize restrictions obtained there are at least $\mathcal{O}\left(m/L^{3/2}\right)$. We note that the OABAO scheme can also be analysed in our framework. We denote an OBABO chain with initial condition $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ by $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$, defined by the update $\mathcal{OBABO}$ (\ref{eq:BAO}), where $\left(\xi_{n}\right)_{n \in \mathbb{N}}$ are independent Gaussian random vectors. \begin{theorem}[OBABO]\label{Theorem:OBABO} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $h < \frac{1 - \eta}{\sqrt{4M}}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the OBABO chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||\left(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k}\right)||_{a,b} \leq 7\left(1 - c\left(h\right)\right)^{\frac{k-1}{2}}||\left(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0}\right)||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{h}{\left(1-\eta\right)}$ and \[ c\left(h\right) = \frac{h^{2}m}{4\left(1 - \eta\right)}. \] \end{theorem} \begin{remark} In \cite{dalalyan2020sampling} it is shown that the continuous dynamics converges with a rate of $\mathcal{O}(m/\gamma)$. There is a major difference, in terms of the contraction rate for large $\gamma$, between the rates achieved by BAOAB and OBABO and that of the continuous dynamics: as $\gamma \to \infty$, BAOAB and OBABO have convergence rates of $\mathcal{O}(h^{2}m)$, whereas the contraction rate of the continuous dynamics converges to zero. \end{remark} \begin{remark} In Theorem \ref{Theorem:BAOAB} and Theorem \ref{Theorem:OBABO} we have a prefactor of $7$ due to the fact that we have converted the problem of contraction into a simpler problem with one gradient evaluation. More specifically, for BAOAB we use the relation $(\mathcal{BAOAB})^{n} = \mathcal{BAO}\left(\mathcal{ABAO}\right)^{n-1}\mathcal{AB}$ and prove contraction for $\mathcal{ABAO}$, and similarly for OBABO; the prefactor comes from the remaining terms $\mathcal{BAO}$ and $\mathcal{AB}$. \end{remark} \section{Stochastic exponential Euler scheme} See \cite{durmus2021uniform} for an introduction to the stochastic exponential Euler scheme and a derivation, based on keeping the gradient constant and analytically integrating the OU process with this constant gradient, i.e., combining the $\mathcal{B}$ and $\mathcal{O}$ steps of the previous splitting. This scheme is the one considered in \cite{cheng2018underdamped,dalalyan2020sampling} and has gained a lot of attention in the machine learning community; we can apply our methods to this scheme as well. Similar schemes have also been considered in \cite{chandrasekhar1943stochastic,ermak1980numerical,skeel2002impulse}, and the scheme has been analysed in \cite{durmus2021uniform,shi2012convergence}. 
In the notation we have used, the scheme is given by the update rule \begin{equation}\label{eq:SES} \begin{split} X_{k+1} &= X_{k} + \frac{1-\eta}{\gamma}V_{k} - \frac{\gamma h + \eta -1}{\gamma^{2}}\nabla U\left(X_{k}\right) + \zeta_{k+1},\\ V_{k+1} &= \eta V_{k} - \frac{1 - \eta}{\gamma} \nabla U\left(X_{k}\right) + \omega_{k+1}, \end{split} \end{equation} where \[ \zeta_{k+1} = \sqrt{2\gamma}\int^{h}_{0} \frac{1 - e^{-\gamma\left( h - s\right)}}{\gamma}dW_{kh + s}, \qquad \omega_{k+1} = \sqrt{2\gamma} \int^{h}_{0} e^{-\gamma\left( h - s\right)}dW_{kh + s}, \] and $\left(\zeta_{k},\omega_{k}\right)_{k \in \mathbb{N}}$ are Gaussian random vectors with covariance matrices stated in \cite{durmus2021uniform}. We can couple two trajectories which share the common noise $\left(\zeta_{k},\omega_{k}\right)_{k \in \mathbb{N}}$, and then obtain contraction rates by the previously introduced methods. We denote an SES chain with initial condition $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ by $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$, defined by the update (\ref{eq:SES}), where $\left(\xi_{n}\right)_{n \in \mathbb{N}}$ are independent Gaussian random vectors. \begin{theorem}[Stochastic Euler Scheme]\label{Theorem:SES} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $\gamma \geq 5\sqrt{M}$ and $h \leq \frac{1}{2\gamma}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the SES chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||\left(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k}\right)||_{a,b} \leq \left(1 - c\left(h\right)\right)^{\frac{k}{2}}||\left(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0}\right)||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{1}{\gamma}$ and \[ c\left(h\right) = \frac{mh}{4\gamma}. \] \end{theorem} \section{Overdamped Limit} We will now compare and analyze how the different schemes behave in the high friction limit, starting with the first order schemes. It is a desirable property that the high friction limit of a scheme be a discretization of the overdamped dynamics: if a user of such a scheme sets the friction parameter $\gamma$ too large, they will not suffer from the $\mathcal{O}(1/\gamma)$ scaling of the convergence rate. We will call schemes with this desirable property $\gamma$-limit convergent (GLC); out of the schemes we have analysed, only BAOAB and OBABO are GLC. \subsection{BAO} If we consider the update rule of the BAO scheme \begin{align*} x_{k+1} = x_{k} + h\left(v_{k} - h \nabla U(x_{k})\right), \quad v_{k+1} = \eta v_{k} - h\eta \nabla U(x_{k}) + \sqrt{1 - \eta^{2}}\xi_{k+1}, \end{align*} and take the limit as $\gamma \to \infty$, we obtain \begin{align*} x_{k+1} &= x_{k} - h^{2} \nabla U(x_{k}) + h \xi_{k}, \end{align*} which is simply the Euler-Maruyama scheme with stepsize $h^{2}/2$ for the rescaled potential $\Tilde{U} := 2U$; this imposes the stepsize restriction $h^{2}/2 \leq \frac{2}{2M}$, i.e. $h^{2} \leq 2/M$, and is hence consistent with our analysis. 
Further, if we take the limit of the contraction rate and the modified Euclidean norm we have \[ \lim_{\gamma \to \infty} c\left(h\right) = \frac{h^{2}m}{4}, \qquad \lim_{\gamma \to \infty} ||x||^{2} + 2b\langle x,v \rangle + a||v||^{2} = ||x||^{2} + 2h \langle x,v \rangle + \frac{1}{M}||v||^{2}, \] which is again consistent with the convergence rates achieved in Sec. \ref{sec:conv_overdamped}, and the norm is essentially the Euclidean norm when considered on the overdamped process, as $\overline{v} = 0$. Due to the fact that the potential is rescaled in the limit, this is not a discretization of the overdamped dynamics. \subsection{OAB} If we consider the update rule of the OAB scheme \begin{align*} x_{k+1} &= x_{k} + h\eta v_{k} + h \sqrt{1 - \eta^{2}}\xi_{k+1},\\ v_{k+1} &= \eta v_{k} + \sqrt{1 - \eta^{2}}\xi_{k+1} - h \nabla U(x_{k} + h\eta v_{k} + h \sqrt{1 - \eta^{2}}\xi_{k+1}), \end{align*} and take the limit as $\gamma \to \infty$, we obtain the update rule $x_{k+1} = x_{k} + h \xi_{k+1}$; therefore the overdamped limit is not inherited by the scheme, and further we do not expect contraction. This is consistent with our analysis of OAB and our contraction rate, which tends to $0$ in the high friction limit. \subsection{BAOAB} If we consider the update rule of the BAOAB scheme \begin{align*} x_{k+1} &= x_{k} + \frac{h}{2}\left(1 + \eta\right)v_{k} - \frac{h^{2}}{4}\left(1 + \eta\right) \nabla U(x_{k}) + \frac{h}{2}\sqrt{1 - \eta^{2}}\xi_{k+1},\\ v_{k+1} &= \eta \left(v_{k} - \frac{h}{2}\nabla U(x_{k})\right) + \sqrt{1 - \eta^{2}}\xi_{k+1} - \frac{h}{2}\nabla U(x_{k+1}), \end{align*} and take the limit as $\gamma \to \infty$, we obtain \begin{align*} x_{k+1} &= x_{k} - \frac{h^{2}}{2} \nabla U(x_{k}) + \frac{h}{2}\left(\xi_{k} + \xi_{k+1}\right), \end{align*} which is simply the LM scheme with stepsize $h^{2}/2$ (as originally noted in \cite{leimkuhler2013rational}); this imposes the stepsize restriction $h^{2} \leq 2/M$ and is hence consistent with our analysis. Further, if we take the limit of the contraction rate and the modified Euclidean norm we have \[ \lim_{\gamma \to \infty} c\left(h\right) = \frac{h^{2}m}{4}, \qquad \lim_{\gamma \to \infty} ||x||^{2} + 2b\langle x,v \rangle + a||v||^{2} = ||x||^{2} + 2h \langle x,v \rangle + \frac{1}{M}||v||^{2}, \] which is again consistent with the convergence rates achieved in Sec. \ref{sec:conv_overdamped}, and the modified Euclidean norm is essentially the Euclidean norm when considered on the overdamped process, as $\overline{v} = 0$. \subsection{OBABO} If we consider the update rule of the OBABO scheme \begin{align*} x_{k+1} &= x_{k} + h\eta v_{k} + h\sqrt{1 - \eta^2}\xi_{1,k+1} - \frac{h^{2}}{2}\nabla U(x_{k}),\\ v_{k+1} &= \eta\left(\eta v_{k} + \sqrt{1 - \eta^2}\xi_{1,k+1} - \frac{h}{2}\nabla U(x_{k}) - \frac{h}{2}\nabla U(x_{k+1})\right) + \sqrt{1 - \eta^{2}}\xi_{2,k+1}, \end{align*} where now $\eta = \exp{\left(-\gamma h /2\right)}$ and, for ease of notation, we have labelled the two noises of one step $\xi_{1}$ and $\xi_{2}$, and take the limit as $\gamma \to \infty$, we obtain \begin{align*} x_{k+1} &= x_{k} -\frac{h^{2}}{2}\nabla U(x_{k}) + h \xi_{k+1}, \end{align*} which is the Euler-Maruyama scheme for overdamped Langevin dynamics with stepsize $h^{2}/2$, which has convergence rate $\mathcal{O}\left(h^{2}m\right)$. This is consistent with our analysis of OBABO and our contraction rate, which tends to $h^{2}m/4$ in the high friction limit. 
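These limits can also be observed quantitatively on a quadratic potential, where the synchronously coupled difference obeys the linear recursion $\overline{z}_{k+1} = P\overline{z}_{k}$; the following Python sketch (an illustration only) computes the one-step rate $1 - \rho(P)$ for OAB, BAO and BAOAB as $\gamma$ grows. OAB's rate decays to zero, while BAO and BAOAB retain a positive rate (for BAO, this is contraction towards the rescaled potential discussed above).
\begin{verbatim}
import numpy as np

def rate(letters, h, g, lam):
    # lam is the Hessian eigenvalue of the quadratic potential.
    A = lambda s: np.array([[1.0, s], [0.0, 1.0]])
    B = lambda s: np.array([[1.0, 0.0], [-s * lam, 1.0]])
    O = lambda s: np.array([[1.0, 0.0], [0.0, np.exp(-g * s)]])
    P = np.eye(2)
    for letter, s in letters:
        P = {"A": A, "B": B, "O": O}[letter](s) @ P  # left-to-right
    return 1.0 - max(abs(np.linalg.eigvals(P)))

h, lam = 0.25, 1.0
OAB = [("O", h), ("A", h), ("B", h)]
BAO = [("B", h), ("A", h), ("O", h)]
BAOAB = [("B", h / 2), ("A", h / 2), ("O", h),
         ("A", h / 2), ("B", h / 2)]
for g in [4.0, 40.0, 400.0]:
    print(g, {name: round(rate(s, h, g, lam), 4)
              for name, s in [("OAB", OAB), ("BAO", BAO),
                              ("BAOAB", BAOAB)]})
# OAB's rate decays to zero as gamma grows, while BAO and BAOAB approach
# the positive limits h^2*lam and h^2*lam/2, matching the discussion above.
\end{verbatim}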
\subsection{SES} If we consider the limit as $\gamma \to \infty$ of the scheme (\ref{eq:SES}), we obtain the update rule $x_{k+1} = x_{k}$; therefore the overdamped limit is not inherited by the scheme, and we do not expect contraction. This is consistent with our analysis of the stochastic exponential Euler scheme, whose contraction rate tends to $0$ in the high friction limit. \section{Discussion} \begin{figure}[H] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{0_01.pdf} \caption{Low friction} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{4.pdf} \caption{Moderate friction} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{10000.pdf} \caption{High friction} \end{subfigure} \caption{Contraction of two kinetic Langevin trajectories $x_{1}$ and $x_{2}$ with initial conditions $[-1,-1]$ and $[1,1]$ for a $2$-dimensional standard Gaussian with stepsize $h = 0.25 = 1/(4\sqrt{M})$.} \label{Fig:1} \end{figure} We tested our observations numerically in Fig. \ref{Fig:1} with a $2$-dimensional standard Gaussian. Fig. \ref{Fig:1} is consistent with our analysis: all schemes are stable when $\gamma \approx 4\sqrt{M}$, while in the high friction regime EM, OAB and SES behave poorly compared to BAOAB and BAO. In the low friction regime, again, EM and SES perform poorly compared to the other schemes. In \cite{dalalyan2020sampling} it is shown that the optimal convergence rate for the continuous time dynamics is $\mathcal{O}(m/\gamma)$; our contraction rates for the discretizations are therefore consistent up to a constant. However, some of the schemes considered, for example BAOAB and OBABO, inherit convergence to the overdamped Langevin dynamics (without time rescaling), and this is reflected in our convergence rate estimates. Therefore, for MCMC applications these schemes do not suffer from the $1/\gamma$ scaling of the convergence rate if the user picks a friction parameter which is too high. This robustness with respect to the friction parameter is shown in Fig. \ref{Fig:1}. The constants in our arguments can be improved by sharper bounds and a more careful analysis, but the restriction on $\gamma$ is consistent with other works on synchronous coupling for the continuous time Langevin diffusions \cite{bolley2010trend,cheng2018underdamped,dalalyan2020sampling,deligiannidis2021randomized,zajic2019non}. Further, it is shown in \cite{monmarche2020almost}[Proposition 4] that the continuous time process yields Wasserstein contraction of synchronous coupling for all $M$-$\nabla$Lipschitz and $m$-strongly convex potentials $U$ if and only if $M - m < \gamma (\sqrt{M} + \sqrt{m})$ for the norms that we considered; when $M$ is much larger than $m$, this condition requires $\gamma$ to be of order $\sqrt{M}$. It may be possible to achieve convergence rates for small $\gamma$ by using a more sophisticated argument like that of \cite{eberle2019couplings}. Using a different Lyapunov function or other techniques may make it possible to extend these results to all $\gamma > 0$ \cite{durmus2021uniform,qin2022geometric}, following results for the continuous case \cite{eberle2019couplings}, but this is beyond the scope of this paper. The restrictions on the stepsize $h$ are tight for the optimal contraction rate for EM and BAO, and hence result in stability conditions of $\mathcal{O}\left(1/\gamma\right)$ for EM and SES. 
We have also shown that BAO, OBA, AOB, BAOAB and OBABO have convergence guarantees for stepsizes of $\mathcal{O}(1/\sqrt{M})$, and that BAOAB and OBABO have the desirable GLC property, which is not common amongst the schemes we studied. For the choices of parameters which achieve the optimal contraction rate we derive $\mathcal{O}(m/M)$ rates of contraction, which are sharp up to a constant; we achieve this for every scheme that we studied. \section*{Acknowledgments} The authors would like to thank Kostas Zygalakis for helpful comments on this work. The authors acknowledge the support of the Engineering and Physical Sciences Research Council Grant EP/S023291/1 (MAC-MIGS Centre for Doctoral Training). \bibliographystyle{siamplain}
Under the assumption of a Poincar\'e inequality, convergence rate guarantees can be established for the continuous dynamics \cite{bakry2014analysis}. In the case of kinetic dynamics a more delicate argument is needed to establish exponential convergence, due to the hypoelliptic nature of the SDE (see \cite{cao2019explicit,villani2006hypocoercivity,dolbeault2015hypocoercivity,dolbeault2009hypocoercivity,bakry2006diffusions,baudoin2016wasserstein,baudoin2017bakry}). Langevin dynamics, in its kinetic and overdamped forms, is the basis of many widely used sampling algorithms in machine learning and statistics \cite{cheng2018underdamped,welling2011bayesian,vollmer2016exploration}. In sampling, Langevin dynamics is discretized and the individual timesteps generated by integration are viewed as approximate draws from the target distribution; however, there is an inherent bias due to the finite difference numerical approximation. This bias is usually addressed by choosing a sufficiently small stepsize, or by correcting the bias using methods like Metropolis-Hastings adjustment. The choice of the discretization method has a significant effect on the quality of the samples and also on the computational cost of producing accurate samples, through stability properties, convergence rates and asymptotic bias. Overdamped Langevin dynamics has been heavily studied both in the continuous and the discretized settings, with popular integrators being the Euler-Maruyama method and the limit method of the BAOAB scheme \cite{leimkuhler2013rational}. The kinetic Langevin system has been extensively studied in the continuous case, but there are still many open questions around the design of the numerical integrator. A metric that is typically used to quantify the performance of a sampling scheme is the number of steps required to reach a certain level of accuracy in Wasserstein distance. Non-asymptotic bounds in Wasserstein distance reflect computational complexity, convergence rate and accuracy. Achieving such bounds relies on two steps: (1) determining explicit convergence rates of the process to its invariant measure and (2) proving non-asymptotic bias estimates for the invariant measure. The focus of the current article is the convergence of the time-discrete system to its invariant measure. The approach that we use to obtain convergence rates is based on proving contraction for a synchronous coupling, as in \cite{monmarche2021high,dalalyan2020sampling}. Proving contraction of a coupling has been a popular method for establishing convergence both in the continuous time setting and for discretizations of Langevin dynamics and Hamiltonian Monte Carlo (\cite{eberle2019couplings,bou2020coupling,deligiannidis2021randomized,bou2023mixing,bou2022couplings,riou2022metropolis,schuh2022global}), since a consequence of such a contraction is convergence in Wasserstein distance (which can be viewed as an infimum over all possible couplings with respect to some norm). Synchronous coupling has been a popular means of achieving explicit convergence rates for discretizations \cite{monmarche2020almost,monmarche2022hmc} due to its simplicity. There has been other recent work aimed at providing convergence rates for kinetic Langevin dynamics under explicit restrictions on the parameters (\cite{cheng2018underdamped,dalalyan2020sampling,monmarche2021high,monmarche2022hmc}), but these guarantees are valid only under strict restrictions on the stepsize.
There has also been the work of \cite{sanz2021wasserstein}, which considers a slightly different version of the SDE (\ref{eq:underdamped_langevin}), where time is rescaled depending on $M$ and $m$ to optimize contraction rates and bias. We have included their results in Table \ref{Table:results} after converting them into our framework using \cite{dalalyan2020sampling}[Lemma 1]. The results of \cite{sanz2021wasserstein} rely on a stepsize restriction of $\mathcal{O}(1/\gamma)$, but their analysis does not provide the stepsize threshold \cite{sanz2021wasserstein}[Example 9], and the class of schemes considered is different, with only the stochastic Euler scheme in common. Other works on contraction of kinetic Langevin dynamics and its discretization include \cite{foster2021shifted, dalalyan2022bounding}. In the current article, we apply direct convergence analysis to various popular integration methods, and provide a general framework for establishing convergence rates of kinetic Langevin dynamics with tight explicit stepsize restrictions of $\mathcal{O}\left(1/\gamma\right)$ or $\mathcal{O}(1/\sqrt{M})$ (depending on the scheme). As a consequence we improve the contraction rates significantly for many of the available algorithms (see Table \ref{Table:results}). For a specific class of schemes, we establish explicit bounds on the convergence rate for stepsizes of $\mathcal{O}(1/\sqrt{M})$. In the limit of large friction, we distinguish two types of integrators -- those that converge to overdamped dynamics (``$\gamma$-limit-convergent'') and those that do not. We demonstrate with examples that this property is not universal: some seemingly reasonable methods have the property that the convergence rate falls to zero in the $\gamma\rightarrow \infty$ limit. This is verified numerically and analytically for an anisotropic Gaussian target. The remainder of this article is structured as follows. We first introduce overdamped Langevin dynamics and two of its discretizations, the Euler-Maruyama method (EM) and the high friction limit of BAOAB (LM), and discuss their convergence guarantees. Next, we introduce kinetic Langevin dynamics and describe various popular discretizations, and give our results on convergence guarantees with mild stepsize assumptions. These schemes include first and second order splittings and the stochastic Euler scheme (SES). Further, we compare the results for overdamped Langevin and kinetic Langevin dynamics and show how schemes like BAOAB and OBABO exhibit the positive qualities of both cases through the GLC property, whereas schemes like EM and SES do not perform well for a large range of $\gamma$.
\begin{table} \begin{center} \begin{tabular}{ |c|c|c| } \hline Algorithm & stepsize restriction & optimal one-step contraction rate \\ \hline EM & $\mathcal{O}(1/\gamma)$ & $\mathcal{O}(m/M)$\\ BAO, OBA, AOB &$\mathcal{O}(1/\sqrt{M})$ & $\mathcal{O}(m/M)$\\ OAB, ABO, BOA &$\mathcal{O}(1/\gamma)$ & $\mathcal{O}(m/M)$\\ BAOAB &$\mathcal{O}(1/\sqrt{M})$ & $\mathcal{O}(m/M)$ \\ OBABO &$\mathcal{O}(1/\sqrt{M})$ & $\mathcal{O}(m/M)$ \\ SES &$\mathcal{O}(1/\gamma)$ & $\mathcal{O}(m/M)$\\ \hline \end{tabular} \vspace{5mm} \begin{tabular}{ |c|c|c| } \hline Algorithm & previous stepsize restriction & previous explicit best rate\\ \hline OBABO &$\mathcal{O}(m/\gamma^{3})$ & $\mathcal{O}(m^{2}/M^{2})$ \cite{monmarche2021high} \\ SES &$\mathcal{O}(1/\gamma)$ & $\mathcal{O}(m/M)$ \cite{sanz2021wasserstein}\\ \hline \end{tabular} \end{center} \caption{The first table provides our stepsize restrictions and optimal contraction rates for the discretized kinetic Langevin dynamics. The second provides the previous best results. To the best of our knowledge there are no previous results regarding the EM scheme, the first order splittings or BAOAB.} \label{Table:results} \end{table} \section{Assumptions and definitions} \subsection{Assumptions on $U$} \label{sec:assumptions} We will make the following assumptions on the target measure $\exp{\left(-U\right)}$ to obtain convergence rates. We assume that the potential is $M$-smooth and $m$-convex: \begin{assumption}[$M$-$\nabla$Lipschitz] There exists an $M > 0$ such that for all $X, Y \in \mathbb{R}^{d}$ \[ \left|\nabla U\left(X\right) - \nabla U\left(Y\right)\right| \leq M \left|X-Y\right|. \] \end{assumption} \begin{assumption}[$m$-convexity] There exists an $m > 0$ such that for all $X,Y \in \mathbb{R}^{d}$ \[ \left\langle \nabla U(X) - \nabla U(Y),X-Y \right\rangle \geq m \left|X-Y\right|^{2}. \] \end{assumption} These two assumptions are popular conditions used to obtain explicit convergence rates, see \cite{dalalyan2017theoretical,dalalyan2020sampling} for example. It is worth mentioning that these assumptions also produce explicit convergence rates for gradient descent \cite{boyd2004convex}. \subsection{Modified Euclidean Norms}\label{Sec:Quadratic_Norm} For kinetic Langevin dynamics it is not possible to prove contraction with respect to the standard Euclidean norm, due to the fact that the generator is hypoelliptic. We therefore work with a modified Euclidean norm as in \cite{monmarche2021high}. For $z = (x,v) \in \mathbb{R}^{2d}$ we introduce the weighted Euclidean norm \[ \left|\left| z \right|\right|^{2}_{a,b} = \left|\left| x \right|\right|^{2} + 2b \left\langle x,v \right\rangle + a \left|\left| v \right|\right|^{2}, \] for $a,b > 0$; this defines a norm equivalent to the standard Euclidean norm as long as $b^{2}<a$. More precisely, when $b \leq \sqrt{a}/2$ (as holds for all of the choices made in this article) we have \[ \frac{1}{2}||z||^{2}_{a,0} \leq ||z||^{2}_{a,b} \leq \frac{3}{2}||z||^{2}_{a,0}.\] \subsection{Wasserstein Distance} \label{sec:wasserstein_def} We define $\mathcal{P}_{p}\left(\mathbb{R}^{2d}\right)$ to be the set of probability measures which have finite $p$-th moment; then for $p \in \left[1,\infty\right)$ we define the $p$-Wasserstein distance on this space. Let $\mu$ and $\nu$ be two probability measures. We define the $p$-Wasserstein distance between $\mu$ and $\nu$ with respect to the norm $||\cdot||_{a,b}$ (introduced in Sec.
\ref{Sec:Quadratic_Norm}) to be \[\mathcal{W}_{p,a,b}\left(\nu,\mu\right) = \left( \inf_{\xi \in \Gamma\left( \nu, \mu \right)}\int_{\mathbb{R}^{2d}}||z_{1} - z_{2}||^{p}_{a,b}d\xi\left(z_{1},z_{2}\right)\right)^{1/p},\] where $\Gamma\left(\mu,\nu\right)$ is the set of measures with marginals $\mu$ and $\nu$ (the set of all couplings between $\mu$ and $\nu$). It is well known that the existence of couplings with a contractive property implies convergence in Wasserstein distance (which can be interpreted as an infimum over all such couplings). The simplest such coupling is to run two copies of the scheme with common noise; this is known as synchronous coupling. Therefore, if one can show contraction, with an explicit contraction rate, of two simulations which share noise increments, then one has convergence in Wasserstein distance at the same rate. With the constants and conditions for contraction derived for each of the schemes, we obtain convergence in Wasserstein distance by the following proposition: \begin{proposition}\label{prop:Wasserstein} Assume a numerical scheme for kinetic Langevin dynamics with an $m$-strongly convex $M$-$\nabla$Lipschitz potential $U$ and transition kernel $P_{h}$. If any two synchronously coupled chains $\left(x_{n},v_{n}\right)$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n}\right)$ of the numerical scheme have the contraction property \begin{equation}\label{eq:contraction_inequality} ||(x_{n} - \Tilde{x}_{n},v_{n} - \Tilde{v}_{n})||^{2}_{a,b} \leq C(1 - c\left(h\right))^{n}||(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0})||^{2}_{a,b}, \end{equation} for $\gamma^{2} \geq C_{\gamma}M$ and $h \leq C_{h}\left(\gamma,\sqrt{M}\right)$ for some $a,b >0$ such that $b^{2} < a$. Then we have that for all $\gamma^{2} \geq C_{\gamma}M$, $h \leq C_{h}\left(\gamma,\sqrt{M}\right)$, $1 \leq p \leq \infty$ and all $\mu,\nu \in \mathcal{P}_{p}(\mathbb{R}^{2d})$, and all $n \in \mathbb{N}$, \[ \mathcal{W}^{2}_{p}\left(\nu P^n_{h} ,\mu P^n_{h} \right) \leq 3C\max{\left\{a,\frac{1}{a}\right\}}\left(1 - c\left(h\right)\right)^{n} \mathcal{W}^{2}_{p}\left(\nu,\mu\right). \] Further to this, $P_{h}$ has a unique invariant measure $\pi_{h}$, which depends on the stepsize, where $\pi_{h} \in \mathcal{P}_{p}(\mathbb{R}^{2d})$ for all $1 \leq p \leq \infty$. \end{proposition} \begin{proof} The proof is given in \cite{monmarche2021high}[Corollary 20], which relies on \cite{villani2009optimal}[Corollary 5.22, Theorem 6.18]. \end{proof} The focus of this article is to prove contractions of the form (\ref{eq:contraction_inequality}), and hence to achieve Wasserstein convergence rates by Prop. \ref{prop:Wasserstein}. Given convergence to the invariant measure for the discretizations of kinetic Langevin dynamics considered here, it is possible to combine our results with estimates of the bias of each scheme as in \cite{dalalyan2020sampling}, \cite{monmarche2021high}, \cite{sanz2021wasserstein} and \cite{cheng2018underdamped} to obtain non-asymptotic estimates. \section{Overdamped Langevin discretizations and contraction} We first consider two discretizations of the SDE (\ref{eq:overdamped}), namely the Euler-Maruyama discretization and the high friction limit of the popular kinetic Langevin dynamics scheme BAOAB \cite{leimkuhler2013rational}. The simplest discretization of overdamped Langevin dynamics is the Euler-Maruyama (EM) method, defined by the update rule \begin{equation} X_{n+1} = X_{n} - h\nabla U\left(X_{n}\right) + \sqrt{2h}\xi_{n+1}.
\end{equation} This scheme is combined with Metropolization in the popular MALA algorithm. An alternative method is the BAOAB limit method of Leimkuhler and Matthews (LM) \cite{leimkuhler2013rational,leimkuhler2014long}, which is defined by the update rule \[ X_{n+1} = X_{n} - h\nabla U\left(X_{n}\right) + \sqrt{2h}\frac{\xi_{n+1} + \xi_{n}}{2}. \] The advantage of this method is that it asymptotically gains an order of accuracy for sampling the invariant measure. \subsection{Convergence guarantees} \label{sec:conv_overdamped} The convergence guarantees of overdamped Langevin dynamics and its discretizations have been extensively studied under the assumptions presented (see \cite{dalalyan2017theoretical,durmus2017nonasymptotic,cheng2018convergence,dalalyan2017further,durmus2019high, durmus2019analysis, dwivedi2018log}). We use synchronous coupling as a proof strategy to obtain convergence rates as in \cite{dalalyan2017theoretical}. We first consider two chains $x_{n}$ and $y_{n}$ with shared noise such that \begin{align*} x_{n+1} = x_{n} - h\nabla U(x_{n}) + \sqrt{2h}\xi_{n+1}, \quad y_{n+1} = y_{n} - h\nabla U(y_{n}) + \sqrt{2h}\xi_{n+1}. \end{align*} Then we have that \begin{align*} &||x_{n+1} - y_{n+1}||^2 = ||x_{n} - y_{n} - h\left(\nabla U(x_{n}) - \nabla U(y_{n})\right)||^{2} \\ &= ||x_{n} - y_{n}||^{2} - 2h \langle \nabla U(x_{n}) - \nabla U(y_{n}) , x_{n} - y_{n}\rangle + h^{2}||\nabla U(x_{n}) - \nabla U(y_{n})||^{2}\\ &= ||x_{n} - y_{n}||^{2} - 2h \langle x_{n} - y_{n}, Q(x_{n} - y_{n})\rangle + h^{2}\langle x_{n} - y_{n}, Q^{2} (x_{n} - y_{n}) \rangle, \end{align*} where $Q = \int^{1}_{t = 0}\nabla^{2}U(x_{n} + t(y_{n} - x_{n}))dt$. $Q$ has eigenvalues which are bounded between $m$ and $M$, so $Q^{2} \preceq MQ$, and hence \[ h^{2}\langle x_{n} - y_{n}, Q^{2} (x_{n} - y_{n}) \rangle \leq h^{2}M \langle x_{n} - y_{n}, Q(x_{n} - y_{n}) \rangle. \] Therefore \begin{align*} ||x_{n+1} - y_{n+1}||^2 &\leq ||x_{n} - y_{n}||^{2} - h(2 - hM)\langle x_{n} - y_{n},Q(x_{n} - y_{n}) \rangle\\ &\leq ||x_{n} - y_{n}||^{2}(1 - hm(2-hM)), \end{align*} assuming that $h \leq \frac{2}{M}$. We have a contraction and \[ ||x_{n} - y_{n}|| \leq (1 - hm(2 - hM))^{n/2}||x_{0} - y_{0}||. \] A consequence of this contraction result is that we have convergence in Wasserstein distance to the invariant measure with rate $hm\left(2 - hM\right)$, under the imposed assumptions on $h$ (as discussed in Sec. \ref{sec:wasserstein_def}) \cite{monmarche2021high,villani2009optimal}. Note that this argument is exactly the same for the LM discretization of overdamped Langevin dynamics, as all the noise components are shared. The stepsize assumption for convergence of overdamped Langevin dynamics in this setting is weak and is the same assumption as is needed to guarantee convergence of gradient descent in optimisation \cite{boyd2004convex}[Eq. (9.18)]. \section{Kinetic Langevin Dynamics} We now consider many discretizations of the SDE (\ref{eq:underdamped_langevin}) using a framework established in Sec. \ref{sec:strategy}, where we construct an alternative Euclidean norm in which we can prove contraction (it is not possible to prove contraction in the standard Euclidean norm). Essentially, we convert the problem of proving contraction to the problem of showing that certain matrices are positive definite. \subsection{Proof Strategy}\label{sec:strategy} We will consider a modified Euclidean norm as defined in Sec. \ref{Sec:Quadratic_Norm} for some choice of $a$ and $b$.
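Before constructing this norm, the overdamped synchronous-coupling contraction derived above is easy to observe numerically. The following Python sketch is a minimal illustration only, assuming the standard Gaussian potential $U(x) = ||x||^{2}/2$ (so $m = M = 1$) with all parameter values illustrative; for a quadratic potential the bound $(1-hm(2-hM))^{n/2}||x_{0}-y_{0}||$ is attained exactly.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
h, d, n_steps = 0.1, 2, 50
grad_U = lambda x: x            # U(x) = |x|^2/2, so m = M = 1

x = np.array([-1.0, -1.0])
y = np.array([1.0, 1.0])
dist0 = np.linalg.norm(x - y)
for _ in range(n_steps):
    xi = rng.normal(size=d)     # common noise: synchronous coupling
    x = x - h * grad_U(x) + np.sqrt(2 * h) * xi
    y = y - h * grad_U(y) + np.sqrt(2 * h) * xi

rate = 1 - h * 1 * (2 - h * 1)  # 1 - hm(2 - hM)
print(np.linalg.norm(x - y), rate ** (n_steps / 2) * dist0)
\end{verbatim}
Running the same loop with one of the kinetic schemes below in place of the EM update is precisely the experiment that fails to contract in the standard Euclidean norm, motivating the weighted norm $||\cdot||_{a,b}$.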
Our aim is to construct an equivalent Euclidean norm such that contraction occurs for two Markov chains simulated by the same discretization, $z_{n} = (x_{n},v_{n}) \in \mathbb{R}^{2d}$ and $\Tilde{z}_{n} = (\Tilde{x}_{n},\Tilde{v}_{n}) \in \mathbb{R}^{2d}$, that are synchronously coupled. That is, for some choice of $a$ and $b$ such that $a,b >0$ and $b^{2} < a$, \begin{equation}\label{eq:cont_1} ||\Tilde{z}_{k+1} - z_{k+1}||^{2}_{a,b} < \left(1 - c\left(h\right)\right)||\Tilde{z}_{k} - z_{k}||^{2}_{a,b}, \end{equation} where $a$ and $b$ are chosen to provide reasonable explicit assumptions on the stepsize $h$ and friction parameter $\gamma$. Our initial choices of $a$ and $b$ for simple schemes are motivated by \cite{monmarche2021high}, and are derived by considering contraction of the continuous dynamics. Let $\overline{z}_{j} = \Tilde{z}_{j} - z_{j}$ for $j \in \mathbb{N}$; then (\ref{eq:cont_1}) is equivalent to showing that \begin{equation}\label{eq:contraction_matrix_form} \overline{z}^{T}_{k}\left(\left(1 - c\left(h\right)\right)M- P^{T}MP\right )\overline{z}_{k} > 0, \quad \textnormal{where} \quad M = \begin{pmatrix} 1 & b \\ b & a \end{pmatrix}, \end{equation} and $\overline{z}_{k+1} = P\overline{z}_{k}$ ($P$ depends on $z_{k}$ and $\Tilde{z}_{k}$, but we omit this in the notation). \begin{example} As an example, for the Euler-Maruyama scheme we have the update rule for $\overline{z}_{k}$ \begin{align*} \overline{x}_{k+1} = \overline{x}_{k} + h \overline{v}_{k}, \qquad \overline{v}_{k+1} = \overline{v}_{k} - \gamma h \overline{v}_{k} -hQ\overline{x}_{k}, \end{align*} where by the mean value theorem we can define $Q = \int^{1}_{t = 0}\nabla^{2}U(\Tilde{x}_{k} + t(x_{k} - \Tilde{x}_{k}))dt$, so that $\nabla U(\Tilde{x}_{k}) - \nabla U(x_{k}) = Q\overline{x}_{k}$. One can show that in the notation of equation \eqref{eq:contraction_matrix_form} we have \begin{equation}\label{eq:P_matrix} P = \begin{pmatrix} I & hI\\ -hQ & \left(1- \gamma h\right)I \end{pmatrix}. \end{equation} \end{example} Proving contraction for a general scheme is equivalent to showing that the matrix $\mathcal{H} := \left(1 - c(h)\right)M - P^{T}MP$ is positive definite. The matrix $\mathcal{H}$ is symmetric and hence of the form \begin{equation}\label{eq:contraction_matrix} \mathcal{H} = \begin{pmatrix} A & B \\ B & C \end{pmatrix}, \end{equation} and we can show that $\mathcal{H}$ is positive definite by applying the following Prop. \ref{Prop:PD}. \begin{proposition} \label{Prop:PD} Let $\mathcal{H}$ be a symmetric matrix of the form (\ref{eq:contraction_matrix}); then $\mathcal{H}$ is positive definite if and only if $A \succ 0$ and $C - BA^{-1}B \succ 0$. Further, if $A$, $B$ and $C$ commute then $\mathcal{H}$ is positive definite if and only if $A\succ 0$ and $AC - B^{2} \succ 0$. \end{proposition} \begin{proof} The proof of the first result is given in \cite{horn2005basic}. To establish the second statement, observe from \cite{horn2012matrix} that if two matrices are positive definite and they commute then their product is positive definite. Also, if $A \succ 0$ then $A^{-1} \succ 0$ (as $A$ is symmetric positive definite). Further, $A, B$ and $C$ commute and hence $B$, $C$ and $A^{-1}$ commute. Therefore by applying the first result we have that $A \succ 0$ and \[A^{-1}\left(AC - B^{2}\right) = C - BA^{-1}B \succ 0,\] hence $\mathcal{H}$ is positive definite. Conversely, if $\mathcal{H}$ is positive definite then $A \succ 0$ and $C - BA^{-1}B \succ 0$ by the first result. Thus, as $A$, $B$ and $C$ commute, we have $AC - B^2 \succ 0$.
\end{proof} \begin{remark} An equivalent condition for a symmetric matrix $\mathcal{H}$ of the form (\ref{eq:contraction_matrix}) to be positive definite is $C\succ 0$ and $AC - B^{2} \succ 0$ when $A$, $B$ and $C$ commute. One could equivalently prove that $C \succ 0$ instead of $A \succ 0$ if it is more convenient. \end{remark} Our general approach to proving contraction for some popular kinetic Langevin dynamics schemes is to verify that the conditions of Prop. \ref{Prop:PD} are satisfied. We will use the notation laid out in this section in the proofs given in the appendix. \subsection{Euler-Maruyama discretization} We define the EM chain with initial condition $(x_0,v_0)$ by $(x_n,v_n,\xi_n)$, where the $(\xi_n)_{n\in\mathbb{N}}$ are independent standard normal vectors and $(x_{n},v_n)$ are updated according to: \begin{eqnarray} x_{k+1} & = & x_k + hv_k,\\ v_{k+1} & = & v_k - h\nabla U(x_k) - h\gamma v_k + \sqrt{2\gamma h}\xi_{k+1}. \end{eqnarray} \begin{theorem} \label{Theorem:EM} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $\gamma^{2} \geq 4M$ and $h < \frac{1}{2\gamma}$, we have that, for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the corresponding EM chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k})||_{a,b} \leq \left(1 - c\left(h\right)\right)^{\frac{k}{2}}||(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0})||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{1}{\gamma}$ and \[ c\left(h\right) = \frac{mh}{2\gamma}. \] \end{theorem} \begin{example} \textit{An example to illustrate the tightness of the restrictions on the stepsize $h$ and the restriction on the friction parameter $\gamma$.} We consider the anisotropic Gaussian distribution on $\mathbb{R}^{2}$ with potential $U: \mathbb{R}^{2} \mapsto \mathbb{R}$ given by $U(x,y) = \frac{1}{2}mx^{2} + \frac{1}{2}My^{2}$. This potential satisfies the assumptions of Sec. \ref{sec:assumptions} with constants $M$ and $m$. By computing the eigenvalues of the transition matrix $P$ (for contraction) we can see for what values of $h$ contraction occurs. For EM we have that \[ P = \begin{pmatrix} I & hI\\ -hQ & \left(1- \gamma h\right)I \end{pmatrix}\text{, where }Q = \begin{pmatrix} m & 0 \\ 0 & M \end{pmatrix}, \] with eigenvalues \[\frac{1}{2}\left(2 - \gamma h \pm h \sqrt{\gamma^{2} - 4\lambda}\right),\] for $\lambda = m,M$. For stability and contraction we require that \[ \frac{1}{2}\left(2 - \gamma h - h \sqrt{\gamma^{2} - 4m} \right) > 0, \quad \text{and} \quad \frac{1}{2}\left(2 - \gamma h + h \sqrt{\gamma^{2} - 4m} \right) < 1. \] The second condition is equivalent to $\gamma > \sqrt{\gamma^{2} - 4m}$, which trivially holds, and the first condition is equivalent to $h \leq 2/(\gamma + \sqrt{\gamma^{2} - 4m}) \approx 1/\gamma$. \end{example} \section{First order splittings} \label{sec:splittings} A common discretization strategy for kinetic Langevin dynamics is based on splitting the dynamics into parts which can each be integrated exactly, in the weak sense.
An increasingly popular splitting choice used in molecular dynamics modelling is to divide the SDE into two deterministic parts, corresponding to a linear positional drift and an impulse due to the force, and a dissipative-stochastic part corresponding to an Ornstein-Uhlenbeck equation \cite{PhysRevE.75.056707}. These parts are denoted by $\mathcal{B}$, $\mathcal{A}$ and $\mathcal{O}$, with update rules given by \begin{equation}\label{eq:BAO} \begin{split} &\mathcal{B}: v \to v - h\nabla U(x), \\ &\mathcal{A}: x \to x + hv,\\ &\mathcal{O}: v \to \eta v + \sqrt{1 - \eta^{2}}\xi, \end{split} \end{equation} where \[ \eta := \exp{\left(-\gamma h \right)}. \] The reasoning for such a splitting is based on the fact that the infinitesimal generator of the SDE (\ref{eq:underdamped_langevin}) can be split as $\mathcal{L} = \mathcal{L}_{\mathcal{A}} + \mathcal{L}_{\mathcal{B}} + \gamma\mathcal{L}_{\mathcal{O}}$, where \[ \mathcal{L}_{\mathcal{A}} = \left\langle v, \nabla_{x}\right\rangle, \qquad \mathcal{L}_{\mathcal{B}} = -\left\langle\nabla U\left(x\right), \nabla_{v}\right\rangle, \qquad \mathcal{L}_{\mathcal{O}} = -\left \langle v, \nabla_{v} \right\rangle + \Delta_{v}. \] The dynamics associated to $\mathcal{L}_{\mathcal{A}}$ and $\mathcal{L}_\mathcal{B}$ are the deterministic dynamics corresponding to $\mathcal{A}$ and $\mathcal{B}$. The dynamics associated to $\gamma\mathcal{L}_{\mathcal{O}}$ is the Ornstein-Uhlenbeck process, which can be solved exactly, in the sense of distributions; this corresponds to the $\mathcal{O}$ step. We use the convention that one applies the operators left to right: the BAO method would first apply $\mathcal{B}$, then $\mathcal{A}$ and lastly $\mathcal{O}$. For more details on these splittings we refer the reader to \cite{leimkuhler2016computation}. We will now consider contraction for all first order splittings (permutations of the $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{O}$ pieces), which are schemes with weak order $1$. We first consider BAO, where we define a BAO chain with initial condition $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ by $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$, using the update $\mathcal{BAO}$ (\ref{eq:BAO}), where $\left(\xi_{n}\right)_{n \in \mathbb{N}}$ are vectors of standard normal random variables. \begin{theorem}[BAO] \label{Theorem:BAO} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $h < \frac{1 - \eta}{\sqrt{6M}}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the BAO chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k})||_{a,b} \leq \left(1 - c\left(h\right)\right)^{\frac{k}{2}}||(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0})||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{h}{1-\eta}$ and \[ c\left(h\right) = \frac{h^{2}m}{4\left(1 - \eta\right)}. \] \end{theorem} \begin{remark} The modified Euclidean norm has now been chosen to be stepsize dependent; this is needed to eliminate a dependency of the stepsize restriction on the strong convexity constant $m$.
We note that simply choosing $b = 1/\gamma$ does not result in a norm which guarantees a stepsize restriction independent of $m$, as is clear from the motivation behind our choice of $b$. When $b \neq h/(1-\eta)$ one can always choose $m$ small enough that $AC - B^{2}$ is not positive definite. We also point out that the stepsize restriction implicitly requires that $\gamma^{2}$ be larger than some constant factor multiplied by $M$. Further, for large $\gamma$ (for example $\gamma \geq 5\sqrt{M}$) we have convergence for stepsizes independent of the size of $\gamma$ (for example $h < 1/(8\sqrt{M})$), which improves on the results of \cite{sanz2021wasserstein}. \end{remark} \begin{example} \textit{An example to illustrate the tightness of the restrictions on the stepsize $h$ and the restriction on the friction parameter $\gamma$.} We consider the anisotropic Gaussian distribution on $\mathbb{R}^{2}$ with potential $U: \mathbb{R}^{2} \mapsto \mathbb{R}$ given by $U(x,y) = \frac{1}{2}mx^{2} + \frac{1}{2}My^{2}$. By computing the eigenvalues of the transition matrix $P$ (for contraction) we can see for what values of $h$ contraction occurs. For BAO we have that \[ P = \begin{pmatrix} I - h^{2}Q & hI\\ -h\eta Q & \eta I \end{pmatrix}\text{, where }Q = \begin{pmatrix} m & 0 \\ 0 & M \end{pmatrix}, \] with eigenvalues \begin{align*} &\frac{1}{2}\left(1 + \eta - h^{2}\lambda \pm \sqrt{-4\eta + \left(-1 - \eta + h^{2}\lambda\right)^{2}}\right), \end{align*} for $\lambda = m,M$. For stability and contraction it is necessary and sufficient that \[ \left(1 + \eta - h^{2}M\right) > 0, \quad \text{and} \quad \frac{1}{2}\left(1 + \eta - h^{2}\lambda + \sqrt{-4\eta + \left(-1 - \eta + h^{2}\lambda\right)^{2}}\right) < 1. \] The first condition requires $h < \sqrt{\frac{1 + \eta}{M}}$, where $\frac{1}{\sqrt{M}}< \sqrt{\frac{1 + \eta}{M}} < \frac{2}{\sqrt{M}}$. The second condition holds when \[1 - \eta + h^{2}\lambda > \sqrt{-4\eta + \left(-1 - \eta + h^{2}\lambda \right)^{2}}, \] which is equivalent to $4h^{2}\lambda > 0$, which trivially holds. Due to these stability conditions the best contraction rate possible is $\mathcal{O}\left(\frac{m}{M}\right)$, which coincides with our results. Further, the contraction rate is precisely determined by the largest eigenvalue of $P$ and simplifies to \[ c_{\mathcal{N}} = 1- \eta + h^2m - \sqrt{\left(1 - \eta + h^2 m \right)^2 - 4h^2m}. \] Moreover, it can be shown that $4 c(h) > c_{\mathcal{N}}$ for $h < 1/\sqrt{22m}$ and $\gamma \geq 4\sqrt{m}$. It is shown in \cite{monmarche2020almost}[Proposition 4] that for the continuous dynamics this condition on $\gamma$ is necessary. \end{example} \begin{theorem}[OAB] \label{Theorem:OAB} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential.
When $h < \min{\{\frac{1}{4\gamma},\frac{1-\eta}{\sqrt{6M}}}\}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the OAB chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||\left(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k}\right)||_{a,b} \leq \left(1 - c\left( h\right)\right)^{\frac{k}{2}}||\left(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0}\right)||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{\eta h}{\left(1-\eta\right)}$ and $c\left(h\right) = \frac{\eta h^{2}m}{4\left(1 - \eta\right)}$. \end{theorem} For the other splittings one could use the same techniques as above, or one can use the contraction results of BAO and OAB to obtain a contraction result for the remaining permutations by writing \begin{align*} (\mathcal{ABO})^{n} &= \mathcal{AB}(\mathcal{OAB})^{n-1}\mathcal{O}, \quad (\mathcal{BOA})^{n} = \mathcal{B}(\mathcal{OAB})^{n-1}\mathcal{OA}\\ (\mathcal{OBA})^{n} &= \mathcal{O}(\mathcal{BAO})^{n-1}\mathcal{BA}, \quad (\mathcal{AOB})^{n} = \mathcal{AO}(\mathcal{BAO})^{n-1} \mathcal{B}. \end{align*} However, by applying direct arguments as done for OAB and BAO one would achieve better prefactors. Let $\left(\Tilde{x}_{0}, \Tilde{v}_{0} \right) \in \mathbb{R}^{2d}$ and $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ be two initial conditions for a synchronous coupling of sample paths of the ABO splitting and $\overline{x}_{0} := \Tilde{x}_{0} - x_{0}$, $\overline{v}_{0} := \Tilde{v}_{0} - v_{0}$. In the following argument we let $Q$ be such that $\nabla U\left(\Tilde{x}_{0} + h \Tilde{v}_{0}\right) - \nabla U\left(x_{0} + h v_{0}\right) = Q\left(\overline{x}_{0} + h\overline{v}_{0}\right)$, by the mean value theorem. Using the notation $\Psi_{\rm{ABO}}$ to denote the one step map of the ABO discretization, we have that for $h < \min{\{\frac{1}{4\gamma},\frac{1-\eta}{\sqrt{6M}}}\}$ \begin{align*} &||\Psi_{\rm{ABO}}\left(\Tilde{x}_{k}\right) - \Psi_{\rm{ABO}}\left(x_{k}\right)||^{2}_{a,b} = ||\left(\Psi_{\rm{ABO}}\right)^{k}\left(\Tilde{x}_{0}\right) - \left(\Psi_{\rm{ABO}}\right)^{k}\left(x_{0}\right)||^{2}_{a,b} \\ &= ||\Psi_{\rm{O}} \circ \left(\Psi_{\rm{OAB}}\right)^{k-1} \circ \Psi_{\rm{AB}} \left(\Tilde{x}_{0}\right) - \Psi_{\rm{O}} \circ \left(\Psi_{\rm{OAB}}\right)^{k-1} \circ \Psi_{\rm{AB}}\left(x_{0}\right)||^{2}_{a,b}\\ &\leq 3\left(1 - c\left(h\right) \right)^{k-1} ||\Psi_{\rm{AB}}\left(\Tilde{x}_{0},\Tilde{v}_{0}\right) - \Psi_{\rm{AB}}\left(x_{0},v_{0}\right)||^{2}_{a,b}\\ &\leq 9\left(1 - c\left(h\right) \right)^{k-1}\left(\left(1 + 2h^{2}M^{2}a\right)||\overline{x}_{0}||^{2} + \left(h^{2} + a + 2h^{4} M^{2}a \right) ||\overline{v}_{0}||^{2} \right)\\ &\leq 27\left(1 - c\left(h\right) \right)^{k-1}||\left(\overline{x}_{0},\overline{v}_{0}\right)||^{2}_{a,b}, \end{align*} where we have used the norm equivalence introduced in Sec. \ref{Sec:Quadratic_Norm}. The same method of argument can be used for the other first order splittings. \section{Higher order splittings} We now consider higher order schemes which are obtained from the splittings introduced in Sec. \ref{sec:splittings}.
These schemes are weak order two and they are symmetric in the order of the operators, with repeated operators corresponding to multiple steps with half the stepsize. We will focus our attention on two popular splittings, BAOAB and ABOBA (or OBABO), as in \cite{leimkuhler2013rational}. Due to the fact that the modified Euclidean norms developed in the previous section differ between the first order splittings, we are not able to simply compose the results of, say, OBA and ABO to obtain contraction of OBABO. First we consider the BAOAB discretization, where we denote a BAOAB chain with initial condition $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ by $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$, defined by the update $\mathcal{BAOAB}$ (\ref{eq:BAO}), where $\left(\xi_{n}\right)_{n \in \mathbb{N}}$ are independent Gaussian random vectors. \begin{theorem}[BAOAB] \label{Theorem:BAOAB} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $h \leq \frac{1-\eta}{2\sqrt{M}}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the BAOAB chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||\left(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k}\right)||_{a,b} \leq 7\left(1 - c\left(h\right)\right)^{\frac{k-1}{2}}||\left(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0}\right)||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{h}{\left(1-\eta\right)}$ and \[ c\left(h\right) = \frac{1}{4}\left(\frac{\eta h^{2}m}{\left(1 - \eta \right)} + h^2 m\right) = \frac{h^{2}m}{4\left(1- \eta\right)}. \] \end{theorem} Next we consider the OBABO discretization, which has been studied in the recent work \cite{monmarche2021high}. In \cite{monmarche2022hmc}, Hamiltonian Monte Carlo is analysed as $\mathcal{O}\left(\mathcal{ABA}\right)^{L}\mathcal{O}$ for $L$ leapfrog steps, using a similar norm; however, the stepsize restrictions obtained there are at least $\mathcal{O}\left(m/L^{3/2}\right)$. We note that the OABAO scheme can also be analysed in our framework. We denote an OBABO chain with initial condition $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ by $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$, defined by the update $\mathcal{OBABO}$ (\ref{eq:BAO}), where $\left(\xi_{n}\right)_{n \in \mathbb{N}}$ are independent Gaussian random vectors. \begin{theorem}[OBABO]\label{Theorem:OBABO} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential.
When $h < \frac{1 - \eta}{\sqrt{4M}}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the OBABO chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||\left(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k}\right)||_{a,b} \leq 7\left(1 - c\left(h\right)\right)^{\frac{k-1}{2}}||\left(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0}\right)||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{h}{\left(1-\eta\right)}$ and \[ c\left(h\right) = \frac{h^{2}m}{4\left(1 - \eta\right)}. \] \end{theorem} \begin{remark} In \cite{dalalyan2020sampling} it is shown that the continuous dynamics converges with a rate of $\mathcal{O}(m/\gamma)$. There is a major difference in terms of contraction rate for large $\gamma$ between the rates achieved by BAOAB and OBABO and that of the continuous dynamics: as $\gamma \to \infty$, BAOAB and OBABO have contraction rates of $\mathcal{O}(h^{2}m)$, whereas the contraction rate of the continuous dynamics converges to zero. \end{remark} \begin{remark} In Theorem \ref{Theorem:BAOAB} and Theorem \ref{Theorem:OBABO} we have a prefactor of $7$ due to the fact that we have converted the problem of contraction into a simpler problem with one gradient evaluation; more specifically, for BAOAB we use the relation $(\mathcal{BAOAB})^{n} = \mathcal{BAO}\left(\mathcal{ABAO}\right)^{n-1}\mathcal{AB}$ and prove contraction for $\mathcal{ABAO}$, and similarly for OBABO. The prefactor comes from the remaining terms $\mathcal{BAO}$ and $\mathcal{AB}$. \end{remark} \section{Stochastic exponential Euler scheme} See \cite{durmus2021uniform} for an introduction to the stochastic exponential Euler scheme and a derivation; the scheme is based on keeping the gradient constant over a step and analytically integrating the OU process with this constant gradient, in effect combining the $\mathcal{B}$ and $\mathcal{O}$ steps of the previous splittings. This scheme is the one considered in \cite{cheng2018underdamped,dalalyan2020sampling}; it has gained a lot of attention in the machine learning community, and we can apply our methods to it. Similar schemes have also been considered in \cite{chandrasekhar1943stochastic,ermak1980numerical,skeel2002impulse} and it has been analysed in \cite{durmus2021uniform,shi2012convergence}. The scheme in the notation we have used is given by the update rule \begin{equation}\label{eq:SES} \begin{split} X_{k+1} &= X_{k} + \frac{1-\eta}{\gamma}V_{k} - \frac{\gamma h + \eta -1}{\gamma^{2}}\nabla U\left(X_{k}\right) + \zeta_{k+1},\\ V_{k+1} &= \eta V_{k} - \frac{1 - \eta}{\gamma} \nabla U\left(X_{k}\right) + \omega_{k+1}, \end{split} \end{equation} where \[ \zeta_{k+1} = \sqrt{2\gamma}\int^{h}_{0} \frac{1 - e^{-\gamma\left( h - s\right)}}{\gamma} dW_{kh + s}, \qquad \omega_{k+1} = \sqrt{2\gamma} \int^{h}_{0} e^{-\gamma\left( h - s\right)} dW_{kh + s}. \] $\left(\zeta_{k},\omega_{k}\right)_{k \in \mathbb{N}}$ are Gaussian random vectors with covariance matrices stated in \cite{durmus2021uniform}.
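A minimal Python sketch of one SES step may help fix the structure. Rather than sampling $(\zeta,\omega)$ from their exact joint covariance (stated in \cite{durmus2021uniform}), the sketch approximates the two stochastic integrals by Riemann sums over a shared fine partition of the Brownian path, so that synchronous coupling simply means reusing the same increments; the potential and all parameter values are illustrative.
\begin{verbatim}
import numpy as np

def ses_step(x, v, grad_U, gamma, h, dW):
    """One stochastic exponential Euler step, cf. the SES update above.

    dW: array of fine Brownian increments partitioning [0, h]; the
    integrals zeta and omega are approximated by Riemann sums, so their
    joint law approaches the exact one as len(dW) grows.
    """
    n = len(dW)
    s = (np.arange(n) + 0.5) * (h / n)     # midpoints of the subintervals
    eta = np.exp(-gamma * h)
    g = grad_U(x)
    k_omega = np.exp(-gamma * (h - s))     # velocity noise kernel
    k_zeta = (1.0 - k_omega) / gamma       # position noise kernel
    zeta = np.sqrt(2 * gamma) * np.sum(k_zeta[:, None] * dW, axis=0)
    omega = np.sqrt(2 * gamma) * np.sum(k_omega[:, None] * dW, axis=0)
    x_new = x + (1 - eta) / gamma * v - (gamma * h + eta - 1) / gamma**2 * g + zeta
    v_new = eta * v - (1 - eta) / gamma * g + omega
    return x_new, v_new

# Synchronously coupled chains simply reuse the same dW in both calls.
rng = np.random.default_rng(0)
gamma, h, n_sub, d = 10.0, 0.01, 200, 2
grad_U = lambda x: x                       # standard Gaussian potential
dW = rng.normal(scale=np.sqrt(h / n_sub), size=(n_sub, d))
print(ses_step(np.ones(d), np.zeros(d), grad_U, gamma, h, dW))
\end{verbatim}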
Now we can couple two trajectories which have common noise $\left(\zeta_{k},\omega_{k}\right)_{k \in \mathbb{N}}$; then we can obtain contraction rates by the previously introduced methods. We denote an SES chain with initial condition $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ by $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$, defined by the update rule (\ref{eq:SES}), where $\left(\xi_{n}\right)_{n \in \mathbb{N}}$ denotes the sequence of independent Gaussian random vectors $\left(\zeta_{n},\omega_{n}\right)_{n \in \mathbb{N}}$. \begin{theorem}[Stochastic Euler Scheme]\label{Theorem:SES} Assume $U$ is an $m$-strongly convex and $M$-$\nabla$Lipschitz potential. When $\gamma \geq 5\sqrt{M}$ and $h \leq \frac{1}{2\gamma}$, we have that for all initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, and for any sequence of standard normal random variables $\left(\xi_{n}\right)_{n \in \mathbb{N}}$, the SES chains $\left(x_{n},v_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ and $\left(\Tilde{x}_{n},\Tilde{v}_{n},\xi_{n}\right)_{n \in \mathbb{N}}$ with initial conditions $\left(x_{0},v_{0}\right) \in \mathbb{R}^{2d}$ and $\left(\Tilde{x}_{0},\Tilde{v}_{0}\right)\in \mathbb{R}^{2d}$, respectively, satisfy \[||\left(x_{k} - \Tilde{x}_{k},v_{k} - \Tilde{v}_{k}\right)||_{a,b} \leq \left(1 - c\left(h\right)\right)^{\frac{k}{2}}||\left(x_{0} - \Tilde{x}_{0},v_{0} - \Tilde{v}_{0}\right)||_{a,b}, \] where $a = \frac{1}{M}$, $b = \frac{1}{\gamma}$ and \[ c\left(h\right) = \frac{mh}{4\gamma}. \] \end{theorem} \section{Overdamped Limit} We will now compare and analyze how the different schemes behave in the high friction limit, starting with the first order schemes. It is a desirable property that the high friction limit of a scheme is a discretization of the overdamped dynamics; then, if a user of such a scheme sets the friction parameter $\gamma$ too large, they will not suffer from the $\mathcal{O}(1/\gamma)$ scaling of the convergence rate. We will call schemes with this desirable property $\gamma$-limit convergent (GLC); of the schemes we have analysed, only BAOAB and OBABO are GLC. \subsection{BAO} If we consider the update rule of the BAO scheme \begin{align*} x_{k+1} = x_{k} + h\left(v_{k} - h \nabla U(x_{k})\right), \quad v_{k+1} = \eta v_{k} - h\eta \nabla U(x_{k}) + \sqrt{1 - \eta^{2}}\xi_{k+1}, \end{align*} and take the limit as $\gamma \to \infty$ we obtain \begin{align*} x_{k+1} &= x_{k} - h^{2} \nabla U(x_{k}) + h \xi_{k}, \end{align*} since $v_{k} = \xi_{k}$ in the limit. This is simply the Euler-Maruyama scheme with stepsize $h^{2}/2$ for the potential $\Tilde{U} := 2U$, which imposes the stepsize restriction $h^{2} \leq 2/M$ and hence is consistent with our analysis. Further, if we take the limit of the contraction rate and the modified Euclidean norm we have \[ \lim_{\gamma \to \infty} c\left(h\right) = \frac{h^{2}m}{4}, \qquad \lim_{\gamma \to \infty} ||x||^{2} + 2b\langle x,v \rangle + a||v||^{2} = ||x||^{2} + 2h \langle x,v \rangle + \frac{1}{M}||v||^{2}, \] which is again consistent (up to a constant) with the convergence rates achieved in Sec. \ref{sec:conv_overdamped}, and the norm is essentially the Euclidean norm when considered on the overdamped process as $\overline{v} = 0$. Due to the fact that the potential is rescaled in the limit, this is not a discretization of the overdamped dynamics.
\subsection{OAB} If we consider the update rule of the OAB scheme \begin{align*} x_{k+1} &= x_{k} + h\eta v_{k} + h \sqrt{1 - \eta^{2}}\xi_{k+1},\\ v_{k+1} &= \eta v_{k} + \sqrt{1 - \eta^{2}}\xi_{k+1} - h \nabla U(x_{k} + h\eta v_{k} + h \sqrt{1 - \eta^{2}}\xi_{k+1}), \end{align*} and take the limit as $\gamma \to \infty$, we obtain the update rule $x_{k+1} = x_{k} + h \xi_{k+1}$; therefore the overdamped limit is not inherited by the scheme, and further we do not expect contraction. This is consistent with our analysis of OAB and our contraction rate, which tends to $0$ in the high friction limit. \subsection{BAOAB} If we consider the update rule of the BAOAB scheme \begin{align*} x_{k+1} &= x_{k} + \frac{h}{2}\left(1 + \eta\right)v_{k} - \frac{h^{2}}{4}\left(1 + \eta\right) \nabla U(x_{k}) + \frac{h}{2}\sqrt{1 - \eta^{2}}\xi_{k+1},\\ v_{k+1} &= \eta \left(v_{k} - \frac{h}{2}\nabla U(x_{k})\right) + \sqrt{1 - \eta^{2}}\xi_{k+1} - \frac{h}{2}\nabla U(x_{k+1}), \end{align*} and take the limit as $\gamma \to \infty$ we obtain \begin{align*} x_{k+1} &= x_{k} - \frac{h^{2}}{2} \nabla U(x_{k}) + \frac{h}{2}\left(\xi_{k} + \xi_{k+1}\right), \end{align*} which is simply the LM scheme with stepsize $h^{2}/2$ (as originally noted in \cite{leimkuhler2013rational}), which imposes stepsize restrictions $h^{2} \leq 2/M$ and hence is consistent with our analysis. Further, if we take the limit of the contraction rate and the modified Euclidean norm we have \[ \lim_{\gamma \to \infty} c\left(h\right) = \frac{h^{2}m}{4}, \qquad \lim_{\gamma \to \infty} ||x||^{2} + 2b\langle x,v \rangle + a||v||^{2} = ||x||^{2} + 2h \langle x,v \rangle + \frac{1}{M}||v||^{2}, \] which is again consistent with the convergence rates achieved in Sec. \ref{sec:conv_overdamped}, and the modified Euclidean norm is essentially the Euclidean norm when considered on the overdamped process as $\overline{v} = 0$. \subsection{OBABO} If we consider the update rule of the OBABO scheme \begin{align*} x_{k+1} &= x_{k} + h\eta v_{k} + h\sqrt{1 - \eta^2}\xi_{1,k+1} - \frac{h^{2}}{2}\nabla U(x_{k}),\\ v_{k+1} &= \eta\left(\eta v_{k} + \sqrt{1 - \eta^2}\xi_{1,k+1} - \frac{h}{2}\nabla U(x_{k}) - \frac{h}{2}\nabla U(x_{k+1})\right) + \sqrt{1 - \eta^{2}}\xi_{2,k+1}, \end{align*} where now $\eta = \exp{\left(-\gamma h /2\right)}$ and, for ease of notation, we have labelled the two noises of one step $\xi_{1}$ and $\xi_{2}$. Taking the limit as $\gamma \to \infty$ we obtain \begin{align*} x_{k+1} &= x_{k} -\frac{h^{2}}{2}\nabla U(x_{k}) + h \xi_{1,k+1}, \end{align*} which is the Euler-Maruyama scheme for overdamped Langevin dynamics with stepsize $h^{2}/2$, which has convergence rate $\mathcal{O}\left(h^{2}m\right)$. This is consistent with our analysis of OBABO and our contraction rate, which tends to $h^{2}m/4$ in the high friction limit. \subsection{SES} If we consider the limit as $\gamma \to \infty$ of the scheme (\ref{eq:SES}) we obtain the update rule $x_{k+1} = x_{k}$; therefore the overdamped limit is not inherited by the scheme, and further we do not expect contraction. This is consistent with our analysis of the stochastic exponential Euler scheme, whose contraction rate tends to $0$ in the high friction limit.
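These high friction observations, and the behaviour reported in the Discussion below (Fig. \ref{Fig:1}), are straightforward to check numerically. The following Python sketch is illustrative only: it implements the splitting maps (\ref{eq:BAO}) and tracks the distance between two synchronously coupled trajectories, with the potential, schemes and parameter values mirroring the setup of Fig. \ref{Fig:1}.
\begin{verbatim}
import numpy as np

def splitting_step(x, v, scheme, grad_U, gamma, h, noises):
    """One step of a kinetic Langevin splitting scheme.

    Repeated letters in `scheme` (e.g. "BAOAB") are applied with
    stepsize h divided by their number of occurrences; `noises`
    supplies one standard normal vector per occurrence of "O".
    """
    counts = {c: scheme.count(c) for c in set(scheme)}
    noise_iter = iter(noises)
    for c in scheme:
        dt = h / counts[c]
        if c == "B":
            v = v - dt * grad_U(x)
        elif c == "A":
            x = x + dt * v
        else:  # "O": exact Ornstein-Uhlenbeck step
            eta = np.exp(-gamma * dt)
            v = eta * v + np.sqrt(1.0 - eta**2) * next(noise_iter)
    return x, v

def coupled_distance(scheme, gamma, h=0.25, n_steps=200, d=2, seed=1):
    """Final distance between two synchronously coupled chains for the
    standard Gaussian U(x) = |x|^2 / 2, as in Fig. 1."""
    rng = np.random.default_rng(seed)
    grad_U = lambda x: x
    x1, v1 = -np.ones(d), np.zeros(d)      # initial conditions [-1,-1]
    x2, v2 = np.ones(d), np.zeros(d)       # and [1,1]
    n_O = scheme.count("O")
    for _ in range(n_steps):
        xi = rng.normal(size=(n_O, d))     # common noise for both chains
        x1, v1 = splitting_step(x1, v1, scheme, grad_U, gamma, h, xi)
        x2, v2 = splitting_step(x2, v2, scheme, grad_U, gamma, h, xi)
    return np.linalg.norm(np.r_[x1 - x2, v1 - v2])

for scheme in ["BAO", "OAB", "BAOAB", "OBABO"]:
    dists = [coupled_distance(scheme, g) for g in (0.01, 4.0, 1e4)]
    print(scheme, ["%.2e" % d for d in dists])
\end{verbatim}
The GLC schemes BAOAB and OBABO, and also BAO, contract in the high friction run, whereas the OAB distance stalls, consistent with the limits derived above.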
\section{Discussion} \begin{figure}[H] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{0_01.pdf} \caption{Low friction} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{4.pdf} \caption{Moderate friction} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{10000.pdf} \caption{High friction} \end{subfigure} \caption{Contraction of two kinetic Langevin trajectories $x_{1}$ and $x_{2}$ with initial conditions $[-1,-1]$ and $[1,1]$ for a $2$-dimensional standard Gaussian with stepsize $h = 0.25 = 1/(4\sqrt{M})$.} \label{Fig:1} \end{figure} We tested our observations numerically in Fig. \ref{Fig:1} with a $2$-dimensional standard Gaussian. Fig. \ref{Fig:1} is consistent with our analysis: all schemes are stable when $\gamma \approx 4\sqrt{M}$, and in the high friction regime EM, OAB and SES behave poorly compared to BAOAB and BAO. In the low friction regime EM and SES again perform poorly compared to the other schemes. In \cite{dalalyan2020sampling} it is shown that the optimal convergence rate for the continuous time dynamics is $\mathcal{O}(m/\gamma)$, so our contraction rates for the discretizations are consistent up to a constant. However, for some of the schemes considered, for example BAOAB and OBABO, the scheme inherits convergence to the overdamped Langevin dynamics (without time rescaling), and this is reflected in our convergence rate estimates. Therefore, in MCMC applications such a scheme does not suffer from the $1/\gamma$ scaling of the convergence rate if the user picks a friction parameter which is too high. This robustness with respect to the friction parameter is shown in Fig. \ref{Fig:1}. The constants in our arguments can be improved by sharper bounds and a more careful analysis, but the restriction on $\gamma$ is consistent with other works on synchronous coupling for the continuous time Langevin diffusions \cite{bolley2010trend,cheng2018underdamped,dalalyan2020sampling,deligiannidis2021randomized,zajic2019non}. Further, it is shown in \cite{monmarche2020almost}[Proposition 4] that the continuous time process yields Wasserstein contraction of synchronous coupling for all $M$-$\nabla$Lipschitz and $m$-strongly convex potentials $U$ if and only if $M - m < \gamma (\sqrt{M} + \sqrt{m})$ for the norms that we considered. This condition is equivalent to $\gamma > \sqrt{M} - \sqrt{m}$, which is $\mathcal{O}(\sqrt{M})$ when $M$ is much larger than $m$. It may be possible to achieve convergence rates for small $\gamma$ by using a more sophisticated argument like that of \cite{eberle2019couplings}. Using a different Lyapunov function or other techniques may make it possible to extend these results to all $\gamma > 0$ \cite{durmus2021uniform,qin2022geometric}, following results for the continuous case \cite{eberle2019couplings}, but this is beyond the scope of this paper. The restrictions on the stepsize $h$ are tight for the optimal contraction rate for EM and BAO, and result in stability conditions of $\mathcal{O}\left(1/\gamma\right)$ for EM and SES. We have also shown that BAO, OBA, AOB, BAOAB and OBABO have convergence guarantees for stepsizes of $\mathcal{O}(1/\sqrt{M})$, and that BAOAB and OBABO have the desirable GLC property, which is not common amongst the schemes we studied. For the choice of parameters which achieves the optimal contraction rate we derive $\mathcal{O}(m/M)$ rates of contraction, which are sharp up to a constant, and we achieve this for every scheme that we studied.
\section*{Acknowledgments} The authors would like to thank Kostas Zygalakis for helpful comments on this work. The authors acknowledge the support of the Engineering and Physical Sciences Research Council Grant EP/S023291/1 (MAC-MIGS Centre for Doctoral Training). \bibliographystyle{siamplain}
\section{Introduction} \label{intro} The exchange of energy between a gas in contact with a surface is often characterized in terms of the energy accommodation coefficient. Although the origins of the concept of an energy accommodation coefficient can be traced to J. C. Maxwell,~\cite{max} it is Knudsen who gave it a proper physical definition under what are now known as the conditions of rarefied gas dynamics.~\cite{Knudsen-09,Knudsen-10,Knudsen-15,Knudsen-34} The Knudsen energy accommodation coefficient has values that range from zero to unity, with a value of unity arising if the gas achieves equilibrium with the surface after colliding with it and a value of zero implying that no energy at all is transferred. Early measurements of the accommodation coefficients for rare gases in contact with a tungsten surface were carried out by Roberts, although it soon became apparent that his experiments were not carried out with sufficiently clean surfaces and thus his data did not represent the values expected for the gas-surface interaction with a clean metal.~\cite{roberts} The work of Roberts did, however, stimulate early theoretical investigations, especially for describing the interaction of He atoms with surfaces using quantum mechanics.~\cite{Jackson,Devonshire-36,Devonshire-37} In the 1960s, with the advent of high vacuum technology and good surface cleaning techniques, reliable measurements of the accommodation coefficient for rare gases on metal surfaces became available from two different groups, that of Thomas et al.~\cite{Thomas1,Thomas2} and that of Kouptsidis and Menzel.~\cite{Menzel1,Menzel2} An extensive review of work pertaining to accommodation coefficients and a very useful compendium of available experimental data has been presented by Saxena and Joshi.~\cite{Saxena} Other extensive reviews have been given by Goodman and Wachman.~\cite{GoodmanProgSurfSci-1974,GoodmanErice,Goodman-76} The purpose of this paper is to present calculations for the accommodation coefficients of the heavy rare gases with metal surfaces using a recently developed classical theory for atom-surface collisions that includes both direct scattering and trapping-desorption processes. Similar classical scattering theories have been applied previously to calculations of the accommodation coefficient, and reasonable agreement with measurements for the heavy rare gases on clean tungsten surfaces was obtained; however, these calculations included only direct scattering processes and did not properly include trapping and subsequent desorption by the physisorption well of the interaction potential.~\cite{Manson-Muis} The trapping-desorption fraction is that portion of an incident beam of gas particles directed towards a surface that gets trapped by the physisorption well during the initial collision process. If this fraction remains in the physisorption well and does not go on to become permanently adsorbed or chemisorbed (which is the expected case for rare gas atoms if the temperature is not too low), these physisorbed atoms will eventually desorb, and the standard assumption is that the trapping-desorption fraction leaves the surface in a thermal energy distribution that is nearly in equilibrium at the temperature of the surface. Under such an assumption the trapping-desorption fraction is expected to enhance the accommodation coefficient and cause it to have values closer to unity.
On the other hand, the direct scattering fraction tends to exchange less energy with the surface, and its contribution to the accommodation coefficient is expected to cause it to have values less than unity. The gas-surface scattering theory applied in this paper uses classical mechanics. In the initial collision with the surface a gas particle will either be scattered back into the continuum states (the direct scattering fraction) or will be trapped in the physisorption well of the interaction potential. Those particles that are trapped can be subdivided into two classes: those that have negative total energy, and those that have positive total energy but are traveling at angles so close to the surface that they cannot escape from the well. This latter class is sometimes called the chattering fraction. In the theory used here the trapped particles are tracked as they make further collisions with the surface, and with each subsequent collision a fraction remains trapped but a fraction receives enough energy to escape into the continuum states. These subsequent collisions are treated with an iteration algorithm that can be carried out to very large numbers of iterations, until virtually all of the initially trapped particles have desorbed. The theory has demonstrated clearly the conditions for the trapping-desorption fraction to leave the surface in equilibrium, and it has also explained experimental data for rare gas scattering under well-defined conditions for which both a direct scattering contribution and a trapping-desorption fraction were observed as separate peaks in the energy-resolved scattering spectra.~\cite{Fan-07} As expected, for the accommodation of He and Ne at a tungsten surface, where quantum mechanics should be dominant in the scattering process, the present classical theory is unable to explain the measured experimental data. However, good agreement with data is obtained for the heavy rare gases Ar, Kr and Xe. \section {Theory} \label{theory} The energy accommodation coefficient $\alpha_E(T_S,T_G) $ is the average energy exchanged by a gas in contact with a surface, normalized by the maximum thermodynamically allowed energy that could be exchanged: \begin{equation} \label{eqacc} \alpha_E(T_S,T_G) \; = \; \frac{\overline{E_f}-<E_i>}{<E_f>-<E_i>} \; = \; \frac{\overline{E_f}-2 k_B T_G}{2 k_B T_S-2 k_B T_G} \; . \end{equation} In Eq.~(\ref{eqacc}) $T_G$ is the temperature of the gas, $T_S$ is the temperature of the surface, $k_B$ is Boltzmann's constant, and $\overline{E_f}$ is the average energy of a gas particle after making a collision with the surface. The expression on the far right hand side of Eq.~(\ref{eqacc}) is obtained under the assumption that both the gas and surface are in equilibrium at their respective temperatures; thus the average energy of the incident gas is $<E_i> = 2 k_B T_G $ and the average energy of the gas if it should come into equilibrium with the surface would be $<E_f> = 2 k_B T_S $. These average energies are obtained from the Knudsen distribution for a gas in equilibrium, sometimes called the flux-corrected Maxwell-Boltzmann distribution \begin{equation} \label{MB} \frac{dP^{K}({\bf p}_i, T_G )}{ d E_i \: d \Omega_i} \; = \; \frac{E_i \cos \theta_i}{\pi (k_B T_G)^2} \exp \left\{ \frac{-E_i}{k_B T_G} \right\} \; .
\end{equation} If the gas is initially in equilibrium, then the average final energy after a collision with the surface is given by \begin{equation} \label{ef} \overline{E_f} \; = \; \int_0^\infty d E_i \int_{2 \pi} d \Omega_i \int_0^\infty d E_f \int_{2 \pi} d \Omega_f ~ E_f \; \frac{dP^{K}({\bf p}_i, T_G )}{ d E_i \: d \Omega_i} \; \frac{dR({\bf p}_f,{\bf p}_i, T_S )}{ d E_f \: d \Omega_f} \; , \end{equation} where $ {dR({\bf p}_f,{\bf p}_i, T_S )} / { d E_f \: d \Omega_f} $ is the differential reflection coefficient giving the probability per unit final energy and final solid angle that a gas particle initially in momentum state ${\bf p}_i $ will make a transition to the state ${\bf p}_f $ as a result of the interaction with the surface. The differential reflection coefficient must obey the two conditions of unitarity and detailed balancing, as does also the Knudsen distribution of Eq.~(\ref{MB}). The condition of unitarity means that the number of gas particles is conserved, i.e., for a given initial momentum state ${\bf p}_i $ the integral of the differential reflection coefficient over all final energies and angles is normalized to unity. It is convenient to define an accommodation coefficient that is a function of a single temperature by taking the limit as the surface and gas temperatures approach the same value. This results in the equilibrium energy accommodation coefficient defined as \begin{equation} \label{EAC} \alpha_E(T) \; = \; \lim_{T_G,\, T_S \rightarrow T} \; \alpha_E(T_S,T_G) \; . \end{equation} All calculations in this paper will be for $ \alpha_E(T)$ since most experimental data for the accommodation of rare gases on clean surfaces are reported in this form. Using the condition of detailed balancing, the temperature limit of Eq.~(\ref{EAC}) can be readily carried out, leading to the final form \begin{equation} \label{eeac} \alpha_E(T)=\frac{1}{4(k_BT)^2}\int^\infty_0dE_i \int _{2 \pi}{ d \Omega _i} \int^\infty_0dE_f \int _{2 \pi}{ d \Omega _f} ~(E_f-E_i)^2 ~ \frac{dP^K({\bf p}_i, T)}{dE_i d\Omega _i} \frac{dR ({\bf p}_f ,{\bf p}_i, T )}{dE_f d\Omega _f} \; . \end{equation}
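Although the calculations reported below involve five- and six-dimensional integrals, the structure of Eq.~(\ref{eeac}) is easy to illustrate numerically. The following Python sketch collapses the angular variables and evaluates an energy-only analogue of Eq.~(\ref{eeac}) with a Gauss-Laguerre rule, for a toy scattering kernel of our own choosing; the kernel is a hypothetical stand-in for the differential reflection coefficient, row-normalized so that unitarity holds exactly on the grid but satisfying detailed balancing only approximately, so the numbers are purely illustrative.
\begin{verbatim}
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Gauss-Laguerre rule: int_0^inf f(u) exp(-u) du ~ sum_k w[k] f(u[k]).
# Energies are in units of k_B*T, so the flux-corrected (Knudsen)
# weight of Eq. (MB), reduced to the energy variable alone, is u*exp(-u).
u, w = laggauss(80)

def alpha_E(r):
    """Energy-only analogue of Eq. (eeac).

    r(uf, ui) is a model kernel with the exp(-uf) decay factored out,
    i.e. R(E_f|E_i) ~ r(E_f, E_i)*exp(-E_f) before row normalization.
    """
    UI, UF = u[:, None], u[None, :]
    R = r(UF, UI)                                         # R[i, f]
    R = R / (R * w[None, :]).sum(axis=1, keepdims=True)   # unitarity
    return 0.25 * (w[:, None] * w[None, :] * (UF - UI) ** 2 * UI * R).sum()

# Full accommodation: the scattered atoms leave in the Knudsen
# distribution, independently of E_i, and Eq. (eeac) gives alpha = 1.
print(alpha_E(lambda uf, ui: uf + 0 * ui))                # -> 1.000

# A toy partial-exchange kernel: the narrower the Gaussian, the less
# energy is exchanged per collision and the smaller alpha becomes.
for s in (0.5, 1.0, 2.0):
    print(s, alpha_E(lambda uf, ui, s=s:
                     uf * np.exp(-((uf - ui) ** 2) / (2 * s ** 2))))
\end{verbatim}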
At this point the only remaining quantity needed for evaluating the accommodation coefficient is the differential reflection coefficient $ {dR({\bf p}_f,{\bf p}_i, T_S )} / { d E_f \: d \Omega_f} $. This provides a complete description of the scattering process, which means that it contains not only the direct scattering arising from a single collision or a small number of collisions with the surface, but also the contributions of those particles that are initially trapped and then subsequently desorbed. The present authors have recently developed a complete theory of atom-surface scattering that includes both contributions.~\cite{Fan-07} This theory is based on an initial differential reflection coefficient for a single surface collision $ {dR^0({\bf p}_f,{\bf p}_i, T_S )} / { d E_f \: d \Omega_f} $. This initial collision results in a scattered intensity that consists of a fraction that is the direct scattering contribution which leaves the surface and a fraction that is trapped in the physisorption well of the interaction potential. The trapped fraction is followed inside the well, and when those particles have another collision, some escape into the continuum and some remain trapped and go on to have further collisions with similar consequences. By dividing all trapped particles into a distribution of small energy and angular bins they can be followed through many collisions until ultimately essentially all of them have escaped into the continuum and desorbed. This process can be written schematically as the following equation \begin{eqnarray} \label{iter} \frac{dR({\bf p}_f,{\bf p}_i)}{d {E}_f d \Omega_f} ~=~ \frac{dR^0({\bf p}_f,{\bf p}_i)}{d {E}_f d \Omega_f} ~+~ \int d E_b d \Omega_b ~ \frac{dR^0({\bf p}_f,{\bf p}_b)}{d {E}_f d \Omega_f} ~ \frac{dR^0({\bf p}_b,{\bf p}_i)}{d {E}_b d \Omega_b} \\ \nonumber ~+~ \int d E_b d \Omega_b ~ \frac{dR^0({\bf p}_f,{\bf p}_b)}{d {E}_f d \Omega_f} ~ \frac{dR^1({\bf p}_b,{\bf p}_i)}{d {E}_b d \Omega_b} ~+~ \ldots \\ \nonumber ~+~ \int d E_b d \Omega_b ~ \frac{dR^0({\bf p}_f,{\bf p}_b)}{d {E}_f d \Omega_f} ~ \frac{dR^{n-1}({\bf p}_b,{\bf p}_i)}{d {E}_b d \Omega_b} ~, \end{eqnarray} where $ {dR^n({\bf p}_b,{\bf p}_i, T_S )} / { d E_b \: d \Omega_b} $ is the differential reflection coefficient giving the distribution of particles remaining trapped in the bound states after $n$ collisions, and the intermediate integrations in the higher order terms are carried out only over angles and energies that pertain to the trapped fraction. The process described in Eq.~(\ref{iter}) is readily developed into an iterative scheme in which the scattered distribution remaining in the well after the $n$th collision becomes the source for the next collision.
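As a concrete sketch of this bookkeeping, consider a discretized version in which the trapped population is a vector over energy and angular bins, and the single-collision differential reflection coefficient is split into a trapped-to-continuum block and a trapped-to-trapped block. The block names and the toy numbers below are hypothetical stand-ins for the binned $dR^0$ of the actual calculation; the accumulated spectrum is precisely the series of Eq.~(\ref{iter}).
\begin{verbatim}
import numpy as np

def iterate_trapping(R0_esc, R0_trap, p0_trap, tol=1e-12, max_iter=100000):
    """Schematic version of Eq. (iter).

    R0_esc[f, b]:   probability that a particle in trapped bin b escapes
                    into continuum bin f in one wall collision.
    R0_trap[b2, b]: probability that it ends up in trapped bin b2 instead.
    p0_trap[b]:     population trapped after the initial collision.
    Column sums of the two stacked blocks must not exceed 1 (unitarity).
    """
    spectrum = np.zeros(R0_esc.shape[0])
    p = p0_trap.astype(float)
    for _ in range(max_iter):
        spectrum += R0_esc @ p      # fraction desorbing in this collision
        p = R0_trap @ p             # fraction remaining in the well
        if p.sum() < tol:           # virtually everything has desorbed
            break
    return spectrum

# Toy example: two trapped bins feeding one continuum bin; since the
# column sums equal 1, the entire initial population eventually desorbs.
R0_esc  = np.array([[0.3, 0.5]])
R0_trap = np.array([[0.4, 0.2],
                    [0.3, 0.3]])
print(iterate_trapping(R0_esc, R0_trap, np.array([0.7, 0.3])))  # -> [1.0]
\end{verbatim}
Because the trapped-to-trapped block has column sums strictly below one once escape is possible, the trapped mass decreases geometrically and the loop terminates.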
The details of the calculation of the differential reflection coefficient of Eq.~(\ref{iter}) are given in Ref.~\cite{Fan-07}, where it is shown that this procedure for calculating the trapping-desorption fraction can not only explain the physical behavior of the trapping-desorption fraction, but can also explain experimental data for energy-resolved scattering spectra of rare gases taken under conditions in which there are distinct and clearly exhibited features due to both direct scattering and trapping-desorption. The zeroth order differential reflection coefficient is chosen to be that due to a potential consisting of an attractive square well in front of a smooth repulsive wall whose surface vibrates due to the thermal motion of the substrate atoms. The square well has depth $D$ and width $b$. For a classical mechanical treatment a square well is a reasonable approximation since it describes correctly the increase in energy and the refraction towards more normal incidence angles when a particle enters the physisorption potential. The width $b$ does not affect the scattered intensities provided it is larger than the selvedge region of the surface, i.e., as long as it is larger than the surface vibrational displacements. However, trapping times are proportional to $b$. This differential reflection coefficient has been shown to be useful in explaining a variety of atom-surface scattering experiments and is given by~\cite{model1,model2,model3,model4,model5}: \begin{eqnarray} \label{Mx} \frac{dR^0({\bf p}_f,{\bf p}_i; T_s)}{dE_f d\Omega _f} \\ \nonumber = \frac {m^2 {v_R^2 } \left| {\bf p}_f \right|} {4\pi ^3 \hbar ^2 p_{iz} {S_{u.c.} } N_D^0 } \left| {\tau _{fi} } \right|^2 \left(\frac{\pi }{k_B T_s \Delta E_0 }\right)^{3/2} \exp\left\{ { - \frac{(E_f - E_i + \Delta E_0 )^2+{2v_R^2 {\bf P}^2} }{4k_B T_s \Delta E_0 } } \right\} ~, \end{eqnarray} where $\Delta E_0 = ({\bf p}_f - {\bf p}_i)^2 /2M $ is the recoil energy, $p_{iz}$ is the $z$ component of the incident momentum, $\left| {\tau _{fi} } \right|^2$ is the form factor of the scattering, $N_D^0$ is the normalization coefficient, ${\bf P} = ({\bf P}_f - {\bf P}_i)$ is the parallel momentum exchange, and $S_{u.c.}$ is the area of a surface unit cell. The quantity $v_R$ is a weighted velocity of sound parallel to the surface. It is expected to have a value of the order of the Rayleigh phonon speed and can be calculated from the complete surface phonon spectral density; however, it is usually taken to be a constant.~\cite{model1,model2,model3} The amplitude $ {\tau _{fi} } $ of the form factor appearing in Eq.~(\ref{Mx}) is in general the transition matrix of the elastic interaction potential extended off of the energy shell to account for inelastic scattering. A good approximation that has been extensively used is the first-order distorted wave Born approximation matrix element, which for a strongly repulsive surface is given by~\cite{Goodman-76} \begin{equation} \label{My} \tau _{fi} ~=~ 4 p_{fz} p_{iz} / m \; . \end{equation} The main numerical operations involved in carrying out calculations are the multiple integrals involved in the accommodation coefficient of Eq.~(\ref{eeac}) and in the iterative evaluation of the differential reflection coefficient of Eq.~(\ref{iter}). In each case these are six-dimensional integrations, although, if the surface is azimuthally symmetric, as is the case for the potential used here, the accommodation coefficient reduces to a five-dimensional integral. The angular integrations are carried out using Gauss-Legendre algorithms and the energy integrals with Gauss-Laguerre algorithms. \section{Results}\label{result} Comparisons with experimental data for calculations using the theory and interaction potential described above are presented for the heavy rare gases in contact with a tungsten surface in Figures~\ref{arwv500}-\ref{xewv500}. The data from Thomas {\em et al.} are shown as open circles and the data from Kouptsidis and Menzel are shown as filled circles. Fig.~\ref{arwv500} shows the measured equilibrium accommodation coefficient for Ar on W compared to four curves calculated with different well depths of 10, 20, 25 and 50 meV. The velocity parameter is $v_R=500$ m/s. The best agreement with the data is for a well depth of approximately $D=25$ meV. A table of measured and theoretically calculated well depths for the Ar/W system is given in Ref.~\cite{Manson-Muis}, which shows that this value of 25 meV is somewhat smaller than expected. This table is based on values presented in Ref.~\cite{Ref68ofMuis-Manson}; the measurements, primarily obtained from thermal desorption experiments, range from 78 to 127 meV, while calculated values are somewhat smaller, ranging from 33 to 47 meV. If the velocity parameter $v_R$ is made somewhat larger, the calculations for a larger well depth approach the data more closely at large temperatures, but the agreement at low temperatures becomes worse. Although the well depth predicted by these calculations is somewhat smaller than expected, it is considerably larger than the value of 15 meV used previously to fit the data with calculations based solely on the direct scattering.~\cite{Manson-Muis} Thus it becomes clear that including the trapping-desorption contribution in the calculation significantly increases the value of the accommodation. Fig.~\ref{krwv500} shows the accommodation coefficient data for Kr/W compared to calculations carried out for two different well depths, 50 and 70 meV.
The best agreement with the data is for a well depth of approximately $D=50$ meV. Larger well depths lead to larger trapping-desorption fractions, and since the trapping-desorption fraction is nearly in equilibrium this tends to enhance the accommodation coefficient. As in the case of Ar/W, calculations with $v_R$ larger than 500 m/s will tend to decrease the accommodation coefficient for a given well depth, but at the expense of poorer overall agreement with the data. Estimated well depths for the Kr/W system have been obtained only from thermal desorption experiments, and these values range from 195 to 247 meV as tabulated in Ref.~\cite{Ref68ofMuis-Manson}. Thus the value used here to give a best fit with the data is small in comparison to the thermal desorption measurements, but again, as for the Ar/W system, it is twice as large as the value obtained for calculations based only on direct scattering.~\cite{Manson-Muis} Fig.~\ref{xewv500} shows similar comparisons with data for the case of Xe/W. Calculations for two well depths, 100 and 150 meV, are shown, again with $v_R=500$ m/s. Both of these well depths are somewhat smaller than the independently measured thermal desorption value of 180 meV.~\cite{Ref68ofMuis-Manson} It is interesting to note that the calculations for $D=150$ meV and temperatures below $T_S=150$ K show that essentially all of the gas atoms are trapped in the physisorption well and escape nearly in equilibrium, which results in complete accommodation, or $\alpha_E = 1$. It is to be expected that a classical theory will not be adequate to describe the accommodation of the lighter rare gases He and Ne on a surface of atoms as heavy as tungsten. Numerous treatments have shown that the interaction of these gases with metal surfaces, especially in the case of He gas, is quantum mechanical in nature and the scattering is dominated by elastic and single phonon inelastic processes. A classical theory, such as the one used here, cannot properly treat quantum mechanical processes, and the present calculations predict accommodation coefficient values that are much too large for He and Ne. However, there are several quantum theoretical treatments of the accommodation coefficient, based on the exchange of small numbers of phonons, which explain the measured values for the He/W and Ne/W systems quite nicely.~\cite{Saxena,Goodman-76} \section{CONCLUSIONS}\label{conclusion} This paper presents calculations of the equilibrium accommodation coefficient for energy exchange at a gas-surface interface using a newly developed theory of atom-surface scattering in the classical limit that treats both the direct scattering and the fraction of particles that are trapped and subsequently desorbed after the initial collision with the surface. This theory is applied to a relatively straightforward model of the interaction potential, consisting of a strongly repulsive vibrating wall with an attractive square physisorption well in front of it. However, the theory treats the statistical mechanics of the scattering process properly and is able to track all initially physisorbed particles until they eventually desorb. This theory not only describes correctly both the direct and trapping-desorption fractions, it has also been used to explain measured experimental data whose energy-resolved scattering spectra exhibit distinct features due to direct and trapping-desorption events.
Thus, it is of interest to calculate the accommodation coefficient using this theory to see if it explains the available data for energy transfer at a gas-surface interface. A large amount of data exists for the accommodation of a variety of atomic and molecular gases at different types of surfaces. However, the most carefully defined systems, both experimentally and theoretically, are the rare gases accommodating at a tungsten surface. Although data is available for all the rare gases except radon, comparisons here are made only for the heavier rare gases. This is because the light mass rare gases, He and Ne, interact quantum mechanically and are not well explained by a purely classical theory. This work can be viewed as a logical extension of an earlier paper by one of the authors in which calculations with a similar interaction potential model, but with a theory that contained only the direct scattering component, were applied to the energy accommodation coefficient.~\cite{Manson-Muis} Thus, the present work, when compared to the previous results, gives a clear indication of the contributions of the trapping-desorption fraction to the accommodation. Good overall agreement between calculations and measured accommodation coefficient data is obtained. However, the results do depend on the choice of the well depth and the velocity parameter that arises from the model of the interaction potential. Neither of these quantities has been well established for the interaction of heavy rare gases with the tungsten surface. The calculated values of the well depths that give the best agreement with measurements tend to be somewhat smaller than estimates extracted from thermal desorption experiments, although there are typically significant differences between such measurements in the cases where more than one exists. In comparison with the previous calculations, however, the present work predicts well depths that are significantly larger due to the influence of the trapping-desorption fraction. The fact that this theory explains the available data for heavy rare gases accommodating at clean tungsten surfaces, and the fact that state-to-state calculations explain recently available data for Ar scattering under conditions where the energy-resolved spectra exhibited clear evidence for distinct direct scattering and trapping-desorption features, implies that it should be useful for predicting the energy accommodation for other gas-surface systems. In particular it should be able to predict the behavior of other systems as a function of the experimentally accessible initial conditions such as temperature, well depth, gas particle mass and surface mass. \vspace{1cm} \noindent {\bf Acknowledgements} \\ \noindent This work was supported by the US Department of Energy under grant number DMR-FG02-98ER45704. \newpage
\section{Proof of Lemma \ref{lem:partVC}}\label{app:partVC} In an instance of {\sc 3-SAT}, we are given a set of variables $X$, and a formula encoded as a collection of clauses ${\cal C}$. Each clause $C\in{\cal C}$ is a set of exactly three literals, where each literal is either a variable $x\in X$ or the negation of a variable $x\in X$, denoted by $\overline{x}$. A truth assignment $\alpha: X\rightarrow\{\mathsf{T},\mathsf{F}\}$ satisfies a literal $\ell$ if either $\ell=x$ for some $x\in X$ with $\alpha(x)=\mathsf{T}$, or $\ell=\overline{x}$ for some $x\in X$ with $\alpha(x)=\mathsf{F}$. Now, $\alpha$ satisfies a clause if it satisfies at least one of its literals, and it satisfies ${\cal C}$ if it satisfies every clause in $\cal C$. The objective is to decide whether there exists a truth assignment that satisfies $\cal C$. The {\sc 3-SAT} problem is \textsf{NP}-hard, and below we give the well-known classic reduction from {\sc 3-SAT} to {\sc Vertex Cover} that shows that {\sc Vertex Cover} is \textsf{NP}-hard. This result is summarized in Proposition \ref{prop:normalVC}. Afterwards, we argue how the instance output by the reduction can be viewed as an instance of {\sc Partitioned Vertex Cover}, and thus conclude the proof of Lemma \ref{lem:partVC}. \paragraph{Reduction from {\sc 3-SAT} to {\sc Partitioned Vertex Cover}.} Let $I=(X,{\cal C})$. Then, we construct an instance $\mathtt{reduction}(I)=(G,k)$ of {\sc Vertex Cover} as follows. First, define $k=|X|+2|{\cal C}|$. Now, for every $x\in X$, add two new vertices $v_x$ and $v_{\overline{x}}$, along with the edge $\{v_x,v_{\overline{x}}\}$, to $G$. For every clause $C=\{p,q,r\}\in{\cal C}$, add three new vertices $u^C_p,u^C_q$ and $u^C_r$, along with the edges $\{u^C_p,u^C_q\}$, $\{u^C_q,u^C_r\}$ and $\{u^C_r,u^C_p\}$, to $G$. Finally, for every $C\in{\cal C}$ and $\ell\in C$, add the edge $\{u^C_\ell,v_\ell\}$ to $G$. It is easy to verify that the following result holds (see, e.g., \cite{sipser2006}). \begin{proposition}[\cite{sipser2006}]\label{prop:normalVC} Let $I=(X,{\cal C})$ be an instance of {\sc 3-SAT}. Then, $I$ has a satisfying assignment if and only if $G$ has a vertex cover of size at most $k$ where $(G,k)=\mathtt{reduction}(I)$. \end{proposition} \paragraph{The viewpoint of {\sc Partitioned Vertex Cover}.} Given the output instance $(G,k)$ of the reduction, we define ${\cal P}=\{\{v_x,v_{\overline{x}}\}: x\in X\}$ and ${\cal T}=\{\{u^C_p,u^C_q,u^C_r\}: C=\{p,q,r\}\in{\cal C}\}$. Clearly, the sets in ${\cal P}\cup{\cal T}$ are pairwise disjoint, every set in ${\cal P}$ is an edge in $G$, and every set in ${\cal T}$ induces a triangle in $G$. Moreover, every vertex cover of $G$ must select at least one vertex of each edge in $E(G)$, and at least two vertices of every triangle in $G$. Since $|{\cal P}|=|X|$ and $|{\cal T}|=|{\cal C}|$, this means that $G$ has a vertex cover of size at most $k$ if and only if $G$ has a vertex cover of size exactly $k$, and the latter statement holds if and only if $(G,{\cal P},{\cal T})$ has a solution. By Proposition \ref{prop:normalVC}, this concludes the proof of Lemma \ref{lem:partVC}.
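For concreteness, the construction of $\mathtt{reduction}(I)$ can be sketched in a few lines of Python. The tuple labels used for the vertices and the encoding of literals as pairs are of course our own representation choices, not part of the reduction itself.
\begin{verbatim}
def reduction(variables, clauses):
    """Builds (G, k) from a 3-SAT instance, together with the partition
    (P, T) used in the Partitioned Vertex Cover viewpoint.

    A literal is a pair (x, positive); every clause is a tuple of three
    literals.  Vertex v_x is ('v', x, True), v_xbar is ('v', x, False),
    and u^C_ell is ('u', c, ell) for the index c of the clause C.
    """
    V, E, P, T = set(), set(), [], []
    for x in variables:                       # variable edges
        pos, neg = ("v", x, True), ("v", x, False)
        V |= {pos, neg}
        E.add(frozenset((pos, neg)))
        P.append({pos, neg})
    for c, clause in enumerate(clauses):      # clause triangles
        tri = [("u", c, lit) for lit in clause]
        V |= set(tri)
        E |= {frozenset((tri[0], tri[1])),
              frozenset((tri[1], tri[2])),
              frozenset((tri[2], tri[0]))}
        T.append(set(tri))
        for node, (x, positive) in zip(tri, clause):
            E.add(frozenset((node, ("v", x, positive))))  # u^C_ell -- v_ell
    k = len(variables) + 2 * len(clauses)
    return V, E, k, P, T
\end{verbatim}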
\section{Proof of Lemma \ref{lem:vc}}\label{app:vc} Towards the proof of Lemma \ref{lem:vc}, we first prove the following claim. \begin{claim}\label{claim:configuration} For every $i\in V(G)$, we have that \begin{itemize} \item $\{a_i,b_i\}\in M$, or \item both $\{a_i,d_i\}\in M$ and $\{b_i,c_i\}\in M$. \end{itemize} \end{claim} \begin{proof} Consider some $i\in V(G)$. First, note that $a_i$ is the top preference of each of its neighbors. Therefore, if $a_i$ is not matched by $M$, then we can match it to any of its neighbors (so if that neighbor was matched, its former matching partner is now unmatched), and get two votes in favor of the change and at most one vote against it, which means that $M$ is not popular. Therefore, $a_i$ must be matched by $M$. Let us now rule out the possibility that $a_i$ is matched to $c_i$ by $M$. If $a_i$ is matched to $c_i$, then $b_i$ must be matched to $u^e_i$ for some edge $e$ incident to $i$, as otherwise by removing $\{a_i,c_i\}$ from $M$ and adding $\{a_i,b_i\}$ to it instead, we obtain a more popular matching. Denote $e=\{i,j\}$. Then, by removing $\{a_i,c_i\}$, $\{b_i,u^e_i\}$ and $\{u^e_j,b_j\}$ (if $\{u^e_j,b_j\}\in M$), and adding $\{a_i,b_i\}$ and $\{u^e_i,u^e_j\}$, we get four votes in favor of the change (from $a_i, b_i, u^e_i$ and $u^e_j$) and at most two votes against it (from $c_i$ and possibly $b_j$), which contradicts the popularity of $M$. It remains to show that if $a_i$ is matched to $d_i$, then $b_i$ is matched to $c_i$. To this end, suppose that $a_i$ is matched to $d_i$. If $b_i$ is unmatched, then by removing $\{a_i,d_i\}$ and adding $\{b_i,a_i\}$, we obtain a more popular matching. Thus, if $b_i$ is not matched to $c_i$, then it must be matched to $u^e_i$ for some edge $e=\{i,j\}$ incident to $i$. In this case, where $b_i$ is matched to $u^e_i$, by removing $\{a_i,d_i\}$, $\{b_i,u^e_i\}$ and $\{u^e_j,b_j\}$ (if $\{u^e_j,b_j\}\in M$), and adding $\{a_i,b_i\}$ and $\{u^e_i,u^e_j\}$, we get four votes in favor of the change (from $a_i,b_i,u^e_i$ and $u^e_j$) and at most two votes against it (from $d_i$ and possibly $b_j$), which contradicts the popularity of $M$. \end{proof} We now proceed with the proof of the lemma. To this end, let $\{i,j\}\in E(G)$ be an arbitrarily chosen edge. To prove that this edge is covered by $U$, we need to show that at least one edge among $\{a_i,b_i\}$ and $\{a_j,b_j\}$ is in $M$. Suppose, by way of contradiction, that this statement is false. Then, by Claim \ref{claim:configuration}, it holds that $\{a_i,d_i\},\{b_i,c_i\},\{a_j,d_j\},\{b_j,c_j\}\in M$. In this case, $\{u^e_i,u^e_j\}$ must be in $M$ (else both of these vertices are unmatched, which contradicts Observation \ref{obs:maximal}). Then, we remove $\{a_i,d_i\},\{b_i,c_i\},\{u^e_i,u^e_j\},\{a_j,d_j\}$ and $\{b_j,c_j\}$ from $M$, and add $\{a_i,c_i\},\{b_i,u^e_i\},\{a_j,c_j\}$ and $\{b_j,u^e_j\}$ to $M$. Then, we gain six votes in favor of the replacement (from $a_i,a_j,b_i,b_j,c_i$ and $c_j$) and only four votes against it (from $d_i,d_j,u^e_i$ and $u^e_j$), which contradicts the popularity of $M$. This completes the proof of the lemma. \section{Correctness}\label{sec:correctness} In this section, we prove the correctness of our reduction. For the sake of clarity, the proof is divided into two subsections, corresponding to the forward and reverse directions. Together with Lemma \ref{lem:partVC}, this proof will conclude the proof of Theorem \ref{thm:main}. \subsection{Forward Direction}\label{sec:forward} Here, we prove that if there exists a solution to the instance $(G,{\cal P},{\cal T})$ of {\sc Partitioned Vertex Cover}, then there exists a popular matching in $\mathtt{reduction}(I)=(H,L=\{\ell_v: v\in V(H)\})$. For this purpose, let us suppose that $U$ is a solution to $(G,{\cal P},{\cal T})$. In what follows, we first construct a matching $M$ in $H$.
Then, we will show that the graph $H_M$ (see Definition \ref{def:GM}) satisfies several useful properties, which will eventually lead us to the conclusion that $M$ is popular. \begin{figure}[t!]\centering \fbox{\includegraphics[scale=0.8]{cropped_PairSelectorMatch}} \caption{Edges shown in bold are inserted into $M$ (in Section \ref{sec:forward}). Edges labeled -2 by $\mathtt{label}_M$ are marked by a red no entry sign, and edges labeled +2 by $\mathtt{label}_M$ are marked by a green square.}\label{fig:pairSelectorMatch} \end{figure} \paragraph{Construction of $M$.} The matching $M$ is the union of the following sets. \begin{itemize} \item $M_U=\{\{u^e_i,u^e_j\}: \{i,j\}\in E(G)\}$. \item For every $\{i,j\}\in{\cal P}$ with $i<j$, let $x\in\{i,j\}$ be the vertex not in $U$, and $y\in\{i,j\}$ be the vertex in $U$, and insert the edges $\{a_x,d_x\}$, $\{b_x,c_x\}$, $\{a_y,b_y\}$, $\{f_{xy},c_y\}$ and $\{f_{yx},d_y\}$ into $M_{\cal P}$. (See Fig.~\ref{fig:pairSelectorMatch}.) \item For every $\{i,j,k\}\in{\cal T}$ with $i<j<k$, let $x\in\{i,j,k\}$ be the vertex not in $U$, and $y,z\in\{i,j,k\}$ be the two vertices in $U$ such that $d_x$ prefers $d_y$ over $d_z$, and insert the edges $\{a_x,d_x\}$, $\{b_x,c_x\}$, $\{a_y,b_y\}$, $\{a_z,b_z\}$, $\{c_y,c_z\}$ and $\{d_y,d_z\}$ into $M_{\cal T}$. (See Fig.~\ref{fig:tripleSelectorMatch}.) \end{itemize} Since the sets in ${\cal P}\cup{\cal T}$ are pairwise disjoint, the sets above are well (uniquely) defined. We also remark that, due to the symmetry of our gadgets, the figures capture not only the case where $i=x$: if $j=x$ (or $k=x$ in the case of a triple), we obtain precisely the same figures. A code sketch of this construction is given right after Fig.~\ref{fig:tripleSelectorMatch}. \begin{figure}[t!]\centering \fbox{\includegraphics[scale=0.8]{cropped_TripleSelectorMatch}} \caption{Edges shown in bold are inserted into $M$ (in Section \ref{sec:forward}). Edges labeled -2 by $\mathtt{label}_M$ are marked by a red no entry sign, and edges labeled +2 by $\mathtt{label}_M$ are marked by a green square.}\label{fig:tripleSelectorMatch} \end{figure}
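The following Python sketch mirrors the construction of $M$ just described. The tuple labels stand in for the gadget vertices $a_i,b_i,c_i,d_i,f_{xy}$ and $u^e_i$, and the callback \texttt{prefers\_d} is a hypothetical stand-in for the preference lists, which determine the roles of $y$ and $z$ within a triple.
\begin{verbatim}
def build_matching(G_edges, P, T, U, prefers_d):
    """Builds M = M_U + M_P + M_T from a solution U.

    G_edges: iterable of frozensets {i, j};  P, T: the pairs and triples;
    prefers_d(x, y, z): True iff d_x prefers d_y over d_z.
    """
    M = set()
    for e in G_edges:                                   # M_U
        i, j = tuple(e)
        M.add(frozenset({("u", e, i), ("u", e, j)}))
    for pair in P:                                      # M_P
        (x,) = [v for v in pair if v not in U]
        (y,) = [v for v in pair if v in U]
        M |= {frozenset({("a", x), ("d", x)}),
              frozenset({("b", x), ("c", x)}),
              frozenset({("a", y), ("b", y)}),
              frozenset({("f", x, y), ("c", y)}),
              frozenset({("f", y, x), ("d", y)})}
    for triple in T:                                    # M_T
        (x,) = [v for v in triple if v not in U]
        y, z = [v for v in triple if v in U]
        if not prefers_d(x, y, z):                      # ensure d_x: d_y > d_z
            y, z = z, y
        M |= {frozenset({("a", x), ("d", x)}),
              frozenset({("b", x), ("c", x)}),
              frozenset({("a", y), ("b", y)}),
              frozenset({("a", z), ("b", z)}),
              frozenset({("c", y), ("c", z)}),
              frozenset({("d", y), ("d", z)})}
    return M
\end{verbatim}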
\paragraph{Properties of $H_M$.} Let us start by observing that, since all vertices in $H$ are matched by $M$, the following statement immediately holds. \begin{observation}\label{obs:noUnmatched} There is no alternating path in $H_M$ that starts from a vertex not matched by $M$ and contains at least one edge labeled +2 by $\mathtt{label}_M$. \end{observation} We proceed to identify which edges in $H_M$ are labeled +2 by $\mathtt{label}_M$. \begin{lemma}\label{lem:2} The set of edges labeled +2 by $\mathtt{label}_M$ is $\{\{a_i,b_i\}: i\notin U\}\cup\{\{a_i,c_i\}: i\notin U\}$. \end{lemma} \begin{proof} First, for all $i\notin U$, we have that $\{a_i,d_i\}\in M$ and $\{b_i,c_i\}\in M$. Since $a_i$ prefers both $b_i$ and $c_i$ over $d_i$, and $b_i$ and $c_i$ each prefer $a_i$ over the other, we have that all the edges in $\{\{a_i,b_i\}: i\notin U\}\cup\{\{a_i,c_i\}: i\notin U\}$ are labeled +2 by $\mathtt{label}_M$. Next, we show that all other edges in $H$ are not labeled +2 by $\mathtt{label}_M$, which will complete the proof. Observe that for all $i\in U$, we have that $\{a_i,b_i\}\in M$, and since $p_{a_i}(b_i)=p_{b_i}(a_i)=1$, this means that no edge incident to $a_i$ or $b_i$ can be labeled +2 by $\mathtt{label}_M$. Similarly, for all $e=\{i,j\}\in E(G)$, we have that $\{u^e_i,u^e_j\}\in M$, and since $p_{u^e_i}(u^e_j)=p_{u^e_j}(u^e_i)=1$, this means that no edge incident to $u^e_i$ or $u^e_j$ can be labeled +2 by $\mathtt{label}_M$. Thus, no edge that belongs to an Edge Coverage gadget, excluding the edges in $\{\{a_i,b_i\}: i\notin U\}\cup\{\{a_i,c_i\}: i\notin U\}$, is labeled +2 by $\mathtt{label}_M$. Now, consider some pair $\{i,j\}\in{\cal P}$ with $i<j$, and let $x\in\{i,j\}$ be the vertex not in $U$, and $y\in\{i,j\}$ be the vertex in $U$. Then, the edges $\{a_x,d_x\}$, $\{b_x,c_x\}$, $\{a_y,b_y\}$, $\{f_{xy},c_y\}$ and $\{f_{yx},d_y\}$ belong to $M$. However, $c_x$ prefers $b_x$ over both $f_{yx}$ and $d_y$, and $d_x$ prefers $a_x$ over both $f_{xy}$ and $c_y$, which means that none of the edges $\{c_x,f_{yx}\}$, $\{c_x,d_y\}$, $\{d_x,f_{xy}\}$ and $\{d_x,c_y\}$ is labeled +2 by $\mathtt{label}_M$. Finally, consider some triple $\{i,j,k\}\in{\cal T}$ with $i<j<k$, and let $x\in\{i,j,k\}$ be the vertex not in $U$, and $y,z\in\{i,j,k\}$ be the two vertices in $U$ such that $d_x$ prefers $d_y$ over $d_z$. Then, the edges $\{a_x,d_x\}$, $\{b_x,c_x\}$, $\{a_y,b_y\}$, $\{a_z,b_z\}$, $\{c_y,c_z\}$ and $\{d_y,d_z\}$ belong to $M$. However, $c_x$ prefers $b_x$ over both $c_y$ and $c_z$, and $d_x$ prefers $a_x$ over both $d_y$ and $d_z$, which means that none of the edges $\{c_x,c_y\}$, $\{c_x,c_z\}$, $\{d_x,d_y\}$ and $\{d_x,d_z\}$ is labeled +2 by $\mathtt{label}_M$. \end{proof} Now, Lemma \ref{lem:2} directly implies the correctness of the following lemma. \begin{lemma}\label{lem:plusInGadget} For any $P\in {\cal P}$, the only edges labeled +2 by $\mathtt{label}_M$ in the Pair Selector gadget associated with $P$ are $\{a_x,b_x\}$ and $\{a_x,c_x\}$ for the unique vertex $x\in P$ that is not in $U$. Similarly, for any $T\in {\cal T}$, the only edges labeled +2 by $\mathtt{label}_M$ in the Triple Selector gadget associated with $T$ are $\{a_x,b_x\}$ and $\{a_x,c_x\}$ for the unique vertex $x\in T$ that is not in $U$. \end{lemma} Having Lemma \ref{lem:plusInGadget} at hand, we are ready to rule out the possibility of having a ``bad'' alternating path that is completely contained inside a Pair Selector gadget or a Triple Selector gadget. \begin{lemma}\label{lem:noPath} For any $P\in {\cal P}$, there is no alternating path in $H_M$ that contains at least two edges labeled +2 by $\mathtt{label}_M$ and which consists only of edges from the Pair Selector gadget associated with $P$. Similarly, for any $T\in {\cal T}$, there is no alternating path in $H_M$ that contains at least two edges labeled +2 by $\mathtt{label}_M$ and which consists only of edges from the Triple Selector gadget associated with $T$. \end{lemma} \begin{proof} First, consider some pair $P\in{\cal P}$. By Lemma \ref{lem:plusInGadget}, the only edges labeled +2 by $\mathtt{label}_M$ in the Pair Selector gadget associated with $P$ are $\{a_x,b_x\}$ and $\{a_x,c_x\}$ for the unique vertex $x\in P$ that is not in $U$. However, these two edges are part of a triangle in $H$, and therefore no alternating path can contain both of them together. Second, consider some triple $T\in{\cal T}$. By Lemma \ref{lem:plusInGadget}, the only edges labeled +2 by $\mathtt{label}_M$ in the Triple Selector gadget associated with $T$ are $\{a_x,b_x\}$ and $\{a_x,c_x\}$ for the unique vertex $x\in T$ that is not in $U$. However, these two edges are again part of a triangle in $H$, and therefore no alternating path can contain both of them together. \end{proof} In the following two lemmas, we also rule out the possibility of having a ``bad'' alternating cycle that is completely contained inside a Pair Selector gadget or a Triple Selector gadget.
\begin{figure}[t!]\centering \fbox{\includegraphics[scale=0.8]{cropped_PairSelectorPath}} \caption{The path constructed in the proof of Lemma \ref{lem:noCyclePair}, highlighted in yellow.}\label{fig:pairSelectorPath} \end{figure} \begin{lemma}\label{lem:noCyclePair} For any $P\in {\cal P}$, there is no alternating cycle in $H_M$ that contains at least one edge labeled +2 by $\mathtt{label}_M$ and which consists only of edges from the Pair Selector gadget associated with $P$. \end{lemma} \begin{proof} Consider some pair $P=\{i,j\}\in{\cal P}$ with $i<j$, and let $x\in\{i,j\}$ be the vertex not in $U$, and $y\in\{i,j\}$ be the vertex in $U$. Suppose, by way of contradiction, that there exists an alternating cycle $C$ in $H_M$ that contains at least one edge labeled +2 by $\mathtt{label}_M$ and which consists only of edges from the Pair Selector gadget associated with $P$. First, suppose that $\{a_x,c_x\}\in E(C)$. Then, since $\{c_x,b_x\}\in M$, we have that $\{c_x,b_x\}\in E(C)$. Since the only neighbor in the gadget of $b_x$ apart from $c_x$ is $a_x$, we have that $\{b_x,a_x\}\in E(C)$. However, we have thus ``closed'' a triangle, which contradicts the choice of $C$ as an alternating cycle. By Lemma \ref{lem:plusInGadget} and since $C$ contains at least one edge labeled +2 by $\mathtt{label}_M$, it must hold that $\{a_x,b_x\}\in E(C)$. Then, since $\{c_x,b_x\}\in M$, we have that $\{c_x,b_x\}-\{b_x,a_x\}$ is a subpath of $C$. Now, note that $c_x$ prefers $b_x$ over its two other neighbors in the gadget, and $f_{yx}$ prefers $d_y$ over $c_x$. Therefore, $\{c_x,f_{yx}\}$ is labeled -2 by $\mathtt{label}_M$, and hence it does not exist in $H_M$. Thus, we also have that $\{d_y,c_x\}\in E(C)$, and since $\{d_y,f_{yx}\}\in M$, we have that $\{f_{yx},d_y\}-\{d_y,c_x\}-\{c_x,b_x\}-\{b_x,a_x\}$ is a subpath of $C$ (see Fig.~\ref{fig:pairSelectorPath}). However, $f_{yx}$ has no neighbor in $H_M$ apart from $d_y$, and therefore we have reached a contradiction to the choice of $C$ as an alternating cycle. \end{proof} \begin{lemma}\label{lem:noCycleTriple} For any $T\in {\cal T}$, there is no alternating cycle in $H_M$ that contains at least one edge labeled +2 by $\mathtt{label}_M$ and which consists only of edges from the Triple Selector gadget associated with $T$. \end{lemma} \begin{proof} Consider some triple $T=\{i,j,k\}\in{\cal T}$ with $i<j<k$, and let $x\in\{i,j,k\}$ be the vertex not in $U$, and $y,z\in\{i,j,k\}$ be the two vertices in $U$ such that $d_x$ prefers $d_y$ over $d_z$. Suppose, by way of contradiction, that there exists an alternating cycle $C$ in $H_M$ that contains at least one edge labeled +2 by $\mathtt{label}_M$ and which consists only of edges from the Triple Selector gadget associated with $T$. First, suppose that $\{a_x,c_x\}\in E(C)$. Then, since $\{c_x,b_x\}\in M$, we have that $\{c_x,b_x\}\in E(C)$. Since the only neighbor in the gadget of $b_x$ apart from $c_x$ is $a_x$, we have that $\{b_x,a_x\}\in E(C)$. However, we have thus ``closed'' a triangle, which contradicts the choice of $C$ as an alternating cycle. By Lemma \ref{lem:plusInGadget} and since $C$ contains at least one edge labeled +2 by $\mathtt{label}_M$, it must hold that $\{a_x,b_x\}\in E(C)$. Then, since $\{c_x,b_x\}\in M$ and $\{a_x,d_x\}\in M$, we have that $\{c_x,b_x\}-\{b_x,a_x\}-\{a_x,d_x\}$ is a subpath of $C$. Observe that $c_x$ prefers $b_x$ over $c_z$, and $c_z$ prefers $c_y$ over $c_x$. Moreover, $d_x$ prefers $a_x$ over $d_y$, and $d_y$ prefers $d_z$ over $d_x$. 
Therefore, both $\{c_x,c_z\}$ and $\{d_x,d_y\}$ are labeled -2 by $\mathtt{label}_M$, which means that these two edges do not exist in $H_M$. Since the only neighbor of $c_x$ in the gadget except for $a_x,b_x$ and $c_z$ is $c_y$, and since the only neighbor of $d_x$ in the gadget except for $a_x$ and $d_y$ is $d_z$, we have that $\{c_y,c_x\},\{d_x,d_z\}\in E(C)$. Since $\{c_z,c_y\},\{d_z,d_y\}\in M$, this means that $\{c_z,c_y\}-\{c_y,c_x\}-\{c_x,b_x\}-\{b_x,a_x\}-\{a_x,d_x\}-\{d_x,d_z\}-\{d_z,d_y\}$ is a subpath of $C$. Now, since the only neighbor of $d_y$ in the gadget except for $d_x$ and $d_z$ is $a_y$, we have that $\{d_y,a_y\}\in E(C)$. Because $\{a_y,b_y\}\in M$, and since the only neighbor of $b_y$ in this gadget except for $a_y$ is $c_y$, this means that $\{c_z,c_y\}-\{c_y,c_x\}-\{c_x,b_x\}-\{b_x,a_x\}-\{a_x,d_x\}-\{d_x,d_z\}-\{d_z,d_y\}-\{d_y,a_y\}-\{a_y,b_y\}-\{b_y,c_y\}$ is a subpath of $C$ (see Fig.~\ref{fig:tripleSelectorPath}). However, $c_y$ has three different neighbors on this path, which contradicts the choice of $C$ as an alternating cycle. \end{proof} \begin{figure}[t!]\centering \fbox{\includegraphics[scale=0.8]{cropped_TripleSelectorPath}} \caption{The path constructed in the proof of Lemma \ref{lem:noCycleTriple}, highlighted in yellow.}\label{fig:tripleSelectorPath} \end{figure} Next, in the following two lemmas, we argue that a shortest ``bad'' alternating path, as well as a ``bad'' alternating cycle, cannot contain any vertex of the form $u^e_i$. The proof of the second lemma is essentially a simplified version of the proof of the first one, but we present the full details for the sake of clarity. \begin{lemma}\label{lem:noCrossPath} Let $S$ be a shortest alternating path in $H_M$ that contains at least two edges labeled +2 by $\mathtt{label}_M$. Then, $S$ does not contain the vertex $u^e_i$ for any $e\in E(G)$ and $i\in e$. \end{lemma} \begin{proof} Suppose, by way of contradiction, that $S$ contains the vertex $u^e_i$ for some $e\in E(G)$ and $i\in e$. Then, since $\{u^e_i,u^e_j\}\in M$ for the vertex $j\in V(G)$ such that $e=\{i,j\}$, we have that $\{u^e_i,u^e_j\}\in E(S)$. Since $U$ is a vertex cover in $G$, at least one vertex in $\{i,j\}$ belongs to $U$, and let us suppose without loss of generality that this vertex is $i$. Then, $\{a_i,b_i\}\in M$, which means that $\mathtt{label}_M(\{b_i,u^e_i\})=-2$. Thus, $\{b_i,u^e_i\}\notin E(H_M)$. Since the only neighbors of $u^e_i$ in $H$ are $b_i$ and $u^e_j$, and the only neighbors of $u^e_j$ in $H$ are $b_j$ and $u^e_i$, this implies that $u^e_i$ is an endpoint of $S$ and $\{u^e_j,b_j\}\in E(S)$. Since $u^e_j$ prefers $u^e_i$ over $b_j$, we have that $\mathtt{label}_M(\{b_j,u^e_j\})\neq +2$. Thus, by removing $\{u^e_i,u^e_j\}$ and $\{b_j,u^e_j\}$ from $S$, we obtain yet another alternating path in $H_M$ that contains at least two edges labeled +2 by $\mathtt{label}_M$. This contradicts the choice of $S$ as a shortest alternating path in $H_M$ with this property. \end{proof} \begin{lemma}\label{lem:noCrossCycle} There is no alternating cycle in $H_M$ that contains the vertex $u^e_i$ for any $e\in E(G)$ and $i\in e$. \end{lemma} \begin{proof} Suppose, by way of contradiction, that there exists an alternating cycle $C$ in $H_M$ that contains the vertex $u^e_i$ for some $e\in E(G)$ and $i\in e$. Then, since $\{u^e_i,u^e_j\}\in M$ for the vertex $j\in V(G)$ such that $e=\{i,j\}$, we have that $\{u^e_i,u^e_j\}\in E(C)$.
Since $U$ is a vertex cover in $G$, at least one vertex in $\{i,j\}$ belongs to $U$, and let us suppose without loss of generality that this vertex is $i$. Then, $\{a_i,b_i\}\in M$, which means that $\mathtt{label}_M(\{b_i,u^e_i\})=-2$. Thus, $\{b_i,u^e_i\}\notin E(H_M)$. Since the only neighbors of $u^e_i$ in $H$ are $b_i$ and $u^e_j$, this implies that $C$ cannot be a cycle, and hence we have reached a contradiction. \end{proof} \paragraph{Conclusion of the forward direction.} First, by Observation \ref{obs:noUnmatched}, there is no alternating path in $H_M$ that starts from a vertex not matched by $M$ and contains at least one edge labeled +2 by $\mathtt{label}_M$. Now, by Lemmas \ref{lem:noCrossPath} and \ref{lem:noCrossCycle}, if there exists an alternating cycle in $H_M$ that contains at least one edge labeled +2 by $\mathtt{label}_M$, or an alternating path in $H_M$ that contains at least two edges labeled +2 by $\mathtt{label}_M$, then there also exists such a cycle or path that does not contain any vertex in $\{u^e_i: e\in E(G), i\in e\}$. However, if we remove the vertices in $\{u^e_i: e\in E(G), i\in e\}$ from $H$, then the remaining connected components are precisely the Pair Selector and Triple Selector gadgets. By Lemmas \ref{lem:noPath} and \ref{lem:noCyclePair}, there exists no alternating cycle in $H_M$ that contains at least one edge labeled +2 by $\mathtt{label}_M$, as well as no alternating path in $H_M$ that contains at least two edges labeled +2 by $\mathtt{label}_M$, which consists only of edges of a Pair Selector gadget. Moreover, by Lemmas \ref{lem:noPath} and \ref{lem:noCycleTriple}, the same claim holds also with respect to a Triple Selector gadget. Thus, by Proposition \ref{prop:char}, we conclude that $M$ is popular. \subsection{Reverse Direction} Here, we prove that if there exists a popular matching in $\mathtt{reduction}(I)=(H,L=\{\ell_v: v\in V(H)\})$, then there exists a solution to the instance $(G,{\cal P},{\cal T})$ of {\sc Partitioned Vertex Cover}. For this purpose, let us suppose that $M$ is a popular matching in $(H,L=\{\ell_v: v\in V(H)\})$. In what follows, we first construct a subset $U\subseteq V(G)$. Then, we will show that $U$ is a vertex cover of $G$. Afterwards, we will show that for every $P\in {\cal P}$, it holds that $|U\cap P|=1$. Lastly, we will show that for every $T\in {\cal T}$, it holds that $|U\cap T|=2$, which will conclude the proof. Before implementing this plan, let us give a folklore observation (that is true for any graph and preference lists) that will be used in all proofs ahead. \begin{observation}\label{obs:maximal} Let $J$ be an instance of {\sc Popular Matching}. Every popular matching in $J$ is a maximal matching. \end{observation} \begin{proof} Suppose, by way of contradiction, that there exists a popular matching $\widehat{M}$ in $J$ that is not maximal. Then, there exists an edge $\{x,y\}$ that is present in the graph in $J$, and with both endpoints not matched by $\widehat{M}$. However, by adding $\{x,y\}$ to $\widehat{M}$, we obtain a matching more popular than $\widehat{M}$, and thus reach a contradiction. \end{proof} \paragraph{Construction of $U$.} We simply define $U:=\{i\in V(G): \{a_i,b_i\}\in M\}$. \paragraph{Proof that $U$ is a vertex cover.} The proof that $U$ is a vertex cover is the same as a proof given by Kavitha \cite{DBLP:journals/corr/abs-1802-07440}.
However, for the sake of completeness, and also to verify that, although our construction has additional components, the same proof still goes through, we will present the details in Appendix \ref{app:vc}. Here, we state the claim we need in the following lemma. \begin{lemma}\label{lem:vc} The set $U$ is a vertex cover of $G$. \end{lemma} \paragraph{Proof that $U$ is a solution.} Since we have already established that $U$ is a vertex cover, the proof that $U$ is a solution will follow from the correctness of the two following lemmas. \begin{lemma} For every $P\in {\cal P}$, it holds that $|U\cap P|=1$. \end{lemma} \begin{proof} Let us consider some arbitrary pair $P=\{i,j\}\in {\cal P}$. By Lemma \ref{lem:vc}, and because a pair is also an edge in $G$, we have that $|U\cap P|\geq 1$. Thus, to prove the lemma, it suffices to show that it is not possible to have $|U\cap P|=2$. To this end, suppose by way of contradiction that $|U\cap P|=2$. By the definition of $U$, both $\{a_i,b_i\}\in M$ and $\{a_j,b_j\}\in M$. Note that the only neighbors of $c_i$ besides $a_i$ and $b_i$ are $f_{ji}$ and $d_j$, the only neighbors of $f_{ji}$ are $c_i$ and $d_j$, and the only neighbors of $d_j$ besides $a_j$ are $c_i$ and $f_{ji}$. Thus, by Observation \ref{obs:maximal}, $M$ must contain exactly one of the edges $\{c_i,d_j\}$, $\{d_j,f_{ji}\}$ and $\{f_{ji},c_i\}$. If $\{c_i,d_j\}\in M$, then by replacing this edge by $\{f_{ji},c_i\}$, we obtain a more popular matching (both $c_i$ and $f_{ji}$ vote in favor of the replacement, while only $d_j$ votes against it). If $\{d_j,f_{ji}\}\in M$, then by replacing this edge by $\{c_i,d_j\}$, we obtain a more popular matching (both $c_i$ and $d_j$ vote in favor of the replacement, while only $f_{ji}$ votes against it). If $\{f_{ji},c_i\}\in M$, then by replacing this edge by $\{d_j,f_{ji}\}$, we obtain a more popular matching (both $d_j$ and $f_{ji}$ vote in favor of the replacement, while only $c_i$ votes against it). Since every case led to a contradiction, the proof is complete. \end{proof} \begin{lemma} For every $T\in {\cal T}$, it holds that $|U\cap T|=2$. \end{lemma} \begin{proof} Let us consider some arbitrary triple $T=\{i,j,k\}\in {\cal T}$. By Lemma \ref{lem:vc}, and because a triple is also a triangle in $G$, we have that $|U\cap T|\geq 2$. Thus, to prove the lemma, it suffices to show that it is not possible to have $|U\cap T|=3$. To this end, suppose by way of contradiction that $|U\cap T|=3$. By the definition of $U$, all the three edges $\{a_i,b_i\}$, $\{a_j,b_j\}$ and $\{a_k,b_k\}$ belong to $M$. Note that the only neighbors of $d_i$ besides $a_i$ are $d_j$ and $d_k$, the only neighbors of $d_j$ besides $a_j$ are $d_i$ and $d_k$, and the only neighbors of $d_k$ besides $a_k$ are $d_i$ and $d_j$. Thus, by Observation \ref{obs:maximal}, $M$ must contain exactly one of the edges $\{d_i,d_j\}$, $\{d_j,d_k\}$ and $\{d_k,d_i\}$. If $\{d_i,d_j\}\in M$, then by replacing this edge by $\{d_j,d_k\}$, we obtain a more popular matching (both $d_j$ and $d_k$ vote in favor of the replacement, while only $d_i$ votes against it). If $\{d_j,d_k\}\in M$, then by replacing this edge by $\{d_k,d_i\}$, we obtain a more popular matching (both $d_i$ and $d_k$ vote in favor of the replacement, while only $d_j$ votes against it). If $\{d_k,d_i\}\in M$, then by replacing this edge by $\{d_i,d_j\}$, we obtain a more popular matching (both $d_i$ and $d_j$ vote in favor of the replacement, while only $d_k$ votes against it).
Since every case led to a contradiction, the proof is complete. \end{proof}
\section{Introduction}\label{sec:intro} Matching problems with preferences are ubiquitous in everyday life scenarios. They arise in applications such as the assignment of students to universities, doctors to hospitals, students to campus housing, pairing up police officers, kidney donor-recipient pairs and so on. The common theme is that individuals have preferences over the possible outcomes and the task is to find a matching of the participants that is in some sense optimal with respect to these preferences. In this paper we study the computational complexity of computing one such solution concept, namely the {\sc Popular Matching}~problem. The input to the {\sc Popular Matching}\ problem consists of a graph on $n$ vertices and the preferences of the vertices represented as a ranked list of the neighbors of every vertex, said to be the {\it preference list} of the vertex.
The goal is to find a {\it popular matching}--a matching that is preferred over any other matching (in terms of the preference lists) by at least half of the vertices in the graph. Popular matching finds real-life applications in avenues as diverse as the organ-donor exchange markets, spectrum sharing in cellular networks, and barter exchanges, to name just a few \cite{Manlove13b,XiaoHanYuenDaSilva16}. Specifically, in situations in which a {\it stable matching} -- a matching that does not admit a \emph{blocking edge}, i.e. an edge whose endpoints prefer each other to their respective ``situation'' in the current matching -- is too restrictive, popular matching finds applicability. It is known that a stable matching is a smallest-size popular matching. So for applications where it is desirable to have matchings of larger size than a stable matching -- for instance, allocating projects to students, or pairing up police officers, where the absence of blocking edges is not mandatory -- popular matching may be a suitable alternative. The notion of popularity captures a natural relaxation of the notion of stability: blocking edges are permitted, but the matching nevertheless has overall stability. To define the {\sc Popular Matching}\ problem formally, we first need a few definitions. Let $N_G(v)$ denote the neighborhood of a vertex $v\in V(G)$. Given a vertex $v\in V(G)$, a {\em preference list of $v$ in $G$} is a {\em bijective} function $\ell_v: N_G(v)\rightarrow \{1,2,\ldots,|N_G(v)|\}$. Informally, the smaller the number a vertex $v\in V(G)$ assigns to a vertex $u\in N_G(v)$, the more $v$ prefers to be matched to $u$. In particular, for all $u,w\in N_G(v)$, if $\ell_v(u)<\ell_v(w)$, then $v$ prefers $u$ over $w$. A matching $M$ in $G$ is a subset of $E(G)$ whose edges are pairwise disjoint. We say that a vertex $v\in V(G)$ is matched by a matching $M$ if there exists a (unique) vertex $u\in V(G)$ such that $\{u,v\}\in M$, which we denote by $u=M(v)$. In the literature, the terminology related to {\sc Popular Matching}\ is closely related to that of the {\sc Stable Marriage} problem. When the input graph is bipartite (respectively, arbitrary), the instance is said to be that of the {\em stable marriage} (respectively, {\em roommates}) setting of the problem. We denote an instance of {\sc Popular Matching}\ (in the roommates setting) by $I=(G,L=\{\ell_v: v\in V(G)\})$. Roughly speaking, a vertex $v\in V(G)$ prefers a matching $M$ over a matching $M'$ if its ``status'' in $M$ is better than the one in $M'$, where being unmatched is the least preferred status. Formally, the notion of preference over matchings is defined as follows. Given two matchings in $G$, denoted by $M$ and $M'$, we say that a vertex $v\in V(G)$ {\em prefers} $M$ over $M'$ if one of the following conditions is satisfied: {\em (i)} $v$ is matched by $M$ but not matched by $M'$; {\em (ii)} $v$ is matched by both $M$ and $M'$, and $\ell_v(M(v))<\ell_v(M'(v))$. We say that $M'$ is {\em more popular} than $M$ if the number of vertices that prefer $M'$ to $M$ exceeds the number of vertices that prefer $M$ to $M'$. A matching $M$ is {\em popular} if and only if there is no matching $M'$ that is more popular than $M$.
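Since popularity is defined through pairwise comparisons of matchings, it can be checked by brute force on tiny instances. The following Python sketch (the data layout is our own choice) implements the vote count and the popularity test, and reproduces the classic observation that a triangle with cyclic preferences admits no popular matching.
\begin{verbatim}
from itertools import combinations

def as_dict(M):
    d = {}
    for u, v in M:
        d[u], d[v] = v, u
    return d

def votes(pref, A, B):
    """Number of vertices that prefer matching A (as a dict) over B."""
    n = 0
    for v, plist in pref.items():
        a, b = A.get(v), B.get(v)
        if a is not None and (b is None or plist.index(a) < plist.index(b)):
            n += 1
    return n

def is_popular(pref, edges, M):
    """Brute-force popularity test; exponential, for tiny instances only."""
    A = as_dict(M)
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            if len({x for e in sub for x in e}) < 2 * r:
                continue                 # repeated endpoint: not a matching
            B = as_dict(sub)
            if votes(pref, B, A) > votes(pref, A, B):
                return False             # found a more popular matching
    return True

# A triangle with cyclic preferences: each matching is beaten by a
# "rotated" one, so no popular matching exists.
pref = {1: [2, 3], 2: [3, 1], 3: [1, 2]}
edges = [(1, 2), (2, 3), (1, 3)]
print([M for M in ([], [(1, 2)], [(2, 3)], [(1, 3)])
       if is_popular(pref, edges, M)])   # -> []
\end{verbatim}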
In the decision version of the {\sc Popular Matching}\ problem, given an instance $I=(G,L=\{\ell_v: v\in V(G)\})$, {\em the question is whether there exists a popular matching}. \medskip \noindent {\bf History of the problem and our result.} The provenance of the notion of a popular matching can be dated to the work of Condorcet in 1785 on the subject of a {\it Condorcet winner} \cite{Condorcet1785}. In the last century, however, the notion was introduced as the {\it majority assignment} by G\"{a}rdenfors \cite{Gardenfors75} in 1975. Anecdotal retelling ascribes the coinage of the term ``popular matching'', and the associated question of whether there exists a polynomial-time algorithm for {\sc Popular Matching}\ in the ``housing allocation'' setting, to Robert Irving during a talk at the University of Glasgow in 2002 \cite[pg 333]{Manlove13b}. In 2005, Abraham \etalcite{Abraham07} were the first to discuss an efficient algorithm for computing a popular matching, albeit for the case where the graph is bipartite and only the vertices in one of the partitions have a preference list, a setting known as the {\em housing allocation}. The persuasive motivation and elegant analysis of Abraham \emph{et~al.}\xspace led to a spate of papers on popular matching \cite{ManloveSng06,HuangKavithaMichailNasre08,KavithaN09,KavithaMestreNasre11,OptimalPM09,BiroIrvingManlove10,HuangKavitha17,KiralyKarkus17} covering diverse settings that include strict preferences as well as ones with ties. It is well-known that when the input graph is bipartite--the stable marriage setting--the {\sc Popular Matching}\ problem can always be decided affirmatively in polynomial time \cite{DBLP:journals/iandc/HuangK13}. It is equally well-known that when the graph is arbitrary, the computational complexity of deciding whether a popular matching exists has remained unknown. In particular, whether {\sc Popular Matching}\ is \textsf{NP}-hard\ has been repeatedly, explicitly asked as an open question over the last decade~\cite{EgresWiki,BiroIrvingManlove10,CsehTalk2015,Cseh-survey,CsehKavitha16,HuangKavitha13j,DBLP:journals/iandc/HuangK13,HuangKavitha17,Size-Popularity-Tradeoff-Kavitha14j,DBLP:journals/corr/abs-1802-07440,KiralyKarkus17,ManloveSummerSchool2013,Manlove13b}. Indeed, it has been stated as one of the main open problems in the area (see the aforementioned citations). In this paper we settle this question by proving the following result. \begin{theorem}\label{thm:main} {\sc Popular Matching}\ is \textsf{NP}-complete. \end{theorem} \smallskip \noindent {\bf Our method.} An optimization question related to {\sc Popular Matching}\ is that of finding a popular matching of the largest size (as not all popular matchings are of the same size). Let this problem be called {\sc Max Sized Popular Matching}. Until recently, it was not known whether this problem is \textsf{NP}-hard\ in the roommates setting. Recently, Kavitha investigated the computational complexity of {\sc Max Sized Popular Matching} in arbitrary graphs \cite{DBLP:journals/corr/abs-1802-07440}, and showed it to be \textsf{NP}-hard. This reduction serves as one of the three main gadgets in our reduction--the other two gadgets are completely new. The design of our reduction required several new insights. Firstly, our source problem is a ``{\sc 3-SAT}-like'' variant of {\sc Vertex Cover}, which allows us to enjoy the benefits of both worlds: we gain both the lack of ``optimization constraints'' as in {\sc 3-SAT}, and the simplicity of {\sc Vertex Cover}.
The usage of this source problem requires us to encode selection of {\em exactly} one ``element'' out of two, and exactly two ``elements'' out of three. Here, our gadget design is carefully tailored to exploit a known characterization of a popular matching. In particular, we make use of ``troublemaker triangles''--these are triangles consisting of three vertices, one of whom must be matched to a vertex outside the triangle to give rise to a popular matching. We embed these triangles in a structure that coordinates the way in which they can be {\em traversed}. Here, traversal precisely refers to the above-mentioned characterization, which relies on the exposure of certain alternating paths and cycles in a graph associated with a candidate matching (to be a popular matching). Our gadgets lead traversals of such paths and cycles to dead-ends. We remark that when we describe our gadgets, we present additional intuitive explanations of their design. \medskip \noindent {\bf Related results.} Chung~\cite{Chung00} was the first to study the {\sc Popular Matching}\ problem in the roommates setting. He observed that every stable matching is a popular matching. In the midst of a long series of articles, the issue of the computational complexity of {\sc Popular Matching}\ in an arbitrary graph remained unsettled, leading various researchers to devise notions such as the {\it unpopularity factor} and the {\it unpopularity margin} \cite{HuangKavithaMichailNasre08,McCutchen08,HuangKavitha13j} in the hope of capturing the essence of popular matchings. A solution concept that emerged from this search is the {\it maximum sized popular matching}, motivated by the fact that, unlike stable matchings (Rural Hospital Theorem \cite{Roth86}), popular matchings in an instance do not all match the same set of vertices or even have the same size. Thus, it is natural to focus on the size of a popular matching. There is a series of papers that focus on the {\sc Max Sized Popular Matching} problem in bipartite graphs (without ties in preference lists) \cite{Size-Popularity-Tradeoff-Kavitha14j,DBLP:journals/iandc/HuangK13,CsehKavitha16} and (with ties) \cite{PM-2sidedPref-1sidedTies-CsehHuangKavitha17j}. When preferences are strict, there are various polynomial-time algorithms that solve {\sc Max Sized Popular Matching} in bipartite graphs: Huang and Kavitha \cite{HuangKavitha13j} give an $\mathcal{O}(mn_{0})$ algorithm that was improved by Kavitha to $\mathcal{O}(m)$ \cite{Size-Popularity-Tradeoff-Kavitha14j}, where $m$ and $n_{0}$ denote the number of edges in the bipartite graph and the size of the smaller vertex partition, respectively. In the presence of ties (even on one side), the {\sc Max Sized Popular Matching} problem was shown to be \textsf{NP}-hard\ \cite{PM-2sidedPref-1sidedTies-CsehHuangKavitha17j}. It is worth noting that every stable matching is popular, but the converse is not true. As a consequence of the former, every bipartite graph has a popular matching that is computable in polynomial time, because it has a stable matching computable by the famous Gale-Shapley algorithm described in the seminal paper \cite{GaleShapley62} by the eponymous Gale and Shapley. \section{Definition of {\sc Partitioned Vertex Cover}}\label{sec:partVC} The correctness of our reduction will crucially rely on the fact that our source problem will {\em not} be {\sc Vertex Cover}, but a variant of it that we call {\sc Partitioned Vertex Cover}. This variant is defined as follows.
\paragraph{Problem definition.} The input of {\sc Partitioned Vertex Cover}\ consists of a graph $G$, a collection ${\cal P}$ of pairwise disjoint edges in $G$, and a collection ${\cal T}$ of pairwise disjoint sets of size 3 of vertices that induce triangles in $G$,\footnote{That is, for all $\{x,y,z\}\in{\cal T}$, we have that $\{x,y\},\{y,z\},\{z,x\}\in E(G)$.} such that every vertex in $V(G)$ occurs in either a triangle in ${\cal T}$ or an edge in ${\cal P}$ (but not in both). In other words, ${\cal T}\cup{\cal P}$ forms a partition of $V(G)$ into sets of sizes 3 and 2. To ease readability, we will refer to a set (edge) in $\cal P$ as a {\em pair} and to a set in $\cal T$ as a {\em triple}. The objective of {\sc Partitioned Vertex Cover}\ is to decide whether $G$ has a vertex cover $U$ such that the following two conditions hold. \begin{enumerate} \item\label{pvc:condition1} For every $P\in {\cal P}$, it holds that $|U\cap P|=1$. \item\label{pvc:condition2} For every $T\in {\cal T}$, it holds that $|U\cap T|=2$. \end{enumerate} A vertex cover $U$ with the properties above will be referred to as a {\em solution} (see the sketch below). \paragraph{Remark and Hardness.} We remark that it will be crucial for us that {\em (i)} the sets in ${\cal T}\cup{\cal P}$ are all pairwise disjoint, {\em (ii)} the maximum size of a set in ${\cal T}\cup{\cal P}$ is only 3 and all but one of the vertices of a set in ${\cal T}\cup{\cal P}$ must be selected, and {\em (iii)} all solutions must have the same size, where the implicit size requirement (that is, being of size exactly $|{\cal P}|+2|{\cal T}|$) is automatically satisfied if Conditions \ref{pvc:condition1} and \ref{pvc:condition2} are satisfied. Now, we claim that {\sc Partitioned Vertex Cover}\ is \textsf{NP}-hard. The correctness of this claim directly follows from a classic reduction from {\sc 3-SAT} to {\sc Vertex Cover} (see, e.g., \cite{sipser2006}). For the sake of completeness, we present this reduction and argue formally why its output can be viewed correctly as an instance of {\sc Partitioned Vertex Cover}\ (rather than an instance of {\sc Vertex Cover}) in Appendix \ref{app:partVC}. \begin{lemma}\label{lem:partVC} {\sc Partitioned Vertex Cover}\ is \textsf{NP}-hard. \end{lemma}
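To make Conditions \ref{pvc:condition1} and \ref{pvc:condition2} concrete, the following minimal Python sketch (ours, for illustration only; the toy instance and all names are our own) verifies whether a vertex set is a solution of a {\sc Partitioned Vertex Cover}\ instance.
\begin{verbatim}
def is_solution(U, edges, pairs, triples):
    """Check that U is a vertex cover meeting every pair in
    exactly one vertex and every triple in exactly two."""
    U = set(U)
    covers = all(e & U for e in edges)            # vertex cover
    cond1 = all(len(P & U) == 1 for P in pairs)   # Condition 1
    cond2 = all(len(T & U) == 2 for T in triples) # Condition 2
    return covers and cond1 and cond2

# Toy instance: a triple {1,2,3} inducing a triangle, a pair {4,5},
# and one edge connecting the two groups.
edges   = [{1, 2}, {2, 3}, {3, 1}, {4, 5}, {3, 4}]
pairs   = [{4, 5}]
triples = [{1, 2, 3}]
print(is_solution({1, 3, 4}, edges, pairs, triples))     # True
print(is_solution({1, 2, 3, 4}, edges, pairs, triples))  # False: |U & T| = 3
\end{verbatim}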
\section{Preliminaries}\label{sec:prelims} \paragraph{Standard Definitions and Our Notation.} Given a graph $G$, we let $V(G)$ and $E(G)$ denote the vertex set and edge set of $G$, respectively. Throughout the paper, we consider undirected simple graphs. We view an edge as a {\em set} of two vertices. A {\em triangle} in $G$ is a cycle in $G$ on exactly three vertices. The neighborhood of a vertex $v\in V(G)$ in $G$ is denoted by $N_G(v)=\{u\in V(G): \{u,v\}\in E(G)\}$, and the set of edges incident to $v$ in $G$ is denoted by $E_G(v)$. Given a vertex $v\in V(G)$, a {\em preference list of $v$ in $G$} is a {\em bijective} function $\ell_v: N_G(v)\rightarrow \{1,2,\ldots,|N_G(v)|\}$. Informally, the smaller the number a vertex $v\in V(G)$ assigns to a vertex $u\in N_G(v)$, the more $v$ prefers to be matched to $u$. In particular, for all $u,w\in N_G(v)$, if $\ell_v(u)<\ell_v(w)$, then $v$ prefers $u$ over $w$. A matching $M$ in $G$ is a subset of $E(G)$ whose edges are pairwise disjoint. We say that a vertex $v\in V(G)$ is matched by a matching $M$ if there exists a (unique) vertex $u\in V(G)$ such that $\{u,v\}\in M$, which we denote by $u=M(v)$. Moreover, $M$ is maximal if there is no edge in $E(G)$ such that both endpoints of that edge are not matched by $M$. We denote an instance of {\sc Popular Matching}\ (in the roommates setting) by $I=(G,L=\{\ell_v: v\in V(G)\})$. Roughly speaking, a vertex $v\in V(G)$ prefers a matching $M$ over a matching $M'$ if its ``status'' in $M$ is better than the one in $M'$, where being not matched is the least preferred status. Formally, the notion of preference over matchings is defined as follows. \begin{definition}\label{def:preference} Let $I=(G,L=\{\ell_v: v\in V(G)\})$ be an instance of {\sc Popular Matching}. Given two matchings in $G$, denoted by $M$ and $M'$, we say that a vertex $v\in V(G)$ {\em prefers} $M$ over $M'$ if one of the following conditions is satisfied: {\em (i)} $v$ is matched by $M$ but not matched by $M'$; {\em (ii)} $v$ is matched by both $M$ and $M'$, and $\ell_v(M(v))<\ell_v(M'(v))$. The number of vertices in $V(G)$ that prefer $M$ over $M'$ is denoted by $\mathtt{vote}(M,M')$. \end{definition} Roughly speaking, $\mathtt{vote}(M,M')$ above can be thought of as the number of vertices that will vote in favor of $M$ when they are asked to decide whether $M$ or $M'$ should be chosen. For notational convenience, given a vertex $v\in V(G)$, we denote $\ell_v(v)=|N_G(v)|+1$, and given a matching $M$ where $v$ is not matched, we denote $M(v)=v$. Then, for example, the first condition in Definition \ref{def:preference} is subsumed by the second one. We now also formally define the notion of popularity. \begin{definition}\label{def:popularity} Let $I=(G,L=\{\ell_v: v\in V(G)\})$ be an instance of {\sc Popular Matching}. We say that a matching $M$ in $G$ is {\em popular} if $\mathtt{vote}(M',M)-\mathtt{vote}(M,M')\leq 0$ for any other matching $M'$ in $G$. \end{definition} Intuitively, the meaning of the definition above is that when the vertices are asked whether we should replace $M$ by $M'$, for any other matching $M'$, the number of vertices that will vote against the swap is at least as large as the number of vertices that will vote in favor of it. Let us recall that in the {\sc Popular Matching}\ problem, the objective is to decide whether there exists a popular matching. Given a graph $G$, we say that a vertex $v\in V(G)$ {\em covers} an edge $e\in E(G)$ if $v$ is incident to $e$, that is, $v\in e$. A {\em vertex cover} $U$ in $G$ is a subset of $V(G)$ such that every edge in $E(G)$ is covered by at least one vertex in $U$. In the {\sc Vertex Cover} problem, we are given a graph $G$ and an integer $k$, and the objective is to decide whether $G$ has a vertex cover of size at most $k$. \paragraph{Known characterization of popular matchings.} We need to present (known) definitions of a labeling of the edges in $E(G)$ as well as of a special graph derived from $G$ and a matching $M$ in $G$, which will give rise to a characterization of popular matchings. \begin{definition}[Definition 2 in \cite{DBLP:journals/iandc/HuangK13}, Rephrased]\label{def:label} Let $I=(G,L=\{\ell_v: v\in V(G)\})$ be an instance of {\sc Popular Matching}. Given a matching $M$ in $G$, the edge labeling $\mathtt{label}_M: (E(G)\setminus M)\rightarrow \{-2,0,+2\}$ is defined as follows.
$$ \mathtt{label}_M(\{u,v\})= \begin{cases} -2\ \ \ \mathrm{if}\ \ell_u(M(u))<\ell_u(v)\ \mbox{and }\ \ell_v(M(v))<\ell_v(u)\\ +2\ \ \ \mathrm{if }\ \ell_u(M(u))>\ell_u(v)\ \mbox{and }\ \ell_v(M(v))>\ell_v(u)\\ \ \ 0\ \ \ \mathrm{otherwise} \end{cases} $$ \end{definition} Intuitively, an edge in the definition above is assigned $-2$ if both its endpoints do not prefer being matched to each other over their status in $M$, and it is assigned $+2$ if both its endpoints prefer being matched to each other over their status in $M$. \begin{definition}[\cite{DBLP:journals/iandc/HuangK13}]\label{def:GM} Let $I=(G,L=\{\ell_v: v\in V(G)\})$ be an instance of {\sc Popular Matching}. Given a matching $M$ in $G$, the graph $G_M$ is the subgraph of $G$ with $V(G_M)=V(G)$ and $E(G_M)=\{\{u,v\}\in E(G): \{u,v\}\in M$ or $\mathtt{label}_M(\{u,v\})\neq -2\}$. \end{definition} Before we can present the characterization, we need to define the notions of an alternating path and an alternating cycle in $G_M$. First, an {\em alternating cycle} in $G_M$ is a cycle in $G_M$ (with an even number of edges) such that if we traverse the edges of the cycle (in any direction), then every edge in $M$ is followed by an edge outside $M$, and every edge outside $M$ is followed by an edge in $M$. Similarly, an {\em alternating path} in $G_M$ is a path in $G_M$ such that if we traverse the edges of the path (in any direction), then every edge in $M$ is followed by an edge outside $M$ (with the exception of the last edge), and every edge outside $M$ is followed by an edge in $M$ (with the same exception), and in addition, if the edge incident to the first or last vertex on the path is not in $M$, then that vertex is not matched by $M$. Now, the characterization is given by the following proposition. \begin{proposition}[Theorem 1 in \cite{DBLP:journals/iandc/HuangK13}, Rephrased]\label{prop:char} Let $I=(G,L=\{\ell_v: v\in V(G)\})$ be an instance of {\sc Popular Matching}. A matching $M$ in $G$ is popular if and only if the following conditions hold in $G_M$. \begin{itemize} \item There is no alternating cycle in $G_M$ that contains at least one edge labeled +2 by $\mathtt{label}_M$. \item There is no alternating path in $G_M$ that starts from a vertex not matched by $M$ and contains at least one edge labeled +2 by $\mathtt{label}_M$. \item There is no alternating path in $G_M$ that contains at least two edges labeled +2 by $\mathtt{label}_M$. \end{itemize} \end{proposition} We remark that the observation that Theorem 1 in \cite{DBLP:journals/iandc/HuangK13} holds for general graphs (its statement refers to bipartite graphs) is noted on page 6 of that paper. The usefulness of Proposition \ref{prop:char} for us is that it will help us verify that the matching we construct when we prove the forward direction of the correctness of our reduction is indeed popular. Note that if we were to prove the popularity of a matching by using only the definition of popularity, then we would need to compare the matching to a huge number of other matchings (which can be of super-exponential magnitude). Thus, Proposition \ref{prop:char} will come in handy.
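As a small illustration of Definitions \ref{def:label} and \ref{def:GM}, the following Python sketch (ours; all names are illustrative) computes the labeling $\mathtt{label}_M$ and the edge set of $G_M$, using the convention $\ell_v(v)=|N_G(v)|+1$ and $M(v)=v$ from above. Checking the three conditions of Proposition \ref{prop:char} on top of this is a standard search over alternating paths and cycles, which we omit here.
\begin{verbatim}
def rank(pref, v, u):
    """Rank that v assigns to u, with ell_v(v) = |N(v)| + 1."""
    return pref[v][u] if u != v else len(pref[v]) + 1

def matched_to(M, v):
    """M(v) under the convention M(v) = v for unmatched vertices."""
    for e in M:
        if v in e:
            return next(u for u in e if u != v)
    return v

def label(M, e, pref):
    """label_M for an edge e = {u, v} outside M."""
    u, v = tuple(e)
    u_prefers_M = rank(pref, u, matched_to(M, u)) < rank(pref, u, v)
    v_prefers_M = rank(pref, v, matched_to(M, v)) < rank(pref, v, u)
    if u_prefers_M and v_prefers_M:
        return -2
    if not u_prefers_M and not v_prefers_M:
        return +2
    return 0

def edges_of_GM(M, edges, pref):
    """Edges of G_M: all of M plus every non-M edge not labeled -2."""
    return [e for e in edges if e in M or label(M, e, pref) != -2]
\end{verbatim}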
\section{Reducing {\sc Partitioned Vertex Cover}\ to {\sc Popular Matching}}\label{sec:reduction} Let $I=(G,{\cal P},{\cal T})$ be an instance of {\sc Partitioned Vertex Cover}. In this section, we construct an instance $\mathtt{reduction}(I)=(H,L=\{\ell_v: v\in V(H)\})$ of {\sc Popular Matching}. Note that, to avoid confusion, we denote the graph in $\mathtt{reduction}(I)$ by $H$ rather than $G$, since the latter already denotes the graph in $I$. We remark that the Edge Coverage gadget below is in fact the entire reduction from (standard) {\sc Vertex Cover} to an optimization variant of {\sc Popular Matching}\ recently given by Kavitha \cite{DBLP:journals/corr/abs-1802-07440} (in that context, we will use notation consistent with this work). Our two other gadgets are completely new. After describing the Edge Coverage gadget, we briefly discuss its weakness. In particular, this brief discussion sheds light on the leap in the understanding of the {\sc Popular Matching}\ problem that we had to make in order to employ this known gadget (or any other similar gadget in the literature on popular matchings) to prove the hardness of {\sc Popular Matching}. \subsection{Edge Coverage} For every vertex $i\in V(G)$, we add four new vertices (to $H$), denoted by $a_i,b_i,c_i$ and $d_i$. In addition, we add the edges $\{d_i,a_i\},\{a_i,b_i\},\{a_i,c_i\}$ and $\{b_i,c_i\}$ (see Fig.~\ref{fig:edgeCoverage}). Now, for every edge $e=\{i,j\}\in E(G)$, we add two vertices, $u^e_i$ and $u^e_j$, and the edges $\{u^e_i,u^e_j\},\{b_i,u^e_i\}$ and $\{b_j,u^e_j\}$. Let us now give a partial definition of the preference lists of the vertices added so far (see Fig.~\ref{fig:edgeCoverage}). When we add neighbors to some of these vertices later on, they will be appended to the end of these partial lists, and we will not change the values that we are about to define. For every vertex $i\in V(G)$, we have the following definitions. \begin{itemize} \item {\bf Vertex $a_i$:} $\ell_{a_i}(b_i)=1$; $\ell_{a_i}(c_i)=2$; $\ell_{a_i}(d_i)=3$. \item {\bf Vertex $b_i$:} $\ell_{b_i}(a_i)=1$; $\ell_{b_i}$ restricted to $\{u^e_i: e\in E_G(i)\}$ is an arbitrary bijection into $\{2,3,\ldots,|E_G(i)|+1\}$;\footnote{That is, every vertex in $\{u^e_i: e\in E_G(i)\}$ is assigned a unique integer from $\{2,3,\ldots,|E_G(i)|+1\}$, and it is immaterial to us which bijection is chosen to achieve this.} $\ell_{b_i}(c_i)=|N_G(i)|+2$. \item {\bf Vertex $c_i$:} $\ell_{c_i}(a_i)=1$; $\ell_{c_i}(b_i)=2$. \item {\bf Vertex $d_i$:} $\ell_{d_i}(a_i)=1$. \item {\bf Vertex $u^e_i$ for any $e=\{i,j\}\in E_G(i)$:} $\ell_{u^e_i}(u^e_j)=1$; $\ell_{u^e_i}(b_i)=2$. \end{itemize} This completes the description of the Edge Coverage gadget. \begin{figure}[t!]\centering \fbox{\includegraphics[scale=0.8]{cropped_EdgeCoverage}} \caption{The Edge Coverage gadget. Here, $x\in\{2,3,\ldots,|N_G(i)|+1\}$ and $y\in\{2,3,\ldots,|N_G(j)|+1\}$.}\label{fig:edgeCoverage} \end{figure} \paragraph{Intuition.} The Edge Coverage gadget aims to encode (as we will see in Section \ref{sec:correctness}) the selection of a vertex as follows. In every popular matching $M$, either $\{a_i,b_i\}\in M$, or both $\{a_i,d_i\}\in M$ and $\{b_i,c_i\}\in M$. The special choice of the preferences ensures this, where the first choice indicates that $i$ is present in the vertex cover encoded by $M$, while the second choice indicates that $i$ is not present in this vertex cover. Intuitively, $a_i$ and $b_i$ prefer each other the most, but if we choose to match them, we ``leave out'' both $d_i$ and $c_i$, which gives rise to the two configurations above. Then, the addition of $u^e_i$ and $u^e_j$, which prefer each other the most, and which are inserted in the ``middle'' of $b_i$'s and $b_j$'s lists, respectively, will ensure that every edge is indeed covered.
To establish this last claim, it will also be important that $c_i$ prefers $a_i$ over $b_i$---this will allow us to ``move'' from the configuration of having $\{a_i,d_i\},\{b_i,c_i\}\in M$ and $\{a_j,d_j\},\{b_j,c_j\}\in M$ to one where $c_i$ and $c_j$ are matched to $a_i$ and $a_j$, respectively, when we try to exhibit a matching more popular than $M$. While this gadget, already given by Kavitha \cite{DBLP:journals/corr/abs-1802-07440}, is very useful to us, its main drawback is that it cannot force popular matchings to favor the selection of $\{b_i,c_i\}\in M$ and $\{a_i,d_i\}\in M$ over $\{a_i,b_i\}\in M$. In other words, this gadget does not help us, in any way, to force the encoded vertex cover to be as small as possible. (We remark that Kavitha \cite{DBLP:journals/corr/abs-1802-07440} considers a variant of {\sc Popular Matching} where the matching should be as large as possible, and hence the inherent difficulty of the problem is circumvented.) By considering {\sc Partitioned Vertex Cover}\ rather than {\sc Vertex Cover}, we no longer need to deal with such a ``size optimization'' constraint. However, we now need to handle the constraints imposed by $\cal P$ and $\cal T$. Nevertheless, these two sets are very structured, as explained in Section \ref{sec:partVC} (in sharp contrast to, say, an arbitrary instance of {\sc 3-SAT}). In fact, every detail of the gadgets described next is carefully tailored to exploit the extra structural properties of {\sc Partitioned Vertex Cover}\ as much as possible, as will be made clear in Section \ref{sec:correctness}. \subsection{Pair Selector} For every pair $\{i,j\}\in{\cal P}$ with $i<j$, we add two new vertices (to $H$), denoted by $f_{ij}$ and $f_{ji}$, along with the edges $\{d_i,f_{ij}\},\{f_{ij},c_j\},\{d_j,f_{ji}\}$ and $\{f_{ji},c_i\}$ (see Fig.~\ref{fig:pairSelector}). In addition, we insert the edges $\{c_i,d_j\}$ and $\{c_j,d_i\}$. We update the preference lists of the vertices as follows (see Fig.~\ref{fig:pairSelector}). \begin{itemize} \item {\bf Vertex $c_i$:} $\ell_{c_i}(f_{ji})=3$; $\ell_{c_i}(d_j)=4$. \item {\bf Vertex $c_j$:} $\ell_{c_j}(f_{ij})=3$; $\ell_{c_j}(d_i)=4$. \item {\bf Vertex $d_i$:} $\ell_{d_i}(c_j)=2$; $\ell_{d_i}(f_{ij})=3$. \item {\bf Vertex $d_j$:} $\ell_{d_j}(c_i)=2$; $\ell_{d_j}(f_{ji})=3$. \item {\bf Vertex $f_{ij}$:} $\ell_{f_{ij}}(d_i)=1$; $\ell_{f_{ij}}(c_j)=2$. \item {\bf Vertex $f_{ji}$:} $\ell_{f_{ji}}(d_j)=1$; $\ell_{f_{ji}}(c_i)=2$. \end{itemize} Note that the definition above is valid since no vertex in $V(G)$ participates in more than one pair, and hence no integer is assigned by any function $\ell_{\diamond}$ more than once. This completes the description of the Pair Selector gadget. \begin{figure}[t!]\centering \fbox{\includegraphics[scale=0.8]{cropped_PairSelector}} \caption{The Pair Selector gadget.}\label{fig:pairSelector} \end{figure} \paragraph{Intuition.} First, we would like to point out that the Pair Selector gadget is symmetric in the sense that if we swap $i$ and $j$, we obtain an isomorphic structure also with respect to preferences. Thus, the gadget is well (uniquely) defined even if we drop the requirement ``with $i<j$'' above. We will use this symmetry when we prove the correctness of our reduction. To gain some deeper understanding of this gadget, let us recall that in {\sc Partitioned Vertex Cover}, {\em exactly} one vertex among $\{i,j\}$ must be selected.
We already know that the Edge Coverage gadget is meant to ensure that {\em at least} one vertex among $\{i,j\}$ is selected. Hence, we only need to ensure that not {\em both} $i$ and $j$ are selected. However, if both $i$ and $j$ are selected, then both $c_i$ and $d_j$ are left not matched. Then, the preferences on the triangle on $\{c_i,d_j,f_{ji}\}$ are chosen specifically to ``cause trouble''---no matter which edge of this triangle will be picked by the matching, we can replace it by a different edge on this triangle to exhibit a more popular matching. For example, if we pick $\{c_i,f_{ji}\}$, then $d_j$ is left not matched, while $f_{ji}$ prefers $d_j$ over $c_i$. This means that by replacing $\{c_i,f_{ji}\}$ by $\{f_{ji},d_j\}$, we make both $f_{ji}$ and $d_j$ more satisfied, while only $c_i$ becomes less satisfied (no other vertex in $H$ is affected by the swap). In light of the swap above, it may appear as if it would have been sufficient to keep the triangle on $\{c_i,d_j,f_{ji}\}$, while removing the triangle on $\{c_j,d_i,f_{ij}\}$ from the gadget. However, without the second triangle, the proof of the forward direction fails---the matching we attempt to construct from a vertex cover will not be popular. In particular, by having the second triangle as well, we will always be able to match all of the vertices in $H$, and hence avoid the need to consider the second condition in Proposition \ref{prop:char}. Again, we stress that the second triangle is not meant to ease the proof, but that without it the forward direction of the proof fails. It is also worth noting here that only having these two triangles is not sufficient, but the exact ``orientation'' of their preferences is crucial. In particular, if we changed the orientation of only one of the triangles---for example, if we made $c_i$ prefer $d_j$ over $f_{ji}$, $d_j$ prefer $f_{ji}$ over $c_i$, and $f_{ji}$ prefer $c_i$ over $d_j$---then the gadget would no longer have been symmetric, and the proof of the forward direction would have failed. Roughly speaking, the two triangles on $\{c_i,d_j,f_{ji}\}$ and $\{c_j,d_i,f_{ij}\}$ ``work together'' to prevent the existence of alternating cycles that must not exist by Proposition \ref{prop:char}. Deeper coordination is required in the next gadget, and we will elaborate on it more when we explain the intuition behind that gadget. \subsection{Triple Selector} For every triple $\{i,j,k\}\in{\cal T}$ with $i<j<k$, we add six new edges (to $H$): $\{d_i,d_j\}$, $\{d_j,d_k\}$, $\{d_k,d_i\}$, $\{c_i,c_j\}$, $\{c_j,c_k\}$ and $\{c_k,c_i\}$ (see Fig.~\ref{fig:tripleSelector}). We update the preference lists of the vertices as follows (see Fig.~\ref{fig:tripleSelector}). \begin{itemize} \item {\bf Vertex $c_i$:} $\ell_{c_i}(c_k)=3$; $\ell_{c_i}(c_j)=4$. \item {\bf Vertex $c_j$:} $\ell_{c_j}(c_i)=3$; $\ell_{c_j}(c_k)=4$. \item {\bf Vertex $c_k$:} $\ell_{c_k}(c_j)=3$; $\ell_{c_k}(c_i)=4$. \item {\bf Vertex $d_i$:} $\ell_{d_i}(d_j)=2$; $\ell_{d_i}(d_k)=3$. \item {\bf Vertex $d_j$:} $\ell_{d_j}(d_k)=2$; $\ell_{d_j}(d_i)=3$. \item {\bf Vertex $d_k$:} $\ell_{d_k}(d_i)=2$; $\ell_{d_k}(d_j)=3$. \end{itemize} Note that the definition above is valid since no vertex in $V(G)$ participates in both a pair and a triple, or in more than one triple, and hence no integer is assigned by any function $\ell_{\diamond}$ more than once. This completes the description of the Triple Selector gadget.
\begin{figure}[t!]\centering \fbox{\includegraphics[scale=0.8]{cropped_TripleSelector}} \caption{The Triple Selector gadget.}\label{fig:tripleSelector} \end{figure} \paragraph{Intuition.} First, we would like to point out that the Triple Selector gadget is symmetric with respect to cyclic shifts. That is, if we replace $j$ by $i$, $k$ by $j$, and $i$ by $k$, then we obtain an isomorphic structure also with respect to preferences. We will use this symmetry when we prove the correctness of our reduction. To gain some deeper understanding of this gadget, let us recall that in {\sc Partitioned Vertex Cover}, {\em exactly} two vertices among $\{i,j,k\}$ must be selected. We already know that the Edge Coverage gadget will ensure that {\em at least} two vertices among $\{i,j,k\}$ are selected (since $\{i,j,k\}$ induces a triangle in $G$ and to cover the edges of a triangle at least two of its vertices must be selected). Hence, we only need to ensure that not {\em all} of the vertices $i,j$ and $k$ are selected. However, if $i,j$ and $k$ are all selected, then $d_i,d_j$ and $d_k$ are all left not matched. Then, the preferences on the triangle on $\{d_i,d_j,d_k\}$ are chosen specifically to ``cause trouble'' in a manner similar to the Pair Selector gadget---again, no matter which edge of this triangle will be picked by the matching, we can replace it by a different edge on this triangle to exhibit a more popular matching. For example, if we pick $\{d_i,d_j\}$, then $d_k$ is left not matched, while $d_j$ prefers $d_k$ over $d_i$. This means that by replacing $\{d_i,d_j\}$ by $\{d_j,d_k\}$, we make both $d_j$ and $d_k$ more satisfied, while only $d_i$ becomes less satisfied (no other vertex in $H$ is affected by the swap). As in the case of the Pair Selector gadget, the inner triangle (in Fig.~\ref{fig:tripleSelector}) on $\{d_i,d_j,d_k\}$ is not sufficient---the forward direction of the proof fails without the outer triangle on $\{c_i,c_j,c_k\}$. Here, to make the forward direction go through, an additional idea is required. Roughly speaking, we need to have coordination between the triangles (recall that in the previous gadget, some coordination was also noted as a requirement to ensure symmetry, but here deeper coordination is required). Let us elaborate (in a non-formal manner) on the meaning of this coordination here. Specifically, we ``orient'' the inner triangle and the outer triangle in different directions. (Note that symmetry would have been achieved even if we had oriented them in the same direction.) By this, we mean that while in the inner triangle, $d_i$ prefers $d_j$ over $d_k$, $d_j$ prefers $d_k$ over $d_i$, and $d_k$ prefers $d_i$ over $d_j$, the same does not hold when we rename $d$ to be $c$---here, the direction is reversed, as $c_i$ prefers $c_k$ over $c_j$, $c_j$ prefers $c_i$ over $c_k$, and $c_k$ prefers $c_j$ over $c_i$. This reversal will come in handy when we prove the forward direction, as it will ``block up'' alternating cycles that must not exist by Proposition \ref{prop:char}. Intuitively, the main insight is that if we try to improve the matching we will construct in the proof of the forward direction in a ``clockwise direction'', then we can make two $d$-type vertices more satisfied and only one $d$-type vertex less satisfied, but at the same time, more $c$-type vertices become unsatisfied, and hence we overall do not gain more popularity.
In addition, if we try to improve the matching in a ``counter-clockwise direction'', then we can make two $c$-type vertices more satisfied and only one $c$-type vertex less satisfied, but at the same time, more $d$-type vertices become unsatisfied, and hence again we overall do not gain more popularity.
\section{Introduction} \label{section:introduction} The theoretical analysis of state-of-the-art variable metric Evolution Strategies (ESs) is a long-standing open problem in evolutionary computation. While simple step-size adaptive ESs without Covariance Matrix Adaptation (CMA) have been analyzed with good success \citep{jaegerskuepper2006quadratic,akimoto2018drift,morinaga2019generalized}, we are still lacking appropriate tools for rigorously proving stability and convergence of variable metric methods like CMA-ES \citep{hansen:2001}. Most theoretical work on the rigorous analysis of evolution strategies focuses on simple ESs without CMA. Notable early work in this area was conducted by \citet{jaegerskuepper2006quadratic}, who proved linear convergence of the (1+1)-ES with $1/5$ success rule on convex quadratic functions with a progress rate of $\mathcal{O} \left( \frac{1}{d \cdot \kappa(H)} \right)$, which translates into the runtime growing linearly with problem dimension $d$ and the problem difficulty. Here, problem difficulty is measured by the condition number $\kappa(H)$ (the quotient of the largest and smallest eigenvalue) of the Hessian $H$ of a quadratic objective function. \citet{akimoto2018drift} proved a similar result restricted to the sphere function but providing explicit runtime bounds with drift theory methods \citep{doerr2011sharp}. That result was the basis of the much stronger result of \citet{morinaga2019generalized}, which establishes linear convergence of the (1+1)-ES on a large (non-parametric) class of problems, namely on $L$-smooth strongly convex functions. The analysis of modern variable-metric ESs like CMA-ES and its many variants is significantly less developed. In particular, no (linear) convergence guarantees exist, mostly due to the lack of proofs of stability of the CMA update. One significant approach to the problem is the Information Geometric Optimization (IGO) framework \citep{2017OllivierIGO}. It allows one to interpret the so-called rank-$\mu$ update of CMA-ES as a stochastic natural gradient step \citep{akimoto2010bidirectional}. This means that stability and convergence can be established provided the learning rate is small enough. However, the learning rates used in practice do not fulfill this condition, and hence establishing stability remains an open problem. For non-evolutionary variable-metric methods the situation is mixed. For example, to the best of our knowledge, there does not exist an analysis showing that the classic Nelder-Mead simplex algorithm converges to the minimum of a convex quadratic function at a rate that is independent of the condition number. Restricted results exist in low dimensions \citep{lagarias2012convergence}. On the other hand, Powell's NEWUOA method \citep{NEWUOA} can jump straight into the optimum once it has obtained enough samples to estimate the coefficients of the quadratic function exactly. The variable metric random pursuit algorithm of \citet{stich2016variable} is of particular interest in our context, since it is conceptually close to evolutionary computation methods and at the same time provides a provably stable update that allows the covariance matrix to converge to the inverse Hessian. In this paper we prove the stability of an alternative CMA mechanism, namely the recently proposed Hessian Estimation Evolution Strategy (HE-ES).
To this end we introduce a minimal elitist variant of HE-ES and prove monotone convergence of its covariance matrix to a multiple of the inverse Hessian of a convex quadratic objective function. Informally speaking, we mean by stability that the covariance matrix does not drift arbitrarily far away from the inverse Hessian. Our result is stronger, since we prove that the covariance matrix converges monotonically to a multiple of the inverse Hessian. As a consequence we are able to transfer existing results on the convergence of simple ESs on the sphere function to HE-ES. This way we obtain a strong guarantee, namely linear convergence of our HE-ES variant at a rate that is independent of the condition number of the problem at hand. The paper is organized as follows. We first introduce HE-ES and define the (1+4)-HE-ES as a minimal elitist variant. This algorithm is the main subject of our subsequent study. The next step is to show the stability and the convergence of the HE-ES covariance matrix update to the inverse Hessian of a quadratic objective function. We finally leverage the analysis of \cite{morinaga2019generalized} to show linear convergence of (1+4)-HE-ES at a rate that is independent of the problem difficulty $\kappa(H)$. \section{Hessian Estimation Evolution Strategies} The Hessian Estimation Evolution Strategy (HE-ES) is a recently proposed variable metric evolution strategy \citep{HEES}. Its main characteristic is its mechanism for adapting the sampling covariance matrix. In this section we first present the original algorithm and then introduce a novel elitist variant. \subsection{The HE-ES Algorithm} \label{section:HE-ES} HE-ES is a modern evolution strategy. It features non-elitist selection, global weighted recombination, cumulative step-size adaptation, and a special mechanism for covariance matrix adaptation. Most of these mechanisms coincide with the design of standard CMA-ES \citep{hansen:2001}. In the following presentation we focus on the non-standard aspects of the algorithm, following \citet{HEES}. In each iteration, HE-ES draws a number of mirrored samples of the form $x_i^- = m - \sigma \cdot A b_i$ and $x_i^+ = m + \sigma \cdot A b_i$, where $\sigma > 0$ is the global step size and $A$ is a Cholesky factor of the covariance matrix $C = A^T A$. For brevity we write $x_i^\pm$, with $\pm$ representing either $+$ or $-$. The vectors $b_i$ are drawn from the multi-variate Gaussian distribution $\mathcal{N}(0, I)$. Furthermore, the vectors $b_i$ are orthogonal, i.e., they fulfill $b_i^T b_j = 0$ for $i \not= j$. We also consider the normalized \emph{directions} $\frac{b_i}{\|b_i\|}$ in the following. The three points $x_i^-, m, x_i^+$ are arranged on a line, and restricted to each such line, the function values in these points give rise to the quadratic model \begin{align*} q_i(t) = c + g_i t + \frac{h_i}{2} t^2 \approx f \left( m + t \cdot A \frac{b_i}{\|b_i\|}\right) \end{align*} of the objective function. Fitting its coefficients to the function values yields the offset $c = f(m)$, the gradient $g_i = \frac{f(x_i^+) - f(x_i^-)}{2 \sigma \|b_i\|}$, and the curvature $h_i = \frac{f(x_i^-) + f(x_i^+) - 2 f(m)}{\sigma^2 \|b_i\|^2}$. The coefficients $h_i$ measure the curvature of the graphs of the quadratic models $q_i$. They are of particular interest in the following. The intuition behind this construction is as follows: Each $h_i$ is a finite difference estimate of a diagonal coefficient of the Hessian matrix $H$.
This is strictly true if $b_i$ is parallel to an axis of the coordinate system. Otherwise, $h_i$ contains exactly the same type of information, but not referring to an axis and a corresponding diagonal entry, but to an arbitrary direction~$b_i$. Therefore, estimating the models $q_i$, and the coefficients $h_i$ in particular, allows HE-ES to obtain curvature information about the problem, and more specifically, information about the Hessian of a quadratic objective function. The goal of HE-ES is to adapt its sampling covariance matrix $C$ towards a multiple of the inverse of the Hessian $H$ of a convex quadratic objective function \begin{align} f(x) = \frac12 (x - x^*)^T H (x - x^*) + f^* \label{eq:objective} \end{align} with global optimum $x^*$, optimal value $f^*$, and strictly positive definite symmetric Hessian $H$. Its covariance matrix update therefore updates $C$ \emph{in direction $b_i$} (measured by $\frac{b_i^T}{\|b_i\|} C \frac{b_i}{\|b_i\|}$) towards a multiple of $H^{-1}$ (measured by $\alpha \cdot \frac{b_i^T}{\|b_i\|} H^{-1} \frac{b_i}{\|b_i\|}$). This corresponds to learning a good \emph{shape} of the multi-variate normal distribution, while we leave learning of its position to the mean update, and learning of its global scale to the step size update. In other words, adapting to the (arbitrary) scaling factor $\alpha > 0$ is left to the step size update, which usually operates at a faster time scale (larger learning rate) than covariance matrix adaptation. Since the scaling factor $\alpha$ is arbitrary, a meaningful update can only change different components of $C$ \emph{relative} to each other. For instance, if \begin{align} h_i \cdot \frac{b_i^T}{\|b_i\|} C \frac{b_i}{\|b_i\|} \gg h_j \cdot \frac{b_j^T}{\|b_j\|} C \frac{b_j}{\|b_j\|} \enspace, \label{eq:imbalance} \end{align} then the variance in direction $b_i$ should be reduced while the variance in direction $b_j$ should be increased. This way, HE-ES keeps the \emph{scale} of its sampling distribution (measured by $\det(C)$) fixed. If we fully trust the data and the model, i.e., when minimizing a noise-free quadratic function, then equalizing the left-hand side and the right-hand side of inequality~\eqref{eq:imbalance} is the optimal (greedy) update step.
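To illustrate the estimates defined above, the following Python sketch (ours, purely illustrative; the concrete Hessian and all variable names are our own choices) computes $g_i$ and $h_i$ by finite differences on a convex quadratic function and confirms that $h_i = u_i^T A^T H A u_i$ holds exactly, where $u_i = b_i / \|b_i\|$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, sigma = 5, 0.3
H = np.diag(np.arange(1.0, d + 1.0))          # Hessian of f
f = lambda x: 0.5 * x @ H @ x                 # convex quadratic, x* = 0
m = rng.standard_normal(d)                    # current mean
A = np.eye(d)                                 # current Cholesky factor
b = rng.standard_normal(d)                    # sampling direction

x_plus, x_minus = m + sigma * A @ b, m - sigma * A @ b
nb = np.linalg.norm(b)
g = (f(x_plus) - f(x_minus)) / (2 * sigma * nb)            # gradient coeff.
h = (f(x_plus) + f(x_minus) - 2 * f(m)) / (sigma * nb)**2  # curvature coeff.

u = b / nb
print(np.isclose(h, u @ A.T @ H @ A @ u))     # True: exact for quadratics
\end{verbatim}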
\begin{algorithm} \caption{Hessian Estimation Evolution Strategy (HE-ES)} \label{algorithm:HE-ES} \begin{algorithmic}[1] \STATE{\textbf{input} $m^{(0)} \in \mathbb{R}^d$, $\sigma^{(0)} > 0$, $A^{(0)} \in \mathbb{R}^{d \times d}$} \STATE{\textbf{parameters} $\tilde \lambda \in \mathbb{N}$, $c_s$, $d_s$, $w \in \mathbb{R}^{2 \tilde \lambda }$} \STATE{$B \leftarrow \lceil \tilde \lambda / d \rceil$} \STATE{$p_s^{(0)} \leftarrow 0 \in \mathbb{R}^d$} \STATE{$g_s^{(0)} \leftarrow 0$} \STATE{$t \leftarrow 0$} \REPEAT \FOR{$j \in \{1, \dots, B\}$} \STATE{$b_{1j}, \dots, b_{dj} \leftarrow$ \texttt{sampleOrthogonal}()} \ENDFOR \STATE{$x_{ij}^- \leftarrow m^{(t)} - \sigma^{(t)} \cdot A^{(t)} b_{ij}$ ~~~~~for $i+(j-1)d \leq \tilde \lambda$} \STATE{$x_{ij}^+ \leftarrow m^{(t)} + \sigma^{(t)} \cdot A^{(t)} b_{ij}$ ~~~~~for $i+(j-1)d \leq \tilde \lambda$ \hfill \# mirrored sampling} \STATE{$A^{(t+1)} \leftarrow A^{(t)} \cdot$ \texttt{computeG}($\{b_{ij}\}$, $f(m^{(t)})$, $\{f(x_{ij}^\pm)\}$, $\sigma^{(t)}$)} \hfill \# matrix adaptation \STATE{$w_{ij}^\pm \leftarrow w_{\text{rank}(f(x_{ij}^\pm))}$} \STATE{$m^{(t+1)} \leftarrow \sum_{ij} w_{ij}^\pm \cdot x_{ij}^\pm$} \hfill \# mean update \STATE{$g_s^{(t+1)} \leftarrow (1 - c_s)^2 \cdot g_s^{(t)} + c_s \cdot (2 - c_s)$} \hfill \STATE{$p_s^{(t+1)} \leftarrow (1 - c_s) \cdot p_s^{(t)} + \sqrt{c_s \cdot (2 - c_s) \cdot \mu_\text{eff}^\text{mirrored}} \cdot \sum_{ij} (w_{ij}^+ - w_{ij}^-) \cdot b_{ij}$} \STATE{$\sigma^{(t+1)} \leftarrow \sigma^{(t)} \cdot \exp\left( \frac{c_s}{d_s} \cdot \left[ \frac{\|p_s^{(t+1)}\|}{\chi_d} - \sqrt{g_s^{(t+1)}} \right] \right)$} \hfill \# CSA \STATE{$t \leftarrow t + 1$} \UNTIL{ \textit{stopping criterion is met} } \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{sampleOrthogonal} \label{procedure:sampleOrthogonal} \begin{algorithmic}[1] \STATE{\textbf{input} dimension $d$} \STATE{$z_1, \dots, z_d \sim \mathcal{N}(0, I)$} \STATE{$n_1, \dots, n_d \leftarrow \|z_1\|, \dots, \|z_d\|$} \STATE{apply the Gram-Schmidt procedure to $z_1, \dots, z_d$} \STATE{return $y_i = n_i \cdot z_i, \quad i = 1, \dots, d$} \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{computeG} \label{procedure:computeG} \begin{algorithmic}[1] \STATE{\textbf{input} $b_{ij}$, $f(m)$, $f(x_{ij}^\pm)$, $\sigma$} \STATE{\textbf{parameters} $\kappa$, $\eta_A$} \STATE{$h_{ij} \leftarrow \frac{f(x_{ij}^+) + f(x_{ij}^-) - 2 f(m)}{\sigma^2 \cdot \|b_{ij}\|^2}$ \hfill \# estimate curvature along $b_{ij}$} \STATE{\textbf{if} $\max(\{h_{ij}\}) \leq 0$ \textbf{then} \textbf{return} $I$} \STATE{$c \leftarrow \max(\{h_{ij}\}) / \kappa$} \STATE{$h_{ij} \leftarrow \max(h_{ij}, c)$ \hfill \# truncate to trust region} \STATE{$q_{ij} \leftarrow \log(h_{ij})$} \STATE{$q_{ij} \leftarrow q_{ij} - \frac{1}{\tilde \lambda} \cdot \sum_{ij} q_{ij}$ \hfill \# subtract mean $\to$ ensure unit determinant} \STATE{$q_{ij} \leftarrow q_{ij} \cdot \frac{-\eta_A}{2}$ \hfill \# learning rate and inverse square root (exponent $-1/2$)} \STATE{$q_{iB} \leftarrow 0 \quad \forall i \in \{\tilde \lambda - (B-1) d + 1, \dots, d\}$ \hfill \# neutral update in the unused directions} \STATE{\textbf{return} $\frac{1}{B} \sum_{ij} \frac{\exp(q_{ij})}{\|b_{ij}\|^2} \cdot b_{ij} b_{ij}^T$} \end{algorithmic} \end{algorithm} Algorithm~\ref{algorithm:HE-ES} provides an overview of the resulting HE-ES algorithm. It is designed to be conceptually close to CMA-ES, using multi-variate Gaussian samples and cumulative step-size adaptation (CSA, \citealt{hansen:2001}).
One difference is the use of orthogonal mirrored samples (see algorithm~\ref{procedure:sampleOrthogonal}). If there are more directions than dimensions (the population size $\lambda$ exceeds $2d$) then multiple independent blocks of orthogonal samples are used. The core update mechanism discussed above is realized in algorithm~\ref{procedure:computeG}, applied to the Cholesky factor $A$ of the covariance matrix $C = A^T A$. Since practical objective functions are hardly ever exactly quadratic, the algorithm dampens update steps with a learning rate and limits the impact of non-positive curvature estimates ($h_i \leq 0$). A further notable property of HE-ES is its correction for mirrored sampling in CSA, which removes a bias that is otherwise present in the method \citep{HEES}. We do not discuss these additional mechanisms in detail, since they do not play a role in the subsequent analysis. It was demonstrated by \citet{HEES} that HE-ES shows excellent performance on many problems, including some highly rugged and non-convex functions, which strongly violate the assumption of a quadratic model. However, for the sake of a tractable analysis, we restrict ourselves to objective functions of the form given in equation~\eqref{eq:objective}. In general, quadratic functions should not be optimized with HE-ES; for example, NEWUOA is a more suitable method for this type of problem. The relevance of the function class lies in the fact that in the late phase of convergence, every twice continuously differentiable objective function is well approximated by its second order Taylor polynomial around the optimum, which is of the form~\eqref{eq:objective}. \subsection{A Minimal Elitist HE-ES} \begin{algorithm} \caption{(1+4)-HE-ES} \label{algorithm:elitist} \begin{algorithmic}[1] \STATE{\textbf{input} $m^{(0)} \in \mathbb{R}^d$, $\sigma^{(0)} > 0$, $A^{(0)} \in \mathbb{R}^{d \times d}$, $c_{\sigma} > 1$} \STATE{$t \leftarrow 0$} \REPEAT \STATE{$b_{1}, \dots, b_{d} \leftarrow$ \texttt{sampleOrthogonal}()} \STATE{$x_{i}^- \leftarrow m^{(t)} - \sigma^{(t)} \cdot A^{(t)} b_{i}$ ~~~~~for $i \in \{1, 2\}$} \STATE{$x_{i}^+ \leftarrow m^{(t)} + \sigma^{(t)} \cdot A^{(t)} b_{i}$ ~~~~~for $i \in \{1, 2\}$ \hfill \# mirrored sampling} \STATE{$f_{i}^\pm \leftarrow f(x_{i}^\pm)$ \hfill \# evaluate the four offspring} \STATE{$A^{(t+1)} \leftarrow A^{(t)} \cdot$ \texttt{computeG}($\{b_{i}\}$, $f(m^{(t)})$, $\{f_{i}^\pm\}$, $\sigma^{(t)}$)} \hfill \# matrix adaptation \IF{$f_1^+ \leq f(m^{(t)})$} \STATE{$m^{(t+1)} \leftarrow x_1^+$} \hfill \# mean update using the first sample \STATE{$\sigma^{(t+1)} \leftarrow \sigma^{(t)} \cdot c_{\sigma}$} \hfill \# increase step size (1/5 rule) \ELSE \STATE{$\sigma^{(t+1)} \leftarrow \sigma^{(t)} \cdot c_{\sigma} ^ {-1/4}$} \hfill \# decrease step size (1/5 rule) \ENDIF \STATE{$t \leftarrow t + 1$} \UNTIL{ \textit{stopping criterion is met} } \end{algorithmic} \end{algorithm} In this section we design a minimal variant of the HE-ES family. For the sake of a tractable analysis, we aim at simplicity in the algorithm design, and at mechanisms that allow us to leverage existing analysis techniques, but without losing the main characteristics of a variable-metric ES, and of course without changing the covariance matrix adaptation principle. Several similarly reduced models exist for CMA-ES, for example the (1+1)-CMA-ES \citep{igel2007covariance}, natural evolution strategies (NES) \citep{wierstra2014natural}, and the matrix-adaptation ES (MA-ES) of \citet{beyer2017simplify}.
HE-ES already implements most of the simplifying elements of MA-ES. Our main means of breaking down the algorithm therefore is to design an elitist variant. For HE-ES, a naive (1+1) selection scheme is not meaningful, for two reasons: mirrored samples always come in pairs, and HE-ES always needs to sample at least two directions, so it can assess \emph{relative} curvatures. Therefore, the minimal scheme proposed here is the (1+4)-HE-ES. In each generation, it draws two random orthogonal directions and generates four mirrored samples. To keep the algorithm as close as possible to the (1+1)-ES used by \cite{akimoto2018drift} and \cite{morinaga2019generalized}, we will only consider one sample for updating $m^{(t)}$ and $\sigma^{(t)}$ and use a variant of the classic $1/5$-rule \citep{rechenberg1973evolutionsstrategie,kern:2004}. Thus, the three additional samples drawn in each iteration are only used for updating $A^{(t)}$. Removing line~8 (the covariance matrix update) of Algorithm~\ref{algorithm:elitist} and fixing $A^{(0)}=I$ leads to what we refer to as the (1+1)-ES. The resulting (1+4)-HE-ES is given in algorithm~\ref{algorithm:elitist}. We find its adaptation behavior to be comparable to the full HE-ES on convex quadratic problems. Due to its minimal population size it cannot implement an increasing population (IPOP) scheme, which limits its performance on highly multi-modal problems. However, it otherwise successfully maintains the character of the full HE-ES algorithm. In the subsequent analysis we focus on noise-free convex quadratic objective functions. In this situation algorithm~\ref{procedure:computeG} simplifies as follows: the check for non-positive curvature estimates in line~4 can be dropped. Likewise, the trust region mechanism in lines 5 and 6 is superfluous. Finally, we can afford a learning rate of $\eta_A = 1$. With $h_1$ and $h_2$ as defined in line~3, we find that the simplified algorithm returns the matrix \begin{align} G = I + \left( \sqrt[4]{\frac{h_1}{h_2}} - 1 \right) \frac{b_1 b_1^T}{\|b_1\|^2} + \left( \sqrt[4]{\frac{h_2}{h_1}} - 1 \right) \frac{b_2 b_2^T}{\|b_2\|^2} \enspace, \label{eq:G} \end{align} where $I$ is the identity matrix. The update modifies the factor $A$ only in directions $b_1$ and $b_2$ and leaves the orthogonal subspace unchanged.
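The following Python sketch (ours, under the stated assumptions of a noise-free quadratic and $\eta_A = 1$; the concrete Hessian is an arbitrary choice) implements the simplified update~\eqref{eq:G} and verifies two of its properties: $\det(G)=1$, and the updated factor $A' = AG$ equalizes the curvatures along the two sampled directions, since $h_i' = \gamma_i^2 h_i = \sqrt{h_1 h_2}$ for $i \in \{1,2\}$ with $\gamma_1 = (h_2/h_1)^{1/4}$ and $\gamma_2 = (h_1/h_2)^{1/4}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 4
H = np.diag([1.0, 4.0, 9.0, 16.0])             # Hessian, f(x) = x^T H x / 2
A = np.eye(d)

# two orthonormal directions via Gram-Schmidt (norms cancel in the update)
u1 = rng.standard_normal(d); u1 /= np.linalg.norm(u1)
u2 = rng.standard_normal(d); u2 -= (u2 @ u1) * u1; u2 /= np.linalg.norm(u2)

curv = lambda A, u: u @ A.T @ H @ A @ u        # exact h_i on a quadratic
h1, h2 = curv(A, u1), curv(A, u2)
g1, g2 = (h2 / h1) ** 0.25, (h1 / h2) ** 0.25  # gamma_1, gamma_2

G = np.eye(d) + (g1 - 1) * np.outer(u1, u1) + (g2 - 1) * np.outer(u2, u2)
A_new = A @ G

print(np.isclose(np.linalg.det(G), 1.0))                 # True
print(np.isclose(curv(A_new, u1), curv(A_new, u2)))      # True: equalized
print(np.isclose(curv(A_new, u1), np.sqrt(h1 * h2)))     # True
\end{verbatim}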
\subsection{Relation to other Algorithms} There are a few approaches in the literature that adapt the covariance matrix based on Hessian information. Most closely related to our approach are the variable-metric random pursuit algorithms of \citet{stich2016variable}. Here, a search direction $b_1$ is sampled uniformly on a sphere with radius $\lVert b_1\rVert = \epsilon$ and the matrix is updated as: $$ C^{(t+1)} = C^{(t)} + \left( h_1 - \frac{b_1^T C^{(t)} b_1}{\lVert b_1\rVert^2}\right) \frac{b_1 b_1^T}{\lVert b_1\rVert^2} \enspace. \label{eq:RP} $$ It is easy to show that for this update it holds that $$ \frac{b_1^T C^{(t+1)} b_1}{\lVert b_1\rVert^2} = h_1\enspace, $$ i.e., the update learns the exact curvature of the problem in direction $b_1$, assuming that $\epsilon$ is small enough or that the function is quadratic. Another relevant algorithm is BOBYQA \citep{powell2009bobyqa}. Instead of using a local curvature approximation, the algorithm keeps track of a set of $m$ points $x_i$, $i=1,\dots,m$, with function values $f(x_i)$. In each iteration, the algorithm estimates the Hessian $\hat H^{(t+1)}=(C^{(t+1)})^{-1}$ by minimizing \begin{align} & \min_{c, g, \hat H^{(t+1)}} \lVert \hat H^{(t+1)} -\hat H^{(t)} \rVert_F\\ & \text{s.t.}\quad \frac 1 2 x_i^T \hat H^{(t+1)}x_i + g^T x_i + c= f(x_i),\; i=1,\dots,m \enspace. \end{align} Thus, it fits a quadratic function to the selected points under the condition that the approximation $\hat H$ of the Hessian is as similar as possible to the one used in the previous iteration. Given a set of $m=(d+1)(d+2)/2$ points on a quadratic function, the algorithm is capable of learning the exact Hessian. In contrast to our proposed method, neither of the two algorithms mentioned above constrains the covariance matrix or the Hessian matrix to be positive definite. While \citet{stich2016variable} handle the case that an update can lead to a non-positive eigenvalue, they still assume that the correct estimate of the curvature is positive. Thus, a negative curvature of the underlying function can lead to a break-down of the method. In contrast, BOBYQA allows for negative curvature, and instead of sampling from a normal distribution, a trust-region problem is solved. \section{Stability and Convergence of the Covariance Matrix} In the following we consider the (1+4)-HE-ES as introduced in the previous section. Our aim is to show the stability and the monotonic convergence of its covariance matrix to a multiple of the inverse Hessian of a convex quadratic function. We use the following notation. Let $m \in \mathbb{R}^d$, $\sigma > 0$, and $A \in \mathrm{SL^{\pm}}(d, \mathbb{R})$ denote the parameters of the current sampling distribution $\mathcal{N}(m, \sigma^2 C)$ with $C = A^T A$. Here $\mathrm{SL^{\pm}}(d, \mathbb{R})$ denotes the group of $d \times d$ matrices with determinant $\pm 1$, which is closely related to the special linear group $\mathrm{SL}(d, \mathbb{R})$. We obtain $\det(C) = 1$, hence the covariance matrix $C \in \mathrm{SL}(d, \mathbb{R})$ is an element of the special linear group. In the following, we assume $d \geq 2$. In order to clarify the goals of this section we start by defining stability and convergence of the covariance matrix. \begin{definition} \label{def:stability-convergence} Consider the space of positive definite symmetric $d \times d$ matrices, equipped with a pre-metric $\delta$ (a symmetric, non-negative function fulfilling $\delta(x, x) = 0$). Let $(C_t)_{t \in \mathbb{N}}$ be a sequence of matrices, and let $R$ denote a reference matrix. We define the scale-invariant distance $\delta_R(C) = \min\limits_{s > 0} \delta(s \cdot C, R)$ of $C$ from~$R$. \begin{enumerate} \item We call the sequence $(C_t)_{t \in \mathbb{N}}$ \emph{stable up to scaling} if there exist constants $t_0$ and $\varepsilon > 0$ such that $\delta_R(C_t) < \varepsilon$ for all $t > t_0$. \item We say that $(C_t)_{t \in \mathbb{N}}$ \emph{converges to $R$ up to scaling} if $\lim\limits_{t \to \infty} \delta_R(C_t) = 0$. \item We call the convergence \emph{monotonic} if $t \mapsto \delta_R(C_t)$ is a monotonically decreasing sequence. \end{enumerate} \end{definition} It is obvious that (monotonic) convergence up to scaling implies stability up to scaling for all $\varepsilon > 0$. In the following, the reference matrix is always the inverse Hessian~$H^{-1}$.
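As an illustration of Definition \ref{def:stability-convergence}, the following Python sketch (ours) evaluates $\delta_R(C)$ for the Frobenius pre-metric $\delta(X,Y)=\|X-Y\|_F$; this particular choice of $\delta$ is our own and only serves as an example. For this $\delta$ the minimizing scale has the closed form $s^* = \langle C, R\rangle_F / \|C\|_F^2$, which is positive for positive definite $C$ and $R$.
\begin{verbatim}
import numpy as np

def delta_R(C, R):
    """Scale-invariant Frobenius distance of C from the reference R."""
    s = np.tensordot(C, R) / np.tensordot(C, C)   # optimal scale s* > 0
    return np.linalg.norm(s * C - R, ord='fro')

H = np.diag([1.0, 4.0, 9.0])
R = np.linalg.inv(H)                              # reference: H^{-1}
print(delta_R(5.0 * R, R))   # 0.0: multiples of H^{-1} have distance zero
print(delta_R(np.eye(3), R)) # > 0: identity is not a multiple of H^{-1}
\end{verbatim}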
\subsection{Invariance Properties} \label{section:invariance} In this section we formally establish the invariance properties of HE-ES. The analysis is not specific to a particular variant and hence applies also to the (1+4)-HE-ES. We start by showing that the HE-ES is invariant to affine transformations of the search space. \begin{lemma} \label{lemma:invariance-X} Let $g(x) = M x + b$ be an invertible affine transformation. Consider the state trajectory \begin{align} \left(m^{(t)}, \sigma^{(t)}, A^{(t)}\right)_{t \in \mathbb{N}} \label{eq:statesA} \end{align} of HE-ES or (1+4)-HE-ES applied to the objective function $f$, and alternatively the state trajectory \begin{align} \left(\tilde m^{(t)}, \tilde \sigma^{(t)}, \tilde A^{(t)}\right)_{t \in \mathbb{N}} \label{eq:statesB} \end{align} of the same algorithm with initial state \begin{align} \left(\tilde m^{(0)}, \tilde \sigma^{(0)}, \tilde A^{(0)}\right) = \left(g(m^{(0)}), \sigma^{(0)}, M A^{(0)}\right) \label{eq:initialB} \end{align} applied to the objective function $\tilde f(x) = f\big(g^{-1}(x)\big)$. Assume further that both algorithms use the same sequence of random vectors $\left(b_{1,1}^{(t)}, \dots, b_{B,d}^{(t)}\right)_{t \in \mathbb{N}}$. Then it holds that \begin{align} \left(\tilde m^{(t)}, \tilde \sigma^{(t)}, \tilde A^{(t)}\right) = \left(g(m^{(t)}), \sigma^{(t)}, M A^{(t)}\right) \label{eq:statement} \end{align} for all $t \in \mathbb{N}$. \end{lemma} \begin{proof} The straightforward proof is inductive. The base case $t=0$ holds by assumption, see equation~\eqref{eq:initialB}. Assume that the assertion in equation~\eqref{eq:statement} holds for some value of $t$. In iteration $t$ the HE-ES and (1+4)-HE-ES applied to $\tilde f$ generate the offspring \begin{align} \tilde x_i^{\pm} \, &= \tilde m^{(t)} \pm \tilde \sigma^{(t)} \cdot \tilde A^{(t)} b_i^{(t)} \label{eq:samples} \\ &= g(m^{(t)}) \pm \sigma^{(t)} \cdot M A^{(t)} b_i^{(t)} \notag \\ &= g \left( m^{(t)} \pm \sigma^{(t)} \cdot A^{(t)} b_i^{(t)} \right) \notag \\ &= g(x_i^{\pm}) \notag \enspace. \end{align} This identity immediately implies \begin{align} \tilde f(\tilde x_i^{\pm}) = f\left(g^{-1}(\tilde x_i^{\pm})\right) = f\Big(g^{-1}\left(g(x_i^{\pm})\right)\Big) = f(x_i^{\pm}) \enspace, \label{eq:sameranks} \end{align} as well as $\tilde f(\tilde m^{(t)}) = f(m^{(t)})$, with the same logic. Therefore the procedure \texttt{computeG} is called by both algorithms with the exact same parameters and we hence obtain the same matrix $G$ for the original and for the transformed problem. We conclude \begin{align*} \tilde A^{(t+1)} = \tilde A^{(t)} G = M A^{(t)} G = M A^{(t+1)} \enspace. \end{align*} Due to equation~\eqref{eq:sameranks} it holds that $m^{(t+1)} = m^{(t)} \Leftrightarrow \tilde m^{(t+1)} = \tilde m^{(t)}$. If the means change then they are replaced with the point $\tilde x_1^+$ for the (1+4)-HE-ES, and with convex combinations $\sum_i w_i x_i^\pm$ and $\sum_i w_i \tilde x_i^\pm$ for the original non-elitist HE-ES. Obviously, the first case is a special case of the second one. Hence it holds that \begin{align*} \tilde m^{(t+1)} = \sum_i w_i \tilde x_i^\pm = \sum_i w_i g(x_i^\pm) = g(m^{(t+1)}) \end{align*} according to equation~\eqref{eq:samples}. Equation~\eqref{eq:sameranks} also guarantees that the step sizes are multiplied by the same factor $\delta$, since both CSA and the $1/5$-rule are rank-based methods. We obtain \begin{align*} \tilde \sigma^{(t+1)} = \delta \cdot \tilde \sigma^{(t)} = \delta \cdot \sigma^{(t)} = \sigma^{(t+1)} \enspace. \end{align*} We have shown that all three components of the tuples in equation~\eqref{eq:statement} coincide for $t+1$. \end{proof} Affine invariance is an important property for handling non-separable ill-conditioned problems.
HE-ES shares this invariance property with CMA-ES. Next we turn to invariance to transformations of objective function values. A significant difference between HE-ES and CMA-ES is that the former is not invariant to monotonically increasing transformations of fitness values, while the latter is: for any strictly monotonically increasing function $h : \mathbb{R} \to \mathbb{R}$, CMA-ES minimizes $h \circ f$ the same way as $f$. HE-ES has the same property only for affine transformations $h(t) = a t + b$, $a > 0$. It can be argued that in many situations a first order Taylor approximation (which is affine) of the transformation is good enough, but it is understood that this argument has limitations. Invariance to affine transformations of function values is formalized by the following lemma. \begin{lemma} \label{lemma:invariance-f} Consider the state trajectory \begin{align} \left(m^{(t)}, \sigma^{(t)}, A^{(t)}\right)_{t \in \mathbb{N}} \end{align} of HE-ES or (1+4)-HE-ES applied to the objective function $f$, and alternatively the state trajectory \begin{align} \left(\tilde m^{(t)}, \tilde \sigma^{(t)}, \tilde A^{(t)}\right)_{t \in \mathbb{N}} \end{align} of the same algorithm with initial state \begin{align} \left(\tilde m^{(0)}, \tilde \sigma^{(0)}, \tilde A^{(0)}\right) = \left(m^{(0)}, \sigma^{(0)}, A^{(0)}\right) \end{align} applied to the objective function $\tilde f(x) = a \cdot f(x) + b$, $a > 0$. Then it holds that \begin{align} \left(\tilde m^{(t)}, \tilde \sigma^{(t)}, \tilde A^{(t)}\right) = \left(m^{(t)}, \sigma^{(t)}, A^{(t)}\right) \end{align} for all $t \in \mathbb{N}$. \end{lemma} \begin{proof} Due to $a > 0$ the transformation $h(t) = a t + b$ is strictly monotonically increasing and hence preserves the order (ranking) of objective values. HE-ES and its variants are fully rank-based up to their covariance matrix update. Therefore most operations on $f$ and $h \circ f$ are exactly the same, even for general strictly monotonic transformations~$h$. Procedure~\ref{procedure:computeG} needs a closer investigation. In the curvature estimates $h_{i,j}$ computed in line~3 the offset $b$ cancels out, while the factor $a$ enters linearly. It also enters linearly into the cutoff threshold $c$ computed in line~5, and hence in the truncation in line~6. It is then transformed into the summand $\log(a)$ for $q_{i,j}$ in line~7, which is removed in line~8 when subtracting the mean. We conclude that Procedure~\ref{procedure:computeG} is invariant to affine transformations of function values. \end{proof} Affine invariance in effect means that it suffices to analyze HE-ES in an arbitrary coordinate system. For example, setting $g(x) = A^{-1}(x - m)$ we can transform the problem so that at the beginning of an iteration it holds that $m = 0$ and $A = C = I$. This reparameterization trick was first leveraged by \citet{glasmachers2010xNES} and used by \citet{krause2015xCMAES} and by \citet{beyer2017simplify}. Alternatively we can transform the objective function into a simpler form, as discussed in the next section. In general we cannot achieve both at the same time. \subsection{Informal Discussion of the Covariance Matrix Update} Before we proceed with our analysis, we provide intuition for the effect of the covariance matrix update of HE-ES. Consider a general convex quadratic objective function as given in equation~\eqref{eq:objective}, with symmetric and strictly positive definite Hessian $H$, unique optimal solution $x^*$ and optimal value $f^*$.
Without loss of generality, applying Lemma~\ref{lemma:invariance-f} with $h(t) = \det(H)^{-\frac{1}{d}} \cdot (t - f^*)$ we can set $f^* = 0$ and assume $\det(H) = 1$. Due to Lemma~\ref{lemma:invariance-X} applied with $g(x) = H^{1/2}(x - x^*)$ it even suffices to consider $x^* = 0$ and $H = I$ (the identity matrix), which yields the well-known sphere function $f(x) = \frac12 \|x\|^2$. In this situation, the ultimate goal of covariance (or transformation) matrix adaptation is to generate a sequence $(A^{(t)})_{t \in \mathbb{N}}$ fulfilling $$ C^{(t)} = (A^{(t)})^T A^{(t)} \underset{t \to \infty}{\longrightarrow} \alpha \cdot I \qquad \text{with} \quad \alpha = \sqrt[d]{\det\left(C^{(0)}\right)} \enspace, $$ for all initial states $A^{(0)}$. Due to random fluctuations and a non-vanishing learning rate, CMA-ES does not fully achieve this goal. Instead its covariance matrix keeps fluctuating around a multiple of the inverse Hessian. In contrast, HE-ES actually achieves the above goal: its covariance matrix converges to a multiple of the inverse Hessian, which is hence approximated to arbitrarily high precision. We note that in practice this difference does not matter, since a realistic black-box objective function is hardly exactly quadratic. This improved stability of the update, however, is what makes the subsequent analysis tractable. Let $b_1, b_2 \sim \mathcal{N}(0, I)$ be the Gaussian random vectors sampled in the current generation of (1+4)-HE-ES, and define their normalized counterparts $u_i = b_i / \|b_i\|$. Note that by construction the directions are orthogonal: $b_1^T b_2 = 0 = u_1^T u_2$. We consider the four offspring \begin{align*} x_i^+ = m + \sigma \cdot A b_i \qquad \text{and} \qquad x_i^- = m - \sigma \cdot A b_i \qquad \text{for} \qquad i \in \{1, 2\} \end{align*} forming two pairs of mirrored samples. From the corresponding function values we estimate the curvatures \begin{align} h_i = \frac{f(x_i^+) + f(x_i^-) - 2 f(m)}{\sigma^2 \cdot \|b_i\|^2} = u_i^T A^T H A u_i \enspace. \label{eq:curvature} \end{align} We then extract the update coefficients \begin{align*} \gamma_1 = h_1^{-1/4} h_2^{1/4} \qquad \text{and} \qquad \gamma_2 = h_1^{1/4} h_2^{-1/4} \enspace. \end{align*} The update of the transformation matrix takes the form \begin{align} A' = A \cdot G \qquad \text{with} \qquad G = I + \sum_{i=1}^2 (\gamma_i - 1) \cdot u_i u_i^T \enspace, \label{eq:update} \end{align} see also equation~\eqref{eq:G}. Here, $A = A^{(t)}$ is the matrix before and $A' = A^{(t+1)}$ is the matrix after the update. The main aim of the subsequent analysis is to explain the effect of this update in intuitive geometric terms and to derive the guarantee that $(A^{(t)})^T A^{(t)}$ converges to a multiple of $H^{-1}$. Let $V = \mathbb{R} \cdot u_1 + \mathbb{R} \cdot u_2$ denote the two-dimensional subspace of $\mathbb{R}^d$ spanned by the two sampling directions. We consider the symmetric rank-two matrix $U = u_1 u_1^T + u_2 u_2^T$. Its eigenspace for eigenvalue $1$ is $V$. For $d > 2$, the orthogonal subspace $V^\perp = \{x \in \mathbb{R}^d \,|\, x^T v = 0 \, \forall v \in V\}$ is non-trivial. It is the eigenspace for eigenvalue $0$. The map $x \mapsto U x$ is the orthogonal projection onto $V$. For vectors $x \in V^\perp$ the map $x \mapsto G x$ is the identity. Within $V$ the update matrix $G$ has eigenvalues $\gamma_i$ with corresponding eigenvectors~$u_i$. Multiplying the eigenvalues yields $\det(G) = 1$.
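To make the mechanics of equations \eqref{eq:curvature} and \eqref{eq:update} concrete, the following minimal sketch performs a single update on a toy quadratic function. It is an illustration, not the authors' implementation: the diagonal Hessian, the fixed transformation matrix, and the Gram--Schmidt orthogonalization of the second direction are assumptions standing in for the algorithm's actual sampling procedure.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 4
H = np.diag([1.0, 2.0, 5.0, 10.0])          # toy Hessian (assumption)
f = lambda x: 0.5 * x @ H @ x

m, sigma = rng.standard_normal(d), 0.3
A = np.diag([2.0, 1.0, 1.0, 0.5])           # current transformation matrix

# two orthogonal directions (Gram-Schmidt stands in for the real sampler)
b1, b2 = rng.standard_normal(d), rng.standard_normal(d)
b2 -= (b1 @ b2) / (b1 @ b1) * b1
u1, u2 = b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)

def h(b):  # curvature estimate from mirrored offspring, equation (eq:curvature)
    return (f(m + sigma * A @ b) + f(m - sigma * A @ b) - 2 * f(m)) \
           / (sigma ** 2 * (b @ b))

h1, h2 = h(b1), h(b2)
assert np.isclose(h1, u1 @ A.T @ H @ A @ u1)    # closed form of the estimate

g1, g2 = h1 ** -0.25 * h2 ** 0.25, h1 ** 0.25 * h2 ** -0.25
G = np.eye(d) + (g1 - 1) * np.outer(u1, u1) + (g2 - 1) * np.outer(u2, u2)
A_next = A @ G                                  # equation (eq:update)
assert np.isclose(np.linalg.det(G), 1.0)        # det(G) = 1 as claimed
\end{verbatim}
Note that the curvature estimates are exact for quadratic functions, so no finite-difference error enters the assertions.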
\begin{figure} \begin{center} \includegraphics[width=\textwidth]{update} \caption{\label{figure:update} Adaptation of an initially isotropic covariance matrix $C$ (illustrated as a circular iso-density curve) towards an ellipsoidal objective function (family of ellipsoidal level lines) in the two-dimensional subspace spanned by $u_1$ and $u_2$. The resulting covariance matrix $C'$ exhibits elliptic iso-density curves. The update changes $C$ into $C'$ along the directions $u_1$ and $u_2$. The different extents of the ellipses are illustrated by the dashed bounding boxes, which change from an initial square into a rectangle of equal area. The bounding box of the iso-density line of $C'$ is defined by the four (marked) intersections of the ``coordinate axes'' spanned by $u_1$ and $u_2$ with a level set of the objective function. It is clearly visible that the update does not learn the problem structure in a single step. Still, the resulting iso-density curve is closer to the level sets than the original iso-density curve. If $u_1$ and $u_2$ happen to be principal axes of the level set ellipsoids, then the adaptation is completed in a single step. } \end{center} \end{figure} The intuition behind this update is best explained by simplifying matters somewhat. To this end assume $H = I$, $h_1 = 1/h_2$ and that $U A U$ (the transformation $A$ restricted to $V$) has eigenvectors $u_i$ with corresponding eigenvalues $\lambda_i$: we were lucky to sample along the eigenvectors of the problem. Equation~\eqref{eq:curvature} then yields $h_i = \lambda_i^2$ and hence $\gamma_1 = \sqrt{\lambda_2/\lambda_1}$ and $\gamma_2 = \sqrt{\lambda_1/\lambda_2}$. Plugging this into equation~\eqref{eq:update} yields $U A' U = U$. In other words, the Hessian restricted to $V$ is learned in a single step. In general, if $h_1$ and $h_2$ do not multiply to unity then only the relative scaling of the two eigenvalues is corrected. More significantly, if the sampling directions do not agree with the eigenvectors then the update does not solve the problem in a single step. Instead, the effect is exactly analogous to improving the properties of an ill-conditioned matrix, as seen from the perspective of the sampling directions, with a diagonal preconditioner. In this general situation, the effect of the update is depicted in figure~\ref{figure:update}. \subsection{A Preconditioning Perspective} We have seen above that thanks to affine invariance we can transform any convex quadratic objective function into the sphere function with Hessian $H = I$, without loss of generality. We stick to this choice from here on. For another informal argument (which will be made precise below) we restrict the problem to the subspace $V$. In that subspace we use $u_1$ and $u_2$ as coordinate axes, so the components of the following two-dimensional vectors and matrices refer to that coordinate system. This immediately implies $u_1 = (1, 0)^T$ and $u_2 = (0, 1)^T$. We then deal with the $2 \times 2$ matrices \begin{align*} A = \begin{pmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{pmatrix} \qquad \text{and} \qquad G = \begin{pmatrix} \gamma_1 & 0 \\ 0 & \gamma_2 \end{pmatrix}. \end{align*} It is due to the specific choice of the basis that the matrix $G$ is diagonal. It then becomes apparent that, at its core, the update of the (1+4)-HE-ES alters the variance in the directions $u_1$ and $u_2$, disregarding the inherent structure of the covariance matrix (e.g., its eigenbasis).
A related perspective is taken in methods for solving extremely large linear systems, where the problem can often be simplified through preconditioning \citep[Chapter~13]{vandervorst2003iterative}. Changing $A$ into $A' = A \cdot G$ can be understood as a measure for improving the conditioning of the problem, which is the same as decreasing the spread of the eigenvalues of $C$. The effect on $C' = G \cdot C \cdot G$ is two-sided preconditioning with the same matrix $G$. A diagonal preconditioner $G$ is among the simplest choices. In our analysis it arises naturally through the very definition of the (1+4)-HE-ES. A commonly agreed upon measure of problem hardness and of the spread of the eigenvalues is the condition number, the quotient of the largest and the smallest eigenvalue. In general, absolute values of the eigenvalues are considered; for the covariance matrix $C$, however, all eigenvalues are positive. Taking this perspective, we would like to show that the condition number of $C' = (A')^T A' = G A^T A G = G C G$ is smaller than or equal to the condition number of $C$, and that it is strictly smaller most of the time. Sticking to the two-dimensional view established above, we can solve the eigenvalue equation analytically by finding the zeros of the characteristic polynomial. It holds that \begin{align*} C = A^T A = \begin{pmatrix} a_{11}^2 + a_{12}^2 & a_{11} a_{21} + a_{12} a_{22} \\ a_{11} a_{21} + a_{12} a_{22} & a_{21}^2 + a_{22}^2 \end{pmatrix} =: \begin{pmatrix} c_{11} & c_{12} \\ c_{12} & c_{22} \end{pmatrix} \end{align*} and equation~\eqref{eq:curvature} yields $h_i = c_{ii}$. The eigenvalues of $C$ are the zeros of its characteristic polynomial \begin{align*} p_C(\lambda) \, &= \det(\lambda \cdot I - C) \\ &= (\lambda - c_{11}) (\lambda - c_{22}) - c_{12}^2 \\ &= \lambda^2 - (c_{11} + c_{22}) \lambda + c_{11} c_{22} - c_{12}^2 \\ &= \lambda^2 - \tr(C) \lambda + \det(C) \enspace. \end{align*} We obtain the (real) eigenvalues $\lambda_{1/2} = \frac{\tr(C)}{2} \pm \sqrt{\frac{\tr(C)^2}{4} - \det(C)}$ and the condition number \begin{align} \kappa(C) = \frac {\frac{\tr(C)}{2} + \sqrt{\frac{\tr(C)^2}{4} - \det(C)}} {\frac{\tr(C)}{2} - \sqrt{\frac{\tr(C)^2}{4} - \det(C)}} = \frac {1 + \sqrt{1 - \frac{4 \det(C)}{\tr(C)^2}}} {1 - \sqrt{1 - \frac{4 \det(C)}{\tr(C)^2}}} \enspace. \label{eq:conditioning} \end{align} It holds that $\det(G) = 1$ by construction, which implies $\det(C') = \det(C)$. With this property it is easy to see from equation~\eqref{eq:conditioning} that the condition number $\kappa$ is a strictly monotonically increasing function of $\tr(C)$ for fixed $\det(C)$. In the following we will therefore consider the goal of minimizing $\tr(C)$ while keeping $\det(C)$ fixed. The minimizer of $\tr(C)$ is a multiple of the identity matrix, which is indeed our adaptation goal. Therefore, independent of the monotonic relation to the condition number in the two-dimensional case, minimizing the trace of $C$ is justified as a covariance matrix adaptation goal in its own right. For a general Hessian this goal translates into minimizing $\tr(H \cdot C)$ while keeping $\det(C)$ fixed, which is equivalent to adapting $C$ towards a multiple of~$H^{-1}$. This construction is compatible with Definition~\ref{def:stability-convergence} using the trace to construct the pre-metric $\delta(A, B) = \tr \Big( \frac{A}{\sqrt[d]{\det(A)}} \cdot \frac{B^{-1}}{\sqrt[d]{\det(B^{-1})}} \Big) - d$. \pagebreak \noindent The following lemma computes the change of the trace induced by a single update step.
\begin{lemma} \label{lemma:trace-reduction} For a matrix $A \in \mathrm{GL}(d, \mathbb{R})$ and two orthonormal vectors $u_1, u_2 \in \mathbb{R}^d$ (fulfilling $\|u_i\| = 1$ and $u_1^T u_2 = 0$) we define the following quantities: \begin{align*} C = A^T A, &\qquad h_i = u_i^T C u_i, \qquad \gamma_i = \left( \frac{h_1 h_2}{h_i^2} \right)^{1/4}, \\ G = I + \sum_{i=1}^2 (\gamma_i - 1) u_i u_i^T, &\qquad A' = A G, \qquad C' = (A')^T A' = G C G. \end{align*} It holds that $\det(C') = \det(C) > 0$ and \begin{align*} \tr(C) - \tr(C') = h_1 + h_2 - 2 \sqrt{h_1 h_2} \geq 0 \enspace. \end{align*} \end{lemma} \begin{proof} The proof is elementary. We first note that $C$ is strictly positive definite, which implies $h_i > 0$ and $\gamma_i > 0$. We choose vectors $u_3, \dots, u_d$ so that $u_1, \dots, u_d$ form an orthonormal basis of $\mathbb{R}^d$. We collect these vectors as columns in the orthogonal matrix $U$. We then represent the matrix $C$ as an array of coefficients in the above basis: \begin{align*} U^T C U = \begin{pmatrix} c_{11} & c_{21} & \cdots & c_{d1} \\ c_{12} & c_{22} & \cdots & c_{d2} \\ \vdots & \vdots & \ddots & \vdots \\ c_{1d} & c_{2d} & \cdots & c_{dd} \end{pmatrix} \enspace. \end{align*} In this basis the matrix $G$ has a particularly simple form: \begin{align*} U^T G U = \begin{pmatrix} \gamma_1 & 0 & 0 & 0 & \cdots & 0 \\ 0 & \gamma_2 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \end{pmatrix} \enspace. \end{align*} From $\gamma_1 \gamma_2 = 1$ we obtain $\det(U^T G U) = \det(G) = 1$, which immediately implies the first claim $\det(C') = \det(C)$. We compute the product $C' = G C G$ in the basis $U$ as follows: $U^T C' U = U^T G C G U = (U^T G U) (U^T C U) (U^T G U)$. We obtain the components \begin{align*} U^T C' U \, &= \begin{pmatrix} \gamma_1^2 c_{11} & \gamma_1 \gamma_2 c_{21} & \gamma_1 c_{31} & \gamma_1 c_{41} & \cdots & \gamma_1 c_{d1} \\ \gamma_1 \gamma_2 c_{12} & \gamma_2^2 c_{22} & \gamma_2 c_{32} & \gamma_2 c_{42} & \cdots & \gamma_2 c_{d2} \\ \gamma_1 c_{13} & \gamma_2 c_{23} & c_{33} & c_{43} & \cdots & c_{d3} \\ \gamma_1 c_{14} & \gamma_2 c_{24} & c_{34} & c_{44} & \cdots & c_{d4} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ \gamma_1 c_{1d} & \gamma_2 c_{2d} & c_{3d} & c_{4d} & \cdots & c_{dd} \end{pmatrix} \enspace. \end{align*} It holds that $\tr(U^T C U) = \tr(C)$ and $\tr(U^T C' U) = \tr(C')$ due to invariance of the trace under changes of the coordinate system. Our target quantity $\tr(C) - \tr(C')$ is hence the difference of the sums of the diagonals of the above computed matrices $U^T C U$ and $U^T C' U$, which amounts to \begin{align*} c_{11} + c_{22} - \gamma_1^2 c_{11} - \gamma_2^2 c_{22} \enspace. \end{align*} Using $c_{ii} = u_i^T C u_i = h_i$ we obtain $\gamma_i^2 c_{ii} = \sqrt{h_1 h_2}$ for $i \in \{1, 2\}$. This immediately yields $\tr(C) - \tr(C') = h_1 + h_2 - 2 \sqrt{h_1 h_2}$. The right hand side is never negative because the arithmetic mean of two positive numbers is never smaller than their geometric mean. \end{proof} The lemma shows that the trace \emph{never increases} due to a covariance matrix update, no matter how the offspring are sampled. This is a strong guarantee for the stability of the update. In contrast, the update of CMA-ES can move the covariance matrix arbitrarily far away from its target.
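Since the proof of Lemma~\ref{lemma:trace-reduction} is elementary but index-heavy, a randomized check is reassuring. The following sketch (again illustrative only; the distributions of $A$ and of the orthonormal pair are arbitrary assumptions) verifies both claims of the lemma on random instances:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d = 5
for _ in range(100):
    # random invertible A and random orthonormal pair u1, u2
    A = rng.standard_normal((d, d)) + 2 * np.eye(d)
    q, _ = np.linalg.qr(rng.standard_normal((d, 2)))
    u1, u2 = q[:, 0], q[:, 1]

    C = A.T @ A
    h1, h2 = u1 @ C @ u1, u2 @ C @ u2
    g1, g2 = (h2 / h1) ** 0.25, (h1 / h2) ** 0.25
    G = np.eye(d) + (g1 - 1) * np.outer(u1, u1) + (g2 - 1) * np.outer(u2, u2)
    C2 = G @ C @ G

    assert np.isclose(np.linalg.det(C2), np.linalg.det(C))   # determinant preserved
    assert np.isclose(np.trace(C) - np.trace(C2),
                      h1 + h2 - 2 * np.sqrt(h1 * h2))        # trace identity
    assert np.trace(C2) <= np.trace(C) + 1e-12               # trace never increases
print("Lemma verified on 100 random instances")
\end{verbatim}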
For CMA-ES, such large deviations happen only with extremely small probability, but the probabilistic nature of its stability as an unbounded Markov chain \citep{auger2005convergence} significantly complicates the analysis. With (1+4)-HE-ES we are in the comfortable situation of monotonic improvements, which is somewhat analogous to analyzing algorithms with elitist selection. We have established that for $H=I$ the sequence $\tr(C^{(t)})$ is monotonically decreasing. With fixed determinant $\det(C^{(t)}) = D$ it is bounded from below by $d \sqrt[d]{D}$, hence it converges due to the monotone convergence theorem. In the general setting, using affine invariance, this translates into monotonic decrease of the sequence $\tr(C^{(t)} H)$. \subsection{Convergence of the Covariance Matrix to the Inverse Hessian} It remains to show that the trace indeed converges to its lower bound, which implies convergence of $C^{(t)}$ to a multiple of $H^{-1}$. We are finally in a position to guarantee this property. \begin{theorem} \label{theorem:cma-convergence} Let $A^{(t)}$ denote the sequence of transformation matrices of (1+4)-HE-ES when optimizing a convex quadratic function with strictly positive definite symmetric Hessian $H$. We define the sequence of covariance matrices $C^{(t)} := (A^{(t)})^T A^{(t)}$. Then with full probability it holds that \begin{align*} \lim_{t \to \infty} C^{(t)} = \alpha \cdot H^{-1} \qquad \text{with} \quad \alpha = \sqrt[d]{\det\left(C^{(0)} \right) \cdot \det(H)} \enspace. \end{align*} \end{theorem} \begin{proof} The proof is based on topological arguments and drift. For technical reasons and for ease of notation, and importantly without loss of generality, we restrict ourselves to the case $\det\left(C^{(t)}\right) = 1$, $H = I$, and hence $\alpha = 1$. Under the constraint $\det(C) = 1$ the function $\tr(C)$ attains its minimum $\tr(I) = d$ at the unique minimizer $C = I$. Consider a fixed covariance matrix $C$ and random vectors $u_1, u_2$. According to Lemma~\ref{lemma:trace-reduction} the function \begin{align*} \Delta_C(u_1, u_2) = u_1^T C u_1 + u_2^T C u_2 - 2 \sqrt{u_1^T C u_1 \cdot u_2^T C u_2} \geq 0 \end{align*} computes the single-step reduction of the trace when sampling in directions $u_1$ and $u_2$. The function is analytic in $C$ and in $u_i$, and for $C \not= I$ it is non-constant in $u_i$, hence it is zero only on a set of measure zero with respect to the random variables $u_1, u_2$. We conclude that in expectation over $u_1$ and $u_2$ it holds that \begin{align*} \mathbb{E}[\Delta_C] > 0 \qquad \forall \, C \not= I \enspace. \end{align*} In the next step we exploit the continuity of the function $C \mapsto \mathbb{E}[\Delta_C]$. We fix a ``quality'' level $\rho = \tr(C)$. In other words, for a given suboptimal level $\rho > d$ we consider an arbitrary covariance matrix $C$ fulfilling $\tr(C) = \rho$ and $\det(C) = 1$. The set \begin{align*} \tr^{-1}(\rho) = \Big\{ C \in \mathrm{SL}(d, \mathbb{R}) \,\Big|\, C \text{ symmetric positive definite}, \; \tr(C) = \rho \Big\} \end{align*} is compact: being the pre-image of a point under a continuous map it is closed, the eigenvalues of $C$ are upper bounded by $\rho$, and the space of eigenbases is the orthogonal group, which is compact. Therefore the expected progress $\mathbb{E}[\Delta_C]$ attains its minimum and its maximum on this set. We denote them by \begin{align*} Q(\rho) = \min_{C \in \tr^{-1}(\rho)} \mathbb{E}[\Delta_C] \qquad \text{and} \qquad R(\rho) = \max_{C \in \tr^{-1}(\rho)} \mathbb{E}[\Delta_C] \enspace.
\end{align*} We note three convenient properties: \begin{compactitem} \item It holds that $Q(\rho) > 0 \Leftrightarrow \rho > d \Leftrightarrow R(\rho) > 0$, \item $Q$ and $R$ are monotonically increasing functions, and \item $Q$ and $R$ are continuous functions. \end{compactitem} We aim to show that the sequence $\rho^{(t)} = \tr(C^{(t)})$ converges to $d$ with full probability. To this end we pick a target level $\rho^* > d$, so we have to show that the sequence $\rho^{(t)}$ falls below $\rho^*$. This is achieved by applying an additive drift argument and using the monotonicity of $R$ and $Q$ as well as the monotonic decrease of $\rho^{(t)}$ (Lemma~\ref{lemma:trace-reduction}). By construction it holds that \begin{align} \mathbb{E}\left[\rho^{(t)} - \rho^{(t+1)}\right] \in \Big[ Q(\rho^{(t)}), R(\rho^{(t)}) \Big] \subset \Big[ Q(\rho^*), R(\rho^{(0)}) \Big] \enspace. \label{eq:CMA-drift} \end{align} Here, the monotonic reduction of $\rho^{(t)}$ together with the monotonicity of $Q$ and $R$ yields $t$-independent lower and upper bounds on the expected progress, as long as it holds that $\rho^{(t)} \geq \rho^*$. The existence of the two bounds allows us to apply a drift argument. We define the first hitting time $T(\rho^*) = \min\{t \in \mathbb{N} \,|\, \rho^{(t)} \leq \rho^*\}$ of reaching the target $\rho^*$. \citet[Theorem~2.3, equation~2.9]{hajek1982hitting} guarantees that the probability $\Pr(T(\rho^*) > k)$ tends to zero as $k \to \infty$ (and it does so exponentially fast). Hence, the sequence $\rho^{(t)}$ eventually falls below $\rho^*$ with probability one. Since $\rho^* > d$ was arbitrary we conclude that $\rho^{(t)} \rightarrow d$ with full probability. This proves $C^{(t)} \rightarrow I$ in our case, and hence in general $C^{(t)} \rightarrow \alpha \cdot H^{-1}$ due to affine invariance. The form of the scaling factor $\alpha = \sqrt[d]{\det\left(C^{(0)} \right) \cdot \det(H)}$ results immediately from affine invariance and the need to fulfill $\det(H C^{(t)}) = \det(H C^{(0)}) = \det(H) \det(C^{(0)}) = \alpha^d$. \end{proof} The above theorem establishes that the update of (1+4)-HE-ES is not only stable and improves the covariance matrix monotonically, but also achieves its goal of converging to a multiple of the inverse Hessian. To the best of our knowledge, this is the first theorem proving that the covariance matrix update of a variable-metric evolution strategy has this property. This stability of $C^{(t)}$ will allow us to derive a strong convergence speed result for $m^{(t)}$ in the next section. We would like to note that equation~\eqref{eq:CMA-drift} can be understood as a variable drift condition \citep{doerr2011sharp} for $\rho^{(t)}$. A more detailed drift analysis bears the potential to bound the time it takes for the covariance matrix to adapt to the problem at hand. However, the task of bounding $Q$ and $R$ is non-trivial. In practice we find that the covariance matrix converges at a linear rate, see figure~\ref{figure:cma-speed}. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{convergence} \includegraphics[width=0.49\textwidth]{convergenceCMAES} \caption{\label{figure:cma-speed} The plots show the time evolution of the condition number $\kappa(C)$ (solid curve) and the trace $\tr(C)$ (dashed curve), both with their global minima of $1$ and $d$ subtracted, on a logarithmic scale, for (1+4)-HE-ES on the left and (1+1)-CMA-ES on the right. For CMA-ES, the trace is computed on a suitably normalized multiple of the covariance matrix.
The algorithms are run on the sphere function, but they are initialized with a covariance matrix resembling an optimization run of an ellipsoid function with condition number $10^6$ when starting from an isotropic search distribution. The curves are medians over 99 independent runs. In the right half of the left plot the covariance matrix is already adapted extremely close to the identity. It is clearly visible that in this late phase $\kappa(C)$ and $\tr(C)$ both converge at a linear rate. In contrast, with the CMA-ES update the precision saturates at some non-optimal value. } \end{center} \end{figure} \section{Linear Convergence of HE-ES on Convex Quadratic Functions} In this section we establish that the (1+4)-HE-ES converges at a linear rate that is independent of the problem difficulty $\kappa(H)$. The proof builds on the stability of the covariance matrix update established in the previous section, as well as on the analysis of the (1+1)-ES by \citet{morinaga2019generalized}. We adapt notations and definitions in the following to make the two analyses compatible. Defining linear convergence of stochastic algorithms is not a straightforward task. We define linear convergence in terms of the first hitting time: \begin{definition}\label{def:linear} Let $\left(X^{(t)}\right)_{t \in \mathbb{N}}$ be a sequence of random variables with $\mathbb{E}[X^{(t)}]\rightarrow X^*$ and let $\Psi^{(t)}= \Psi(X^{(t)})$ be a potential function with $\mathbb{E}[\Psi^{(t)}]\rightarrow -\infty$. The first hitting time of the target $\delta$ is defined as $$ T_{\Psi}(\delta) = \min \big\{t \in \mathbb{N} \,\big|\, \Psi^{(t)} < \delta \big\}. $$ We say that $X^{(t)}$ converges $\Psi$-linearly to $X^*$ if there exists $Q$ such that $$\lim_{\epsilon \rightarrow 0}\mathbb{E} \left[\frac {T_{\Psi}(\log \epsilon)}{-\log \epsilon}\right] \leq \log(Q)\enspace.$$ \end{definition} The intuition of this definition is that $\Psi$ measures the logarithmic distance from the optimum, e.g.\ $\Psi(x)=\log \lVert x-x^* \rVert$. When considering a deterministic algorithm, i.e.\ when $X^{(t)}$ is a sequence of Dirac distributions, this choice of $\Psi$ makes our definition equivalent to $Q$-linear convergence. The recent work of \citet{akimoto2018drift} established linear convergence of the (1+1)-ES on the sphere function by means of drift analysis. The result was significantly extended by \citet{morinaga2019generalized} to a large class of functions, including strongly convex $L$-smooth functions. As a special case it establishes linear convergence for all convex quadratic problems. Unsurprisingly this comes at the price of a worse convergence rate, a result that was first established by \citet{jaegerskuepper2006quadratic}. This is because all of the above results refer to a simple ES without covariance matrix adaptation. Analyzing an ES with CMA has proven to be significantly more difficult than analyzing an ES without CMA on a potentially ill-conditioned convex quadratic function. The reason is that adapting the covariance matrix can turn the problem into \emph{any} convex quadratic function, with unbounded condition number (or trace), while the condition number is bounded in case of an arbitrary but fixed convex quadratic problem and isotropic mutations. However, the stability of the update established in the previous section allows us to derive a strong convergence result even with an elaborate CMA mechanism in place. The technically rather complicated proof in this section is based on a simple idea.
Using invariance properties, the optimization problem faced by (1+4)-HE-ES can be transformed into a convex quadratic problem faced by the simple (1+1)-ES, independently in each iteration. This amounts to optimizing a dynamically changing sequence of convex quadratic objective functions with the (1+1)-ES. The sequence of objective functions lies within a class that is covered by \citet{morinaga2019generalized}. We need to adapt that analysis only slightly, arguing that it holds not only for a single function from a flexible class of functions, but \emph{uniformly} over function classes with bounded conditioning of the Hessian. Then the analysis holds even for dynamically changing functions, as long as the sequence remains inside the function class. The last part is a direct consequence of the stability of the HE-ES update. \begin{figure} \centering \includegraphics{function_evolution} \caption{Visualization of the function sequence described in Corollary~\ref{corollary:functions}. For a set of three points $m^{(t)}$, three functions $\tilde{f}^{(t)}$ are depicted (continuous, dotted, and dashed contour lines). The functions $\tilde{f}^{(t)}$ and $\tilde{f}^{(t+1)}$ are chosen such that their function values agree at the point $m^{(t+1)}$, and thus the function value decrease of a successful step (black arrows) amounts to the same progress as on the target function $f$. Further note that contour lines of the same function value encompass the same area across functions, which is proven in Lemma~\ref{lemma:fmu_invariance}. } \label{fig:figure_sequence} \end{figure} We consider (1+4)-HE-ES optimizing the convex quadratic function \eqref{eq:objective} with unique minimum $x^*$, optimal value $f^*$, and strictly positive definite Hessian $H$. The following corollary will allow us to rephrase the results of the previous section in terms that are more compatible with \citet{morinaga2019generalized}: \begin{corollary}\label{corollary:functions} Consider the state trajectory $\left(m^{(t)}, \sigma^{(t)}, A^{(t)}\right)_{t \in \mathbb{N}}$ of the (1+4)-HE-ES applied to the convex quadratic function $$ f(x) = \frac12 (x-x^*)^T H (x-x^*) + f^*\enspace.$$ There exists a sequence of functions $\tilde f^{(t)}=f \circ [g^{(t)}]^{-1}$ such that the state trajectory is equivalent to the run of a (1+1)-ES optimizing $\tilde f^{(t)}$ in iteration $t$, starting from $(m^{(0)}, \sigma^{(0)})$. The state trajectory of the (1+1)-ES is $\left(\tilde m^{(t)}, \tilde \sigma^{(t)} \right)_{t \in \mathbb{N}}$ and for all $t \in \mathbb{N}$ it holds that \begin{enumerate} \item $f(m^{(t)})=\tilde f^{(t)}(\tilde m^{(t)})$ \item $\tilde \sigma^{(t)} = \sigma^{(t)}$ \item $f(m^{(t+1)})=\tilde f^{(t)}(\tilde m^{(t+1)})$ \item $\nabla^2 \tilde f^{(t)}(x) \xrightarrow{t\rightarrow \infty} \alpha I$, $\alpha > 0$ \end{enumerate} \end{corollary} \begin{proof} The proof is straightforward. We define $$g^{(t)}(x)=[A^{(t)}]^{-1}(x- m^{(t)}) + \tilde m^{(t)}\enspace.$$ With this choice, the first statement is fulfilled by construction. The second statement can be derived analogously to the proof of Lemma~\ref{lemma:invariance-X}, equation~\eqref{eq:samples}, and the third statement can be obtained by noting that $$[g^{(t)}]^{-1}(\tilde m^{(t+1)})= m^{(t)} + A^{(t)}(\tilde m^{(t+1)} - \tilde m^{(t)}) = m^{(t+1)} \enspace.$$ For the fourth statement, we obtain that $\nabla^2 \tilde f^{(t)}(x) = A^{(t)} H \left(A^{(t)}\right)^T$.
Since by Theorem~\ref{theorem:cma-convergence}, $C^{(t)}= \left(A^{(t)}\right)^T A^{(t)}\xrightarrow{t\rightarrow \infty} \alpha H^{-1}$, we obtain $\nabla^2 \tilde f^{(t)}(x) \xrightarrow{t\rightarrow \infty} \alpha I$. \end{proof} In other words, instead of considering an update of the covariance matrix, we can apply the (1+1)-ES to a sequence of functions that converges to the sphere function. This requires that the chosen sequence of functions does not change the behaviour of the optimizer. For this, statements 1 and 3 are crucial, because they can be used to show that single-step improvements on the sequence of functions can be related to improvements on the target function. This result does not extend to the two-step progress, and in general $f(m^{(t+2)}) - f(m^{(t)}) \neq \tilde f^{(t)}(\tilde m^{(t+2)}) - \tilde f^{(t)}(\tilde m^{(t)})$, as the two steps are taken in different coordinate systems. Figure~\ref{fig:figure_sequence} gives a visual depiction of this sequence of functions. The analysis of \citet{morinaga2019generalized} does not use the function values directly, but instead uses a different function to show convergence. This makes their analysis invariant to strictly monotonically increasing transformations of the function values, which is an important property, because otherwise transforming a function would have an impact on the measured convergence speed. To achieve this, we define the function $$f_\mu(m) = \sqrt[d]{\mu\big(\big\{x \in \mathbb{R}^d \,\big|\, f(x) < f(m)\big\}\big)} \enspace,$$ which denotes the $d$-th root of the Lebesgue measure of the set of points that improve upon $m$. For this function it holds that $f(x) < f(y) \Leftrightarrow f_\mu(x) < f_\mu(y)$. For the sphere function, $f_\mu$ can be computed analytically and we obtain $f_\mu(m) = \gamma_d \cdot \|m - x^*\|$, where $\gamma_d$ is a dimension-dependent constant. This justifies the use of $f_\mu(m)$ as a measure of distance of $m$ to the optimum. For a general convex quadratic function it holds that $$ f_\mu(m) = \frac{\gamma_d}{\sqrt[2d]{\det(H)}} \cdot \sqrt{2 \big(f(m) - f^*\big)}. $$ The question arises whether this transformation is compatible with the sequence of functions defined in Corollary~\ref{corollary:functions}. The answer is given by the following lemma: \begin{lemma}\label{lemma:fmu_invariance} Let $\tilde f^{(t)}=f \circ [g^{(t)}]^{-1}$ and $\tilde m^{(t)}$, $t \in \mathbb{N}$, be the functions and vectors defined in Corollary~\ref{corollary:functions}, and let $$\tilde f^{(t)}_\mu(m) = \sqrt[d]{\mu\big(\big\{x \in \mathbb{R}^d \,\big|\, \tilde f^{(t)}(x) < \tilde f^{(t)}(m)\big\}\big)}\enspace.$$ It holds that \begin{enumerate} \item $\tilde f^{(t)}_\mu(\tilde m^{(t)}) = \tilde f^{(t-1)}_\mu(\tilde m^{(t)})$ \item $ \tilde f^{(t)}_\mu(\tilde m^{(t)}) = \frac 1 {\sqrt[d]{ \det{A^{(0)}}}} f_\mu(m^{(t)})$ \end{enumerate} \end{lemma} \begin{proof} Let $\varphi^{(t)}=g^{(t-1)} \circ [g^{(t)}]^{-1}$. With this, it holds that $\tilde f^{(t)}= \tilde f^{(t-1)} \circ \varphi^{(t)}$. It is easy to verify that $\varphi^{(t)}(\tilde m^{(t)})=\tilde m^{(t)}$ and $$\nabla \varphi^{(t)}(x)=\big[A^{(t-1)}\big]^{-1}A^{(t)}=G^{(t)}\enspace,$$ where $G^{(t)}$ is the matrix computed by \texttt{computeG} via equation \eqref{eq:G} in iteration $t$.
We obtain \begin{align*} \tilde f^{(t)}_\mu(\tilde m^{(t)}) \, &= \sqrt[d]{\mu\left(\Big\{ x \,\Big|\, \tilde f^{(t)}(x) < \tilde f^{(t)}(\tilde m^{(t)}) \Big\}\right)} \\ &= \sqrt[d]{\mu\left(\Big\{ x \,\Big|\, \tilde f^{(t-1)}\big(\varphi^{(t)}(x)\big) < \tilde f^{(t-1)}\big(\varphi^{(t)}(\tilde m^{(t)})\big) \Big\}\right)} \\ &= \sqrt[d]{\mu\left(\Big\{ x \,\Big|\, \tilde f^{(t-1)}(y) < \tilde f^{(t-1)}(\tilde m^{(t)}) \text{ for } y = \varphi^{(t)}(x) \Big\}\right)} \\ &\overset{(*)}{=} \sqrt[d]{\mu\left(\Big\{ y \,\Big|\, \tilde f^{(t-1)}(y) < \tilde f^{(t-1)}(\tilde m^{(t)}) \Big\}\right)} \\ &= \tilde f^{(t-1)}_\mu(\tilde m^{(t)}) \end{align*} for all $t$. The set changes from the left-hand side to the right-hand side of equation (*). The equality of the Lebesgue measures of the two sets holds because the matrix $G^{(t)}$ has unit determinant, and hence the transformation $\varphi^{(t)}$ preserves the Lebesgue measure. We can apply the decomposition argument via $\varphi^{(t)}$ iteratively and arrive at $\tilde f^{(t)}= \tilde f^{(0)} \circ \varphi^{(1)} \circ \cdots \circ \varphi^{(t)}$, where \[ \tilde f^{(0)}(m)= \left(f \circ [g^{(0)}]^{-1}\right)(m) =f(A^{(0)}(m-m^{(0)}) + m^{(0)}) \] follows from the definition of $g^{(t)}$ and the identical starting conditions of the (1+1)-ES and the (1+4)-HE-ES. As $\nabla \varphi^{(t)}$ has unit determinant for all $t > 0$, it holds that $\tilde f^{(t)}_\mu(\tilde m^{(t)})= \tilde f^{(0)}_\mu(g^{(0)}(m^{(t)}))$. Finally, the Lebesgue measure transform of $\tilde f^{(0)}$ is given by \begin{align*} \tilde f^{(0)}_\mu(m) \, &= \sqrt[d]{\mu\left(\Big\{ x \,\Big|\, \tilde f^{(0)}(x) < \tilde f^{(0)}(m) \Big\}\right)} \\ &= \sqrt[d]{\mu\left(\Big\{ x \,\Big|\, f(A^{(0)}(x-m^{(0)}) + m^{(0)}) < f([g^{(0)}]^{-1}(m)) \Big\}\right)}\\ &= \sqrt[d]{\frac 1 {\det{A^{(0)}}}\mu\left(\Big\{ y \,\Big|\, f(y) < f([g^{(0)}]^{-1}(m)) \Big\}\right)}\\ &= \frac 1 {\sqrt[d]{ \det{A^{(0)}}}} f_{\mu}([g^{(0)}]^{-1}(m)) \enspace. \end{align*} Setting $m = g^{(0)}(m^{(t)})$ and combining this with the previous identity yields the second statement. \end{proof} \subsection{Adaptation of the Analysis of \citet{morinaga2019generalized}} Before we state our main theorem, we need to recap the results of \citet{morinaga2019generalized} and how their proof is structured. A key definition for this is the normalized step size \begin{equation}\label{eq:orig_bar_sigma} \bar \sigma = \frac {\sigma} {f_\mu(m)}\end{equation} which uses $f_\mu(m)$ as a measure of distance from the optimum. Using this definition, the convergence proof for the (1+1)-ES with $1/5$-success rule is structured into the following steps: \begin{enumerate} \item It is proven that for any $0 < p_u < 1/5 < p_l < 1/2$ we can find normalized step sizes $0 < \bar \sigma_l < \bar \sigma_u < \infty$ such that for $\bar \sigma \in [\bar \sigma_l, \bar \sigma_u]$ the success probability of the (1+1)-ES is $P\big(f(X) < f(m)\big) \in [p_u,p_l]$, where $X \sim{} \mathcal{N}(m,f_\mu(m) \bar \sigma I)$, for all $m\in \mathbb{R}^d$ such that $f(m) \leq f(m^{(0)})$. In other words, for any point that might get accepted during an optimization run, the success probability must be within $[p_u,p_l]$ when $\bar \sigma \in [\bar \sigma_l, \bar \sigma_u]$.
\item \citet{morinaga2019generalized} now pick $l \leq \bar \sigma_l$ and $u \geq \bar \sigma_u$ with $u/l \geq c_\sigma^{5/4}$ and some constant $v > 0$, to be quantified later, to define the potential function $$ V(m,\bar \sigma)= \log f_\mu(m) + v \max\left\{0, \log \frac {c_\sigma l}{\bar \sigma}, \log \frac {c_\sigma^{\frac 14} \bar \sigma }{u} \right\} \enspace. $$ It is clear that $V(m,\bar \sigma) \geq \log f_\mu(m)$ and thus, if $\Psi$-linear convergence is shown with the potential $\Psi = V$, then it also holds for $\Psi(m)=\log f_\mu(m)$. The second term penalizes $\bar \sigma \notin [l, u]$ and thus allows measuring progress when $\bar \sigma$ is too large or too small, so that progress in $f_\mu(m)$ is unlikely or very small. \item Using this potential, the expected truncated single-step progress is derived. To be more exact, we pick $\mathcal{A} > 0$ and define the sequence \begin{equation}\label{eq:Y_orig} Y^{(t+1)}=Y^{(t)}+\max\left\{ V(m^{(t+1)}, \bar \sigma^{(t+1)}) - V(m^{(t)}, \bar \sigma^{(t)}), -\mathcal{A}\right\},\quad Y^{(0)}=V(m^{(0)}, \bar \sigma^{(0)})\enspace. \end{equation} This bounds the single-step progress by $-\mathcal{A}$ and prevents technical difficulties in the proof due to very good steps, which occur with low probability. With this sequence, the expected single-step progress is bounded by $$\mathbb{E}\left[Y^{(t+1)}-Y^{(t)}\mid Y^{(t)} \right] \leq -\mathcal{B} \enspace.$$ The result is obtained by maximizing the progress over $v$, and it is shown that for each $f$ there exists an interval $v \in (0,v_u)$ such that $\mathcal{B} > 0$. \item Finally, with this bound in place, Theorem~1 in \cite{akimoto2018drift} is applied to bound the first hitting time. \end{enumerate} Most importantly for us, the final step only depends on $\mathcal{A}$ and $\mathcal{B}$ and is thus independent of $V$. The third step in turn computes the expected progress of a single iteration; thus changing $V$ between two iterations does not affect it, as long as we ensure that the progress measured by a chosen $V^{(t)}$ relates to progress on $V(m,\bar{\sigma})$. Our proof strategy is therefore the following. We consider the (1+1)-ES in the setting of Corollary~\ref{corollary:functions}. We define the normalized step size $$\bar \sigma^{(t)}= \frac {\sigma^{(t)}}{\sqrt[d]{\det A^{(0)}} \tilde f^{(t)}_\mu(\tilde m^{(t)})}$$ as well as a sequence of potential functions \begin{equation}\label{eq:Vt} V^{(t)}(\tilde m,\bar \sigma)= \log \tilde f^{(t)}_\mu(\tilde m) + \log \sqrt[d]{\det A^{(0)}}+ v \max\left\{0, \log \frac {c_\sigma l}{\bar \sigma}, \log \frac {c_\sigma^{\frac 14} \bar \sigma }{u} \right\}\enspace. \end{equation} Since, due to statement 2 of Lemma~\ref{lemma:fmu_invariance}, $f_\mu(m^{(t)}) = \sqrt[d]{\det A^{(0)}} \cdot \tilde f^{(t)}_\mu(\tilde m^{(t)})$, our definition of $\bar \sigma$ coincides with equation~\eqref{eq:orig_bar_sigma}. Applying Lemma~\ref{lemma:fmu_invariance} to $V^{(t)}$, we obtain the properties $$V^{(t)}(\tilde m^{(t)},\bar \sigma^{(t)})=V(m^{(t)},\bar \sigma^{(t)})\quad\text{and}\quad V^{(t+1)}\left(\tilde m^{(t+1)}, \bar \sigma^{(t+1)} \right) =V^{(t)}\left(\tilde m^{(t+1)},\bar \sigma^{(t+1)}\right)\enspace.$$ Thus, the sequence of truncated single-step progress in \eqref{eq:Y_orig} coincides, up to the constant offset $Y^{(0)}$, with \begin{equation}\label{eq:Yt} Y^{(t+1)}=Y^{(t)} +\max\left\{ V^{(t)}(\tilde m^{(t+1)}, \bar \sigma^{(t+1)}) - V^{(t)}(\tilde m^{(t)}, \bar \sigma^{(t)}), -\mathcal{A}\right\},\quad Y^{(0)}=0\enspace.
\end{equation} With this in place, we will find a feasible $v > 0$ and bound $$\mathbb{E}\left[Y^{(t+1)}-Y^{(t)}\mid Y^{(t)}\right] \leq -\mathcal{B}^{(t)} \leq -\mathcal{B} < 0 \enspace,$$ which produces the final result. We formalize this argument further in the proof of the final theorem: \begin{theorem} Consider minimization of the convex quadratic function $$ f(x) = \frac12 (x - x^*)^T H (x - x^*) + f^* $$ with the (1+4)-HE-ES. Let $\Psi(m)=\log f_{\mu}(m)$. The sequence $\left(m^{(t)}\right)_{t \in \mathbb{N}}$ converges $\Psi$-linearly to $x^*$ with a convergence rate independent of $H$. \end{theorem} \begin{proof} We consider the (1+1)-ES in the setting of Corollary~\ref{corollary:functions} and thus obtain a state trajectory $(\tilde m^{(t)}, \sigma^{(t)})_{t \in \mathbb{N}}$ with function sequence $(\tilde f^{(t)})_{t \in \mathbb{N}}$ such that $\nabla^2 \tilde f^{(t)} \rightarrow \alpha I$ and $\det(\nabla^2 \tilde f^{(t)})=\alpha^d$. Pick $\beta > 1$ arbitrarily and consider the function space $$F(\alpha, \beta)=\Big\{\tilde f(x)=f^* + (x-x^*)^TQ(x-x^*) \,\Big|\, x^* \in \mathbb{R}^d, \det(Q) = \alpha^d, \kappa(Q)\leq \beta \Big\}\enspace.$$ We note that for given $\alpha$ and $\beta$, the choice of matrices $Q$ in $F(\alpha, \beta)$ is restricted to a compact set. Therefore, a continuous function of $Q$ attains its infimum and supremum. As $\nabla^2 \tilde f^{(t)}\rightarrow \alpha I$, we have $\kappa_t = \kappa(\nabla^2 \tilde f^{(t)})\rightarrow 1$ due to continuity. Therefore, there exists a $T_0 \in \mathbb{N}$ such that $\kappa_t < \beta$ and $\tilde f^{(t)} \in F(\alpha, \beta)$ for all $t > T_0$. From now on, we will only consider $t > T_0$. Proposition 4 and Proposition 12 in \citet{morinaga2019generalized} establish that for each $\tilde f \in F(\alpha,\beta)$ and each choice $0 < p_u < 1/5 < p_l < 1/2$ there exist $0 <\bar \sigma_l < \bar \sigma_u< \infty$ such that step 1 is fulfilled. We can thus pick $0< l < u < \infty$ such that $l < \bar \sigma_l < \bar \sigma_u < u $ for all $\tilde f \in F(\alpha,\beta)$. With this choice of $l$ and $u$ and $v>0$, we can define $V^{(t)}$ and $Y^{(t)}$ as in equations \eqref{eq:Vt} and \eqref{eq:Yt}, respectively. For a chosen $\mathcal{A} > 0$ and sufficiently small $v > 0$, Proposition 6 in \citet{morinaga2019generalized} gives a bound on the expected single-step progress of $$\mathbb{E}\left[Y^{(t+1)}-Y^{(t)}\mid Y^{(t)}\right] < -\mathcal{B}^{(t)}\enspace.$$ While the bound $\mathcal{B}^{(t)}$ is obtained for a specific $v^{(t)} > 0$, \citet{morinaga2019generalized} show that we still obtain positive progress for $0 < v \leq v^{(t)}$. As $v^{(t)}> 0$ is a continuous function of $\kappa(\nabla^2 \tilde f^{(t)})$, it attains its minimum within the set $F(\alpha,\beta)$, and therefore we pick $v = \inf_{t \in \mathbb{N}} v^{(t)} > 0$. Let $\mathcal{B}_v^{(t)} > 0$ denote the progress rates obtained for this choice of $v$. Again, due to continuity of $\mathcal{B}_v^{(t)}$ as a function of $\tilde f \in F(\alpha,\beta)$, we can define $\mathcal{B} = \inf_{t \in \mathbb{N}} \mathcal{B}_v^{(t)}> 0$. Finally, with $\mathcal{A}$ and $\mathcal{B}$ in place, we can apply Theorem~1 in \cite{akimoto2018drift} to obtain linear convergence. Since $\beta$ was chosen independently of $H$, the rate of convergence is independent of the problem instance and its difficulty~$\kappa(H)$.
\end{proof} Our result is the first proof of linear convergence of a CMA-based elitist ES, and the first proof of linear convergence of any ES at a rate that is independent of $H$. The result is of interest in a broader context, because it can naturally be extended to other CMA algorithms, as the proof itself uses only two properties: the determinant of $C^{(t)}$ is constant, and $C^{(t)} \rightarrow \alpha H^{-1}$. The first condition poses no difficulties for algorithm design, as we can always use a matrix with normalized variance for sampling, i.e., sample offspring from $\mathcal{N}\big(m^{(t)}, (\sigma^{(t)})^2 \cdot C^{(t)} / \sqrt[d]{\det C^{(t)}}\big)$. Therefore, our proof can also be applied to the covariance matrix adaptation algorithm proposed by \citet{stich2016variable} when applied to the (1+1)-ES. We expect similar results for a hybrid algorithm that could be constructed from a (1+1)-ES using the BOBYQA approximation of the Hessian matrix \citep{powell2009bobyqa}. \section{Conclusion} We have established that the covariance matrix update of the recently proposed Hessian Estimation Evolution Strategy is stable. It makes the covariance matrix converge to a multiple of the inverse Hessian of a convex quadratic objective function, and even in the face of randomly sampled offspring the covariance matrix cannot degrade. This strong guarantee highlights that the update mechanism is very different from CMA-ES and similar algorithms. It also allows us to derive a strong convergence speed guarantee, namely linear convergence of a variable metric evolution strategy at the optimal convergence rate, in the sense that the convergence speed coincides with the speed of the same algorithm without covariance matrix adaptation applied to the sphere function. To the best of our knowledge, this is the first result of this type for a variable metric evolution strategy.
\section{Introduction} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{./image/shouye_00.png} \end{center} \caption{ The deblocking results on image '0007' from Flickr1024~\cite{wang2019flickr1024} at quality factor 10. On the left is the ground-truth (GT). The first row on the right shows the corresponding left and right compressed image patches. Compared to the state-of-the-art methods (DnCNN~\cite{zhang2017beyond}, QGCN~\cite{li2020learning} and iPASSR~\cite{wang2021symmetric}), our PTNet generates better results due to the effective use of information from both views. } \label{fig:shouye} \end{figure} With recent advances in dual cameras, stereo images have shown great commercial value in many practical applications, including smartphones and autonomous vehicles. Stereo images usually require a large number of bits to store the information from two views, which poses challenges for storage and transmission. Image compression algorithms can reduce the data size of the original digital stereo images, but inevitably introduce complex compression noise, such as blocking artifacts~\cite{dong2015compression}. This may degrade both the visual quality and the performance of subsequent vision tasks. Therefore, exploring methods for compressed stereo image artifacts removal is urgently needed, especially for the widely used JPEG format. JPEG is one of the most widely used image compression algorithms, and its processing procedure consists of four steps: block division, discrete cosine transform (DCT), quantization, and entropy coding. The block-based JPEG compression algorithm ignores spatial correlations between image blocks, which results in image discontinuities at block boundaries. To cope with this problem, early approaches~\cite{chen2001adaptive,foi2007pointwise,liu2018graph} focus on filter design or employ various optimizations, but tend to blur the images. Deep learning-based approaches~\cite{dong2015compression,fu2019jpeg,zhang2017beyond,zhang2019residual,jiang2021towards} with novel architectures attempt to remove the compression artifacts by learning a nonlinear mapping between the compressed and original images. The above methods are all designed for single image deblocking, and to the best of our knowledge, no work has been conducted on stereo image deblocking. While these algorithms can also be used to recover the left and right images independently, their performance may be severely limited due to the lack of additional information from the other view. In particular, some details that are lost in one view may still exist in the other view (as shown in Fig.~\ref{fig:shouye}). Recently, several methods~\cite{jeon2018enhancing,wang2019learning,song2020stereoscopic,wang2021symmetric} have been proposed for stereo image super-resolution, which is the research topic most relevant to the stereo image deblocking task. Wang \emph{et al.}~\cite{wang2019learning} design a parallax-attention module to handle stereo images with large disparity variations. The parallax-attention module utilizes the predicted transformation matrix to achieve view alignment. Follow-up methods~\cite{song2020stereoscopic,wang2021symmetric} improve the stereo correspondence performance by refining the parallax-attention module. Although these methods achieve good results in stereo image super-resolution, they perform poorly in stereo deblocking.
The main reason is that the compression artifacts destroy the stereo correspondence between the two views, making pixel-level alignment difficult. Therefore, we consider using a transformer to perform a robust matching search on the reference view instead of pixel-level alignment. In this paper, we propose a novel parallax transformer network (PTNet) to integrate the information from stereo image pairs for stereo image JPEG artifacts removal. The overall framework is shown in Fig.~\ref{fig:framework}. We design a symmetric bi-directional parallax transformer module (biPTM) to compute the relevance between left and right image features and match these features, enabling cross-view interaction. Specifically, for any region in the target view, we use the mutual attention mechanism to extract the region features with the highest relevance in the reference view, and use them to enhance the target region. Note that biPTM aims to find the required reference features for the target regions rather than to align the views, so it performs well even under significant disparities and compression artifacts. Considering the issue of occlusion, a confidence-based cross-view fusion module (CCFM) is proposed to effectively integrate cross-view information. To achieve better cross-view interaction, we adopt a coarse-to-fine design that utilizes the enhanced features for further cross-view feature matching. To sum up, our main contributions are as follows: \begin{itemize} \item We propose a novel parallax transformer network for stereo image JPEG artifacts removal, which exploits the information complementarity between left and right compressed images to achieve better stereo image deblocking. To the best of our knowledge, this is the first effort to address this task. \item A novel symmetric bi-directional parallax transformer module is proposed to implement cross-view interaction, which is based on the mutual attention mechanism and achieves effective feature matching. \item Considering the occlusion issues, we propose a confidence-based cross-view fusion module that enables effective feature fusion for both views. \item Our approach achieves state-of-the-art performance compared to recent single-image JPEG artifacts removal methods and a stereo image super-resolution method. \end{itemize} \begin{figure*}[t] \begin{center} \includegraphics[width=1\linewidth]{./image/overall-framework_00.png} \end{center} \caption{ The architecture of the proposed PTNet. The proposed biPTM is designed to achieve cross-view interaction, which is based on the mutual attention mechanism between the two views. CCFM is designed to effectively fuse the cross-view features, which helps to handle occlusions and boundaries. In addition, MSB is a multi-scale feature extraction block, and RDB is the residual dense block~\cite{zhang2018residual}. The details of MSB and RDB can be found in the appendix.} \label{fig:framework} \end{figure*} \section{Related Work} \subsection{JPEG Artifacts Removal} JPEG artifacts removal has been studied for a long time and notable progress has been achieved in the past few years. Early methods~\cite{zhang2013compression,chen2001adaptive,foi2007pointwise} attempt to remove the compression artifacts by designing specific filters.
Others treat JPEG artifacts removal as an ill-posed inverse problem and solve it by using sparse representation~\cite{chang2013reducing}, graphs~\cite{liu2018graph} and regression trees~\cite{jancsary2012loss}. Following the recent success of convolutional neural networks (CNNs) in most computer vision tasks~\cite{he2016deep,long2015fully}, learning-based methods~\cite{dong2015compression,fu2019jpeg,zhang2017beyond,zhang2019residual,li2020learning,wang2020jpeg,Fu2021TNNS,zhen2020CSVT,galteri2019deep,jiang2021towards,9607618,zhang2020residual} have attracted a lot of attention and have been explored for image deblocking. Zhang~\emph{et al.}~\cite{zhang2017beyond} utilize residual learning~\cite{he2016deep} and batch normalization~\cite{ioffe2015batch} to speed up the training process as well as boost the deblocking performance. Fu~\emph{et al.}~\cite{fu2019jpeg} design a deep convolutional sparse coding (DCSC) network architecture to effectively reduce JPEG artifacts by using dilated convolutions~\cite{yu2015multi}. QGCN~\cite{li2020learning} is able to handle a wide range of quality factors due to the novel utilization of the quantization tables as part of the training data. However, the existing methods are all designed for single image deblocking, and their performance in stereo deblocking is limited since additional information from the other view is not exploited. In this paper, we propose a novel parallax transformer network which exploits the information complementarity between the two views to achieve better stereo image deblocking. \subsection{Stereo Image Super-Resolution} In recent years, many deep learning-based methods~\cite{jeon2018enhancing,wang2019learning,song2020stereoscopic,wang2021symmetric,ying2020stereo} have been proposed to tackle the problem of stereo image super-resolution, and achieve promising results. Wang~\emph{et al.}~\cite{wang2019learning} combine stereo matching and stereo image super-resolution, and propose a parallax attention network named PASSRnet, which can cope with the issue of varying parallax. In particular, the proposed parallax-attention network can capture stereo correspondence. Inspired by~\cite{wang2019learning}, Song~\emph{et al.}~\cite{song2020stereoscopic} propose a self and parallax attention network to aggregate the information from its own view and the second view simultaneously. On the basis of PASSRnet, Wang~\emph{et al.}~\cite{wang2021symmetric} adopt a symmetric design and propose iPASSR, which super-resolves both views in a single inference. These parallax attention-based methods all attempt to capture the stereo correspondence and warp the features of the second view to the target view at the pixel level, thereby improving the super-resolution performance of the target view. However, the above methods are not suitable for the stereo image deblocking task and show poor performance. The main reason is that the compression artifacts destroy the original texture information of the image, which makes pixel-level view alignment difficult. As shown in Fig.~\ref{fig:motivation}, the matching regions also show different textures after being compressed. Unlike these methods, our method attempts to find the most relevant features in both views via robust transformer-based matching. In particular, even for occlusions and boundaries we can find the most relevant matching features, and we use a confidence-based weighting method for feature fusion.
\subsection{Vision Transformer} Recently, Transformer-based models~\cite{khan2021transformers,han2020survey,yang2020learning} have achieved promising performance in various vision tasks, such as image recognition~\cite{dosovitskiy2020image,touvron2021training}, object detection~\cite{carion2020end,zhu2020deformable} and video understanding~\cite{girdhar2019video}. Some approaches are designed for image restoration~\cite{chen2021pre,9607618,wang2021uformer}. Chen~\emph{et al.}~\cite{chen2021pre} study low-level computer vision tasks (e.g., denoising, super-resolution and deraining) and develop a new pre-trained model. These methods focus on feature fusion based on the self-attention mechanism. Unlike previous methods, we design a symmetric bi-directional parallax transformer module to predict parallax information, which is then used for stereo image feature matching. In particular, the proposed module builds a mutual attention mechanism between the two views and performs stereo image feature matching. \section{Method} \subsection{Motivation} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{./image/motivation_00.png} \end{center} \caption{ An example of stereo image deblocking results on image '0001' from Flickr1024~\cite{wang2019flickr1024} at quality factor 10. On the left is the JPEG-compressed stereo image pair. Note that we mark the matching regions of the left and right views with a red box. On the right are the marked regions, their deblocking results of QGCN~\cite{li2020learning} and our PTNet, and the corresponding ground-truth. } \label{fig:motivation} \end{figure} There are numerous matching regions in the left and right views of a stereo image pair. When the stereo image pair is compressed, these regions are significantly degraded and usually exhibit similar overall degradation. As shown in Fig.~\ref{fig:motivation}, on the left is a JPEG-compressed stereo image pair. In this stereo image pair, most of the regions match each other despite occlusions and boundaries. For in-depth analysis, a matching region is selected as an example and marked in the images. On the right of Fig.~\ref{fig:motivation}, we provide a zoomed-in view of the region (first row) and its corresponding ground-truth (GT) (fourth row). We find that although the GT patches of the left and right views are similar, their corresponding compressed patches have different details. Specifically, the letter N in the left patch is clearer while the letter A in the right patch is clearer. This inspires us to use the information of both views simultaneously for stereo image deblocking, since the two views complement each other. There are two main reasons for this phenomenon: 1) Existence of parallax between the two views. This causes the matching regions of the two views to be similar but not completely consistent. 2) Block-based compression processing. The JPEG compression algorithm uses $8\times8$ blocks as the basic processing unit, so the block grid can align differently with the image content in the two views. For example, a letter may span two processing units in one view but fall entirely within a single unit in the other view. These two reasons cause the matching regions to show different degradations when they are JPEG compressed. Therefore, the information of the two views is complementary. Benefiting from the binocular information, our method may achieve better results than single-image deblocking algorithms, as shown in Fig.~\ref{fig:motivation}.
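The second reason can be illustrated with a toy experiment. The following sketch (not from the paper) replaces the full JPEG pipeline with uniform quantization of $8\times 8$ block-DCT coefficients; the synthetic texture, the quantization step, and the 3-pixel "disparity" are illustrative assumptions. It shows that identical content, shifted relative to the block grid, incurs different coding errors:
\begin{verbatim}
import numpy as np
from scipy.fft import dctn, idctn

def block_codec(img, q=40.0, bs=8):
    # toy stand-in for JPEG: uniformly quantize the DCT of each bs x bs block
    out = np.zeros_like(img)
    for r in range(0, img.shape[0], bs):
        for c in range(0, img.shape[1], bs):
            coef = dctn(img[r:r+bs, c:c+bs], norm='ortho')
            out[r:r+bs, c:c+bs] = idctn(np.round(coef / q) * q, norm='ortho')
    return out

rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, (16, 24))   # hypothetical texture strip
left  = scene[:, 0:16]                  # same content ...
right = scene[:, 3:19]                  # ... shifted by a 3-pixel "disparity"

err_left  = block_codec(left)  - left
err_right = block_codec(right) - right
# identical content, but the 8x8 grid cuts it differently in the two views,
# so the coding errors of corresponding pixels differ:
print(np.abs(err_left[:, 3:16] - err_right[:, 0:13]).mean())  # clearly non-zero
\end{verbatim}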
\subsection{Overview of Our PTNet}
The goal of our PTNet is to reconstruct the deblocking results ($I^{d}_{L}$, $I^{d}_{R}$) from a JPEG-compressed stereo image pair ($I^{c}_{L}$, $I^{c}_{R}$), aiming to keep the deblocking results ($I^{d}_{L}$, $I^{d}_{R}$) and the corresponding uncompressed stereo image pair ($I_{L}$, $I_{R}$) pixel-wise consistent. The architecture of our PTNet is shown in Fig.~\ref{fig:framework}, and it mainly consists of three parts: feature extraction, cross-view interaction and reconstruction. Note that the entire network is symmetric and the weights of its left and right branches are shared.

Specifically, given ($I^{c}_{L}$, $I^{c}_{R}$), we first extract the features ($F_{L}$, $F_{R}$) of the left and right images separately, which are used for subsequent feature matching and reconstruction. This process is denoted as
\begin{equation} F_L=H_{FE}(I^c_L),~F_R=H_{FE}(I^c_R), \end{equation}
where $H_{FE}(\cdot)$ represents the feature extraction module. Following the previous works~\cite{fu2019jpeg,wang2020jpeg}, we design a multi-scale feature extraction block (MSB) to enhance the feature extraction capability of the model. In addition, we also adopt four residual dense blocks (RDBs)~\cite{zhang2018residual} in our model. The details of MSB and RDB can be found in the appendix.

These extracted features ($F_L$, $F_R$) are then used for feature matching and feature enhancement in the cross-view interaction module. This module adopts a coarse-to-fine design and is mainly divided into two stages. Each stage consists of one bi-directional parallax transformer module (biPTM) and one confidence-based cross-view fusion module (CCFM). The first stage achieves effective cross-view information interaction, and the second stage further enhances it. In particular, since the first stage utilizes the binocular information to enhance the features of the two views, the second stage can achieve more accurate feature matching. This can be expressed as
\begin{equation} F^1_L,F^1_R=H_{CVI^1}(F_L,~F_R),~F^2_L,F^2_R=H_{CVI^2}(F^1_L,~F^1_R), \end{equation}
where $H_{CVI^1}(\cdot)$ and $H_{CVI^2}(\cdot)$ stand for the functions of the two stages of the cross-view interaction module, respectively. The details of biPTM and CCFM will be explained in later sections.

Finally, these features ($F^2_L$, $F^2_R$) are used in the reconstruction module to generate our deblocking results. This module is mainly composed of four RDBs. Aiming to reconstruct better results, we also add a global residual design. This can be expressed as
\begin{equation} I^{d}_{L}, I^{d}_{R}=H_{R}(F^2_L,~F^2_R,~I^{c}_{L},~I^{c}_{R}), \end{equation}
where $H_{R}(\cdot)$ represents the reconstruction module.
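The data flow above can be summarized in a short PyTorch-style sketch. All submodules are simple stand-ins for illustration (the actual MSB, RDB, biPTM and CCFM designs are described elsewhere in the paper); only the shared-weight, two-stage, global-residual structure is kept:

\begin{verbatim}
import torch
import torch.nn as nn

class PTNetSketch(nn.Module):
    """Skeleton of the three-part pipeline; every layer is a placeholder."""
    def __init__(self, feat=64):
        super().__init__()
        self.extract = nn.Conv2d(1, feat, 3, padding=1)      # stands in for MSB + RDBs
        self.interact = nn.Conv2d(2 * feat, feat, 1)         # stands in for biPTM + CCFM
        self.reconstruct = nn.Conv2d(feat, 1, 3, padding=1)  # stands in for 4 RDBs

    def forward(self, ic_l, ic_r):
        f_l, f_r = self.extract(ic_l), self.extract(ic_r)    # shared weights
        for _ in range(2):                                   # two coarse-to-fine stages
            f_l, f_r = (self.interact(torch.cat([f_l, f_r], dim=1)),
                        self.interact(torch.cat([f_r, f_l], dim=1)))
        # Global residual: predict corrections on top of the compressed inputs.
        return ic_l + self.reconstruct(f_l), ic_r + self.reconstruct(f_r)

out_l, out_r = PTNetSketch()(torch.rand(1, 1, 64, 160), torch.rand(1, 1, 64, 160))
\end{verbatim}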
\subsection{Bi-Directional Parallax Transformer}
\begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{./image/parallax-transformer_00.png} \end{center} \caption{Architecture of the proposed bi-directional parallax transformer module (biPTM). $F_L$ and $F_R$ represent the feature maps of the left and right views. $F_L\downarrow$ and $F_R\downarrow$ are obtained by downsampling $F_L$ and $F_R$. $M_{L\rightarrow R}$ and $M_{R\rightarrow L}$ indicate the hard attention maps, which are computed by the relevance calculation module and used to match feature maps of different views. $F_{L\rightarrow R}$ and $F_{R\rightarrow L}$ are the converted features. $C_{L\rightarrow R}$ and $C_{R\rightarrow L}$ are the corresponding confidence maps.} \label{fig:transformer} \end{figure}

The compression artifacts cause difficulties in pixel-level view alignment, and inaccurate alignment may affect the performance of stereo image deblocking. Therefore, we consider finding the required reference features for the target region without view alignment. We utilize the mutual attention mechanism to match features with similar textures between different views. To this end, a symmetric bi-directional parallax transformer module (biPTM) is proposed, which is shown in Fig.~\ref{fig:transformer}. Our biPTM takes the features of the left and right views as input, and outputs the cross-view converted features and their confidence maps. Note that the cross-view conversion of the two-view features is symmetric. Here, we introduce the calculation process of the feature conversion from the left view to the right view in detail.

Firstly, the left and right image features ($F_L$, $F_R$) are downsampled by a factor of 4, which effectively reduces the computational cost of the module. We set the three basic elements of the transformer attention mechanism as
\begin{equation} Q=F_R\downarrow,~K=F_L\downarrow,~V=F_L, \end{equation}
where $Q$, $K$ and $V$ represent the query, key and value, respectively. $Q$ and $K$ are unfolded into patches and normalized, denoted as
\begin{equation} \bar{q}_i=\frac{q_i}{||q_i||}~(i\in[1,H_{F_R\downarrow}\times W_{F_R\downarrow}]), \end{equation}
\begin{equation} \bar{k}_j=\frac{k_j}{||k_j||}~(j\in[1,H_{F_L\downarrow}\times W_{F_L\downarrow}]), \end{equation}
where $H_{F_R\downarrow}$ and $W_{F_R\downarrow}$ represent the height and width of $F_R\downarrow$, and $H_{F_L\downarrow}$ and $W_{F_L\downarrow}$ represent the height and width of $F_L\downarrow$, respectively. Then we calculate the relevance $R$ between the left and right features ($F_L$, $F_R$) by estimating the similarity between $Q$ and $K$ in the relevance calculation module. This can be expressed as
\begin{equation} R=Q\cdot K^T, \end{equation}
where $R$ consists of the probability values $r_{ij}=\bar{q}_i\cdot\bar{k}_j$.

After that, we use a hard attention mechanism to weight $V$ for each query $q_i$ based on $R$, so that only the most relevant features in $V$ are converted for each query $q_i$. The hard attention map $M_{L\rightarrow R}$ can be obtained by finding the maximum probability of $R$ along the $j$ dimension. This can be expressed as
\begin{equation} m_i = \underset{j}{\arg \max}~r_{ij},~c_i = \underset{j}{\max}~r_{ij}, \end{equation}
where the value of $m_i$ in $M_{L\rightarrow R}$ is a coordinate index, i.e., the most relevant position in $F_L$ for the $i^{th}$ position in $F_R$, and $c_i$ is the corresponding maximum probability. Then we unfold $V$ into patches, each of which is four times the size of $q_i$, denoted as $v_j~(j\in[1,H_{F_L\downarrow}\times W_{F_L\downarrow}])$. Based on the obtained $M_{L\rightarrow R}$, an index selection operation is used to process $v_j$ to obtain the converted patch $z_i$, denoted as $z_i = v_{m_i}$. Finally, the converted patches $z_i$ are folded to generate the converted features $F_{L\rightarrow R}$. Since the matching probability of occlusions and boundaries will be relatively low, the probability values $c_i$ can be used to generate the confidence map $C_{L\rightarrow R}$ by using a folding operation. Similarly, we can obtain $F_{R\rightarrow L}$ and $C_{R\rightarrow L}$ by resetting $Q$, $K$ and $V$ as
\begin{equation} Q=F_L\downarrow,~K=F_R\downarrow,~V=F_R. \end{equation}
To simplify the calculation, we obtain the corresponding relevance by transposing the previously obtained $R$.
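A minimal PyTorch sketch of this one-directional conversion is given below. The patch sizes, strides, padding and interpolation mode are our assumptions for illustration, and overlapping folded patches are simply averaged; this is a sketch of the mechanism, not necessarily the paper's exact implementation:

\begin{verbatim}
import torch
import torch.nn.functional as F

def biptm_sketch(f_l, f_r, patch=3):
    """Hard-attention conversion F_L -> F_{L->R} with confidence C_{L->R}."""
    fl_d = F.interpolate(f_l, scale_factor=0.25, mode="bilinear")
    fr_d = F.interpolate(f_r, scale_factor=0.25, mode="bilinear")

    # Unfold Q (from F_R down) and K (from F_L down) into patches, then normalize.
    q = F.unfold(fr_d, kernel_size=patch, padding=patch // 2)   # (B, C*p*p, Nq)
    k = F.unfold(fl_d, kernel_size=patch, padding=patch // 2)   # (B, C*p*p, Nk)
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)

    # Relevance R and hard attention: index m_i and confidence c_i per query.
    r = torch.bmm(q.transpose(1, 2), k)                         # (B, Nq, Nk)
    conf, idx = r.max(dim=2)

    # Unfold V (full-resolution F_L) into patches 4x the query patch size,
    # pick the most relevant patch per query, and fold them back together.
    kv, sv, pv = 4 * patch, 4, 2 * patch - 2
    v = F.unfold(f_l, kernel_size=kv, stride=sv, padding=pv)
    z = torch.gather(v, 2, idx.unsqueeze(1).expand(-1, v.size(1), -1))
    h, w = f_r.shape[-2:]
    ones = F.fold(torch.ones_like(z), (h, w), kernel_size=kv, stride=sv, padding=pv)
    f_l2r = F.fold(z, (h, w), kernel_size=kv, stride=sv, padding=pv) / ones
    c_l2r = F.interpolate(conf.view(-1, 1, h // 4, w // 4), size=(h, w))
    return f_l2r, c_l2r

f_l2r, c_l2r = biptm_sketch(torch.rand(1, 64, 64, 160), torch.rand(1, 64, 64, 160))
\end{verbatim}

The reverse direction reuses the transposed relevance matrix, as described above.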
\subsection{Cross-View Feature Fusion}
\begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{./image/fusion-block_00.png} \end{center} \caption{ Architecture of the proposed confidence-based cross-view fusion module (CCFM). $F_L$ and $F_{R\rightarrow L}$ represent the feature maps of the left view and the converted feature maps of the right view. $C_{R\rightarrow L}$ stands for the confidence map of $F_{R\rightarrow L}$. $F^{'}_{L}$ represents the fused features. RDB is the residual dense block~\cite{zhang2018residual}, and CA~\cite{hu2018squeeze} represents the channel attention module.} \label{fig:fusion} \end{figure}

Due to occlusions and boundaries in stereo image processing, the occluded and boundary regions do not match well with the other view. To address this problem, we propose a confidence-based cross-view fusion module (CCFM) to achieve effective cross-view feature fusion, in which the cross-view features are weighted with the confidence maps produced by biPTM. The details of CCFM are shown in Fig.~\ref{fig:fusion}. Note that the weights of CCFM are shared, and the corresponding calculation process is symmetric in the left and right branches. Here, we introduce the fusion process of $F_L$ and $F^1_{R\rightarrow L}$ in detail.

First, $F_L$ is concatenated with $F^1_{R\rightarrow L}$ and fed into one RDB~\cite{zhang2018residual} for initial feature fusion. We consider that regions with high confidence should be more inclined to adopt the converted features $F^1_{R\rightarrow L}$, while regions with low confidence should adopt the features of the target view $F_L$. Therefore, a confidence-based weighting method is designed to fuse $F_L$ and $F^1_{R\rightarrow L}$. This can be expressed as
\begin{equation} F^{1'}_{R\rightarrow L}=C^1_{R\rightarrow L}\odot f_{RDB}([F_L, F^1_{R\rightarrow L}])+(1-C^1_{R\rightarrow L})\odot F_L, \end{equation}
where $f_{RDB}$ represents the function of the RDB. With the help of this confidence-based weighting method, occluded regions of the converted features $F^1_{R\rightarrow L}$ can be filled with the corresponding features $F_L$ from the target view, leading to continuous spatial distributions. Finally, $F^{1'}_{R\rightarrow L}$ is concatenated with $F_L$ again, and then fed to a channel attention layer (CA)~\cite{hu2018squeeze} and a convolution layer to generate the final fused features $F^1_L$. Similarly, we can obtain $F^1_R$, $F^2_L$ and $F^2_R$ by following the same calculation process with different input features.
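A minimal sketch of this fusion step is given below; the RDB and channel attention blocks are replaced by simple stand-ins, so only the confidence-weighted mixing follows the equation above exactly:

\begin{verbatim}
import torch
import torch.nn as nn

class CCFMSketch(nn.Module):
    """Confidence-based fusion of F_L with converted features F_{R->L}."""
    def __init__(self, feat=64):
        super().__init__()
        self.rdb = nn.Sequential(                      # stand-in for the RDB
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.ca = nn.Sequential(                       # stand-in for channel attention
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(2 * feat, 2 * feat, 1), nn.Sigmoid())
        self.out = nn.Conv2d(2 * feat, feat, 1)

    def forward(self, f_l, f_r2l, c_r2l):
        # Trust converted features where matching confidence is high,
        # fall back to the target-view features elsewhere.
        fused = self.rdb(torch.cat([f_l, f_r2l], dim=1))
        f_r2l_w = c_r2l * fused + (1 - c_r2l) * f_l
        x = torch.cat([f_l, f_r2l_w], dim=1)
        return self.out(x * self.ca(x))                # CA, then a 1x1 conv

f1_l = CCFMSketch()(torch.rand(1, 64, 32, 32),
                    torch.rand(1, 64, 32, 32), torch.rand(1, 1, 32, 32))
\end{verbatim}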
\subsection{Optimization}
Given a training dataset with $N$ stereo image pairs~$\{I^i_L,~I^i_R\}^N_{i=1}$, we can obtain the corresponding JPEG-compressed stereo image pairs~$\{I^{c,i}_L,~I^{c,i}_R\}^N_{i=1}$ and the reconstructed results~$\{I^{d,i}_L,~I^{d,i}_R\}^N_{i=1}$. Following the previous works~\cite{li2020learning,fu2019jpeg}, we also adopt the $l_1$~norm for network training, since the $l_1$~norm yields sharper image results. The loss function is denoted as
\begin{equation} L=\frac{1}{N}\sum_{i=1}^{N}\{||I^i_L-I^{d,i}_L||_1+||I^i_R-I^{d,i}_R||_1\}. \end{equation}

During the training of our PTNet, PyTorch is used as the training toolbox, and the Adam optimization algorithm [50] with $\beta_1 = 0.9$, $\beta_2 = 0.999$ and a mini-batch size of 48 is adopted. All the experiments are conducted on three NVIDIA GeForce GTX 1080 Ti GPUs. The learning rate is decayed from $2\times10^{-4}$ to $2\times10^{-6}$ at intervals of twenty epochs. Training is stopped after 60 epochs, since additional epochs do not provide further consistent improvement.
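The training configuration above can be sketched as follows; \texttt{PTNetSketch} refers to the hypothetical skeleton shown earlier, and the two-step decay (milestones at epochs 20 and 40 with a factor of 0.1) is our reading of the learning-rate schedule:

\begin{verbatim}
import torch
import torch.nn.functional as F

model = PTNetSketch()  # hypothetical stand-in from the earlier sketch
optim = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
sched = torch.optim.lr_scheduler.MultiStepLR(optim, milestones=[20, 40], gamma=0.1)

def train_step(ic_l, ic_r, gt_l, gt_r):
    out_l, out_r = model(ic_l, ic_r)
    # l1 loss summed over both views, as in the loss function above.
    loss = F.l1_loss(out_l, gt_l) + F.l1_loss(out_r, gt_r)
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

# sched.step() is called once per epoch to apply the decay.
\end{verbatim}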
\begin{table*}[t!] \small \caption{Performance comparisons of various methods based on the grayscale left images from Flickr1024~\cite{wang2019flickr1024}, KITTI2012~\cite{geiger2012we}, KITTI2015~\cite{menze2015object} and Middlebury~\cite{scharstein2014high}. Here, PSNR/SSIM/PSNR-B values achieved on the left images (\emph{i.e., Left}) are reported. The best results are boldfaced.}\label{tab:left}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline
Dataset & QF & JPEG & DnCNN~\cite{zhang2017beyond} & DCSC~\cite{fu2019jpeg} & QGCN~\cite{li2020learning} & iPASSR~\cite{wang2021symmetric} & \textbf{PTNet} \\ \hline
\multirow{3}{*}{Flickr1024} & 10 & 25.99/0.7868/23.72 & 27.40/0.8231/27.02 & 27.56/0.8287/27.15 & 27.72/0.8351/27.43 & 27.76/0.8342/27.21 & \textbf{28.05/0.8403/27.54} \\ \cline{2-8}
 & 20 & 28.08/0.8614/25.75 & 29.66/0.8895/29.05 & 29.84/0.8926/29.16 & 30.09/0.8975/29.51 & 30.12/0.8973/29.42 & \textbf{30.39/0.9017/29.59} \\ \cline{2-8}
 & 30 & 29.42/0.8938/27.14 & 31.09/0.9172/30.39 & 31.26/0.9190/30.48 & 31.58/0.9243/30.85 & 31.58/0.9232/30.77 & \textbf{31.83/0.9264/30.89} \\ \hline
\multirow{3}{*}{KITTI2012} & 10 & 29.27/0.8292/26.46 & 30.82/0.8665/30.53 & 30.99/0.8711/30.65 & 31.20/0.8759/30.95 & 31.01/0.8716/30.55 & \textbf{31.43/0.8786/31.05} \\ \cline{2-8}
 & 20 & 31.72/0.8919/28.89 & 33.28/0.9152/32.78 & 33.42/0.9175/32.92 & 33.60/0.9201/33.26 & 33.46/0.9186/33.04 & \textbf{33.85/0.9231/33.30} \\ \cline{2-8}
 & 30 & 33.07/0.9170/30.27 & 34.65/0.9347/34.02 & 34.80/0.9362/34.18 & 34.97/0.9388/34.46 & 34.85/0.9372/34.30 & \textbf{35.18/0.9404/34.48} \\ \hline
\multirow{3}{*}{KITTI2015} & 10 & 29.31/0.8230/26.22 & 30.90/0.8615/30.53 & 31.06/0.8665/30.60 & 31.31/0.8714/\textbf{30.96} & 31.05/0.8669/30.48 & \textbf{31.42/0.8730}/30.92 \\ \cline{2-8}
 & 20 & 32.02/0.8937/28.75 & 33.59/0.9177/32.88 & 33.72/0.9200/33.00 & 33.96/0.9226/\textbf{33.27} & 33.77/0.9211/33.15 & \textbf{34.07/0.9245/33.27} \\ \cline{2-8}
 & 30 & 33.54/0.9220/30.23 & 35.13/0.9401/34.20 & 35.26/0.9415/34.39 & 35.46/0.9436/\textbf{34.63} & 35.32/0.9424/34.58 & \textbf{35.57/0.9449/}34.58 \\ \hline
\multirow{3}{*}{Middlebury} & 10 & 29.65/0.8114/27.09 & 31.38/0.8529/31.22 & 31.57/0.8582/31.38 & 31.85/0.8643/31.73 & 31.67/0.8602/31.38 & \textbf{32.05/0.8676/31.88} \\ \cline{2-8}
 & 20 & 32.06/0.8826/29.43 & 33.79/0.9081/33.42 & 33.98/0.9111/33.64 & 34.26/0.9156/34.03 & 34.12/0.9136/33.84 & \textbf{34.51/0.9200/34.12} \\ \cline{2-8}
 & 30 & 33.40/0.9110/30.86 & 35.16/0.9304/34.70 & 35.35/0.9325/34.95 & 35.54/0.9361/35.23 & 35.46/0.9349/35.14 & \textbf{35.85/0.9400/35.40} \\ \hline
\end{tabular} \end{table*}

\begin{table*}[t!] \small \caption{Performance comparisons of various methods based on the grayscale stereo image pairs from Flickr1024~\cite{wang2019flickr1024}, KITTI2012~\cite{geiger2012we}, KITTI2015~\cite{menze2015object} and Middlebury~\cite{scharstein2014high}. Here, PSNR/SSIM/PSNR-B values achieved on the stereo image pairs (\emph{i.e., (Left + Right) /2}) are reported. The best results are boldfaced.}\label{tab:both}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline
Dataset & QF & JPEG & DnCNN~\cite{zhang2017beyond} & DCSC~\cite{fu2019jpeg} & QGCN~\cite{li2020learning} & iPASSR~\cite{wang2021symmetric} & \textbf{PTNet} \\ \hline
\multirow{3}{*}{Flickr1024} & 10 & 26.00/0.7860/23.74 & 27.41/0.8223/27.03 & 27.57/0.8279/27.16 & 27.74/0.8345/27.44 & 27.78/0.8335/27.22 & \textbf{28.07/0.8397/27.55} \\ \cline{2-8}
 & 20 & 28.09/0.8607/25.76 & 29.67/0.8889/29.06 & 29.85/0.8920/29.17 & 30.10/0.8970/29.53 & 30.13/0.8967/29.43 & \textbf{30.41/0.9011/29.61} \\ \cline{2-8}
 & 30 & 29.43/0.8933/27.15 & 31.09/0.9166/30.40 & 31.26/0.9185/30.49 & 31.59/0.9240/30.86 & 31.58/0.9227/30.77 & \textbf{31.83/0.9259/30.90} \\ \hline
\multirow{3}{*}{KITTI2012} & 10 & 29.12/0.8267/26.33 & 30.64/0.8641/30.33 & 30.81/0.8687/30.44 & 31.00/0.8732/30.75 & 30.83/0.8693/30.35 & \textbf{31.23/0.8761/30.83} \\ \cline{2-8}
 & 20 & 31.52/0.8897/28.71 & 33.05/0.9131/32.51 & 33.19/0.9154/32.65 & 33.36/0.9180/32.99 & 33.24/0.9166/32.78 & \textbf{33.61/0.9209/33.01} \\ \cline{2-8}
 & 30 & 32.85/0.9149/30.08 & 34.40/0.9327/33.72 & 34.55/0.9343/33.89 & 34.71/0.9374/\textbf{34.18} & 34.61/0.9353/34.02 & \textbf{34.92/0.9384/34.18} \\ \hline
\multirow{3}{*}{KITTI2015} & 10 & 29.72/0.8314/26.57 & 31.37/0.8708/31.04 & 31.54/0.8740/31.12 & 31.82/0.8807/31.50 & 31.53/0.8760/31.00 & \textbf{31.97/0.8831/31.52} \\ \cline{2-8}
 & 20 & 32.55/0.9008/29.20 & 34.16/0.9245/33.54 & 34.30/0.9268/33.67 & 34.57/0.9292/34.01 & 34.35/0.9278/33.81 & \textbf{34.73/0.9319/34.02} \\ \cline{2-8}
 & 30 & 34.13/0.9279/30.73 & 35.76/0.9455/34.93 & 35.90/0.9469/35.12 & 36.13/0.9490/\textbf{35.46} & 35.96/0.9478/35.30 & \textbf{36.28/0.9507/}35.39 \\ \hline
\multirow{3}{*}{Middlebury} & 10 & 29.62/0.8105/27.02 & 31.32/0.8518/31.14 & 31.53/0.8572/31.25 & 31.74/0.8624/31.48 & 31.62/0.8594/31.26 & \textbf{32.03/0.8672/31.75} \\ \cline{2-8}
 & 20 & 32.03/0.8827/29.35 & 33.76/0.9084/33.30 & 33.96/0.9113/33.48 & 34.22/0.9164/33.71 & 34.10/0.9140/33.69 & \textbf{34.51/0.9207/33.97} \\ \cline{2-8}
 & 30 & 33.38/0.9112/30.76 & 35.15/0.9310/34.57 & 35.35/0.9331/34.79 & 35.57/0.9368/35.07 & 35.48/0.9356/35.01 & \textbf{35.88/0.9409/35.25} \\ \hline
\end{tabular} \end{table*}

\section{Experiments}
\subsection{Datasets and Evaluation}
Following iPASSR~\cite{wang2021symmetric}, we also use 60 images from Middlebury~\cite{scharstein2014high} and 800 images from Flickr1024~\cite{wang2019flickr1024} as the training dataset. For testing, we adopt 5 images from Middlebury, 20 images from KITTI 2012~\cite{geiger2012we}, 20 images from KITTI 2015~\cite{menze2015object}, and 112 images from Flickr1024 as the test dataset, which is the same split as iPASSR. To train the proposed PTNet, the images are first cropped into patches of size $64\times 160$ with a stride of 20. These patches are then processed by the JPEG compression algorithm with a random quality factor $QF\in[10,30]$ to obtain the corresponding compressed image patches. In this paper, the Python Imaging Library (PIL) is adopted to encode images into the JPEG format, since it employs the standard quantization table proposed by the Independent JPEG Group. In addition, these patches are randomly flipped horizontally and vertically for data augmentation. We only focus on the restoration of the luminance channel (in YCbCr space) in this paper. Following~\cite{fu2019jpeg,li2020learning}, we apply PSNR, structural similarity (SSIM)~\cite{wang2004image} and PSNR-B~\cite{yim2010quality} to evaluate the model performance. Referring to iPASSR~\cite{wang2021symmetric}, we report PSNR, SSIM and PSNR-B scores on the left view (\emph{i.e., Left}) and the average PSNR, SSIM and PSNR-B scores on stereo image pairs (\emph{i.e., (Left + Right) /2}).
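The patch preparation can be sketched with PIL as follows. The random crop here is a simplification of the stride-20 cropping described above, and in practice the same crop window and quality factor must be applied to both views of a pair:

\begin{verbatim}
import io
import random
from PIL import Image

def make_training_pair(img_path, qf_range=(10, 30), size=(160, 64)):
    """Return (compressed, ground-truth) luminance patches; size is (W, H)."""
    y = Image.open(img_path).convert("YCbCr").split()[0]   # luminance channel
    x0 = random.randrange(0, y.width - size[0] + 1)
    y0 = random.randrange(0, y.height - size[1] + 1)
    gt = y.crop((x0, y0, x0 + size[0], y0 + size[1]))

    buf = io.BytesIO()                                     # JPEG round trip via PIL,
    gt.save(buf, format="JPEG", quality=random.randint(*qf_range))
    return Image.open(buf), gt                             # standard IJG tables
\end{verbatim}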
\subsection{Comparison against SOTA Methods}
In this section, the proposed PTNet and the state-of-the-art algorithms, including DnCNN~\cite{zhang2017beyond}, DCSC~\cite{fu2019jpeg}, QGCN~\cite{li2020learning} and iPASSR~\cite{wang2021symmetric}, are compared quantitatively and qualitatively. DnCNN, DCSC and QGCN are single image deblocking methods, and iPASSR is a high-performance stereo image super-resolution method. To conduct a fair comparison, DnCNN and QGCN are fine-tuned on the training dataset for 10 epochs. We use the pre-trained model of DCSC to test its performance due to the unavailability of its training code. For iPASSR, we set its scale factor to 1 and use the luminance channel as input, so that iPASSR can be trained on the training dataset for stereo image deblocking.

\textbf{Quantitative results.} Tables \ref{tab:left} and \ref{tab:both} show the quantitative results on the four datasets with JPEG QFs of 10, 20 and 30. Specifically, Table \ref{tab:left} shows the performance of all tested algorithms on the left view. It can be found that the proposed PTNet achieves the best performance at all JPEG QFs. Compared with the single image deblocking methods, our PTNet achieves a significant performance improvement. The main reason is that PTNet makes full use of the information of the two views and thus achieves better deblocking results. Although iPASSR also takes the information of two views as input, it does not take into account that compression artifacts destroy stereo correspondence, and its inaccurate feature warping leads to poor performance. In contrast, our PTNet still performs well in the presence of compression artifacts. To comprehensively evaluate the performance of stereo image deblocking, we also report the average performance on the two views, and the experimental results in Table \ref{tab:both} confirm that our PTNet outperforms the other compared methods.

\begin{figure*}[t] \begin{center} \includegraphics[width=0.98\linewidth]{./image/visual_results_00.png} \end{center} \caption{ Visual comparisons on the images '0003' (a) and '0043' (b) from Flickr1024~\cite{wang2019flickr1024} at QF 10. The proposed PTNet is compared with the state-of-the-art methods including DnCNN~\cite{zhang2017beyond}, DCSC~\cite{fu2019jpeg}, QGCN~\cite{li2020learning} and iPASSR~\cite{wang2021symmetric}. The first row shows the deblocking results on the left view, while the second row shows the deblocking results on the right view. The number below each image patch represents the PSNR value. Note that our PTNet produces better results than the other methods.} \label{fig:visual_1} \end{figure*}
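For reference, the two reporting protocols used in the tables can be sketched as follows (PSNR shown; SSIM and PSNR-B follow the same per-view-then-average pattern):

\begin{verbatim}
import numpy as np

def psnr(ref, test):
    """PSNR for 8-bit grayscale arrays."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# "Left": score on the left view only.
# "(Left + Right) / 2": average of the two per-view scores.
def stereo_psnr(gt_l, out_l, gt_r, out_r):
    return 0.5 * (psnr(gt_l, out_l) + psnr(gt_r, out_r))
\end{verbatim}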
\begin{table*}[t!] \small \caption{Performance comparisons between variants of our PTNet based on the grayscale images from Flickr1024~\cite{wang2019flickr1024}, KITTI2012~\cite{geiger2012we}, KITTI2015~\cite{menze2015object} and Middlebury~\cite{scharstein2014high}. Here, PSNR/SSIM/PSNR-B values achieved on the left images (\emph{i.e., Left}) are reported. The best results are boldfaced.}\label{tab:ablation}
\begin{tabular}{c|c|c|c|c|c} \hline
Dataset & QF & w/o biPTM \& CCFM & w/o CCFM & w/o CTF & \textbf{PTNet} \\ \hline
\multirow{3}{*}{Flickr1024} & 10 & 27.85/0.8347/27.34 & 27.98/0.8395/27.49 & 28.01/0.8392/27.54 & \textbf{28.05/0.8403/27.54} \\ \cline{2-6}
 & 20 & 30.17/0.8975/29.39 & 30.32/0.9009/29.55 & 30.35/0.9011/29.58 & \textbf{30.39/0.9017/29.59} \\ \cline{2-6}
 & 30 & 31.62/0.9233/30.68 & 31.76/0.9259/30.85 & 31.78/0.9258/30.89 & \textbf{31.83/0.9264/30.89} \\ \hline
\multirow{3}{*}{KITTI2012} & 10 & 31.14/0.8735/30.79 & 31.36/0.8778/31.02 & 31.39/0.8781/\textbf{31.05} & \textbf{31.43/0.8786/31.05} \\ \cline{2-6}
 & 20 & 33.57/0.9196/33.04 & 33.76/0.9225/33.26 & 33.82/0.9228/33.29 & \textbf{33.85/0.9231/33.30} \\ \cline{2-6}
 & 30 & 34.93/0.9379/34.23 & 35.11/0.9401/34.45 & 35.13/0.9402/34.43 & \textbf{35.18/0.9404/34.48} \\ \hline
\multirow{3}{*}{KITTI2015} & 10 & 31.19/0.8687/30.74 & 31.38/0.8726/30.90 & 31.39/0.8724/30.90 & \textbf{31.42/0.8730/30.92} \\ \cline{2-6}
 & 20 & 33.85/0.9218/33.11 & 34.00/0.9241/33.24 & 34.03/0.9241/33.24 & \textbf{34.07/0.9245/33.27} \\ \cline{2-6}
 & 30 & 35.39/0.9430/34.44 & 35.50/0.9445/34.57 & 35.52/0.9444/34.54 & \textbf{35.57/0.9449/34.58} \\ \hline
\multirow{3}{*}{Middlebury} & 10 & 31.77/0.8614/31.59 & 31.99/0.8669/31.81 & 32.00/0.8666/31.82 & \textbf{32.05/0.8676/31.88} \\ \cline{2-6}
 & 20 & 34.19/0.9143/33.81 & 34.45/0.9193/\textbf{34.12} & 34.45/0.9193/34.08 & \textbf{34.51/0.9200/34.12} \\ \cline{2-6}
 & 30 & 35.51/0.9351/35.06 & 35.79/0.9394/35.39 & 35.80/0.9394/35.39 & \textbf{35.85/0.9400/35.40} \\ \hline \hline
\emph{Params.} & - & 0.90 M & 0.90 M & 0.91 M & 0.91 M \\ \hline
\end{tabular} \end{table*}

\textbf{Qualitative results.} The proposed PTNet produces deblocking results with high perceptual quality; the qualitative comparison results are shown in Fig.~\ref{fig:visual_1}. Compared to the other methods, our PTNet removes compression artifacts more effectively and recovers high-fidelity textures. The main reason is that PTNet makes good use of the additional information provided by the second view. Although iPASSR also utilizes the information from two views for stereo image deblocking, its reconstructed results are more blurry than ours, because inaccurate pixel-level stereo matching may affect the performance of feature fusion.

\subsection{Ablation Study}
\begin{figure*}[t] \begin{center} \includegraphics[width=0.92\linewidth]{./image/feature_analysis_00.png} \end{center} \caption{ Visualization of feature maps generated by our PTNet on the image 'piano' from Middlebury~\cite{scharstein2014high}. Since PTNet is symmetric, we only show feature matching from the left view to the right view. The first column shows the compressed images at QF 10. The first row shows the feature maps of the first stage of the cross-view interaction, including $F_L$, $F^1_{L\rightarrow R}$, $F_R$ and $C^1_{L\rightarrow R}$. The second row shows the feature maps of the second stage, including $F^1_L$, $F^2_{L\rightarrow R}$, $F^1_R$ and $C^2_{L\rightarrow R}$. Best viewed zoomed in.} \label{fig:visualization} \end{figure*}
\begin{table}[t!] \small \caption{Performance comparisons between iPASSR and iPASSR+. PSNR/SSIM/PSNR-B values on the left images are reported.}\label{tab:iPASSR}
\begin{tabular}{c|c|c|c} \hline
Dataset & QF & iPASSR & iPASSR+ \\ \hline
\multirow{3}{*}{Flickr1024} & 10 & 27.76/0.8342/27.21 & 27.92/0.8361/27.48 \\ \cline{2-4}
 & 20 & 30.12/0.8973/29.42 & 30.30/0.8998/29.57 \\ \cline{2-4}
 & 30 & 31.58/0.9232/30.77 & 31.74/0.9250/30.85 \\ \hline
\multirow{3}{*}{KITTI2012} & 10 & 31.01/0.8716/30.55 & 31.26/0.8751/30.93 \\ \cline{2-4}
 & 20 & 33.46/0.9186/33.04 & 33.74/0.9214/33.25 \\ \cline{2-4}
 & 30 & 34.85/0.9372/34.30 & 35.07/0.9393/34.42 \\ \hline
\multirow{3}{*}{KITTI2015} & 10 & 31.05/0.8669/30.48 & 31.30/0.8697/30.85 \\ \cline{2-4}
 & 20 & 33.77/0.9211/33.15 & 33.98/0.9230/33.24 \\ \cline{2-4}
 & 30 & 35.32/0.9424/34.58 & 35.48/0.9437/34.52 \\ \hline
\multirow{3}{*}{Middlebury} & 10 & 31.67/0.8602/31.38 & 31.92/0.8641/31.75 \\ \cline{2-4}
 & 20 & 34.12/0.9136/33.84 & 34.42/0.9182/34.10 \\ \cline{2-4}
 & 30 & 35.46/0.9349/35.14 & 35.76/0.9384/35.36 \\ \hline
\end{tabular} \end{table}

In this section, we study and analyze the contributions of different modules to our PTNet, including the bi-directional parallax transformer module (biPTM), the confidence-based cross-view fusion module (CCFM) and the coarse-to-fine (CTF) structure. To this end, we remove these modules from our PTNet separately. Since the confidence maps are not available when biPTM is removed, we remove both biPTM and CCFM to verify the effectiveness of biPTM. We also add several RDBs and convolutional layers to these variants of our PTNet to keep the model sizes similar. We test the performance of PTNet without biPTM and CCFM (w/o biPTM \& CCFM), PTNet without CCFM (w/o CCFM) and PTNet without CTF (w/o CTF). Specifically, w/o biPTM \& CCFM simply concatenates the features of the two views for fusion, w/o CCFM removes the feature weighting calculation, and w/o CTF only uses one stage for cross-view interaction.

The experimental results are shown in Table~\ref{tab:ablation}. It can be found that the performance of all three variants decreases compared with PTNet on all datasets. This confirms that our proposed modules can effectively improve the performance of the model for stereo image deblocking. Note that our PTNet achieves a significant performance improvement compared to w/o biPTM \& CCFM, which means that biPTM contributes the most to the improvement of model performance. In addition, we also conduct a comparative experiment to further confirm that our biPTM can indeed improve the performance of stereo image deblocking. We replace the view alignment module in iPASSR with biPTM, and name this model iPASSR+. As shown in Table~\ref{tab:iPASSR}, the performance of iPASSR+ is significantly improved on all datasets. This demonstrates the effectiveness of our biPTM for stereo image deblocking.

\subsection{Visualization Results}
To show more intuitively that our biPTM achieves good cross-view feature matching, we visualize the features of both stages of biPTM, as shown in Fig.~\ref{fig:visualization}. Firstly, we can find that $F_L$ and $F_R$ are not aligned, and concatenating them for fusion does not achieve good cross-view interaction, which is confirmed by the ablation experiments. Our biPTM can provide effective converted features $F^1_{L\rightarrow R}$ for $F_R$ even when the images contain significant artifacts. Specifically, in the corresponding regions, $F^1_{L\rightarrow R}$ has texture features that match $F_R$, so better cross-view feature fusion can be achieved. Similar conclusions can be drawn for the second stage. Secondly, the confidence map $C^1_{L\rightarrow R}$ shows small confidence values at the boundaries, which is consistent with the observation of the input stereo image pair. Note that the low-confidence regions of $C^2_{L\rightarrow R}$ become smaller in the second stage, which also verifies that the features enhanced by the first stage enable more reliable feature matching.
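For completeness, one way to reproduce such confidence-map visualizations is sketched below; \texttt{c1} and \texttt{c2} are assumed to be the stage-1 and stage-2 confidence maps as 2-D arrays in $[0,1]$:

\begin{verbatim}
import matplotlib.pyplot as plt

def show_confidence(c1, c2):
    fig, axes = plt.subplots(1, 2, figsize=(8, 3))
    titles = (r"$C^1_{L \to R}$", r"$C^2_{L \to R}$")
    for ax, c, title in zip(axes, (c1, c2), titles):
        im = ax.imshow(c, cmap="viridis", vmin=0, vmax=1)
        ax.set_title(title)
        ax.axis("off")
    fig.colorbar(im, ax=axes.ravel().tolist())   # shared color scale
    plt.show()
\end{verbatim}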
\section{Conclusion}
In this paper, we investigate the problem of stereo image JPEG artifacts removal for the first time and provide an in-depth analysis. To this end, we propose a novel parallax transformer network (PTNet) to simultaneously remove compression artifacts from two views. Specifically, we design a symmetric bi-directional parallax transformer module (biPTM) to compute the relevance between the features of the two views and further match these features, enabling cross-view interaction. Due to the issues of occlusions and boundaries, a confidence-based cross-view fusion module (CCFM) is proposed to effectively integrate cross-view information. Experimental results demonstrate that our PTNet outperforms the tested SOTA methods, and extensive ablation studies verify the effectiveness of our proposed modules. Furthermore, the proposed method can be feasibly extended to other stereo image processing tasks, such as stereo image deblurring. In the future, we will further explore its potential for different stereo image processing tasks.

\bibliographystyle{ACMMM}

\section{Introduction}
\begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{./image/shouye_00.png} \end{center} \caption{ The deblocking results on image '0007' from Flickr1024~\cite{wang2019flickr1024} at quality factor 10. On the left is the ground-truth (GT). The first row on the right shows the corresponding left and right compressed image patches. Compared to the state-of-the-art methods (DnCNN~\cite{zhang2017beyond}, QGCN~\cite{li2020learning} and iPASSR~\cite{wang2021symmetric}), our PTNet generates better results due to the effective use of information from both views. } \label{fig:shouye} \end{figure}

With recent advances in dual cameras, stereo images have shown great commercial value in many practical applications, including smartphones and autonomous vehicles. Usually, stereo images require a large number of bits to store the information of the two views, resulting in challenges for storage and transmission. Image compression algorithms help to reduce the data size of the original digital stereo images, but inevitably introduce complex compression noise, such as blocking artifacts~\cite{dong2015compression}. This may degrade both the visual quality and the performance of subsequent vision tasks. Therefore, exploring methods for compressed stereo image artifacts removal is urgently needed, especially for the widely used JPEG format.

JPEG is one of the most widely used image compression algorithms, and its processing procedure consists of four steps: block division, discrete cosine transform (DCT), quantization and entropy coding. The block-based JPEG compression algorithm ignores spatial correlations between image blocks, which results in image discontinuities at block boundaries. To cope with this problem, the early approaches~\cite{chen2001adaptive,foi2007pointwise,liu2018graph} focus on filter design or employ various optimizations, but still tend to blur the images.
Some deep learning-based approaches~\cite{dong2015compression,fu2019jpeg,zhang2017beyond,zhang2019residual,jiang2021towards} with novel architectures attempt to remove the compression artifacts by learning a nonlinear mapping between the compressed and original images. The above methods are all designed for single image deblocking, and to the best of our knowledge, no work has been conducted on stereo image deblocking. While these algorithms can also be used to recover the left and right images independently, their performance may be severely limited due to the lack of additional information from the other view. In particular, some details lost in one view may still exist in the other view (as shown in Fig.~\ref{fig:shouye}).

Recently, some methods~\cite{jeon2018enhancing,wang2019learning,song2020stereoscopic,wang2021symmetric} have been proposed for stereo image super-resolution, which is the research topic most relevant to the stereo image deblocking task. Wang \emph{et al.}~\cite{wang2019learning} design a parallax-attention module to handle stereo images with large disparity variations. The well-designed parallax-attention module utilizes the predicted transformation matrix to achieve view alignment. Some follow-up methods~\cite{song2020stereoscopic,wang2021symmetric} improve the stereo correspondence performance by refining the parallax attention module. Although these methods achieve good results in stereo image super-resolution, they perform poorly in stereo image deblocking. The main reason is that the compression artifacts destroy the stereo correspondence between the two views, making pixel-level alignment difficult. Therefore, we consider using the transformer to perform a robust matching search on the reference view instead of pixel-level alignment.

In this paper, we propose a novel parallax transformer network (PTNet) to integrate the information from stereo image pairs for stereo image JPEG artifacts removal. The overall framework is shown in Fig.~\ref{fig:framework}. We design a symmetric bi-directional parallax transformer module (biPTM) to compute the relevance between left and right image features and further match these features, enabling cross-view interaction. Specifically, for any region in the target view, we use the mutual attention mechanism to extract the region features with the highest relevance in the reference view, and use them to enhance the target region. Note that the goal of biPTM is to find the required reference features for the target regions rather than to perform view alignment, so it performs well even with significant disparities and compression artifacts. Considering the issue of occlusion, a confidence-based cross-view fusion module (CCFM) is proposed to effectively integrate cross-view information. To achieve better cross-view interaction, we adopt a coarse-to-fine design that utilizes the enhanced features for further cross-view feature matching. To sum up, our main contributions are as follows:
\begin{itemize}
\item We propose a novel parallax transformer network for stereo image JPEG artifacts removal, which exploits the information complementarity between the left and right compressed images to achieve better stereo image deblocking. To the best of our knowledge, this is the first effort to address this task.
\item A novel symmetric bi-directional parallax transformer module is proposed to implement cross-view interaction, which is based on the mutual attention mechanism and achieves effective feature matching.
\item Considering the occlusion issues, we propose a confidence-based cross-view fusion module that enables effective feature fusion for both views.
\item Our approach achieves state-of-the-art performance compared to recent single-image JPEG artifacts removal methods and a stereo image super-resolution method.
\end{itemize}

\begin{figure*}[t] \begin{center} \includegraphics[width=1\linewidth]{./image/overall-framework_00.png} \end{center} \caption{ The architecture of the proposed PTNet. The proposed biPTM is designed to achieve cross-view interaction, which is based on the mutual attention mechanism between the two views. CCFM is designed to effectively fuse the cross-view features, which helps to solve the issues of occlusions and boundaries. In addition, MSB is a well-designed multi-scale feature extraction module, and RDB is the residual dense block~\cite{zhang2018residual}. The details of MSB and RDB can be found in the appendix.} \label{fig:framework} \end{figure*}

\section{Related Work}
\subsection{JPEG Artifacts Removal}
JPEG artifacts removal has been studied for a long time, and notable progress has been achieved in the past few years. Early methods~\cite{zhang2013compression,chen2001adaptive,foi2007pointwise} attempt to remove the compression artifacts by designing specific filters. Others treat JPEG artifacts removal as an ill-posed inverse problem and solve it using sparse representation~\cite{chang2013reducing}, graph-based models~\cite{liu2018graph} and regression trees~\cite{jancsary2012loss}. Witnessing the recent success of convolutional neural networks (CNNs) in most computer vision tasks~\cite{he2016deep,long2015fully}, learning-based methods~\cite{dong2015compression,fu2019jpeg,zhang2017beyond,zhang2019residual,li2020learning,wang2020jpeg,Fu2021TNNS,zhen2020CSVT,galteri2019deep,jiang2021towards,9607618,zhang2020residual} have attracted much attention and have been explored for image deblocking. Zhang~\emph{et al.}~\cite{zhang2017beyond} utilize residual learning~\cite{he2016deep} and batch normalization~\cite{ioffe2015batch} to speed up the training process as well as boost the deblocking performance. Fu~\emph{et al.}~\cite{fu2019jpeg} design a deep convolutional sparse coding (DCSC) network architecture that effectively reduces JPEG artifacts by using dilated convolutions~\cite{yu2015multi}. QGCN~\cite{li2020learning} is able to handle a wide range of quality factors due to the novel utilization of the quantization tables as part of the training data. However, the existing methods are all designed for single image deblocking, and their performance is limited in stereo deblocking since the additional information from the other view is not exploited. In this paper, we propose a novel parallax transformer network that exploits the information complementarity between the two views to achieve better stereo image deblocking.

\subsection{Stereo Image Super-Resolution}
In recent years, many deep learning-based methods~\cite{jeon2018enhancing,wang2019learning,song2020stereoscopic,wang2021symmetric,ying2020stereo} have been proposed to tackle the problem of stereo image super-resolution and achieve promising results. Wang~\emph{et al.}~\cite{wang2019learning} combine stereo matching and stereo image super-resolution, and propose a parallax attention network named PASSRnet, which copes with the issue of varying parallax. In particular, the proposed parallax-attention network can capture stereo correspondence.
Inspired by~\cite{wang2019learning}, Song~\emph{et al.}~\cite{song2020stereoscopic} propose a self and parallax attention network to simultaneously aggregate the information from a view itself and from the second view. On the basis of PASSRnet, Wang~\emph{et al.}~\cite{wang2021symmetric} make a symmetric design and propose iPASSR, which can super-resolve both views within a single inference. These parallax attention-based methods all attempt to capture the stereo correspondence and warp the features of the second view to the target view at the pixel level, thereby improving the super-resolution performance of the target view. However, the above methods are not suitable for the stereo image deblocking task and show poor performance on it. The main reason is that the compression artifacts destroy the original texture information of the image, which makes pixel-level view alignment difficult. As shown in Fig.~\ref{fig:motivation}, the matching regions also show different textures after being compressed. Unlike these methods, our method attempts to find the most relevant features in both views via robust transformer-based matching. In particular, even for occlusions and boundaries, we can find the most relevant matching features and use a confidence-based weighting method for feature fusion.
On the right of Fig.~\ref{fig:motivation}, we provide a zoomed-in view of the region (first row) and its corresponding ground-truth (GT) (fourth row). We find that although the GT patches of the left and right view are similar, their corresponding compressed patches have different details. Specifically, the letter N in the left patch is clearer while the letter A in the right patch is clearer. This inspires us to attempt using the information of two views simultaneously for stereo image deblocking, since the information of the two views can complement each other. There are two main reasons for this phenomenon: 1) Existence of parallax between two views. This causes the matching regions of two views to be similar but not completely consistent. 2) Block-based compression processing. The JPEG compression algorithm uses $8\times8$ blocks as the basic processing unit, which may cause overlaps. For example, a letter falls in two processing units simultaneously in one view, but only exists in one unit in another view. These two reasons cause the matching regions to show different degradations when they are JPEG compressed. Therefore, the information of two views are complementary. Benefiting from the binocular information, our results may achieve better results than single-image deblocking algorithms, as shown in Fig.~\ref{fig:motivation}. \subsection{Overview of Our PTNet} The goal of our PTNet is to reconstruct the deblocking results ($I^{d}_{L}$, $I^{d}_{R}$) from a JPEG-compressed stereo image pair ($I^{c}_{L}$, $I^{c}_{R}$), aiming to keep deblocking results ($I^{d}_{L}$, $I^{d}_{R}$) and the corresponding uncompressed stereo image pair ($I_{L}$, $I_{R}$) consistent in pixel. The architecture of our PTNet is shown in the Fig.~\ref{fig:framework}, which mainly consists of three parts: feature extraction, cross-view interaction and reconstruction. Note that the entire network is symmetric and the weights of its left and right branches are shared. Specifically, given ($I^{c}_{L}$, $I^{c}_{R}$), we first extract the features ($F_{L}$, $F_{R}$) of the left and right images separately, which are used for subsequent feature matching and reconstruction. This process is denoted as, \begin{equation} F_L=H_{FE}(I^c_L),~F_R=H_{FE}(I^c_R), \end{equation} where $H_{FE}(\cdot)$ represents the feature extraction module. Following the previous works~\cite{ fu2019jpeg,wang2020jpeg}, we design a multi-scale feature extraction block (MSB) to enhance the feature extraction capability of the model. In addition, we also adopt four residual dense blocks (RDBs)~\cite{zhang2018residual} in our model. The details of MSB and RDB can be found in appendix. These extracted features ($F_L$, $F_R$) are then used for feature matching and feature enhancement in the cross-view interaction module. This module adopts the coarse-to-fine design and is mainly divided into two stages. Each stage consists of one bi-directional parallax transformer module (biPTM) and one confidence-based cross-view fusion module (CCFM). In the first stage, we achieve effective cross-view information interaction, and we further enhance the information interaction in the second stage. Especially, since the first stage utilizes the binocular information to enhance the features of two views, the second stage can achieve more accurate feature matching. 
This can be expressed as, \begin{equation} F^1_L,F^1_R=H_{CVI^1}(F_L,~F_R),~F^2_L,F^2_R=H_{CVI^2}(F^1_L,~F^1_R), \end{equation} where $H_{CVI^1}(\cdot)$ and $H_{CVI^2}(\cdot)$ stand for the functions of two stages in the cross-view interaction module respectively. The details of biPTM and CCFM will be explained in later sections. Finally, these features ($F^2_L$, $F^2_R$) are used in the reconstruction module to generate our deblocking results. This module is mainly composed of four RDBs. Aiming to reconstruct better results, we also add a global residual design. This can be expressed as \begin{equation} I^{d}_{L}, I^{d}_{R}=H_{R}(F^2_L,~F^2_R,~I^{c}_{L},~I^{c}_{R}), \end{equation} where $H_{R}(\cdot)$ represents the reconstruction module. \subsection{Bi-Directional Parallax Transformer} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{./image/parallax-transformer_00.png} \end{center} \caption{Architecture of the proposed bi-directional parallax transformer module (biPTM). $F_L$ and $F_R$ represent the feature maps of left view and right view. $F_L\downarrow$ and $F_R\downarrow$ are obtained by downsampling $F_L$ and $F_R$. $M_{L\rightarrow R}$ and $M_{R\rightarrow L}$ indicate the hard attention maps, which are computed from relevance calculation module and used to match feature maps of different views. $F_{L\rightarrow R}$ and $F_{R\rightarrow L}$ are the converted features. $C_{L\rightarrow R}$ and $C_{R\rightarrow L}$ are the corresponding confidence maps.} \label{fig:transformer} \end{figure} The compression artifacts cause difficulties in pixel-level view alignment, and inaccurate alignment may affect the performance of stereo image deblocking. Therefore, we consider finding the required reference features for the target region without view alignment. We utilize the mutual attention mechanism to match features with similar textures between different views. To this end, a symmetric bi-directional parallax transformer module (biPTM) is proposed, which is shown in Fig.~\ref{fig:transformer}. Our biPTM takes the features of the left and right view as input, and outputs the cross-view converted features and their confidence maps. Note that the cross-view conversion of our two-view features is symmetric. Here, we introduce the calculation process of the feature conversion from the left view to the right view in detail. Firstly, the left and right image features ($F_L$, $F_R$) are downsampled by a factor of 4, which can effectively reduce the calculation amount of the module. We make the three basic elements of the attention mechanism inside a transformer as \begin{equation} Q=F_R\downarrow,~K=F_L\downarrow,~V=F_L, \end{equation} where Q, K, V represent query, key and value respectively. Q and K are unfolded into patches and normalized, denoted as \begin{equation} \bar{q}_i=\frac{q_i}{||q_i||}~(i\in[1,H_{F_R\downarrow}\times W_{F_R\downarrow}]), \end{equation} \begin{equation} \bar{k}_j=\frac{k_j}{||k_j||}~(j\in[1,H_{F_L\downarrow}\times W_{F_L\downarrow}]), \end{equation} where $H_{F_R}$ and $W_{F_R}$ represent height and width of $F_R$, $H_{F_L}$ and $W_{F_L}$ represent height and width of $F_L$, respectively. Then we calculate the relevance R between the left and right features ($F_L$, $F_R$) by estimating the similarity between Q and K in the relevance calculation module. This can be expressed as, \begin{equation} R=Q\cdot K^T \end{equation} where R consists of $i\times j$ probability values $r_{ij}$. 
After that, we use a hard attention mechanism to weight $V$ for each query $q_i$ based on $R$. Therefore, only the most relevant features in $V$ are converted for each query $q_i$ by using the hard attention mechanism. The hard attention map $M_{L\rightarrow R}$ can be obtained by finding the maximum probability of $R$ in the $j$ dimension. This can be expressed as, \begin{equation} m_i = \underset{j}{\arg \max} r_{ij},~c_i = \underset{j}{\max} r_{ij}, \end{equation} where the value of $m_i$ in $M_{L\rightarrow R}$ is a coordinate index, which means the most relevant position in $F_L$ corresponds to the $i^{th}$ position in $F_R$, the value of $c_i$ is the probability value of $m_i$. Then we unfold the $V$ into patches, and each patch is four times the size of $q_i$, denoted as $v_i~(i\in[1,H_{F_R\downarrow}\times W_{F_R\downarrow}])$. Based on the obtained $M_{L\rightarrow R}$, an index selection operation is used to process $v_i$ to obtain the converted patch $z_i$, denoted as $z_i = v_{m_i}$. Finally, the converted patch $z_i$ is folded to generate the converted features $F_{L\rightarrow R}$. Since the matching probability value of occlusions and boundaries will be relatively low, the probability value $c_i$ can be used to generate the confidence map $C_{L\rightarrow R}$ by using folding operation. Similarly, we can obtain $F_{R\rightarrow L}$ and $C_{R\rightarrow L}$ by resetting Q, K and V as, \begin{equation} Q=F_L\downarrow,~K=F_R\downarrow,~V=F_R. \end{equation} To simplify the calculation, we obtain the corresponding relevance by transposing the previously obtained $R$. \subsection{Cross-View Feature Fusion} \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{./image/fusion-block_00.png} \end{center} \caption{ Architecture of the proposed confidence-based cross-view fusion module (CCFM). $F_L$ and $F_{R\rightarrow L}$ represent the feature maps of left view and the converted feature maps of right view. $C_{R_L}$ stands for the confidence map of $F_{R_L}$. $F^{'}_{L}$ represents the fused features. RDB is the residual dense block~\cite{zhang2018residual}, and CA~\cite{hu2018squeeze} represents the channel attention module.} \label{fig:fusion} \end{figure} Due to the issues of occlusions and boundaries in stereo image processing, these occlusion and boundary regions do not match well with another view. To address this problem, we propose a confidence-based cross-view fusion module (CCFM) to achieve effective cross-view feature fusion, in which the cross-view features are weighted with the confidence maps produced by biPTM. The details of CCFM are shown in Fig.~\ref{fig:fusion}. Note that the weights of CCFM are shared, and the corresponding calculation process is symmetric in the left and right branches. Here, we introduce the fusion process of $F_L$ and $F^1_{R\rightarrow L}$ in detail. First, $F_L$ is concatenated with $F^1_{R\rightarrow L}$ and fed into one RDB~\cite{zhang2018residual} for initial feature fusion. We consider that regions with high confidence are more inclined to adopt converted features $F^1_{R\rightarrow L}$, and regions with low confidence adopt the features of the target view $F_L$. Therefore, a confidence-based weighting method is designed to fuse $F_L$ and $F^1_{R\rightarrow L}$. This can be expressed as, \begin{equation} F^{1'}_{R\rightarrow L}=C^1_{R\rightarrow L}\odot f_{RDB}([F_L, F^1_{R\rightarrow L}])+(1-C^1_{R\rightarrow L})\odot F_L \end{equation} where $f_{RDB}$ represents the function of RDB. 
With the help of this confidence-based weighting method, occluded regions of converted features $F^1_{R\rightarrow L}$ can be filled with the corresponding features $F_L$ from the target view, leading to continuous spatial distributions. Finally, $F^{1'}_{R\rightarrow L}$ is concatenated with $F_L$ again, and then fed to a channel attention layer (CA)~\cite{hu2018squeeze} and a convolution layer to generate the final fused features $F^1_L$. Similarly, we can obtain $F^1_R$, $F^2_L$ and $F^2_R$ by following the same calculation process with different input features. \subsection{Optimization } Given a training dataset with $N$ stereo image pairs~$\{I^i_L,~I^i_R\}^N_{i=1}$, we can obtain its corresponding JPEG-compressed stereo image pairs~$\{I^{c,i}_L,~I^{c,i}_R\}^N_{i=1}$ and the reconstructed results~$\{I^{d,i}_L,~I^{d,i}_R\}^N_{i=1}$. Following the previous works~\cite{li2020learning,fu2019jpeg}, we also adopt the $l_1$~norm for network training, since $l_1$~norm can yield the sharper image results. The loss function is denoted as, \begin{equation} L=\frac{1}{N}\sum_{i=1}^{N}\{||I^i_L-I^{d,i}_L||_1+||I^i_R-I^{d,i}_R||_1\}. \end{equation} During our PTNet training, Pytorch is used as the training toolbox, and the Adam optimization algorithm [50] with $\beta1 = 0.9$, $\beta2 = 0.999$, and a mini-batch size of 48 is adopted. All the experiments are conducted on three NVIDIA GeForce RTX 1080 Ti. The learning rate is changed from $2\times10^{-4}$ to $2\times10^{-6}$ at the interval of twenty epochs. The training was stopped after 60 epochs since more epochs do not provide further consistent improvement. \begin{table*}[t!] \small \caption{Performance comparisons of various methods based on the grayscale left images from Flickr1024~\cite{wang2019flickr1024}, KITTI2012~\cite{geiger2012we}, KITTI2015~\cite{menze2015object} and Middlebury~\cite{scharstein2014high}. Here, PSNR|SSIM|PSNR-B values achieved on the left images (\emph{i.e., Left}) are reported. The best results are boldfaced. 
}\label{tab:left \begin{tabular}{c|c|c|c|c|c|c|c} \hline Dataset & QF & JPEG & DnCNN~\cite{zhang2017beyond} & DCSC~\cite{fu2019jpeg} & QGCN~\cite{li2020learning} & iPASSR~\cite{wang2021symmetric} & \textbf{PTNet} \\ \hline \multirow{3}{*}{Flickr1024} & 10 & 25.99/0.7868/23.72 & 27.40/0.8231/27.02 & 27.56/0.8287/27.15 &27.72/0.8351/27.43 &27.76/0.8342/27.21 & \textbf{28.05/0.8403/27.54} \\ \cline{2-8} & 20 & 28.08/0.8614/25.75 & 29.66/0.8895/29.05 & 29.84/0.8926/29.16 &30.09/0.8975/29.51 &30.12/0.8973/29.42 & \textbf{30.39/0.9017/29.59} \\ \cline{2-8} & 30 & 29.42/0.8938/27.14 & 31.09/0.9172/30.39 & 31.26/0.9190/30.48 &31.58/0.9243/30.85 &31.58/0.9232/30.77 & \textbf{31.83/0.9264/30.89} \\ \hline \multirow{3}{*}{KITTI2012} & 10 & 29.27/0.8292/26.46 & 30.82/0.8665/30.53 & 30.99/0.8711/30.65 &31.20/0.8759/30.95 &31.01/0.8716/30.55 & \textbf{31.43/0.8786/31.05} \\ \cline{2-8} & 20 & 31.72/0.8919/28.89 & 33.28/0.9152/32.78 & 33.42/0.9175/32.92 &33.60/0.9201/33.26 &33.46/0.9186/33.04 & \textbf{33.85/0.9231/33.30} \\ \cline{2-8} & 30 & 33.07/0.9170/30.27 & 34.65/0.9347/34.02 & 34.80/0.9362/34.18 &34.97/0.9388/34.46 &34.85/0.9372/34.30 & \textbf{35.18/0.9404/34.48} \\ \hline \multirow{3}{*}{KITTI2015} & 10 & 29.31/0.8230/26.22 & 30.90/0.8615/30.53 & 31.06/0.8665/30.60 &31.31/0.8714/\textbf{30.96} &31.05/0.8669/30.48 & \textbf{31.42/0.8730}/30.92 \\ \cline{2-8} & 20 & 32.02/0.8937/28.75 & 33.59/0.9177/32.88 & 33.72/0.9200/33.00 &33.96/0.9226/\textbf{33.27} &33.77/0.9211/33.15 & \textbf{34.07/0.9245/33.27} \\ \cline{2-8} & 30 & 33.54/0.9220/30.23 & 35.13/0.9401/34.20 & 35.26/0.9415/34.39 &35.46/0.9436/\textbf{34.63} &35.32/0.9424/34.58 & \textbf{35.57/0.9449/}34.58 \\ \hline \multirow{3}{*}{Middlebury} & 10 & 29.65/0.8114/27.09 & 31.38/0.8529/31.22 & 31.57/0.8582/31.38 &31.85/0.8643/31.73 &31.67/0.8602/31.38 & \textbf{32.05/0.8676/31.88} \\ \cline{2-8} & 20 & 32.06/0.8826/29.43 & 33.79/0.9081/33.42 & 33.98/0.9111/33.64 &34.26/0.9156/34.03 &34.12/0.9136/33.84 & \textbf{34.51/0.9200/34.12} \\ \cline{2-8} & 30 & 33.40/0.9110/30.86 & 35.16/0.9304/34.70 & 35.35/0.9325/34.95 &35.54/0.9361/35.23 &35.46/0.9349/35.14 & \textbf{35.85/0.9400/35.40} \\ \hline \end{tabular} \end{table*} \begin{table*}[t!] \small \caption{Performance comparisons of various methods based on the grayscale stereo image pairs from Flickr1024~\cite{wang2019flickr1024}, KITTI2012~\cite{geiger2012we}, KITTI2015~\cite{menze2015object} and Middlebury~\cite{scharstein2014high}. Here, PSNR|SSIM|PSNR-B values achieved on the stereo image pairs (\emph{i.e., (Left + Right) /2}) are reported. The best results are boldfaced. 
}\label{tab:both \begin{tabular}{c|c|c|c|c|c|c|c} \hline Dataset & QF & JPEG & DnCNN~\cite{zhang2017beyond} & DCSC~\cite{fu2019jpeg} & QGCN~\cite{li2020learning} & iPASSR~\cite{wang2021symmetric} & \textbf{PTNet} \\ \hline \multirow{3}{*}{Flickr1024}& 10 & 26.00/0.7860/23.74 & 27.41/0.8223/27.03 & 27.57/0.8279/27.16 &27.74/0.8345/27.44 &27.78/0.8335/27.22 & \textbf{28.07/0.8397/27.55} \\ \cline{2-8} & 20 & 28.09/0.8607/25.76 & 29.67/0.8889/29.06 & 29.85/0.8920/29.17 &30.10/0.8970/29.53 &30.13/0.8967/29.43 & \textbf{30.41/0.9011/29.61} \\ \cline{2-8} & 30 & 29.43/0.8933/27.15 & 31.09/0.9166/30.40 & 31.26/0.9185/30.49 &31.59/0.9240/30.86 &31.58/0.9227/30.77 & \textbf{31.83/0.9259/30.90} \\ \hline \multirow{3}{*}{KITTI2012} & 10 & 29.12/0.8267/26.33 & 30.64/0.8641/30.33 & 30.81/0.8687/30.44 &31.00/0.8732/30.75 &30.83/0.8693/30.35 & \textbf{31.23/0.8761/30.83} \\ \cline{2-8} & 20 & 31.52/0.8897/28.71 & 33.05/0.9131/32.51 & 33.19/0.9154/32.65 &33.36/0.9180/32.99 &33.24/0.9166/32.78 & \textbf{33.61/0.9209/33.01} \\ \cline{2-8} & 30 & 32.85/0.9149/30.08 & 34.40/0.9327/33.72 & 34.55/0.9343/33.89 &34.71/0.9374/\textbf{34.18} &34.61/0.9353/34.02 & \textbf{34.92/0.9384/34.18} \\ \hline \multirow{3}{*}{KITTI2015} & 10 & 29.72/0.8314/26.57 & 31.37/0.8708/31.04 & 31.54/0.8740/31.12 &31.82/0.8807/31.50 &31.53/0.8760/31.00 & \textbf{31.97/0.8831/31.52} \\ \cline{2-8} & 20 & 32.55/0.9008/29.20 & 34.16/0.9245/33.54 & 34.30/0.9268/33.67 &34.57/0.9292/34.01 &34.35/0.9278/33.81 & \textbf{34.73/0.9319/34.02} \\ \cline{2-8} & 30 & 34.13/0.9279/30.73 & 35.76/0.9455/34.93 & 35.90/0.9469/35.12 &36.13/0.9490/\textbf{35.46} &35.96/0.9478/35.30 & \textbf{36.28/0.9507/}35.39 \\ \hline \multirow{3}{*}{Middlebury}& 10 & 29.62/0.8105/27.02 & 31.32/0.8518/31.14 & 31.53/0.8572/31.25 &31.74/0.8624/31.48 &31.62/0.8594/31.26 & \textbf{32.03/0.8672/31.75} \\ \cline{2-8} & 20 & 32.03/0.8827/29.35 & 33.76/0.9084/33.30 & 33.96/0.9113/33.48 &34.22/0.9164/33.71 &34.10/0.9140/33.69 & \textbf{34.51/0.9207/33.97} \\ \cline{2-8} & 30 & 33.38/0.9112/30.76 & 35.15/0.9310/34.57 & 35.35/0.9331/34.79 &35.57/0.9368/35.07 &35.48/0.9356/35.01 & \textbf{35.88/0.9409/35.25} \\ \hline \end{tabular} \end{table*} \section{Experiments} \subsection{Datasets and Evaluation} Following iPASSR~\cite{wang2021symmetric}, we also use 60 images from Middlebury~\cite{scharstein2014high} and 800 images from Flickr1024~\cite{wang2019flickr1024} as the training dataset. For test, we adopt 5 images from Middlebury, 20 images from KITTI 2012~\cite{geiger2012we}, 20 images from KITTI 2015~\cite{menze2015object}, and 112 images from Flickr1024 as the test dataset, which is the same as iPASSR. To train the proposed PTNet, the images are first cropped into patches of size $64\times 160$ with a stride of 20. These patches are then processed by JPEG compression algorithm with a random quality factor $QF\in[10,30]$ to get the corresponding compressed image patches. In this paper, Python Image Library (PIL) is adopted to encode images into JPEG format, since it employs a standard quantization table proposed by the Independent JPEG Group. In addition, these patches are randomly flipped horizontally and vertically for data augmentation. We only focus on the restoration of the luminance channel (in YCrCb space) in this paper. Following~\cite{fu2019jpeg,li2020learning}, we apply the PSNR, structural similarity (SSIM)~\cite{wang2004image}, and PSNR-B~\cite{yim2010quality} to evaluate the model performance. 
Referring to iPASSR~\cite{wang2021symmetric}, we report PSNR, SSIM and PSNR-B scores on the left view (\emph{i.e., Left}) and the average PSNR, SSIM and PSNR-B scores on stereo image pairs (\emph{i.e., (Left + Right) /2}).
\subsection{Comparison against SOTA Methods}
In this section, the proposed PTNet and the state-of-the-art algorithms, including DnCNN~\cite{zhang2017beyond}, DCSC~\cite{fu2019jpeg}, QGCN~\cite{li2020learning} and iPASSR~\cite{wang2021symmetric}, are compared quantitatively and qualitatively. DnCNN, DCSC and QGCN are single image deblocking methods, and iPASSR is a high-performance stereo image super-resolution method. To conduct a fair comparison, DnCNN and QGCN are finetuned on the training dataset for 10 epochs. We use the pre-trained model of DCSC to test its performance due to the unavailability of its training code. For iPASSR, we set its scale factor to 1 and use the luminance channel as input, so that iPASSR can be trained on the training dataset for stereo image deblocking. \textbf{Quantitative results.} Tables \ref{tab:left} and \ref{tab:both} show the quantitative results on the four datasets with JPEG QF 10, 20 and 30. Specifically, Table \ref{tab:left} shows the performance of all tested algorithms on the left view. It can be seen that the proposed PTNet achieves the best performance at all JPEG QFs. Compared with the single image deblocking methods, our PTNet achieves a significant performance improvement. The main reason is that PTNet makes full use of the information of the two views and thus achieves better deblocking results. Although iPASSR also takes the information of two views as input, it does not take into account that compression artifacts corrupt the stereo correspondence, and the resulting inaccurate feature warping leads to poor performance. In contrast, our PTNet still performs well in the presence of compression artifacts. To comprehensively evaluate the performance of stereo image deblocking, we also report the average performance on the two views; the experimental results in Table \ref{tab:both} confirm that our PTNet outperforms the other compared methods.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{./image/visual_results_00.png}
\end{center}
\caption{ Visual comparisons on the images '0003' (a) and '0043' (b) from Flickr1024~\cite{wang2019flickr1024} at QF 10. The proposed PTNet is compared with the state-of-the-art methods including DnCNN~\cite{zhang2017beyond}, DCSC~\cite{fu2019jpeg}, QGCN~\cite{li2020learning} and iPASSR~\cite{wang2021symmetric}. The first row shows the deblocking results on the left view, while the second row shows the deblocking results on the right view. The number below each image patch represents the PSNR value. Note that our PTNet produces better results than the other methods.}
\label{fig:visual_1}
\end{figure*}
\begin{table*}[t!]
\small
\caption{Performance comparisons between variations of our PTNet based on the grayscale images from Flickr1024~\cite{wang2019flickr1024}, KITTI2012~\cite{geiger2012we}, KITTI2015~\cite{menze2015object} and Middlebury~\cite{scharstein2014high}. Here, PSNR|SSIM|PSNR-B values achieved on the left images (\emph{i.e., Left}) are reported. The best results are boldfaced.
}\label{tab:ablation}
\begin{tabular}{c|c|c|c|c|c}
\hline
Dataset & QF & w/o biPTM \& CCFM & w/o CCFM & w/o CTF & \textbf{PTNet} \\ \hline
\multirow{3}{*}{Flickr1024} & 10 & 27.85/0.8347/27.34 & 27.98/0.8395/27.49 & 28.01/0.8392/27.54 & \textbf{28.05/0.8403/27.54} \\ \cline{2-6}
 & 20 & 30.17/0.8975/29.39 & 30.32/0.9009/29.55 & 30.35/0.9011/29.58 & \textbf{30.39/0.9017/29.59} \\ \cline{2-6}
 & 30 & 31.62/0.9233/30.68 & 31.76/0.9259/30.85 & 31.78/0.9258/30.89 & \textbf{31.83/0.9264/30.89} \\ \hline
\multirow{3}{*}{KITTI2012} & 10 & 31.14/0.8735/30.79 & 31.36/0.8778/31.02 & 31.39/0.8781/\textbf{31.05} & \textbf{31.43/0.8786/31.05} \\ \cline{2-6}
 & 20 & 33.57/0.9196/33.04 & 33.76/0.9225/33.26 & 33.82/0.9228/33.29 & \textbf{33.85/0.9231/33.30} \\ \cline{2-6}
 & 30 & 34.93/0.9379/34.23 & 35.11/0.9401/34.45 & 35.13/0.9402/34.43 & \textbf{35.18/0.9404/34.48} \\ \hline
\multirow{3}{*}{KITTI2015} & 10 & 31.19/0.8687/30.74 & 31.38/0.8726/30.90 & 31.39/0.8724/30.90 & \textbf{31.42/0.8730/30.92} \\ \cline{2-6}
 & 20 & 33.85/0.9218/33.11 & 34.00/0.9241/33.24 & 34.03/0.9241/33.24 & \textbf{34.07/0.9245/33.27} \\ \cline{2-6}
 & 30 & 35.39/0.9430/34.44 & 35.50/0.9445/34.57 & 35.52/0.9444/34.54 & \textbf{35.57/0.9449/34.58} \\ \hline
\multirow{3}{*}{Middlebury} & 10 & 31.77/0.8614/31.59 & 31.99/0.8669/31.81 & 32.00/0.8666/31.82 & \textbf{32.05/0.8676/31.88} \\ \cline{2-6}
 & 20 & 34.19/0.9143/33.81 & 34.45/0.9193/\textbf{34.12} & 34.45/0.9193/34.08 & \textbf{34.51/0.9200/34.12} \\ \cline{2-6}
 & 30 & 35.51/0.9351/35.06 & 35.79/0.9394/35.39 & 35.80/0.9394/35.39 & \textbf{35.85/0.9400/35.40} \\ \hline \hline
\emph{Params.} & - & 0.90 M & 0.90 M & 0.91 M & 0.91 M \\ \hline
\end{tabular}
\end{table*}
\textbf{Qualitative results.} The proposed PTNet can produce deblocking results with high perceptual quality, and the qualitative comparison results are shown in Fig.~\ref{fig:visual_1}. Compared to other methods, our PTNet removes compression artifacts more effectively and recovers high-fidelity textures. The main reason is that PTNet makes good use of the additional information provided by the second view. Although iPASSR also utilizes information from two views for stereo image deblocking, its reconstructed results are blurrier than ours, because inaccurate pixel-level stereo matching may affect the performance of feature fusion.
\subsection{Ablation Study}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{./image/feature_analysis_00.png}
\end{center}
\caption{ Visualization of feature maps generated by our PTNet on the image 'piano' from Middlebury~\cite{scharstein2014high}. Since PTNet is symmetric, we only show feature matching from the left view to the right view. The first column shows the compressed images at QF 10. The first row shows the feature maps of the first stage in the cross-view interaction, including $F_L$, $F^1_{L\rightarrow R}$, $F_R$ and $C^1_{L\rightarrow R}$. The second row shows the feature maps of the second stage, including $F^1_L$, $F^2_{L\rightarrow R}$, $F^1_R$ and $C^2_{L\rightarrow R}$. Best viewed zoomed in.}
\label{fig:visualization}
\end{figure*}
\begin{table}[t!]
\small
\caption{Performance comparisons between iPASSR and iPASSR+.
}\label{tab:iPASSR}
\begin{tabular}{c|c|c|c}
\hline
Dataset & QF & iPASSR & iPASSR+ \\ \hline
\multirow{3}{*}{Flickr1024} & 10 & 27.76/0.8342/27.21 & 27.92/0.8361/27.48 \\ \cline{2-4}
 & 20 & 30.12/0.8973/29.42 & 30.30/0.8998/29.57 \\ \cline{2-4}
 & 30 & 31.58/0.9232/30.77 & 31.74/0.9250/30.85 \\ \hline
\multirow{3}{*}{KITTI2012} & 10 & 31.01/0.8716/30.55 & 31.26/0.8751/30.93 \\ \cline{2-4}
 & 20 & 33.46/0.9186/33.04 & 33.74/0.9214/33.25 \\ \cline{2-4}
 & 30 & 34.85/0.9372/34.30 & 35.07/0.9393/34.42 \\ \hline
\multirow{3}{*}{KITTI2015} & 10 & 31.05/0.8669/30.48 & 31.30/0.8697/30.85 \\ \cline{2-4}
 & 20 & 33.77/0.9211/33.15 & 33.98/0.9230/33.24 \\ \cline{2-4}
 & 30 & 35.32/0.9424/34.58 & 35.48/0.9437/34.52 \\ \hline
\multirow{3}{*}{Middlebury} & 10 & 31.67/0.8602/31.38 & 31.92/0.8641/31.75 \\ \cline{2-4}
 & 20 & 34.12/0.9136/33.84 & 34.42/0.9182/34.10 \\ \cline{2-4}
 & 30 & 35.46/0.9349/35.14 & 35.76/0.9384/35.36 \\ \hline
\end{tabular}
\end{table}
In this section, we study and analyze the contributions of different modules to our PTNet, including the bi-directional parallax transformer module (biPTM), the confidence-based cross-view fusion module (CCFM) and the coarse-to-fine (CTF) structure. To this end, we remove these modules from our PTNet separately. Since the confidence maps are not available when biPTM is removed, we remove both biPTM and CCFM to verify the effectiveness of biPTM. We also add several RDBs and convolutional layers to these variants of our PTNet to keep the model sizes similar. We test the performance of PTNet without biPTM and CCFM (w/o biPTM \& CCFM), PTNet without CCFM (w/o CCFM) and PTNet without CTF (w/o CTF). Specifically, w/o biPTM \& CCFM concatenates the features of the two views for fusion, w/o CCFM removes the feature-weighting operation, and w/o CTF only uses one stage for the cross-view interaction. The experimental results are shown in Table~\ref{tab:ablation}. It can be seen that the performance of all three variants decreases compared with the full PTNet on all datasets. This confirms that our proposed modules can effectively improve the performance of the model for stereo image deblocking. Note that our PTNet achieves a significant performance improvement compared to w/o biPTM \& CCFM. This means that biPTM contributes the most to the improvement of the model performance. In addition, we also conduct a comparative experiment to further confirm that our biPTM can indeed improve the performance for stereo image deblocking. We replace the view alignment module in iPASSR with biPTM and name this model iPASSR+. As shown in Table~\ref{tab:iPASSR}, the performance of iPASSR+ is significantly improved on all datasets. This demonstrates the effectiveness of our biPTM for stereo image deblocking.
\subsection{Visualization Results}
To more intuitively show that our biPTM can achieve good cross-view feature matching, we visualize the features of both stages of biPTM, as shown in Fig.~\ref{fig:visualization}. Firstly, we can see that $F_L$ and $F_R$ are not aligned, and concatenating them for fusion does not achieve good cross-view interaction, which is confirmed by the ablation experiments. Our biPTM can provide an effective converted feature $F^1_{L\rightarrow R}$ for $F_R$ even in the presence of significant artifacts in the images. Specifically, in the corresponding regions, $F^1_{L\rightarrow R}$ has texture features that match $F_R$, so better cross-view feature fusion can be achieved. Similar observations hold for the second stage.
Secondly, the confidence map $C^1_{L\rightarrow R}$ shows small confidence values at the boundaries, which is consistent with the observation of the input stereo image pair. Note that the unconfident regions of $C^2_{L\rightarrow R}$ become smaller in the second stage, which also verifies that the features enhanced by the first stage enable more reliable feature matching.
\section{Conclusion}
In this paper, we investigate the problem of stereo image JPEG artifacts removal for the first time and provide an in-depth analysis. To this end, we propose a novel parallax transformer network (PTNet) to simultaneously remove compression artifacts from two views. Specifically, we design a symmetric bi-directional parallax transformer module (biPTM) to compute the relevance between the features of two views and further match these features, enabling cross-view interaction. Due to the issues of occlusions and boundaries, a confidence-based cross-view fusion module (CCFM) is proposed to effectively integrate cross-view information. Experimental results demonstrate that our PTNet outperforms the tested SOTA methods, and extensive ablation studies verify the effectiveness of our proposed modules. Furthermore, the proposed method can also be extended to cope with other stereo image processing tasks, such as stereo image deblurring. In the future, we will further explore the applicability of our method to different stereo image processing tasks. \bibliographystyle{ACMMM}
\section{Introduction} Ultraluminous X-ray sources (ULXs) were initially touted as sub-Eddington accreting intermediate-mass black holes (IMBHs) with BH masses $100 \lesssim M_\mathrm{BH} \lesssim 10^{5}$ M$_{\odot}$ because of their location off the center of galaxies and their X-ray luminosities exceeding the Eddington limit of a 10 M$_\odot$ stellar-mass BH ($L_\mathrm{X} \geq 10^{39}$ erg s$^{-1}$; e.g., see review by \citealt{2017ARA&A..55..303K}). The recent finding of X-ray pulsations in some ULXs (\citealt{2014Natur.514..202B}; \citealt{2017Sci...355..817I,2017MNRAS.466L..48I}), together with dynamical mass measurements (\citealt{2013Natur.503..500L}), indicates that many of them are instead either stellar-mass BHs or neutron stars accreting at super-Eddington rates. Only those extreme ULXs with $L_\mathrm{X} \geq 5 \times 10^{40}$ erg s$^{-1}$, not easily explained by super-Eddington accretion, remain as possible IMBH candidates (see review by \citealt{2017IJMPD..2630021M}). This is the case of HLX-1 (e.g., \citealt{2009Natur.460...73F}; \citealt{2011ApJ...734..111D}; \citealt{2012Sci...337..554W}), tagged as the best IMBH candidate among ULXs, M82-X1 (\citealt{2001MNRAS.321L..29K}; \citealt{2014Natur.513...74P}), or NGC 2276-3c (\citealt{2012MNRAS.423.1154S}; \citealt{2013MNRAS.436.3128M,2015MNRAS.448.1893M}), three ULXs suggested to be the nuclei of dwarf galaxies stripped during a minor merger with the ULX host galaxy (\citealt{2005MNRAS.357..275K}; \citealt{2013ApJ...768L..22S}; \citealt{2015MNRAS.448.1893M}). This scenario adds to the growing body of evidence that IMBHs or low-mass AGN ($M_\mathrm{BH} \lesssim 10^{6}$ M$_{\odot}$) can be found in dwarf galaxies (e.g., \citealt{2003ApJ...588L..13F}; \citealt{2004ApJ...607...90B}; \citealt{2004ApJ...610..722G,2007ApJ...670...92G}; \citealt{2013ApJ...775..116R}; \citealt{2015ApJ...809L..14B,2017ApJ...836...20B}; \citealt{2017ApJ...836..237N}; \citealt{2016ApJ...817...20M,2018MNRAS.478.2576M}), which has strong implications for understanding how supermassive BHs form. The finding of high-redshift quasars when the Universe was only 0.7 Gyr old (e.g., \citealt{2011Natur.474..616M}; \citealt{2015Natur.518..512W}; \citealt{2018Natur.553..473B}) and of ultramassive BHs of more than 10$^{10}$ M$_\odot$ in the local Universe (\citealt{2011Natur.480..215M}; \citealt{2018MNRAS.474.1342M}) suggests that these behemoths must have been seeded by BHs of 10$^{2}-10^{5}$ M$_\odot$ in the early Universe, which then grew via accretion and cosmological merging (\citealt{2003ApJ...582..559V}). Theoretical models predict that the leftovers of those seed BHs that did not grow into supermassive BHs should be found in local dwarf galaxies (e.g., \citealt{2010MNRAS.408.1139V}), where they might shine as ULXs when the dwarf galaxy undergoes a minor merger that strips it of its stellar body. CXO J133815.6+043255 is a recently discovered ULX that bolsters this possibility. The ULX CXO J133815.6+043255 is located at a projected separation of 22 arcsec ($\sim$10 kpc) from the nucleus of the S0 Seyfert galaxy NGC 5252 (redshift $z$ = 0.0229; \citealt{2015ApJ...814....8K}). It has a \textit{Chandra} 0.5-8 keV X-ray luminosity of $\sim1.5 \times 10^{40}$ erg s$^{-1}$, which does not qualify it as an extreme ULX, and no significant signs of X-ray variability.
However, it has some peculiar properties compared to other ULXs: (i) it has an optical counterpart that is clearly detected in ultraviolet and optical images obtained with the Sloan Digital Sky Survey (SDSS) and the \textit{Hubble} Space Telescope ($m_\mathrm{r} \sim$22 mag; \citealt{2015ApJ...814....8K}); (ii) its optical spectrum shows strong emission lines with a fairly small ($\sim$13 km s$^{-1}$) systemic velocity offset from that of the nucleus of the galaxy, revealing that the ULX is likely associated with NGC 5252 (\citealt{2015ApJ...814....8K}); (iii) the ULX is able to ionize the surrounding gas and to influence its kinematics, as revealed by the signs of gas rotation centered on the ULX (\citealt{2017ApJ...844L..21K}); (iv) it has a strong radio counterpart, with an average flux density in Very Large Array (VLA) observations of up to 0.3 arcsec resolution of 3.2 mJy and 1.4 mJy at 1.4 GHz and 8.4 GHz, respectively, and of 1.9 mJy at 4.9 GHz (\citealt{1994AJ....107.1227W}; \citealt{1995MNRAS.276.1262K}; \citealt{1995ApJ...450..559B}; see table 2 in \citealt{2017MNRAS.464L..70Y}). The ULX appears unresolved in all these observations, as well as in 1.6 GHz Multi-Element Radio Linked Interferometer Network (MERLIN) observations (\citealt{2001MNRAS.327..369T}) and in 1.6 GHz European VLBI Network (EVN) observations with a resolution of 3 milli-arcsec (mas; \citealt{2017MNRAS.464L..70Y}), and shows no evidence for variability of $>$ 10\% over 3 years at 1.4 GHz and 8.4 GHz (\citealt{2015ApJ...814....8K}). All the above led to the conclusion that CXO J133815.6+043255 is not powered by a stellar-mass BH (\citealt{2015ApJ...814....8K,2017ApJ...844L..21K}). In this Letter we report Very Long Baseline Array (VLBA) radio observations of the ULX CXO J133815.6+043255 at 4.4 GHz and 7.6 GHz that resolve, for the first time, its radio emission. The results make the blazar nature of the ULX very unlikely, in agreement with optical studies, and provide a more robust estimate of the ULX BH mass than in previous works, placing it in the realm of IMBHs. The observations, data reduction, and results obtained are described in Section~\ref{observations} and discussed in Section~\ref{discussion}. Final conclusions and open issues are provided in Section~\ref{conclusions}. Throughout the paper we adopt a $\Lambda$CDM cosmology with parameters $H_{0}=71$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}=0.73$ and $\Omega_{m}=0.27$, which yields a luminosity distance for NGC 5252 of 98.4 Mpc. \section{Observations and data reduction} \label{observations} The ULX CXO J133815.6+043255 was observed with the VLBA for eight hours on 2016 March 23 (project BL0226) simultaneously at 4.4 GHz and 7.6 GHz in order to provide spectral index information. Ten antennas participated in the observations: Brewster, Fort Davis, Hancock, Kitt Peak, Los Alamos, Mauna Kea, North Liberty, Owens Valley, Pie Town, and Saint Croix. The observations were performed in phase-reference mode, alternating between $\sim$5 min on the target source and $\sim$1 min on a nearby phase calibrator (J1330+071). The bright radio source 3C279 was also observed as fringe finder and bandpass calibrator. Each of the two frequencies was recorded at a rate of 1024 Mbps in dual circular polarization, using a bandwidth of 128 MHz split into 258 spectral channels, and the data were correlated in Socorro (New Mexico) with an averaging time of 2 s.
The correlated visibility data were split into two uv datasets, centered at 4.4 GHz and 7.6 GHz, and independently calibrated in amplitude (based on system temperatures and antenna sensitivities) and fringe-fitted using the NRAO Astronomical Image Processing System (\textsc{AIPS}). Sampling-based calibration adjustments determined with the task \textsc{ACCOR} and ionospheric corrections were also applied at both frequencies. We consider a systematic uncertainty of 5\% in the flux calibration. The imaging was performed in \textsc{AIPS} using \textsc{CLEAN} deconvolution and natural weighting of the data. The resulting 4.4 GHz image has a root mean square (r.m.s.) noise of 26.4 $\mu$Jy beam$^{-1}$ and a synthesized beam size of 4.94 $\times$ 1.95 mas$^{2}$. To recover any diffuse emission we also tried imaging without natural weighting and with the robust parameter set to 0; however, the extent of the 4.4 GHz emission remains the same as when using natural weighting. To obtain the radio map at 7.6 GHz we used the same beam size as that of the 4.4 GHz radio map in order to derive the spectral index. The resulting r.m.s. noise at 7.6 GHz is 23.5 $\mu$Jy beam$^{-1}$. The imaged data were fitted with two-dimensional elliptical Gaussians with the \textsc{AIPS} task \textsc{IMFIT}. For the phase calibrator J1330+071, we measured an integrated flux density of $\sim$0.1 Jy at 4.4 GHz and 7.6 GHz. For the target CXO J133815.6+043255, the distance between the two peaks at 4.4 GHz was measured from the lowest 4$\sigma$ contours using the \textsc{AIPS} task \textsc{TVDIST}. The final images were produced using the \textsc{CASA}\footnote{\textsc{Common Astronomy Software Applications}} software.
\begin{figure}
\includegraphics[width=0.43\textwidth]{J1338+04image4GHz_gaussiansbeam.pdf}
\includegraphics[width=0.43\textwidth]{J1338+04image4GHz_7GHzcontours.pdf}
\protect\caption[figure]{VLBA image of the ULX CXO J133815.6+043255 at 4.4 GHz. The synthesized beam size is 4.94 $\times$ 1.95 mas$^{2}$ with the major axis oriented at a position angle P.A.=$169.7^{\circ}$. \textbf{Top}: The 4.4 GHz contours are plotted as (-3,4,5,6,7,8,9) $\times$ the off-source r.m.s. noise of 26.4 $\mu$Jy beam$^{-1}$. The position and size of the components derived from the two-dimensional Gaussian fitting are marked with white crosses. The peak flux density of the east component is 0.24 mJy beam$^{-1}$; that of the west component is 0.26 mJy beam$^{-1}$. \textbf{Bottom}: The 7.6 GHz contours are plotted as (-3, 4, 4.5, 5) $\times$ the off-source r.m.s. noise of 23.5 $\mu$Jy beam$^{-1}$. The peak flux density of the 7.6 GHz emission is 0.12 mJy beam$^{-1}$ and is coincident with the 4.4 GHz east peak within the positional errors. North is up and east is left. The east and west 4.4 GHz peaks are marked as 'E' and 'W', respectively.}
\label{figure}
\end{figure}
\subsection{VLBA detection of the ULX CXO J133815.6+043255} \label{detection} The ULX CXO J133815.6+043255 is detected with the VLBA at 4.4 GHz and 7.6 GHz. At 4.4 GHz two peaks of radio emission separated by 2.9 mas (1.4 pc at a distance of 98.4 Mpc) are detected, each at a signal-to-noise ratio S/N$\sim$9-10 (see Fig.~\ref{figure}). The extended radio structure is oriented NE-SW.
The fit of a double (one per peak) two-dimensional elliptical Gaussian gives a flux density for the east component of 0.36 $\pm$ 0.07 mJy and for the west one of 0.29 $\pm$ 0.06 mJy (see Table~\ref{gaussianfitting}), where the errors have been derived as the quadratic sum of the uncertainty resulting from the Gaussian fitting and the 5\% systematic uncertainty on the flux densities. The whole structure has a total integrated flux density of 0.66 $\pm$ 0.09 mJy. The eastern component has a deconvolved size of 2.2 $\times$ 1.7 mas$^{2}$ (1.0 $\times$ 0.8 pc$^{2}$) oriented at a P.A. of 139$^{\circ}$. The western one is oriented at a P.A. of 155$^{\circ}$ and is resolved only along its major axis; hence its size of 1.5 mas (0.7 pc) should be taken as an upper limit. According to these results, the two components have a brightness temperature $T_\mathrm{B} > 5 \times 10^{6}$ K, indicating that the emission is non-thermal. At 7.6 GHz a single, unresolved component is detected at S/N$\sim$5 (Fig.~\ref{figure}, bottom). Its peak flux density is 0.12 $\pm$ 0.03 mJy beam$^{-1}$. The 7.6 GHz detection is spatially coincident within the positional errors\footnote{The total positional error of each of the detected components is estimated, at each frequency, as the quadratic sum of the positional error of that component in the phase-referenced map, the positional error of the phase-reference calibrator, and the error of phase referencing due to ionospheric effects.} with the eastern component at 4.4 GHz (Fig.~\ref{figure}, bottom). This allows us to derive the spectral index of the eastern component, finding a steep value $\alpha = -2.0 \pm 0.1$ (where $S_{\nu} \propto \nu^{\alpha}$). For the west component we derive a 5$\sigma$ upper limit on its 7.6 GHz peak flux density of 0.2 mJy beam$^{-1}$ from the r.m.s. at the 4.4 GHz position. This yields an upper limit on its spectral index of $\alpha = -0.6$.
\begin{table*}
\begin{minipage}{\textwidth}
\centering
\caption{Results of the elliptical Gaussian fitting of the ULX CXO J133815.6+043255. Column designation:~(1) frequency and component detected at S/N $>$ 5, (2-3) coordinates, (4) positional error, (5) integrated flux density, (6) peak flux density, (7) angular and (projected) physical size, (8) brightness temperature.}
\label{gaussianfitting}
\begin{tabular}{lccccccc}
\hline \hline
Freq. & RA & DEC & Pos. Error & Total & Peak & Size & $T_\mathrm{B}$ \\
(GHz) & (J2000) & (J2000) & (mas) & (mJy) & (mJy beam$^{-1}$) & (mas) & (K) \\
\hline
4.4 east & 13$^{h}$38$^{m}$15$^{s}$.639 & +04$^{\circ}$32\arcmin 55\arcsec.3708 & 0.9 & 0.36 $\pm$ 0.07 & 0.24 $\pm$ 0.03 & 2.2 $\times$ 1.7 (1.0 $\times$ 0.8 pc$^{2}$) & $6.3 \times 10^{6}$ \\
4.4 west & 13$^{h}$38$^{m}$15$^{s}$.639 & +04$^{\circ}$32\arcmin 55\arcsec.3696 & 0.8 & 0.29 $\pm$ 0.06 & 0.26 $\pm$ 0.03 & $<$1.5 ($<$0.7 pc) & $\geq 5.5 \times 10^{6}$ \\
7.6 east & 13$^{h}$38$^{m}$15$^{s}$.639 & +04$^{\circ}$32\arcmin 55\arcsec.3706 & 1.0 & - & 0.12 $\pm$ 0.03 & $<$2.1 ($<$1.0 pc) & - \\
\hline \hline
\end{tabular}
\end{minipage}
\end{table*}
\section{Discussion} \label{discussion} \subsection{The extended radio jet of the ULX} The 4.4 GHz VLBA observations of the ULX CXO J133815.6+043255 reveal the first detection of a ULX pc-scale jet resolved into two components.
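For reference, the projected separation, the spectral index and the brightness temperature quoted above can be reproduced with a few lines of arithmetic. This is a sketch only: the brightness-temperature prefactor of $1.22 \times 10^{12}$ depends slightly on the adopted convention, and the angular-to-physical conversion simply uses the 98.4 Mpc distance quoted in Section 1.
\begin{verbatim}
import math

# Projected separation: 2.9 mas at a distance of 98.4 Mpc
sep_pc = 2.9e-3 / 206265.0 * 98.4e6   # angle [rad] times distance [pc]
print(f"separation ~ {sep_pc:.1f} pc")          # ~1.4 pc

# Spectral index (S_nu ~ nu^alpha) of the east component, from its
# integrated 4.4 GHz and peak 7.6 GHz flux densities (Table 1)
alpha = math.log(0.12 / 0.36) / math.log(7.6 / 4.4)
print(f"alpha_east ~ {alpha:.1f}")              # ~ -2.0

# Brightness temperature of the east component:
# T_B ~ 1.22e12 * S[Jy] / (nu[GHz]^2 * theta_maj[mas] * theta_min[mas]) K
t_b = 1.22e12 * 0.36e-3 / (4.4**2 * 2.2 * 1.7)
print(f"T_B(east) ~ {t_b:.1e} K")               # ~6e6 K, cf. Table 1
\end{verbatim}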
All previous VLBI observations of ULXs report, as far as we know, either compact or slightly extended radio cores (e.g., \citealt{2011AN....332..379M}; \citealt{2013MNRAS.436.3128M,2014ApJ...785..121M,2015MNRAS.448.1893M}; \citealt{2015MNRAS.446.3268C,2015MNRAS.452...24C}), including the 1.6 GHz EVN observations of the ULX CXO J133815.6+043255 (\citealt{2017MNRAS.464L..70Y}). These EVN observations revealed a compact radio component with a flux density of 1.8 $\pm$ 0.1 mJy and a flat spectral index $\alpha \sim -0.1$, which \cite{2017MNRAS.464L..70Y} identified as the radio core of CXO J133815.6+043255. The EVN compact radio structure is, however, resolved by our higher-resolution 4.4 GHz VLBA observations into two components with a total flux density of 0.66 $\pm$ 0.09 mJy, indicating that the 1.6 GHz radio detection includes some diffuse jet emission and cannot be ascribed entirely to the core. This is reinforced by the steep $\alpha \leq -1.6$ and $\alpha \leq -1.9$ (considered as upper limits given the different beam resolution and non-simultaneity of the observations) found for the east and west components, respectively, when using the 1.6 GHz flux density of \cite{2017MNRAS.464L..70Y} and the 4.4 GHz VLBA fluxes to derive the spectral indices. From the 4.4 and 7.6 GHz VLBA detections we find that the east component has a steep spectral index ($\alpha = -2.0 \pm 0.1$) and that it is resolved with a size of 1.0 $\times$ 0.8 pc$^{2}$ at 4.4 GHz, which precludes its identification as the radio core of CXO J133815.6+043255. We note that its $T_\mathrm{B} = 6.3 \times 10^{6}$ K is low compared to the equipartition value of compact relativistic jets ($T_\mathrm{B,eq} \simeq 5 \times 10^{10}$ K; \citealt{1994ApJ...426...51R}). Given that the Doppler factor is $\sim T_\mathrm{B}/T_\mathrm{B,eq}$, no significant Doppler boosting is present in the east radio component. For the west component, only a lower limit on $T_\mathrm{B}$ can be derived because it is resolved only along its major axis; hence the presence of Doppler boosting cannot be ruled out there. Given that this component is not firmly resolved and has a flatter spectral index ($\alpha \leq -0.6$) than the east one, the radio core is more likely to be located here. For the purposes of investigating the nature of CXO J133815.6+043255, in the next section we consider the flux of the west detection as an upper limit to that of the radio core. \subsection{The nature of the ULX} The resolved jet structure and steep spectra revealed by the VLBA observations suggest that the ULX is not a background blazar, as the radio emission of most blazars is dominated by an unresolved flat-spectrum core. This is in agreement with optical spectroscopic studies, which locate the ULX in NGC 5252, and with the lack of significant radio variability over 3 years (\citealt{2015ApJ...814....8K,2017ApJ...844L..21K}). It should be noted, though, that when observed at low frequencies some blazars can show extended radio emission (e.g. \citealt{2010ApJ...710..764K}). From the radio emission standpoint, the detected VLBA emission could also be consistent with that of compact steep spectrum (CSS) sources, which are young radio sources with steep-spectrum small-scale jets (\citealt{1998PASP..110..493O}). However, CSS sources typically show strong optical emission lines (e.g.
\citealt{1997A&A...326..130M}; \citealt{2016A&ARv..24...10T}); hence the nature of CXO J133815.6+043255 as a CSS source is ruled out by its optical spectrum, which shows that it belongs to NGC 5252 (\citealt{2015ApJ...814....8K}). NGC 5252 seems to have undergone a past interaction, as evidenced by the finding of a small-scale half-spiral of dust near the nucleus of the galaxy and of a kinematical decoupling between the stars and the gas (\citealt{1998ApJ...505..159M}; \citealt{2015AJ....149..155K}). This, together with the size of the optical counterpart of CXO J133815.6+043255 ($\lesssim$ 46 pc; \citealt{2015ApJ...814....8K}) being consistent with that of ultracompact dwarf galaxies, suggests that CXO J133815.6+043255 is the nucleus of a dwarf galaxy that was accreted by NGC 5252 (\citealt{2015ApJ...814....8K,2017ApJ...844L..21K}). In this scenario, the ULX could be either an AGN powered by an IMBH or a low-luminosity AGN (LLAGN). LLAGN host supermassive BHs with masses $> 10^{6}$ M$_{\odot}$, typically have X-ray luminosities $\sim10^{40-41}$ erg s$^{-1}$ (\citealt{2008ARA&A..46..475H}), and their nuclear radio emission is usually associated with unresolved cores (e.g., \citealt{2001ApJ...558..561U}; \citealt{2005A&A...435..521N}; \citealt{2018MNRAS.476.3478B}; \citealt{2018arXiv180506696S}). CXO J133815.6+043255 shows [OIII] and X-ray luminosities of 10$^{39.7}$ and 10$^{40.2}$ erg s$^{-1}$, respectively (\citealt{2015ApJ...814....8K}), consistent with those of LLAGN. Because of this, and based on the finding of compact radio emission with a flat spectrum, \cite{2017MNRAS.464L..70Y} argued that CXO J133815.6+043255 is a LLAGN. However, when observed with sufficient angular resolution and sensitivity, LLAGN can show resolved pc-scale radio emission (e.g., \citealt{2014ApJ...787...62M}; \citealt{2018MNRAS.476.3478B}). This could be the case of CXO J133815.6+043255, for which we find that the pc-scale radio jet is resolved into two components. Given their size and radio spectral index, we ascribe the east component to a radio lobe and the west one to the radio core. To probe the nature of CXO J133815.6+043255, we compute the $R_\mathrm{X}$ ratio of 5 GHz radio luminosity to 2-10 keV X-ray luminosity (\citealt{2003ApJ...583..145T}). LLAGN typically have $-3.8 <$ log $R_\mathrm{X} < -2.8$, X-ray binaries log $R_\mathrm{X} < -5.3$, supernova remnants log $R_\mathrm{X} \sim -2$, and IMBHs $-5.3 <$ log $R_\mathrm{X} < -3.8$ (\citealt{2013MNRAS.436.1546M,2013MNRAS.436.2454M}). Using the 4.4 GHz flux density of the western component we find log $R_\mathrm{X}$ = -3, consistent with LLAGN. Note, though, that since the western component is slightly resolved, its flux density should be taken as an upper limit to the core radio emission, and so should the derived value of $R_\mathrm{X}$. The same result would be obtained when considering the flux of the east component as an upper limit to the core emission. Those BHs accreting at sub-Eddington rates and in a low/hard X-ray state are found to follow an empirical correlation, supported by theoretical models of accretion, that relates their nuclear X-ray luminosity with their core radio luminosity and BH mass (e.g., \citealt{2004A&A...414..895F}; \citealt{2006A&A...456..439K}; \citealt{2009ApJ...706..404G}; \citealt{2012MNRAS.419..267P}; \citealt{2018arXiv180506696S}; see \citealt{2018MNRAS.474.1342M} for a brief review).
Using this fundamental plane of BH accretion, \cite{2017MNRAS.464L..70Y} estimated a BH mass for CXO J133815.6+043255 of $\sim10^{9}$ M$_{\odot}$, which is unreasonably large for an AGN in a dwarf galaxy. The finding that the EVN radio emission is resolved by the higher-resolution VLBA observations indicates that this BH mass of $\sim10^{9}$ M$_{\odot}$ should be taken as a very rough upper limit to the ULX BH mass. From the 4.4 GHz radio luminosity of the western VLBA component ($L_\mathrm{R} = 1.5 \times 10^{37}$ erg s$^{-1}$), which we consider as an upper limit to the core radio luminosity, and the ULX 2-10 keV X-ray luminosity ($L_\mathrm{X} = 1.2 \times 10^{40}$ erg s$^{-1}$; \citealt{2015ApJ...814....8K}), we estimate an upper limit on the ULX BH mass of $M_\mathrm{BH}\lesssim 2 \times 10^{6}$ M$_{\odot}$ when using the most recent and refined version of the fundamental plane of BH accretion (\citealt{2018arXiv180506696S}):
\begin{equation}
\begin{split}
\mathrm{log} L_\mathrm{R} = (0.48 \pm 0.04) \mathrm{log} L_\mathrm{X} + (0.79 \pm 0.03) \mathrm{log} M_\mathrm{BH} + 11.71
\end{split}
\end{equation}
The fundamental plane of \cite{2018arXiv180506696S} has been derived from a homogeneous sample of LLAGN with core radio emission, based on VLA observations at 15 GHz of uniform resolution and sensitivity, while previous correlations were derived from radio flux densities that could possibly trace nuclear jet luminosities and not only core luminosities (\citealt{2018arXiv180506696S}). Using the \cite{2009ApJ...706..404G} correlation we obtain $M_\mathrm{BH}\lesssim 10^{7}$ M$_{\odot}$, while using those of \cite{2006A&A...456..439K} and \cite{2012MNRAS.419..267P} we estimate $M_\mathrm{BH} \lesssim 6 \times 10^{8}$ M$_{\odot}$. Similar upper limits would be obtained when considering the flux of the east component as an upper limit to the core emission. The upper limit on the ULX BH mass of $\lesssim2 \times 10^{6}$ M$_{\odot}$ is in agreement with the dynamical mass $M_\mathrm{dyn} = 4 \times 10^{7}$ M$_{\odot}$ derived by \cite{2017ApJ...844L..21K} and with the lower limit of $10^{3.5}$ M$_{\odot}$ obtained assuming that the bolometric luminosity does not exceed the Eddington luminosity (\citealt{2015ApJ...814....8K}). It is also consistent with the $M_\mathrm{BH}\lesssim 10^{6}$ M$_{\odot}$ of most of the low-mass AGN found in dwarf galaxies (e.g., \citealt{2003ApJ...588L..13F}; \citealt{2004ApJ...607...90B}; \citealt{2004ApJ...610..722G,2007ApJ...670...92G}; \citealt{2013ApJ...775..116R,2014ApJ...787L..30R}; \citealt{2015ApJ...809L..14B}; \citealt{2015ApJ...798...38S}; \citealt{2017ApJ...836..237N}; \citealt{2016ApJ...817...20M,2018MNRAS.478.2576M}), suggesting that the ULX also hosts a low-mass AGN. Low-mass AGN in dwarf galaxies are typically found to accrete at near-Eddington rates. Detections of sub-Eddington accreting AGN in dwarf galaxies showing radio jets are scarce (e.g., NGC 4395, \citealt{2006ApJ...646L..95W}; NGC 404, \citealt{2012ApJ...753..103N}; only 1 out of 19 objects in \citealt{2006ApJ...636...56G}; 3 out of 40 sources in \citealt{2018MNRAS.478.2576M}); hence the finding of an additional source such as CXO J133815.6+043255 is very significant.
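As a numerical cross-check, the core radio luminosity and $R_\mathrm{X}$ ratio quoted above, together with the Eddington ratio discussed in the next paragraph, follow directly from the measured quantities. This is a minimal sketch under two stated approximations: the 5 GHz luminosity entering $R_\mathrm{X}$ is approximated by $\nu L_\nu$ at 4.4 GHz, and the [OIII]-to-bolometric conversion factor of 162 is the one adopted in the text.
\begin{verbatim}
import math

MPC_CM = 3.0857e24
d_l = 98.4 * MPC_CM                           # luminosity distance [cm]

# nu*L_nu of the west (core candidate) component at 4.4 GHz
s_nu = 0.29e-3 * 1e-23                        # 0.29 mJy in erg/s/cm^2/Hz
l_r = 4.0 * math.pi * d_l**2 * s_nu * 4.4e9   # [erg/s]
l_x = 1.2e40                                  # 2-10 keV luminosity [erg/s]
print(f"L_R ~ {l_r:.1e} erg/s")               # ~1.5e37 erg/s
print(f"log R_X ~ {math.log10(l_r / l_x):.1f}")   # ~ -2.9, i.e. ~ -3

# Eddington ratio for M_BH = 2e6 Msun, with L_bol = 162 * L_[OIII]
l_bol = 162.0 * 10**39.7                      # ~8e41 erg/s
l_edd = 1.26e38 * 2e6                         # Eddington luminosity [erg/s]
print(f"L_bol/L_Edd ~ {l_bol / l_edd:.1e}")   # ~3e-3
\end{verbatim}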
Assuming a BH mass of $2 \times 10^{6}$ M$_{\odot}$ and a bolometric luminosity $L_\mathrm{bol} = 8 \times 10^{41}$ erg s$^{-1}$ (derived from the [OIII] luminosity adopting the conversion $L_\mathrm{bol}/L_\mathrm{[OIII]}\sim162$; \citealt{2012MNRAS.426.2703S}; \citealt{2015ApJ...814....8K}), we derive an Eddington ratio for CXO J133815.6+043255 of $3 \times 10^{-3}$. Using the higher BH mass limits of $10^{7}$ M$_{\odot}$ or $6 \times 10^{8}$ M$_{\odot}$ derived from the other fundamental-plane correlations, the ULX would have an Eddington ratio $\sim 10^{-7}-10^{-5}$. This would produce an ionization parameter for the optical emission lines typical of low-ionization nuclear emission-line regions (LINERs), which have [OIII]/H$\beta$ $<$ 3 (\citealt{1997ApJS..112..315H}). Instead, CXO J133815.6+043255 is observed to have [OIII]/H$\beta$ $\sim$ 9, typical of Seyfert galaxies (\citealt{2015ApJ...814....8K}). We thus conclude that the ULX is most likely to host an IMBH with $10^{3.5} < M_\mathrm{BH}$ (M$_{\odot}$) $\lesssim 2 \times 10^{6}$ rather than a supermassive BH. \section{Conclusions and open issues} \label{conclusions} IMBHs are thought to be the local relics of the supermassive BH progenitors of the early Universe. Finding observational evidence of their existence is thus of paramount importance for understanding how supermassive BHs grow. Most IMBH candidates are found as low-mass AGN with no radio emission and near- to super-Eddington accretion rates, as expected from simulations in which early BH growth proceeds through short high-accretion-rate phases (e.g., \citealt{2005ApJ...633..624V}; \citealt{2016MNRAS.458.3047P}; \citealt{2017MNRAS.472L.109A}). The finding of low-mass AGN with extended radio jets and sub-Eddington accretion rates such as CXO J133815.6+043255 thus has important implications for cosmological models. Unlike most low-mass AGN, CXO J133815.6+043255 is a ULX located 10 kpc away from the center of its host galaxy. This suggests that it is the nucleus of a dwarf galaxy that was stripped in the course of a minor merger. Numerical simulations show that BH growth in dwarf galaxies can be triggered by minor mergers; however, evidence of merger-triggered AGN activity in dwarf galaxies is scarce (\citealt{2013MNRAS.435.2335B}; \citealt{2017ApJ...836..183S}). CXO J133815.6+043255 could be a new case of a BH that became active in the course of a minor merger event. The ULX CXO J133815.6+043255 is also one of the few low-mass AGN known to be able to ionize the gas that surrounds it (\citealt{2017ApJ...844L..21K}). The incidence of AGN feedback in dwarf galaxies is an issue of major debate. Most numerical simulations maintain that supernova feedback hampers BH growth and thus the impact of AGN feedback in low-mass galaxies (e.g., \citealt{2015MNRAS.452.1502D}; \citealt{2017MNRAS.465...32B}; \citealt{2017MNRAS.472L.109A}; \citealt{2017MNRAS.468.3935H}), while others find that AGN feedback has the biggest impact on the stellar populations of dwarf galaxies (\citealt{2016MNRAS.463.2986S}; \citealt{2018MNRAS.473.5698D}). Observationally the results are also controversial: \cite{2018ApJ...855L..20M} find support for the supernova feedback scenario, while \cite{2018MNRAS.476..979P} report that AGN feedback could regulate star formation in dwarf galaxies. CXO J133815.6+043255 seems to bolster this latter possibility. Further observational studies are needed to clarify the roles that supernova and AGN feedback play in regulating BH growth in dwarf galaxies.
\section*{Acknowledgments} The authors thank the anonymous referee for insightful comments. M.M. acknowledges support from the Spanish Juan de la Cierva program (IJCI-2015-23944). M.K. was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT \& Future Planning (No. NRF-2017R1C1B2002879). L.C.H. was supported by the National Key R\&D Program of China (2016YFA0400702) and the National Science Foundation of China (11473002, 11721303). \bibliographystyle{mnras}
\section{Introduction:} Among the spin driven multiferroic materials, the orthorhombic (\emph{o}-) perovskite manganites \emph{o}-$\emph{R}$MnO$_{3}$ ($\emph{R}$ = Gd, Tb, and Dy) have been studied extensively \cite{kimura,spaldin,cheong,yamasaki,kimura anual review,tokura}. In GdMnO$_3$, the cycloidal spins that break the inversion symmetry are in the \emph{ab} plane and the polarization points along the \emph{a} direction of the orthorhombic (\emph{Pbnm}) structure \cite{noda}. In the manganites with \emph{R} = Tb and Dy, the cycloidal spins are in the \emph{bc} plane and the polarization points along the \emph{c} direction \cite{noda,kimura,goto}. In the case of \emph{o}-$\emph{R}$MnO$_{3}$ with smaller rare-earths ($\emph{R}$ = Ho, Er, Tm), a collinear magnetic ordering (E-type) gives a polarization along the \emph{c}-direction which is substantially higher than that of the $\emph{bc}$ cycloidal phase \cite{ivan,picozzi,lee1,feng}. The existence of different mechanisms of polarization in $\emph{o}$-$\emph{R}$MnO$_3$ with different \emph{R}-ions indicates that the radius of the \emph{R}-ion determines the ferroelectric properties by controlling the competing nearest-neighbour ferromagnetic and next-nearest-neighbour antiferromagnetic interactions \cite{goto}. Several studies have been carried out on mixed rare-earth manganites $\emph{R}$$_{1-x}$$\emph{R}$$'$$_{x}$MnO$_{3}$ with $\emph{R}$ = Sm, Eu, Tb and $\emph{R}$$'$ = Y and Gd, where multiferroic phases of cycloidal and E-type collinear magnetic structure are found as a function of the average radius of the rare-earth ions \cite{ishiwata,flynn,tgoto}. Based on magnetic field effects on the polarization in Sm$_{0.5}$Y$_{0.5}$MnO$_3$, the coexistence of polarization in two different directions has been suggested \cite{fina}. In the case of Dy$_{1-x}$Ho$_x$MnO$_3$, a transition from the $\emph{bc}$ cycloidal to the $\emph{E}$-type antiferromagnetic phase occurs, and coexistence of these two phases is found in a wide compositional range \cite{nzhang}. Application of external pressure to TbMnO$_{3}$ leads to a change of the $\emph{bc}$ cycloidal ordering to $\emph{E}$-type ordering with a large polarization ($\approx$ 1.0 $\mu$C/cm$^{2}$) \cite{aoyama}. Here, we report direct evidence for the occurrence of the two cycloidal phases ($\emph{ab}$ and $\emph{bc}$) in two mixed rare-earth manganites, Eu$_{0.5}$Dy$_{0.5}$MnO$_{3}$ (EDMO) and Gd$_{0.5}$Dy$_{0.5}$MnO$_{3}$ (GDMO), and their extraordinary magnetoelectric properties. Contrary to TbMnO$_3$ (TMO), the mixed rare-earth compounds show a large magnetocapacitance accompanied by a change of sign from negative to positive as a function of temperature. Further, they exhibit a large enhancement of electric polarization under applied magnetic field. Surprisingly, they show switching of polarization while ramping up the magnetic field from 0 to 80 kOe. We also demonstrate that the ferroelectric domain state is memorized not only below the incommensurate magnetic ordering but also in the paraelectric and paramagnetic regions. Details of sample preparation and physical measurements are given in the supplemental material \cite{supp}. Fig. 1 (a, b and c) shows the specific heat divided by temperature (C/T) (left axis) and the magnetization (M) data, measured under field-cooled warming (100 Oe) conditions (right axis), as a function of temperature for EDMO, GDMO and TMO, respectively.
Since these samples are polycrystalline and the rare-earth moments are higher than the Mn$^{3+}$ moment, we do not observe magnetization anomalies associated with the ordering of the Mn$^{3+}$ ions. On the other hand, the (C/T) data clearly show the incommensurate sinusoidal antiferromagnetic ordering (T$_N$), the commensurate cycloidal ordering (T$_C$) and the $\emph{R}$$^{3+}$ ordering \cite{kimura}, except that the rare-earth moments in EDMO do not order down to 2 K \cite{ordering}. The center panels (d, e and f) show the dielectric constant (left axis) and loss data (right axis) measured at 50 kHz under different external magnetic fields (0, 40 and 80 kOe). For EDMO and GDMO, the zero-field dielectric and loss data show a broad doublet peak which becomes a single peak, with a slight positive shift in temperature, under applied magnetic field. On the other hand, only a single peak in the dielectric and loss data is observed for TMO, which becomes broad in the presence of a magnetic field.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{Fig1.eps}
\caption{\label{fig:sine} The first, second and third columns represent the data of Eu$_{0.5}$Dy$_{0.5}$MnO$_{3}$, Gd$_{0.5}$Dy$_{0.5}$MnO$_{3}$ and TbMnO$_{3}$, respectively. (Top row) The left axis in a, b and c shows the heat capacity divided by temperature and the right axis shows the magnetization \emph{vs.} temperature data. (Middle row) The left axis in d, e and f shows the dielectric constant and the right axis shows the loss \emph{vs.} temperature data. (Bottom row) g, h and i show the pyroelectric current \emph{vs.} temperature data.}
\end{figure}
Pyrocurrent data of the samples recorded at 4 K/min, as reported earlier \cite{de}, from 10 to 30 K at 0 and 80 kOe fields, after poling the samples with an electric field (E$_P$ $=$ 8 kV/cm, E$_P$ $\perp$ H$_P$) from 35 to 10 K, are displayed in the bottom panels (g, h and i). The warming-rate dependence of the pyrocurrent confirms the intrinsic ferroelectric nature of the samples \cite{supp}. From these data, we infer that the ferroelectric transition temperatures (T$_C$) for EDMO, GDMO and TMO are 26, 18 and 27 K, respectively. This is in agreement with the heat capacity and dielectric anomalies. It is interesting to note that the zero-field pyrocurrent data for both EDMO and GDMO show a two-peak feature, but only a single peak for TMO, similar to that observed in the dielectric and loss data. From the following discussion, we suggest that the two-peak feature in the dielectric and pyrocurrent data indicates the presence of the $\emph{ab}$ and $\emph{bc}$ cycloidal phases. It is possible that these two cycloidal phases coexist, or there could be a temperature-dependent reorientation of the cycloidal spins. Considering the average radius of the rare-earth ions in EDMO and GDMO and the phase diagram of temperature versus the radius of the \emph{R}-ions, we infer that these two compounds are at the phase boundary between the $\emph{ab}$ and $\emph{bc}$ cycloidal phases \cite{hemberger}. In agreement with an earlier report \cite{nabe1}, the thermal hysteresis observed around the dielectric anomalies confirms that the phase transition between these two cycloidal phases is first order, which indicates a possible coexistence of these phases \cite{supp}. However, a single-crystal study is required to confirm the phase coexistence.
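Since the polarization values discussed in the following are obtained by integrating the pyroelectric current recorded at a fixed warming rate, a minimal numerical sketch of this conversion may be helpful. Note that the electrode area and the current profile below are placeholders, not values from our measurements.
\begin{verbatim}
import numpy as np

area = 2.0e-5      # electrode area [m^2] (placeholder value)
rate = 4.0 / 60.0  # warming rate: 4 K/min expressed in K/s

T = np.linspace(10.0, 35.0, 500)               # temperature grid [K]
I = 1e-12 * np.exp(-((T - 26.0) / 1.5) ** 2)   # toy pyrocurrent peak [A]

# P(T) = (1 / (area * rate)) * integral from T to T_max of I(T') dT',
# i.e. integrate from the paraelectric side (P = 0 at high T) downwards.
dPdT = I / (area * rate)
P = np.flip(np.cumsum(np.flip(dPdT))) * (T[1] - T[0])   # [C/m^2]
print(f"P(10 K) ~ {P[0] * 1e2:.2e} uC/cm^2")   # 1 C/m^2 = 100 uC/cm^2
\end{verbatim}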
Based on the theoretically obtained magnetoelectric phase diagram of temperature versus $\emph{J}$$_2$ (the next-nearest-neighbour exchange interaction), we attribute the low-temperature (LT) peak to the $\emph{ab}$ and the high-temperature (HT) peak to the $\emph{bc}$ cycloidal ordering \cite{mochizuki}. Under an applied magnetic field (80 kOe), the two-peak feature disappears and becomes a single peak with enhanced pyrocurrent and a significant increase of T$_C$. The disappearance of the LT peak indicates conversion of the $\emph{ab}$ cycloidal into the $\emph{bc}$ cycloidal phase, which is consistent with the enhanced pyroelectric current. In contrast, the magnitude of the single peak observed for TMO decreases with magnetic field without any significant change in T$_C$ [Fig. 1(i)]. The broad nature of the current at 80 kOe may indicate a partial conversion of the \emph{bc} cycloidal phase into the \emph{ab} cycloidal one. It should be noted that the polarization in GdMnO$_3$ is suppressed strongly with applied magnetic field through a weakening of the commensurate cycloidal ordering \cite{zhang,supp}.
\begin{figure}[b!]
\centering
\includegraphics[width=\columnwidth]{Fig2.eps}
\caption{\label{fig:sine} Magnetocapacitance data of (a) Eu$_{0.5}$Dy$_{0.5}$MnO$_{3}$, (b) Gd$_{0.5}$Dy$_{0.5}$MnO$_{3}$ and (c) TbMnO$_{3}$ measured at 50 kHz at various temperatures.}
\end{figure}
Fig. 2 (a, b and c) shows the magnetocapacitance (MC) data measured at 50 kHz while sweeping the magnetic field from $-$70 kOe to $+$70 kOe at a rate of 100 Oe/sec at different temperatures for all three samples. The data measured at 500 Hz and 2 MHz are shown in the supplemental material \cite{supp}. It is intriguing to note that the behavior of the MC in EDMO and GDMO is quite different from that in TMO. Above T$_C$, all three samples exhibit a positive MC, and it reaches a maximum around T$_C$. Below T$_C$, the MC in TMO remains positive, and at 10 K it levels off above 40 kOe, as shown in Fig. 2c. In contrast, the mixed rare-earth samples show a crossover from a positive to a negative MC on decreasing the temperature from the paraelectric to the ferroelectric state, through an intermediate temperature range (between the two pyrocurrent peaks) where the positive MC shows a broad maximum corresponding to a critical field, which shifts to lower fields with decreasing temperature. The existence of such a critical field indicates a change in the direction of the polarization \cite{goto}. Below a certain temperature, the MC becomes completely negative. This behaviour is consistent with the fact that the MC is positive below the cycloidal ordering temperature in TbMnO$_3$ and negative in GdMnO$_3$ \cite{supp}. It is also important to note the large MC (12\%) observed over a wide temperature range in the mixed rare-earth compounds compared to that in TMO (0.6\%). Fig. 3(a, b and c) shows the temperature-dependent polarization data at different magnetic fields, obtained by integrating the pyrocurrent (inset), recorded after poling the samples with E$_P$ $=$ 8 kV/cm $\perp$ H$_P$. In TMO, it is known that the polarization changes its direction from the $\emph{c}$- to the $\emph{a}$-axis when a magnetic field is applied along the $\emph{b}$-direction \cite{kimura,kimuragoto}. However, the observed polarization along the $\emph{a}$-direction is smaller because of the lack of complete flipping of the $\emph{bc}$ cycloidal phase, due to the high domain-wall formation energy, which is determined by the competition between the Zeeman energy and the magnetic anisotropy \cite{murakawa,kagawa,abe}.
In the present case, the polycrystalline TMO shows only a small decrease in polarization ($\Delta P$ $\approx$ $-$12\% at 80 kOe) with magnetic field, as shown in the inset of Fig. 3c. In contrast, a dramatic change of polarization is observed in the two mixed rare-earth manganites, as shown in the insets of Fig. 3a and 3b. The magnitude of the polarization at zero field is fairly large (almost four times that of TMO) in EDMO, and it increases drastically with magnetic field, as shown in the inset of Fig. 3a. Remarkably, the effect of the magnetic field on the polarization in GDMO is very large ($>$140\% at 80 kOe) (inset of Fig. 3b), although its zero-field value is comparable to that of TMO. The evolution of the pyrocurrent peak with applied magnetic field is shown in the right-hand-side insets. It is clear from these figures that the two-peak feature gradually becomes a single peak in both EDMO and GDMO, indicating the conversion of the LT cycloidal phase into the HT cycloidal phase. It is also interesting to note that the magnitude of the pyrocurrent and T$_C$ increase with magnetic field. We suggest that the large enhancement of the polarization is due to the coexistence of the $\emph{ab}$ and $\emph{bc}$ cycloidal phases at 0 kOe and the change of the $\emph{ab}$ cycloidal into the $\emph{bc}$ cycloidal phase in an applied magnetic field. At zero field, we observe a net polarization of the two components, i.e., along the $\emph{a}$ and $\emph{c}$ directions. Besides, we propose that the $\emph{bc}$ cycloidal regions can act as seeds for changing the rotation plane of the $\emph{ab}$ cycloidal phase, so that the rotation of the cycloidal plane becomes easier.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{Fig3.eps}
\caption{\label{fig:sine} Polarization data of (a) Eu$_{0.5}$Dy$_{0.5}$MnO$_3$, (b) Gd$_{0.5}$Dy$_{0.5}$MnO$_3$ and (c) TbMnO$_3$ measured at various magnetic fields. The insets on the right side show the pyrocurrent data. The insets on the left side show the normalized change of polarization $\Delta$P $=$ [(P(H)$-$P(0))/P(0)]$\times$100 \emph{vs.} magnetic field.}
\end{figure}
\begin{figure}[b!]
\centering
\includegraphics[width=\columnwidth]{Fig4.eps}
\caption{\label{fig:sine} Periodic change of polarization (left axis) and magnetic field (right axis) with time, recorded at 7 K after poling the sample from 30 to 7 K with $-$8 kV/cm and 80 kOe in GDMO.}
\end{figure}
Fig. 4 shows the polarization obtained by integrating the zero-bias magnetoelectric current measured isothermally at 7 K by sweeping the magnetic field at 100 Oe/sec from 80 to 0 to 80 kOe for six cycles, after magnetoelectric poling with E$_P$ $=$ $-$8 kV/cm $\perp$ H$_P$ $=$ 80 kOe from 30 to 7 K for GDMO. We see a sequential flipping of the polarization, without any decay, from positive to negative and from negative to positive upon ramping the magnetic field up from 0 to 80 kOe and then down to 0 kOe. Though similar switching behavior is reported in other multiferroics \cite{lee,hur,Kouji,Yamasaki}, it is not known in {\it R}MnO$_3$. The observed polarization reversal in the present case is explained by the sequential conversion of the cycloidal phases. As we poled the sample with H$_P$ $=$ 80 kOe, the polarization at 7 K should be directed along the $\emph{c}$-direction (only the $\emph{bc}$ cycloidal phase). Upon ramping down the field to zero, the $\emph{ab}$ cycloidal phase grows and the polarization flips by 90$\degree$ (to the $\emph{a}$-direction), and the net polarization decreases to zero at H $\sim$40 kOe, where the oppositely aligned domains (along the $\emph{c}$-direction) are equally populated.
When the field is further ramped down to zero, the polarization goes in the opposite (negative) direction. A complete switching of the polarization with equal magnitude is obtained over the field range of 0 to 80 kOe. The actual mechanism of the switching of polarization depends on the orientation of the neighbouring \emph{ab} and \emph{bc} cycloidal phases.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{Fig5.eps}
\caption{\label{fig:sine} Polarization data obtained by integrating the pyrocurrent (inset), recorded after poling the sample (EDMO) from 30 to 10 K with 8 kV/cm at 0 kOe, and then ramping the temperature from 10 K to T$_R$ and back to 10 K with zero bias.}
\end{figure}
Finally, we present the observation of the memory effect, i.e., the retention of the polarized state when the sample is warmed into the paraelectric state \cite{fina,taniguchi,finger}. In this memory experiment, the sample was first poled (E $=$ 8 kV/cm) from 30 K to 10 K. After poling the sample, the electrode wires were short-circuited and the sample was warmed up to a temperature, called the ramping temperature (T$_R$), and again cooled down to 10 K. After this ramping treatment, the pyrocurrent was recorded from 10 to 35 K at a rate of 4 K/min (see the inset of Fig. 5). Fig. 5a shows the integrated polarization versus temperature for T$_R$ $<$ T$_C$ (T$_R$ $=$ 15, 20, 23 K). As expected, the polarization decreases gradually with increasing T$_R$. Fig. 5(b) shows the polarization for T$_C$ $<$ T$_R$ $<$ T$_N$ (T$_R$ $=$ 27.5, 30, 35 and 40 K). Though the polarization decreases with increasing T$_R$, it is interesting to note that the polarization survives even after warming the sample above T$_C$. Fig. 5(c) shows the polarization data obtained from a similar measurement protocol for T$_R$ $>$ T$_N$ (T$_R$ $=$ 50, 60, 70, 80 and 90 K). Surprisingly, we see that the polarization is memorized up to 80 K and vanishes at 90 K. To observe the memory effect above T$_N$, it is very important to pole the sample from just above the cycloidal ordering temperature to avoid the formation of an internal electric field \cite{de}. In fact, we have shown that if we pole the sample from a slightly higher temperature (50 K), we do not see the memory effect above T$_N$ \cite{supp}. It is because of the poling temperature that the memory effect is not seen in Sm$_{0.5}$Y$_{0.5}$MnO$_3$ \cite{fina}. The variation of the polarization obtained at 10 K as a function of T$_R$ reveals the existence of three different slopes that correspond to temperatures below T$_C$, T$_C$ $\leq$ T $\leq$ T$_N$ and above T$_N$ \cite{supp}. To further confirm the memory effect, we poled ($-$8 kV/cm) the sample only once from 30 to 10 K and measured the pyroelectric current continuously while warming and cooling from 10 K to different T$_R$. In this measurement, we observed depolarization and polarization currents in each warming and cooling cycle for T$_R$ up to 80 K \cite{supp}. A similar memory effect is also observed in GDMO \cite{supp}. These results demonstrate the presence of the memory effect (cycloidal phase) at temperatures much above the ferroelectric transition. From the inset of Fig. 5, we see that the two shoulders in the pyrocurrent peak, which are indicative of the $\emph{ab}$ and $\emph{bc}$ cycloidal phases, gradually become one, with the LT peak disappearing with increasing T$_R$. This result suggests that the $\emph{bc}$ cycloidal phase (HT peak) is responsible for the memory effect.
However, we suggest that this finding requires further study of the dielectric response to explain how the cycloidal phase can exist in the paramagnetic region as well \cite{taniguchi}. In conclusion, we have shown the possible coexistence of the $\emph{ab}$ and $\emph{bc}$ cycloidal phases in the mixed rare-earth multiferroic manganites Eu$_{0.5}$Dy$_{0.5}$MnO$_3$ and Gd$_{0.5}$Dy$_{0.5}$MnO$_3$. As a result, these materials exhibit a large magnetic tunability of the polarization and a high magnetocapacitance. More importantly, the electric polarization can be switched by ramping the magnetic field. Further, the electric polarization retains its memory even in the paraelectric and paramagnetic regions. We suggest that these effects result from the coexistence of the cycloidal phases. The authors acknowledge the Sheikh Saqr Laboratory at the Jawaharlal Nehru Centre for Advanced Scientific Research for experimental facilities.
\section{Introduction} \label{sec:intro} Up-to-date observational data suggest that our universe is mainly driven by a pressure-less or cold dark matter (CDM) and a dark energy (DE) fluid, where around 96 per cent ($\sim$ 28 per cent DM $+$ 68 per cent DE) of the total energy budget of the universe is occupied by this joint dark fluid \citep{Aghanim:2018eyx}. The fundamental nature of these fluids $-$ their origin and dynamics $-$ is yet to be known even after a series of astronomical missions. Therefore, understanding the dark picture of the universe has remained one of the greatest challenges in cosmology. In order to reveal the physics of the dark sectors, various cosmological models have been proposed and investigated over the last several years \citep{Copeland:2006wr,Sotiriou:2008rp,Cai:2009zp,DeFelice:2010aj,Capozziello:2011et,Clifton:2011jh,Bamba:2012cp,Cai:2015emx,Nojiri:2017ncd,Bahamonde:2021gfp}. The standard cosmological model $\Lambda$CDM is one of the simplest cosmological models and fits most of the observational probes excellently. However, the physics of the dark fluids is not clear in this model either $-$ the cosmological constant problem, for instance, is a serious issue \citep{Weinberg:1988cp}. Additionally, in this canonical picture of the universe, several anomalies and tensions between different cosmological probes may indicate the need for a revision of the $\Lambda$CDM cosmology, see refs. \cite{2021arXiv210505208P,Schoneberg:2021qvd}. In the $\Lambda$CDM model, we assume the simplest possibility for its ingredients $-$ the independent evolution of DM and DE. As the physics of the dark sector is not yet clear, there is no reason to exclude the possibility of an interaction between these components. By allowing an interaction, or energy exchange, between DM and DE, one naturally generalizes the non-interacting scenarios. The theory of the dark sector interaction did not appear suddenly in the literature; the limitations of the standard cosmological model at the fundamental level motivated relaxing the independent evolution of DM and DE. For instance, an interaction in the dark sector can provide a promising explanation of the cosmic coincidence problem (\cite{delCampo:2008jx,Velten}), of the $H_0$ tension \citep{DiValentino:2017iww,Kumar:2017dnp, Yang:2018uae,Pan:2019jqh,DiValentino:2019ffd,Lucca:2020zjb,2021PDU....3300862K,2021CQGra..38o3001D,2021JHEAp..32...28A,2021arXiv211205701R,2021PhRvD.104l3512A,2021Univ....7..300T,DiValentino:2021pow} that arises between the CMB measurements by the Planck satellite within the $\Lambda$CDM cosmology \citep{Aghanim:2018eyx} and SH0ES \citep{Riess:2019cxk,2021arXiv211204510R}, and of the $S_8$ tension \citep{Pourtsidou:2016ico,An:2017crg,Kumar:2019wfs,2021PhRvD.104j4057D,2021PDU....3400899L,2022MNRAS.509.2994A} that arises between the Planck and weak lensing measurements \citep{S8_tension}. Additionally, an interaction in the dark sector could explain the phantom phase of DE without invoking a scalar field with a negative kinetic term \citep{Wang:2005jx,Sadjadi:2006qb,Pan:2014afa,Bonilla_1,Bonilla_2,2021JCAP...10..008Y}. See \cite{Bolotin:2013jpa,Wang:2016lxa} for a comprehensive reading on interacting dark energy models. Therefore, based on such appealing outcomes, it is indeed desirable to consider a wider picture of our universe by including the interaction between DM and DE, and to allow the observational data to favor or reject this possibility.
In a standard approach, the interaction between DM and DE is investigated through the inclusion of some phenomenological coupling function that describes the DM and DE dynamics intuitively. However, let us recall that some action formalisms, i.e., constructions of DE-DM interaction models from first principles, including the Noether symmetry approach, have also been developed in the literature, see e.g. \cite{2020PDU....2700444P,Gleyzes:2015pma,Boehmer:2015kta,Amico:2016qft,Kase:2019hor,2018arXiv180900556V,Pan:2020zza}. On the other hand, given the great interest of the community in this theoretical framework, accomplishing a model-independent analysis becomes a necessary task. In principle, one may do this using a cosmographic approach, wherein a series expansion is performed around $z = 0$ for a cosmological observable, and the data are then used to constrain the kinematic parameters. This procedure works fine for lower values of $z$, but may not be good enough for larger values of $z$, see \cite{2021PhRvD.104l3518L}. An interesting and robust alternative is to consider a Gaussian process (GP) to reconstruct the cosmological parameters in a model-independent way \citep{GP_01,GP_02,GP_03,GP_04,GP_05,GP_06,GP_07,GP_08,Jesus2020,GP_09,GP_10,2021arXiv211014950M,2021ApJ...915..123S,Bernardo:2021cxi,Dialektopoulos:2021wde,Bengaly:2021wgc,Avila:2022xad} or to fix a class of cosmological models \citep{2020CQGra..38e5007B,2021JCAP...09..014B,2021JCAP...07..048R,2021arXiv210401077E,2021PDU....3200812R}. The GP and other alternative approaches have been applied to reconstruct an interaction between DM and DE in a minimally model-dependent way in various works, with different data sets and approximations \citep{Yang2015, Wang2015, GP_IDE_01, GP_IDE_02, GP_IDE_03, GP_IDE_04, GP_IDE_05, GP_IDE_06, GP_IDE_07}. In this work, we employ the GP to carry out a joint analysis using several geometrical cosmological probes, viz., Cosmic Chronometers (CC), Supernovae Type Ia (SN), Baryon Acoustic Oscillations (BAO), and the H0LiCOW lens sample, to constrain/reconstruct the interaction in the dark sector of the universe in two different frameworks: first, the one where the equation of state (EoS) of DE mimics the vacuum energy (known as the interacting vacuum energy scenario), and secondly a general coupling scenario where DE is allowed to assume a dynamical character via its EoS. This latter possibility has not been studied much in the literature; most works consider only a constant or linear approximation of the EoS parameter of DE. Moreover, to our knowledge of the current literature, the reconstruction of the interaction in the dark sector has not been performed using a joint analysis. In addition, we also simulate a catalogue of 1000 standard siren events from binary neutron star mergers, within the sensitivity predicted for the third-generation ground-based GW detector called the Einstein Telescope (ET), and we use these mock data to improve the reconstruction of the coupling function from the SN, BAO, CC and H0LiCOW data. A model-independent joint analysis of the above-mentioned data sets, including a forecast analysis with simulated data for optimizing the covariance function (or kernel, in GP language), as presented here, is to our knowledge new and not previously investigated in the literature. Indeed, a joint analysis with several observational probes is helpful to obtain tight constraints on the cosmological parameters.
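To make the reconstruction step concrete before describing the full pipeline, the snippet below sketches a GP regression of $H(z)$ from Hubble-parameter data. It uses scikit-learn with a Mat\'ern $\nu = 9/2$ kernel, analogous to the $M_{9/2}$ covariance adopted later in this work; the data values are placeholders, and the actual analysis relies on the GaPP code, which also returns the derivatives $f'(z)$, $f''(z)$ with their covariances.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

# Placeholder CC-like data: z, H(z) [km/s/Mpc] and 1-sigma errors
z     = np.array([0.07, 0.20, 0.40, 0.90, 1.30, 1.75])
H     = np.array([69.0, 72.9, 95.0, 117.0, 168.0, 202.0])
sigma = np.array([19.6, 29.6, 17.0, 23.0, 17.0, 40.0])

# Matern nu = 9/2 kernel; the amplitude sigma_f^2 and length scale l
# are the hyperparameters optimized against the data
kernel = ConstantKernel(1.0e4) * Matern(length_scale=2.0, nu=4.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma**2,
                              n_restarts_optimizer=5)
gp.fit(z.reshape(-1, 1), H)

# Reconstructed mean and 1-sigma band on a fine grid
z_grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
H_mean, H_std = gp.predict(z_grid, return_std=True)
\end{verbatim}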
In this work, we develop this methodology to obtain an accurate and robust reconstruction of a possible interaction between DM and DE. The paper is structured as follows. In Section \ref{sec-method-data-theory}, we describe the GP, the observational data sets and the theoretical framework used in this work for a model-independent inference of the dark sector coupling. In Section \ref{sec-results}, we present and discuss our results on the reconstruction of the coupling function between DM and DE following the model-independent approach, wherein subsections \ref{sec-ivs} and \ref{sec-ide} describe two different reconstructed scenarios. Further, in Section \ref{sec-gw}, we use mock gravitational wave data in order to get a deeper understanding of the evolution of the coupling function. Finally, in Section \ref{sec-conclu}, we conclude our work with a brief summary of the entire study. \section{Methodology, data sets and the theoretical background} \label{sec-method-data-theory} This section is divided into three parts: the Gaussian process, the observational data, and a basic framework of the theory that we test in this article using the observational data following the model-independent Gaussian approach. \subsection{Gaussian process} \label{sec-gaussian} In a nutshell, the GP in cosmology allows us, given an observational data set $f(z_i)\pm \sigma_i$, to obtain a function $f(z)$ without the need to assume a parametrization or a physical model for the dark nature of the main components of the universe. The GP method describes the observed data through a distribution over functions. The reconstructed function $f(z)$ (and its derivatives $f'(z)$, $f''(z)$, etc.) has a Gaussian distribution with a mean and a Gaussian error at each point $z$. The function values at different points $z$ and $z'$ are related by a covariance function $k(z,z')$, which depends only on the kernel hyperparameters $l$ and $\sigma_f$, describing the extent and strength of the correlations among the reconstructed data points, respectively. Thus, $l$ gives a measure of the coherence length of the correlation in the $x$-direction, and $\sigma_f$ denotes the overall amplitude of the correlation in the $y$-direction. The hyperparameters are kept constant across the reconstruction; the GP optimizes both of them with respect to the observed data, so that their values correspond to a good fit of the function rather than to a model that mimics its behavior. In this sense, the GP method is independent of any physical model and assumes only a particular statistical kernel that determines the correlation between the reconstructed data points. The entire methodology used in this work is described in detail in section II of Ref.~\cite{Bonilla:2020wbn}. \subsection{Observational data sets} \label{sec-data} In this section we describe in detail the geometrical probes that we have used to trace the interaction in the dark sector. \begin{itemize} \item Cosmic Chronometers (CC): The CC approach is a very powerful way to trace the expansion history of the universe through measurements of the Hubble parameter. Here we take into consideration 30 measurements of the Hubble parameter distributed over the redshift interval $0 < z < 2$, as in Ref. \cite{Moresco16}. \\ \item Supernovae Type Ia (SNs): SNs were the first astronomical data probing the accelerating expansion of our universe.
Certainly, SNs are very important astronomical probes for analysing the properties of DE and the expansion history of the universe. The latest compilation of SN data (the Pantheon sample) that we have used in this work consists of 1048 SN data points in the redshift range $0.01 < z < 2.3$ \citep{Scolnic18}. In the context of a universe with zero curvature, the entire Pantheon sample can be summarized in terms of six model-independent $E(z)^{-1}$ data points \citep{Riess18}. Here we use the six data points as reported in ref. \cite{Haridasu18} in the form of $E(z)$, taking into account the theoretical and statistical considerations for their implementation. \\ \item Baryon Acoustic Oscillations (BAO): Another important cosmological probe is the BAO data. The expanding spherical wave produced by baryonic perturbations of acoustic oscillations in the recombination epoch can be traced through the correlation function of the large-scale structure, which displays a peak around 150$h^{-1} {\rm Mpc}$. Here we have used BAO measurements from various astronomical surveys: (i) measurements from the Sloan Digital Sky Survey (SDSS) III DR-12, which report three effective binned redshifts $z = 0.38, 0.51$ and $0.61$ \citep{Alam17}, (ii) measurements from the clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample, reporting four effective binned redshifts $z = 0.98, 1.23, 1.52$ and $1.94$, as in \cite{Zhao19}, (iii) measurements from the high-redshift Lyman-$\alpha$ surveys, reporting two effective binned redshifts at $z = 2.33$ \cite{du_Mas20} and $z = 2.4$ \cite{du_Mas17}. All the measurements are presented in terms of $H(z) \times (r_d/r_{d,fid})$ km s$^{-1}$Mpc$^{-1}$, where $r_d$ denotes the co-moving sound horizon and $r_{d,fid}$ is the fiducial input value provided by the above surveys. \\ \item H0LiCOW sample: Finally, we use the sample from the $H_0$ Lenses in COSMOGRAIL's Wellspring program \footnote{\url{www.h0licow.org}}, another geometrical probe in this list, which measures the Hubble constant in a direct way (without assuming any background model). The H0LiCOW collaboration has measured six lens systems through the time-delay distances, $D_{\Delta t}$, between multiple images of strong gravitational lens systems due to elliptical galaxies \citep{H0LiCOW}. The entire information is encapsulated in the time-delay distance $D_{\Delta t}$. Along with these six systems of strongly lensed quasars, the angular diameter distance to the lens, $D_l$, also offers additional information in terms of four more data points. Therefore, in total one can employ 10 data points, and we have used them in this work (we refer to \cite{Birrer2019,Pandey2020} for more details in this context). \end{itemize} \subsection{Theoretical framework} \label{sec-theory} For a model-independent theoretical description of the dark sector interaction, in this work we follow a methodology similar to that of ref. \cite{Yang2015}. In the context of a Friedmann$-$Lema\^{i}tre$-$Robertson$-$Walker universe, we assume that the total energy density of the universe is comprised of DE and DM only, with the two coupled through a non-gravitational interaction.
Thus, the conservation equations for DM and DE are modified as \begin{eqnarray} &&\dot{\rho}_{\rm DM} +3 H \rho_{\rm DM} = -Q (t)~,\label{cont1}\\ &&\dot{\rho}_{\rm DE} + 3 H \rho_{\rm DE} (1+w)= Q (t)~,\label{cont2} \end{eqnarray} where $w = p_{\rm DE}/\rho_{\rm DE}$ is the equation of state of DE ($p_{\rm DE}$ denotes the pressure of the DE fluid), and $H=\dot{a}/a$ is the expansion rate of the universe, related to the total energy density of the universe as $3H^2 = \rho_{\rm DM} + \rho_{\rm DE}$ (in units where $8 \pi G = 1$). The function $Q (t)$ describes the interaction between DM and DE, and is usually taken to be a function of the energy densities of DM and DE. For $Q (t) = 0$ with $w =-1$, the standard $\Lambda$CDM cosmology is recovered. Now, combining the conservation equations (\ref{cont1}) and (\ref{cont2}) with the expansion rate of the universe $H(z)$, we obtain \citep{Yang2015}: \begin{eqnarray} \label{eqn:WqE} -wq &=& 2 \Big(E E'^2 + E^2 E'' - \frac{w'}{w} E^2 E' \Big) (1+z)^2\nonumber \\ &&- \Big[ 2(5 + 3 w)E^2 E' - 3 \frac{w'}{w} E^3\Big](1+z)\nonumber \\ &&+ 9(1 + w)E^3, \end{eqnarray} where, for convenience, we have used the dimensionless variable $q = Q (t)/H^3_0$ to characterize the interaction, $E(z)=H(z)/H_0$ is the normalized Hubble rate, and the prime denotes differentiation with respect to the redshift $z$. Let us note that the symbol $q$ is usually used to represent the deceleration parameter in the literature, but here it has the different meaning defined above. The detailed derivation of equation (\ref{eqn:WqE}) is given in Appendix \ref{sec-appendix}. Now, using the normalized co-moving distance, \begin{eqnarray} \label{eqn:D} D = \frac{H_0}{c} \left(\frac{1}{1+z} \right) d_L(z), \end{eqnarray} where $d_L(z)$ represents the luminosity distance at redshift $z$, eq. (\ref{eqn:WqE}) can be expressed alternatively as \begin{eqnarray} \label{eqn:WqD} -wq &=& 2 \Big(\frac{3 D''^2}{D'^5} - \frac{D'''}{D'^4} + \frac{w' D''}{w D'^4} \Big) (1+z)^2\nonumber \\ && + \Big[2(5 + 3w)\frac{D''}{D'^4} + \frac{3 w'}{w D'^3}\Big](1+z)\nonumber\\ && + \frac{9(1 + w)}{D'^3}. \end{eqnarray} The above methodology represents a general framework to reconstruct the coupling function with minimal assumptions. The only assumptions, in fact, are the validity of the cosmological principle and a possible coupling between DM and DE taken as a theoretical prior, where this second assumption is to be tested with the observational data. In what follows, we will test this theoretical framework. \begin{figure*} \begin{center} \includegraphics[width=3.1in]{q_1.pdf} \,\,\,\, \includegraphics[width=3.1in]{q_2.pdf} \caption{Left-hand panel: Reconstructed coupling function $\delta (z)$ at $1\sigma$ and $2\sigma$ CL in the interacting vacuum energy scenario from CC+SN+BAO (Orange) and CC+SN+BAO+H0LiCOW (Blue) data. Right-hand panel: The same as in the left-hand panel, but restricted to the range $z \in [0, 0.5]$. The dashed black curve corresponds to the canonical $\Lambda$CDM prediction and the solid curves represent the GP mean. } \label{IVCDM_results01} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=3.1in]{w_q_1.pdf} \,\,\,\, \includegraphics[width=3.1in]{W_q_2.pdf} \caption{Left-hand panel: Reconstructed coupling function $\delta (z)$ at $1\sigma$ and $2\sigma$ CL in the general interacting scenario of the dark sector from CC+SN+BAO (Green) and CC+SN+BAO+H0LiCOW (Yellow) data.
Right-hand panel: The same as in the left-hand panel, but restricted to the range $z \in [0, 0.5]$. The dashed black curve corresponds to the canonical $\Lambda$CDM prediction and the solid curves stand for the GP mean. } \label{Geral_results} \end{center} \end{figure*} \section{Results and Discussions} \label{sec-results} In this section, we present and discuss the results of our analyses considering two separate cases. First, we fix the EoS of DE to $w=-1$ and reconstruct the interaction function $q = Q (t)/H_0^3$. This possibility characterizes a very well-known sub-class of interaction scenarios in the dark sector, known as interacting vacuum energy. Secondly, we consider a very general possibility, assuming $w$ to be a free and dynamical function, and similarly reconstruct the interaction function $q$. With both possibilities, we thus explore a very general description of the dark coupling in a model-independent approach. Before entering into the main results, we rescale the function $q$ (see eq. (\ref{eqn:WqD})) to $\delta (z) = q(1+z)^{-6}$. This pre-factor is simply a scale transformation with respect to $z$, introduced to better display the results graphically. Thus, from here onwards the function $\delta (z)$ characterizes the coupling function. We proceed considering the above two scenarios of coupling in the dark sector. To reconstruct $\delta (z)$, we use the $M_{9/2}$ kernel in all the analyses performed in this work. In the case where we assume $w(z)$ to be a free function, we follow the same methodology as presented in \cite{Bonilla:2020wbn}. For this purpose, we have used modified versions of some numerical routines available in the public GaPP (Gaussian Processes in Python) code \cite{GP_01}. In all of our analyses, we employ the GP to perform a joint analysis using the minimal data set combination CC+SN+BAO, which to our knowledge has not been investigated previously in the literature. We now present and discuss our main results. \subsection{Interacting Vacuum Energy} \label{sec-ivs} In Fig. \ref{IVCDM_results01}, we show the reconstruction of $\delta(z)$ using the data combinations CC+SN+BAO and CC+SN+BAO+H0LiCOW. In both analyses, we note that for $z > 0.5$ the dynamical coupling function $\delta (z)$ between the dark components is statistically well compatible with $\delta (z) =0$. It is interesting to note that the GP mean predicts a possible oscillation of $\delta (z)$ between positive and negative values over the analysed range of $z$. This result strengthens some earlier interaction models having a sign-changing property, see for instance \cite{Pan:2019jqh,Pan:2020bur,2021PhRvD.103h3520Y}. For the present scenario, at late cosmic times, i.e. for $z < 0.5$ ($z < 0.25$), we find a trend towards $\delta < 0$ for the CC+SN+BAO+H0LiCOW (CC+SN+BAO) data. When evaluated at the present moment, we find $\delta(z=0) = -0.37 \pm 0.24$ ($-0.76 \pm 0.12$) at $1\sigma$ CL from the CC+SN+BAO (CC+SN+BAO+H0LiCOW) data. This suggests an interaction in the dark sector at more than $3\sigma$ CL from the CC+SN+BAO+H0LiCOW joint analysis. It is important to emphasize that these constraints on $\delta(z)$ are subject to the condition $w=-1$. Also, we notice that the combined analysis with several data sets offers a more stringent bound on the interaction function compared to \cite{Yang2015}, where only the SN Ia Union 2.1 data set~\cite{Suzuki:2011hu} was employed.
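For concreteness, once the GP supplies $E(z)$ and its derivatives, the interacting vacuum case amounts to evaluating eq. (\ref{eqn:WqE}) with $w=-1$ and $w'=0$, followed by the rescaling $\delta(z) = q(1+z)^{-6}$. A minimal numpy sketch of this post-processing step (error propagation through the GP covariances is omitted here) reads:
\begin{verbatim}
import numpy as np

def delta_vacuum(z, E, E1, E2):
    # q(z) from eq. (WqE) with w = -1, w' = 0, where E = H/H0 and
    # E1, E2 are dE/dz and d^2E/dz^2 from the GP reconstruction
    q = (2.0 * (E * E1**2 + E**2 * E2) * (1.0 + z)**2
         - 4.0 * E**2 * E1 * (1.0 + z))
    return q * (1.0 + z)**(-6)     # delta(z) = q (1+z)^{-6}

# Sanity check: for a non-interacting flat LCDM, delta(z) vanishes
Om = 0.3
z  = np.linspace(0.0, 2.0, 201)
E  = np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)
E1 = 1.5 * Om * (1.0 + z)**2 / E
E2 = (3.0 * Om * (1.0 + z) - E1**2) / E
print(np.max(np.abs(delta_vacuum(z, E, E1, E2))))  # ~0 (round-off)
\end{verbatim}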
\subsection{General interaction scenario in the dark sector} \label{sec-ide} In the previous subsection, we analysed a particular interaction case, namely the interacting vacuum energy ($w=-1$), to obtain the constraints on $\delta(z)$. As a second round of analysis, we relax this condition by assuming $w(z)$ to be a free function. This possibility allows us to reconstruct the coupling in the dark sector in a general way, because in this case no physical assumption is imposed on the EoS of DE. In Fig. \ref{Geral_results}, we show the reconstruction of $\delta(z)$ from the CC+SN+BAO and CC+SN+BAO+H0LiCOW data combinations. Since in this scenario we have an additional free function $w$ through which errors propagate, larger error bars on the reconstructed $\delta (z)$ are expected compared to the case with $w=-1$. As a general feature of the GP mean, we can note a flux of energy from DM to DE at high $z$, while as cosmic time evolves, at approximately $z < 0.5$ (late times), the coupling function $\delta (z)$ reverses its sign. This again goes in support of some phenomenological models of the interaction \citep{Pan:2019jqh,Pan:2020bur}. In this general framework, we find $\delta(z=0) = -0.31 \pm 0.77$ at $1\sigma$ CL from the CC+SN+BAO data and $\delta(z=0) = -0.64 \pm 0.43$ at $1\sigma$ CL from the CC+SN+BAO+H0LiCOW data. These predictions are compatible with the $\Lambda$CDM cosmology, i.e., $\delta =0$. \begin{figure*} \begin{center} \includegraphics[width=3.1in]{q_3.pdf} \,\,\,\, \includegraphics[width=3.1in]{q_4.pdf} \caption{Left-hand panel: Reconstructed coupling function $\delta (z)$ at $1\sigma$ and $2\sigma$ CL from CC+SN+BAO+GW (Blue) and CC+SN+BAO+H0LiCOW+GW (Red) data, in the interacting vacuum energy scenario. Right-hand panel: The same as in the left-hand panel, but restricted to the range $z \in [0, 0.5]$. The dashed black curve corresponds to the canonical $\Lambda$CDM prediction and the solid curves are for the GP mean.} \label{results_mock_data} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=3.1in]{w_q_3.pdf} \,\,\,\, \includegraphics[width=3.1in]{W_q_4.pdf} \caption{Left-hand panel: Reconstructed coupling function $\delta (z)$ at $1\sigma$ and $2\sigma$ CL in the general interacting scenario of the dark sector from CC+SN+BAO+GW (Black) and CC+SN+BAO+H0LiCOW+GW (Purple) data. Right-hand panel: The same as in the left-hand panel, but restricted to the range $z \in [0, 0.5]$. The dashed black curve corresponds to the canonical $\Lambda$CDM prediction and the solid curves stand for the GP mean.} \label{results_mock_data_w} \end{center} \end{figure*} \section{Forecast from gravitational wave standard sirens} \label{sec-gw} To impose more robust and accurate constraints on the $\delta (z)$ function, we optimize the covariance function using mock gravitational wave (GW) data generated by assuming the $\Lambda$CDM model as the fiducial one. As argued in \cite{Seikel2013}, for a non-parametric regression method such as the GP, we aim to generate confidence limits such that the true function is trapped appropriately. This can be problematic when evaluating functions such as $w$ and $\delta (z)$, because they depend on the second- and third-order derivatives of the cosmological observable. We can avoid this problem by identifying an appropriate covariance function that reproduces the expected models accurately, which we achieve by adding simulated data.
To accomplish this objective, we create a mock catalogue of standard sirens, the gravitational wave analogue of astronomical standard candles, which can provide powerful information about the dynamics of the universe. For a given GW strain signal, $h(t) = A(t) \cos [\Phi(t)]$, the stationary-phase approximation can be used for the orbital phase of an inspiraling binary system to obtain its Fourier transform $\tilde{h}(f)$. For a coalescing binary system of masses $m_1$ and $m_2$, \begin{equation} \label{waveform} \tilde{h}(f) = Q \mathcal{A} f^{-7/6} e^{i\Phi(f)}. \end{equation} Here $\mathcal{A} \propto 1/d_L$ is the amplitude, inversely proportional to the luminosity distance $d_L$ at the merger's redshift, $\Phi(f)$ is the binary system's inspiral phase, and the factor $Q$, which depends on the detector geometry, should not be confused with the interaction function. For more details on the post-Newtonian coefficients and waveforms, one may refer to \cite{Agostino_Nunes2019} and Appendix A therein. After defining the GW signal, for a high enough signal-to-noise ratio (SNR), one may obtain upper bounds on the free parameters of the GW signal $\tilde{h}(f)$ by using a Fisher information analysis. Estimating $d_L(z)$ from mock GW standard siren data is a well-established approach, see \cite{Agostino_Nunes2019} and references therein. In what follows, we briefly describe the methodology used to generate the standard siren mock catalogue. In order to generate the mock standard siren catalogue, we consider the ET power spectral density noise. The ET is a third-generation ground-based detector covering frequencies in the range $1-10^4$ Hz. It is expected to be ten times more sensitive in signal amplitude than the current advanced ground-based detectors. The ET conceptual design study predicts BNS detections of order $10^3-10^7$ per year. However, only a small fraction ($\sim 10^{-3}$) of them is expected to be accompanied by a short $\gamma$-ray burst observation. Assuming a detection rate of $\mathcal{O}(10^5)$, the events with short $\gamma$-ray bursts will be $\mathcal{O}(10^2)$ per year. In our simulations, 1000 BNS mock GW standard siren merger events up to $z = 2$ are considered. In the mock catalogue, we have used the input values $H_0 = 67.4$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{m0} = 0.31$ for the Hubble constant and the matter density parameter, respectively, in agreement with the most recent Planck CMB data (within the $\Lambda$CDM paradigm) \cite{Aghanim:2018eyx}. We have estimated the measurement error on the luminosity distance for each event using a Fisher matrix analysis of the waveforms (see ref.~\cite{Agostino_Nunes2019} for details). We have calculated the SNR of each event and count it as a GW detection provided SNR $> 8$. In what follows, we describe the evolution of the interaction function after including the mock GW standard sirens alongside the standard cosmological probes. In Fig.~\ref{results_mock_data}, we show the reconstructed interaction function $\delta(z)$ from the CC+SN+BAO+GW and CC+SN+BAO+H0LiCOW+GW data combinations for the simple interaction scenario with $w = -1$. When evaluated at the present moment, we find $\delta(z=0) = -0.70 \pm 0.14$ at $1\sigma$ CL for CC+SN+BAO+GW, and $\delta(z=0) = -0.833 \pm 0.016$ at $1\sigma$ CL for CC+SN+BAO+H0LiCOW+GW under the interacting vacuum-energy assumption. Analysing the behavior of the $\delta$ function in the range $z \in [0, 2.5]$, we find evidence for a sign transition in $\delta$, which quantifies the interaction between the dark components. We clearly notice a preference for $\delta < 0$ at late times.
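Before turning to the general scenario, we illustrate the catalogue construction step referred to above. The sketch below (not the full Fisher-matrix pipeline of \cite{Agostino_Nunes2019}) draws merger redshifts up to $z = 2$, computes the fiducial flat-$\Lambda$CDM luminosity distance with the Planck input values quoted above, and scatters each $d_L$ with an assumed toy fractional error; both the uniform redshift distribution and the error model are simplifying stand-ins for the merger-rate distribution and the ET Fisher estimate used in the actual simulation.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C  = 299792.458          # speed of light in km/s
H0 = 67.4                # fiducial Planck LCDM inputs
OM = 0.31

def d_L(z):
    # luminosity distance (Mpc) in flat LCDM
    I, _ = quad(lambda x: 1.0 / np.sqrt(OM*(1+x)**3 + 1 - OM), 0.0, z)
    return (1.0 + z) * (C / H0) * I

rng = np.random.default_rng(42)
z_ev = rng.uniform(0.05, 2.0, size=1000)  # toy redshift distribution

catalogue = []
for z in z_ev:
    dl  = d_L(z)
    err = 0.05 * dl        # assumed toy error, NOT the Fisher error
    catalogue.append((z, rng.normal(dl, err), err))
\end{verbatim}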
In Fig.~\ref{results_mock_data_w}, we show the reconstructed interaction function $\delta(z)$ from the CC+SN+BAO+GW and CC+SN+BAO+H0LiCOW+GW data combinations, under the general assumption where $w(z)$ is a free function of $z$. In this case, we find $\delta(z=0) = -0.49 \pm 0.69$ at $1\sigma$ CL from CC+SN+BAO+GW and $\delta(z=0) = -0.705 \pm 0.066$ at $1\sigma$ CL from CC+SN+BAO+H0LiCOW+GW. In this analysis, even after including the GW mock data with CC+SN+BAO, we note that $\delta$ is compatible with $\Lambda$CDM. On the other hand, from the CC+SN+BAO+H0LiCOW+GW data, we notice a prediction for $\delta < 0$ at late times. It is important to note that the GW mock catalogue (generated assuming a fiducial $\Lambda$CDM model) is used for the purpose of optimizing the covariance function, as argued previously. The results summarized in Fig.~\ref{results_mock_data_w} are the most realistic, being the ones for the most general case. \section{Conclusions} \label{sec-conclu} In this work, we have presented some generalized aspects of the dark sector interaction that may be of interest to the community: (i) We have investigated the case where DE can assume a dynamical character through its equation of state, along with the simplest vacuum-energy case $w =-1$. (ii) We have studied joint analyses of the dark sector interaction with several geometrical probes following the minimally model-dependent GP approach. (iii) We have optimized the covariance function using mock GW standard sirens to better reconstruct the function $\delta(z)$. In short, all these pieces of investigation have led to more general and robust results, which could be helpful for a deeper understanding of the physics of the dark sector. Our observations are as follows. We find, for both the interacting vacuum and the general scenario, that $\delta (z)$ exhibits a transient nature in the analyses of the CC+SN+BAO and CC+SN+BAO+H0LiCOW data, and at very late times $\delta (z)$ enters the negative region (see Figs. \ref{IVCDM_results01} and \ref{Geral_results}). This conclusion remains unaltered when we include the 1000 mock GW standard sirens in the above two combined data sets (see Figs. \ref{results_mock_data} and \ref{results_mock_data_w}). However, the indication of a late-time interaction is most pronounced in the interacting vacuum scenario, where we find $\delta (z = 0) \neq 0$ at more than $1\sigma$ CL for the CC+SN+BAO data, and at more than $3\sigma$ CL for the CC+SN+BAO+H0LiCOW data. This is an interesting result, because the transfer of energy among the dark sector components observed in some phenomenological models is not ruled out by the model-independent, combined analyses that we have performed during the reconstruction. Concerning the general interacting picture, however, we see that $\delta (z =0) =0$ is compatible within $1\sigma$ for both the CC+SN+BAO and CC+SN+BAO+H0LiCOW data. When the GW standard sirens enter the analysis, we find for the interacting vacuum scenario that again $\delta (z =0) \neq 0$ at several standard deviations, for both the CC+SN+BAO+GW and CC+SN+BAO+H0LiCOW+GW data. For the general scenario, $\delta (z= 0)=0$ remains compatible within $1\sigma$ for the CC+SN+BAO+GW data, but for the CC+SN+BAO+H0LiCOW+GW data we find a strong preference for an interaction at several standard deviations.
Summarizing the results, we find that the model-independent analyses indicate a possible interaction in the dark sector, which is strongly preferred in the scenario with $w =-1$. Based on the findings of this study, we believe it will be worthwhile to investigate, in future communications, various statistical techniques for reconstructing the function $\delta (z)$, such as neural networks, principal component analysis, and others, which may provide statistical improvements over the standard GP method used here in the study of cosmological parameters. \\ \section*{Acknowledgements} \noindent The authors thank the referee for some useful comments that improved the manuscript. SK gratefully acknowledges the support from the Science and Engineering Research Board (SERB), Govt. of India (File No. CRG/2021/004658). RCN would like to thank the agency FAPESP for financial support under the project No. 2018/18036-5. SP acknowledges the Mathematical Research Impact-Centric Support Scheme (File No. MTR/2018/000940) of SERB, Govt. of India. \section*{Data Availability} The observational data used in this article will be shared on reasonable request to the corresponding author.
\section{Introduction} \label{sec:intro} Video frame interpolation (VFI), also known as video temporal super-resolution, is a significant video enhancement problem which aims to synthesize one or more visually coherent frames between two consecutive frames in a video, i.e., to up-scale the number of video frames. Such an up-scaling method finds usage in numerous video-based applications such as slow-motion video generation (e.g., in sports and TV commercials), video compression-decompression frameworks \cite{bframe}, generating short videos from GIF images \cite{gif2vid}, novel view synthesis \cite{flynn2016deepstereo} and medical imaging \cite{karargyris2010three,zinger2011view}. Earlier methods \cite{dvf,superslomo,toflow,featureflow,park2020bmbc} in this domain rely on estimating the optical flow between the interpolated frame and the source frames (i.e., neighboring frames). Once the optical flow is estimated, the interpolated frame can be synthesized by a simple warp operation from the source images. However, estimating an accurate optical flow between video frames is a hard problem in itself. Thus, some methods \cite{adaconv, sepconv, lee2019learning} relied on estimating per-pixel interpolation kernels to smoothly blend source frames to produce the interpolated frame. Further, some hybrid methods \cite{memc,dain} were also proposed to integrate optical flow and interpolation-kernel based approaches, exhibiting better performance than the earlier classes of methods. Most state-of-the-art interpolation algorithms take two neighboring frames as input to produce the intermediate frame. As a result, only linear motion can be modeled between the frames, either explicitly or implicitly. However, objects often follow complex, non-linear trajectories. To this end, researchers have recently focused on leveraging information from more than two neighboring frames \cite{xu2019quadratic, mprn, all_at_once, tridirectional, kalluri2020flavr}. 3D convolutional neural networks have been successful in many important computer vision tasks such as action recognition \cite{hara2018can, ji20123d,tran2018closer, hara2017learning}, object recognition \cite{maturana2015voxnet}, video object segmentation \cite{hou2019efficient} and biomedical volumetric image segmentation \cite{3dunet}. However, the application of 3D CNNs to the VFI task is largely unexplored. Recently, Kalluri et al. \cite{kalluri2020flavr} used a 3D UNet to directly synthesize interpolated frames. However, hallucinating pixel values from scratch can lead to blurry results, and simply copying pixels from nearby frames can produce better results \cite{dvf}. In this work, we propose a novel frame interpolation method. First, we compute bi-directional flow and occlusion maps from four neighboring frames and predict a non-linear flow model with the help of a 3D CNN. In this regard, we formulate a novel 3D CNN architecture, namely ``GridNet-3D'', inspired by \cite{yuanchen2021gridnet}, for efficient multi-scale feature aggregation. Further, the predicted non-linear flow model provides the coefficients of a quadratic formulation of inter-frame motion. The idea is that such an approach can adaptively select between linear and quadratic models by estimating suitable values for the coefficients. Intermediate backward flows are produced through flow reversal and motion refinement. Finally, two neighboring frames are warped and combined using a blending mask to synthesize the interpolated frame.
Our algorithm demonstrates state-of-the-art performance over existing approaches on multiple datasets. Our main contributions are summarized as follows: \begin{itemize} \setlength\itemsep{0.2em} \item We introduce a novel frame interpolation algorithm that utilizes both flow and occlusion maps between four input frames to estimate an automatically adaptable pixel-wise non-linear motion model to interpolate the frames. \item We propose a parameter- and runtime-efficient 3D CNN named ``GridNet-3D'' to aggregate multi-scale features efficiently. \item Through a set of comprehensive experiments on four publicly available datasets (Vimeo, DAVIS, HD and GoPro), we demonstrate that our method achieves state-of-the-art performance. \end{itemize} The rest of the paper is organized as follows: Section \ref{sec:relatedworks} discusses significant prior work in video frame interpolation, Section \ref{proposed} describes our algorithm, Section \ref{expt} contains experiments along with ablation studies, and finally Section \ref{sec:conclusion} summarizes the limitations of our approach and discusses possible future directions. \section{Related work} \label{sec:relatedworks} In this section, we briefly describe the methods that are relevant to this paper. \vspace{-0.3cm} \paragraph{Video-frame interpolation (VFI):} Based on the type of motion cues used, VFI methods can be classified into three main categories: \textbf{1) Optical flow based approaches:} In this class of methods, optical flow \cite{flownet2,meister2018unflow,pwcnet} is predominantly used as the motion cue for interpolating frames. Recent state-of-the-art methods use fully convolutional networks (FCN) \cite{dvf}, 2D UNets \cite{superslomo, unet}, multi-scale architectures \cite{toflow,spynet}, or bilateral cost volumes \cite{park2020bmbc} to predict backward optical flows to warp the neighboring frames and estimate the interpolated frame. In some methods, forward optical flow is also utilized \cite{ctxsyn, softsplat}. However, this class of methods relies on estimating optical flow with frame-level 2D CNNs, which might not capture motion features well. Our work belongs to this category, with the key difference that we use a 3D CNN spanning a large spatiotemporal extent to better estimate per-pixel non-linear motion. \textbf{2) Phase-based approaches:} Estimating accurate optical flow is a hard problem, especially in the presence of large motion, illumination variations and motion blur. An alternative cue to optical flow is the phase-based modification of pixels. These methods estimate low-level features such as per-pixel phase \cite{phase2015}, Fourier decompositions of images using steerable pyramids \cite{phasenet}, or phase and amplitude features using one-dimensional separable Gabor filters \cite{zhou2019frame} to estimate the interpolated frame. \textbf{3) Kernel-based approaches:} Different from optical flow and phase-based methods, kernel-based methods strive to estimate per-pixel kernels to blend patches from neighboring frames. Some of these methods employ adaptive convolution \cite{adaconv}, adaptive separable convolution \cite{sepconv} or adaptive deformable convolutions \cite{lee2019learning}. Apart from these categories, some methods propose a hybrid approach to utilize the advantages of multiple cues. For instance, a combination of interpolation kernels and optical flow \cite{memc}, or of optical flow and depth \cite{dain}, is employed to obtain complementary features.
\vspace{-0.3cm} \paragraph{Multi-frame VFI approaches:} Recent methods have started using multiple frames to capture complex motion dynamics between frames. For instance, Choi et al. \cite{tridirectional} utilize three frames and the bi-directional optical flow between them to generate the intermediate flows, and use warping and a frame generation module to estimate the final interpolated frame. Chi et al. \cite{all_at_once} use cubic modeling and a pyramid-style network to produce seven intermediate frames. Similarly, Xu et al. \cite{xu2019quadratic} use four frames to model a quadratic motion between frames. They estimate the quadratic motion parameters via an analytical solution involving optical flow. Our method uses four frames and estimates a non-linear (quadratic) motion model similar to \cite{xu2019quadratic}. However, we show that using a powerful 3D CNN to estimate the motion parameters instead of an analytical solution performs significantly better (ref. Section \ref{sec:sota_comparison}). \vspace{-0.3cm} \paragraph{3D CNN models:} 3D CNNs are prevalent in computer vision tasks involving spatio-temporal input (video-based tasks) such as action recognition \cite{hara2018can, ji20123d,tran2018closer, hara2017learning}, video object segmentation \cite{hou2019efficient} and video captioning \cite{chen2019temporal,aafaq2019spatio}. Related to the VFI task, Zhang et al. \cite{mprn} developed a Multi-frame Pyramid Refinement (MPR) scheme using a 3D UNet to estimate intermediate flow maps from four input frames. Kalluri et al. \cite{kalluri2020flavr} utilize a 3D encoder-decoder architecture to directly synthesize interpolated frames from four input frames. Differing from these methods, we apply a 3D CNN to optical flow and occlusion maps to predict non-linear motion coefficients. \begin{figure*}[!ht] \centering \includegraphics[width=0.9\textwidth]{main_diagram_v4.pdf} \vspace{-0.3cm} \caption{Overview of our interpolation algorithm. The non-linear motion estimation module produces the forward flows ($ F_{0\rightarrow t}, F_{1\rightarrow t}$), which are used to generate the backward flows ($F_{t\rightarrow 0},F_{t\rightarrow 1}$). These backward flows are refined using a motion refinement module. Finally, a blending mask $M$ is estimated and used to fuse the warped frames to generate the interpolated frame $I_t$.} \label{overview} \vspace{-0.3cm} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=0.9\textwidth]{NME_v2.pdf} \vspace{-0.3cm} \caption{Non-linear motion estimation module. First, bi-directional flow and occlusion maps are estimated, which are fed to a 3D UNet to generate the flow representation $\alpha, \beta$. This flow representation is used to produce the forward intermediate flows using the quadratic formulation.} \label{NME} \vspace{-0.5cm} \end{figure*} \section{Space-time convolution network for non-linear motion estimation} \label{proposed} Determining the motion trajectories of pixels is essential to model the transition of pixel values from one frame to the next. Traditional methods use optical flow to achieve this goal under the assumptions of brightness constancy and velocity smoothness, and use a linear model for interpolation. While some recent methods have used a quadratic model for flow estimation with improved results, such a model is not applicable in certain scenarios such as motion discontinuities and occlusions.
In this work, we opt to use a 3D CNN encoder-decoder architecture to estimate per-pixel non-linear motion that can easily switch between a linear and a quadratic model. Specifically, the 3D CNN takes a set of bi-directional optical flows and occlusion maps between consecutive video frames \{$I_{-1}, I_0, I_1, I_2$\} to estimate the non-linear motion model that is utilized by other modules to predict an interpolated frame $I_t$, where $t \in (0,1)$; i.e., the output frame $I_t$ needs to be coherent in terms of appearance and motion between $I_0$ and $I_1$. An overview of our framework is shown in Figure \ref{overview}. The framework consists of five modules, namely: 1) Non-linear motion estimation (NME) module, 2) Backward flow estimation (BFE) module, 3) Motion refinement (MR) module, 4) Blending mask estimation (BME) module, and 5) Frame synthesis. The details of each module are described in the following sections. \subsection{Non-linear motion estimation (NME) module} \label{sec:nme_module} Recent methods attempt to overcome the linear motion assumption by modeling a non-linear motion. Xu et al. \cite{xu2019quadratic} proposed a quadratic motion model in time $t$, i.e., with the assumption that pixel motion follows a quadratic trajectory of the form $\alpha t + \beta t^2$. They estimate the motion model parameters $\alpha, \beta$ by an analytical formula derived using per-pixel optical flow. However, such a quadratic assumption cannot be applied to pixels with unreliable optical flow estimates (\eg occluded pixels). Using such unreliable optical flow estimates may lead to inaccurate intermediate flow estimation and may end up with erroneous interpolation results. Instead of directly estimating the quadratic motion parameters from optical flow, we attempt to estimate $\alpha, \beta$ through a 3D CNN model. To learn suitable $\alpha$ and $\beta$ in the non-linear motion model, given the input frames \{$I_{-1},I_0, I_1, I_2$\}, we first estimate bi-directional flow and occlusion maps between neighboring frames using a pre-trained \textit{PWCNet-Bi-Occ} network \cite{irr}. Architecture-wise, \textit{PWCNet-Bi-Occ} is based on the state-of-the-art optical flow network \textit{PWCNet} \cite{pwcnet}. It takes two frames \{$I_x, I_{y}$\} as input and extracts multi-scale feature maps of each frame. At each scale, a correlation volume is computed between the corresponding feature maps of the frames. Then, bidirectional optical flows \{$F_{x\rightarrow{y}}, F_{y\rightarrow{x}}$\} and occlusion maps \{$O_{x\rightarrow{y}}, O_{y\rightarrow{x}}$\} are obtained as output at each level by following a coarse-to-fine strategy. We use the optical flow and occlusion map outputs from the finest level in our work. The bi-directional optical flows $\{F_{i\rightarrow{(i+1)}}, F_{{(i+1)}\rightarrow i}\}_{i=-1}^{1}$ and occlusion maps $\{O_{i\rightarrow{(i+1)}}, O_{{(i+1)}\rightarrow i}\}_{i=-1}^{1}$ are arranged in temporal order, resulting in a 5D tensor of size $B \times 6 \times \text{\#frames} \times H \times W$. Here $B, H, W$ denote batch size, height and width respectively, and the 6 channels correspond to the bi-directional optical flows and occlusion maps. This tensor is passed through a 3D CNN model to estimate a representation of dimension $B\times 4 \times 2 \times H \times W$. The temporal dimension of 2 corresponds to $t=0$ and $t=1$. In each temporal slice, we predict two coefficient maps $\alpha$ and $\beta$, each with 2 channels.
We refer to these coefficients $\alpha, \beta$ as the flow representation. Now the per-pixel non-linear motion $F_{0\rightarrow t}$ of frame $I_0$ towards the interpolated frame $I_t$ is given by: \begin{equation} F_{0\rightarrow t} = \alpha_0 \times t + \beta_0 \times t^{2} \end{equation} Similarly, $F_{1\rightarrow t}$ is given by: \begin{equation} F_{1\rightarrow t} = \alpha_1 \times (1-t) + \beta_1 \times (1-t)^{2} \end{equation} Estimating the coefficients $\alpha_0, \beta_0, \alpha_1$ and $\beta_1$ through a neural network instead of an analytical solution \cite{xu2019quadratic} offers the following advantages: 1) The network can flexibly choose between linear and non-linear motion; for pixels following a linear motion, the network may predict $\beta = 0$. 2) Unlike \cite{xu2019quadratic}, learned estimates of the $\alpha$'s and $\beta$'s are better equipped to handle occlusion by utilizing the occlusion maps. 3) Having access to a large temporal receptive field of 4 frames, the non-linear motion coefficients estimated through a 3D CNN can determine more accurate motion than \cite{xu2019quadratic}, which relies on optical flow alone to estimate the coefficients. Figure \ref{NME} shows the pipeline of the non-linear motion estimation module. \textbf{Network specification:} We formulate the NME module to predict $\alpha, \beta$ with two crucial design choices in mind: 1) to capture spatiotemporal features and 2) to incorporate multi-scale features efficiently. 3D CNNs are the natural choice for capturing spatiotemporal features among video frames. However, the existing architectures for pixel-wise tasks (\textit{e.g.}, UNet-3D \cite{kalluri2020flavr}) adopt a single-stream encoder-decoder style architecture that aggregates multi-scale features through sequential downsampling and skip-connections, which may result in information loss \cite{fourure2017residual}. Inspired by the success of GridNet \cite{gridnet,ctxsyn} in efficiently incorporating multi-resolution features, we formulate a novel 3D version of GridNet, namely ``GridNet-3D'', by replacing its 2D convolutional filters with 3D convolutional filters. \textit{GridNet-3D} consists of three parallel streams to capture features at different resolutions, and each stream has five convolutional blocks arranged in a sequence as shown in Fig. \ref{3dgridnet}. Each convolutional block is made up of two conv-3D layers with a residual connection. The three parallel streams have channel dimensions of 16, 32 and 64, respectively. The communication between the streams is handled by a set of \textit{downsampling} and \textit{upsampling} blocks. The \textit{downsampling} block consists of spatial max pooling of stride 2 followed by one conv-3D layer, whereas the \textit{upsampling} block consists of one bilinear upsampling layer followed by two conv-3D layers. \begin{figure} \centering \includegraphics[width=0.5\textwidth,keepaspectratio]{gridnet3d-new} \vspace{-0.3cm} \caption{Novel GridNet-3D architecture for efficient multi-scale feature aggregation, inspired by \cite{yuanchen2021gridnet}. It consists of three parallel streams operating at different feature resolutions, and the communication between streams is handled by \textit{downsampling} and \textit{upsampling} blocks (refer Sec. \ref{sec:nme_module}).} \vspace{-0.7cm} \label{3dgridnet} \end{figure} We perform a set of comparative studies with prevalent architectures such as UNet-3D \cite{kalluri2020flavr} and UNet-2D \cite{superslomo}, and report the results in Sec. \ref{modelconfig}.
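To fix the notation, a minimal PyTorch sketch of the quadratic flow computation in the two equations above follows; how the $B \times 4 \times 2 \times H \times W$ output splits into $(\alpha, \beta)$ per temporal slice is our illustrative convention here, and the tensor names are hypothetical.
\begin{verbatim}
import torch

def forward_intermediate_flows(rep, t):
    # rep: (B, 4, 2, H, W) NME output; the temporal slices are t=0
    # and t=1, and the 4 channels per slice hold the two 2-channel
    # maps alpha and beta (channel split assumed for illustration)
    alpha0, beta0 = rep[:, :2, 0], rep[:, 2:, 0]
    alpha1, beta1 = rep[:, :2, 1], rep[:, 2:, 1]

    f0t = alpha0 * t + beta0 * t**2              # F_{0->t}
    f1t = alpha1 * (1 - t) + beta1 * (1 - t)**2  # F_{1->t}
    return f0t, f1t          # beta = 0 recovers purely linear motion

# Example: one 256x256 sample, interpolating the midpoint t = 0.5
rep = torch.randn(1, 4, 2, 256, 256)
f0t, f1t = forward_intermediate_flows(rep, t=0.5)
\end{verbatim}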
\subsection{Backward flow estimation (BFE) module} The non-linear motions ($F_{0\rightarrow t}, F_{1\rightarrow t}$) estimated in the NME module are forward intermediate flows. To make use of the backward warping operation \cite{stn} on the frames $I_0$ and $I_1$, we require the backward intermediate flows ($F_{t \rightarrow 0}$ and $F_{t \rightarrow 1}$). To achieve this, we use the differentiable flow reversal layer proposed by \cite{xu2019quadratic} to obtain $F_{t \rightarrow 0}$ and $F_{t \rightarrow 1}$ from $F_{0 \rightarrow t}$ and $F_{1 \rightarrow t}$, respectively. The backward flow at a pixel position $\textbf{x}$ is formulated as the weighted average of the forward flows of all pixels $\textbf{p}$ that fall into the neighborhood of pixel $\textbf{x}$. $F_{t \rightarrow 0}$ at pixel position $\textbf{x} = (x,y)$ is given by, \small{ \begin{equation} F_{t \rightarrow 0}(\textbf{x}) = \frac{ \sum_{\textbf{p}+ F_{0 \rightarrow t}(\textbf{p}) \in N(\textbf{x})} w(\textbf{x}, \textbf{p} + F_{0 \rightarrow t}(\textbf{p}))\,(-F_{0 \rightarrow t}(\textbf{p}))}{\sum_{\textbf{p}+ F_{0 \rightarrow t}(\textbf{p}) \in N(\textbf{x})} w(\textbf{x}, \textbf{p} + F_{0 \rightarrow t}(\textbf{p}))} \label{flow_reversal} \end{equation} } where $N(\textbf{x})$ denotes a $2\times2$ neighborhood around $\textbf{x}$ and $w(.,.)$ is a weighting function given by: \begin{equation} w(\textbf{a},\textbf{b}) = e^{-||\textbf{a} -\textbf{b}||^2_2} \end{equation} Following a similar procedure to Equation \ref{flow_reversal}, $F_{t\rightarrow 1}$ is computed from $F_{1 \rightarrow t}$. \subsection{Motion refinement (MR) module} To further refine the estimated backward flows ($F_{t \rightarrow 0}$ and $F_{t \rightarrow 1}$), we use a learning-based motion refinement approach \cite{xu2019quadratic}. To this end, the refinement network takes the concatenated source frames, warped frames and flow maps as input and applies a fully convolutional network to generate per-pixel offsets $(\Delta x, \Delta y)$ and residuals $r(x, y)$. The refined optical flow $F^r_{t\rightarrow 0}$ at pixel $(x,y)$ is given by: \begin{equation} F^r_{t\rightarrow 0}(x,y) = F_{t\rightarrow 0}(x+\Delta x, y+\Delta y) + r(x,y) \end{equation} $F_{t\rightarrow 1}$ is refined in a similar manner to obtain $F^r_{t\rightarrow 1}$. We experiment with two types of motion refinement networks in this work, namely: 1) UNet-2D \cite{unet,xu2019quadratic}, and 2) GridNet-2D \cite{gridnet, ctxsyn, dutta2021efficient}. We finally choose GridNet-2D as the motion refinement network due to its superior performance (ref. Section \ref{ablation_sec}). \subsection{Blending mask estimation (BME) module} The refined backward motions $F^r_{t \rightarrow 0}$ and $F^r_{t \rightarrow 1}$ are used to warp the images $I_0$ and $I_1$ to yield two estimates $I_{t0}, I_{t1}$ of the interpolated frame $I_t$. However, merging these two estimates is not straightforward; the naive approach of averaging them and using the result as the interpolated frame $I_t$ gives sub-par results. To improve the quality of the interpolated frame, we use a learnable CNN that takes as input the stack of warped frames and intermediate feature maps from the previous step and outputs a soft blending mask $M$. The BME module consists of three convolutional layers followed by a sigmoid activation function \cite{xu2019quadratic} to generate the mask $M$. \subsection{Frame synthesis} We linearly blend the warped frames using the blending mask \cite{superslomo} computed by the BME module.
The final interpolated frame $I_t$ is given by: \small{ \begin{equation} \hat{I}_{t} = \frac{ (1-t) \times M \odot bw(I_0, F^r_{t\rightarrow0}) + t \times (1-M) \odot bw(I_1, F^r_{t\rightarrow1})} {(1-t) \times M + t \times (1-M)} \label{hr_syn} \end{equation} } where $bw(.,.)$ denotes the backward warping function. \section{Datasets, Experiments and Results} \label{expt} \subsection{Datasets} We have used the following datasets of different image resolutions in our experiments. \textbf{Vimeo Septuplet dataset:} The Vimeo Septuplet dataset \cite{toflow} consists of 72,436 frame septuplets of resolution $256 \times 448$. This dataset is divided into a training subset of 64,612 septuplets and a test subset of 7,824 septuplets. We use the 1\textsuperscript{st}, 3\textsuperscript{rd}, 5\textsuperscript{th} and 7\textsuperscript{th} frames of each septuplet as input frames and the 4\textsuperscript{th} frame as the interpolation ground truth. We use the training subset of this dataset for training and evaluate the model on the other datasets without fine-tuning. \textbf{DAVIS dataset:} The DAVIS-2017 TrainVal dataset \cite{davis} contains 90 video clips with diverse scenes and complex motions. We utilize its 480p counterpart for evaluation purposes. We extract 2,849 quintuplets from the provided video sequences. \textbf{HD dataset:} Bao et al. \cite{memc} collected 11 HD videos consisting of four 544p, three 720p and four 1080p videos. We extract 456 quintuplets from these videos and discard 8 quintuplets with blank frames and scene changes. Finally, we use 448 quintuplets for evaluation. \textbf{GoPro dataset:} The GoPro dataset proposed by Nah et al. \cite{nah2017deep} contains 33 720p videos captured at 720 FPS. We extract 1,500 sets of 25 images from the test split consisting of 11 videos. We use the 1\textsuperscript{st}, 9\textsuperscript{th}, 17\textsuperscript{th} and 25\textsuperscript{th} frames as input frames, and the 13\textsuperscript{th} frame is used as the interpolation target. \subsection{Training Details} We develop our models using the PyTorch \cite{pytorch} framework. During training, we optimize the network using the Adam optimizer \cite{adam} with the following hyper-parameters: batch size = 64, $\beta_1=0.9$ and $\beta_2=0.999$, input frame size = random crop of $256\times256$. The learning rate is initially set to $2\times10^{-4}$ and is divided by a factor of 10 when the loss plateaus. The \textit{PWCNet-Bi-Occ} network \cite{irr} is kept frozen until the learning rate reaches $2\times10^{-6}$, after which it is fine-tuned with the whole network. The model takes around 16 epochs to converge. Code will be released on GitHub upon acceptance. \begin{table*}[!ht] \caption{Effect of different CNN architectures used in the NME module.
Best and second best scores are colored in \textcolor{red}{{red}} and \textcolor{blue}{{blue}} respectively.} \vspace{-0.3cm} \small \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}CNN used \\ in NME\end{tabular}} & \multicolumn{2}{c|}{Vimeo Septuplet} & \multicolumn{2}{c|}{DAVIS} & \multicolumn{2}{c|}{HD} & \multicolumn{2}{c|}{GoPro} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Params\\ (M)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Runtime\\ (s)\end{tabular}} \\ \cline{2-9} & \textbf{PSNR} & \textbf{SSIM} & \textbf{PSNR} & \textbf{SSIM} & \textbf{PSNR} & \textbf{SSIM} & \textbf{PSNR} & \textbf{SSIM} & & \\ \hline UNet-2D & 34.76 & 0.9537 & 27.34 & 0.8254 & 31.21 & 0.8971 & 28.90 & 0.8793 & \textcolor{blue}{38.30} & \textcolor{red}{0.18} \\ \hline UNet-3D & \textcolor{blue}{34.96} & \textcolor{red}{0.9545} & \textcolor{blue}{27.46} & \textcolor{blue}{0.8278} & \textcolor{blue}{31.31} & \textcolor{blue}{0.8976} & \textcolor{blue}{29.01} & \textcolor{blue}{0.8826} & 60.55 & 0.37 \\ \hline GridNet-3D & \textcolor{red}{34.99} & \textcolor{blue}{0.9544} & \textcolor{red}{27.53} & \textcolor{red}{0.8281} & \textcolor{red}{31.49} & \textcolor{red}{0.9000} & \textcolor{red}{29.08} & \textcolor{red}{0.8826} & \textcolor{red}{20.92} & \textcolor{blue}{0.32} \\ \hline \end{tabular} \label{nme_abl_tab} \end{table*} \begin{figure*}[!ht] \centering \includegraphics[width=0.65\textwidth]{nme_abl.pdf} \vspace{-10px} \caption{Qualitative comparison between different CNN architectures used in the NME module.} \label{nme_abl_fig} \end{figure*} \begin{table*}[!ht] \caption{Quantitative comparison between UNet and GridNet as the MR module.} \vspace{-0.3cm} \centering \small \begin{tabular}{|c|cc|cc|cc|cc|c|c|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Motion Refinement\\ module\end{tabular}} & \multicolumn{2}{c|}{Vimeo Septuplet} & \multicolumn{2}{c|}{DAVIS} & \multicolumn{2}{c|}{HD} & \multicolumn{2}{c|}{GoPro} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Params\\ (M)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Runtime\\ (s)\end{tabular}} \\ \cline{2-9} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & & \\ \hline UNet-2D & \multicolumn{1}{c|}{34.70} & 0.9532 & \multicolumn{1}{c|}{27.32} & 0.8260 & \multicolumn{1}{c|}{31.02} & 0.8944 & \multicolumn{1}{c|}{28.81} & 0.8798 & 78.11 & \textbf{0.37} \\ \hline GridNet-2D & \multicolumn{1}{c|}{\textbf{34.96}} & \textbf{0.9475} & \multicolumn{1}{c|}{\textbf{27.46}} & \textbf{0.8278} & \multicolumn{1}{c|}{\textbf{31.31}} & \textbf{0.8976} & \multicolumn{1}{c|}{\textbf{29.01}} & \textbf{0.8826} & \textbf{60.55} & \textbf{0.37} \\ \hline \end{tabular} \centering \label{flow_ref} \vspace{-0.3cm} \end{table*} \begin{figure*}[!ht] \centering \includegraphics[width=0.7\textwidth]{mr_ablation_new.pdf} \vspace{-10px} \caption{Qualitative comparison between different MR modules.} \label{fig_ref} \vspace{-0.3cm} \end{figure*} \subsection{Objective Functions} Following prior work \cite{superslomo}, we use the following loss functions to train our model. \textbf{Reconstruction Loss:} We use the $L_1$ loss to capture the reconstruction quality of the predicted intermediate frames.
Reconstruction loss $\mathcal{L}_r$ is given by: \begin{equation} \mathcal{L}_r = \left\|\hat{{I}_{t}}-I_{t}\right\|_{1} \end{equation} Here, $\hat{{I}_{t}}$ and $I_{t}$ refer to the predicted interpolated RGB frame and the ground-truth RGB frame, respectively. \textbf{Perceptual Loss:} Penalizing the difference between features extracted from the initial layers of a pre-trained image classification network helps generate images of higher perceptual quality \cite{johnson2016perceptual}. Perceptual loss $\mathcal{L}_p$ is given by: \begin{equation} \mathcal{L}_p = \left\|\phi(\hat{{I}_{t}})-\phi(I_{t})\right\|_{2} \end{equation} where $\phi(.)$ denotes the function that extracts features from the conv4\_3 layer of a pretrained VGGNet-16. \textbf{Warping Loss:} The $L_1$ loss between the warped frames and the ground-truth intermediate frame is used as the warping loss, since more accurate flow predictions bring the warped frames closer to the ground-truth intermediate frame. \begin{equation} \mathcal{L}_w = \left\|I_t - bw(I_0, F^r_{t\rightarrow0})\right\|_1 + \left\|I_t - bw(I_1, F^r_{t\rightarrow1})\right\|_1 \end{equation} Here, \textit{bw(.,.)} again denotes the backward warping function. \textbf{Smoothness Loss:} A total variation (TV) loss is used as the smoothness loss to encourage smooth intermediate optical flow predictions. \begin{equation} \mathcal{L}_s = \left\| \nabla F^r_{t\rightarrow0}\right\|_1 + \left\| \nabla F^r_{t\rightarrow1}\right\|_1 \end{equation} Our final loss is a linear combination of all the loss functions described above. \begin{equation} \mathcal{L} = \lambda_r \mathcal{L}_r + \lambda_p \mathcal{L}_p + \lambda_w \mathcal{L}_w + \lambda_s \mathcal{L}_s \end{equation} We choose $\lambda_r = 204$, $\lambda_p = 0.005$, $\lambda_w = 102$ and $\lambda_s=1$, following the unofficial SuperSloMo repository \cite{ssm_unoff}. In the later phase of training, when the learning rate is low, we disable the warping and smoothness losses by setting $\lambda_w$ and $\lambda_s$ to 0, which helps the network focus on improving the final reconstruction quality of the interpolated frame. \begin{table*}[!htb] \centering \caption{Quantitative comparison with state-of-the-art methods. Best and second best scores are colored in \textcolor{red}{{red}} and \textcolor{blue}{{blue}} respectively.
* - TOFlow \cite{toflow} was trained on the Vimeo-Triplet dataset; all other methods are trained on the Vimeo-Septuplet dataset.} \label{sota_comp} \small \begin{tabular}{|c|c|cc|cc|cc|cc|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Input\\ frames\end{tabular}} & \multicolumn{2}{c|}{Vimeo Septuplet} & \multicolumn{2}{c|}{DAVIS} & \multicolumn{2}{c|}{HD} & \multicolumn{2}{c|}{GoPro} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Params\\ (M)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Runtime\\ (s)\end{tabular}} \\ \cline{3-10} & & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & & \\ \hline TOFlow* \cite{toflow} & 2 & \multicolumn{1}{c|}{33.46} & 0.9399 & \multicolumn{1}{c|}{25.49} & 0.7577 & \multicolumn{1}{c|}{\textcolor{blue}{30.94}} & 0.8854 & \multicolumn{1}{c|}{27.08} & 0.8286 & \textcolor{red}{1.07} & 0.10 \\ \hline SepConv \cite{sepconv} & 2 & \multicolumn{1}{c|}{33.04} & 0.9334 & \multicolumn{1}{c|}{25.38} & 0.7428 & \multicolumn{1}{c|}{30.24} & 0.8784 & \multicolumn{1}{c|}{26.88} & 0.8166 & 21.6 & \textcolor{blue}{0.024} \\ \hline SuperSloMo \cite{superslomo} & 2 & \multicolumn{1}{c|}{33.46} & 0.9423 & \multicolumn{1}{c|}{25.84} & 0.7765 & \multicolumn{1}{c|}{30.37} & 0.8834 & \multicolumn{1}{c|}{27.31} & 0.8367 & 39.61 & 0.025 \\ \hline CAIN \cite{cain} & 2 & \multicolumn{1}{c|}{31.70} & 0.9106 & \multicolumn{1}{c|}{24.89} & 0.7235 & \multicolumn{1}{c|}{29.22} & 0.8523 & \multicolumn{1}{c|}{26.81} & 0.8076 & 42.78 & \textcolor{red}{0.02} \\ \hline BMBC\footnotemark \cite{park2020bmbc} & 2 & \multicolumn{1}{c|}{31.34} & 0.9054 & \multicolumn{1}{c|}{23.50} & 0.6697 & \multicolumn{1}{c|}{-} & - & \multicolumn{1}{c|}{24.62} & 0.7399 & \textcolor{blue}{11.0} & 0.41 \\ \hline Tridirectional \cite{tridirectional} & 3 & \multicolumn{1}{c|}{32.73} & 0.9331 & \multicolumn{1}{c|}{25.24} & 0.7476 & \multicolumn{1}{c|}{29.84} & 0.8692 & \multicolumn{1}{c|}{26.80} & 0.8180 & 10.40 & 0.19 \\ \hline QVI \cite{xu2019quadratic} & 4 & \multicolumn{1}{c|}{\textcolor{blue}{{34.50}}} & \textcolor{blue}{{0.9521}} & \multicolumn{1}{c|}{\textcolor{blue}{{27.36}}} & \textcolor{red}{{0.8298}} & \multicolumn{1}{c|}{30.92} & \textcolor{blue}{{0.8971}} & \multicolumn{1}{c|}{\textcolor{blue}{{28.80} }} & \textcolor{blue}{{0.8781}} & 29.22 & 0.10 \\ \hline FLAVR \cite{kalluri2020flavr} & 4 & \multicolumn{1}{c|}{33.56} & 0.9372 & \multicolumn{1}{c|}{25.74} & 0.7589 & \multicolumn{1}{c|}{29.96} & 0.8758 & \multicolumn{1}{c|}{27.76} & 0.8436 & 42.06 & 0.20 \\ \hline Ours & 4 & \multicolumn{1}{c|}{\textcolor{red}{{34.99}}} & \textcolor{red}{{0.9544}} & \multicolumn{1}{c|}{\textcolor{red}{{27.53}}} & \textcolor{blue}{{0.8281}} & \multicolumn{1}{c|}{\textcolor{red}{{31.49}}} & \textcolor{red}{{0.9000}} & \multicolumn{1}{c|}{\textcolor{red}{{29.08}}} & \textcolor{red}{{0.8826}} & 20.92 & 0.32 \\ \hline \end{tabular} \end{table*} \footnotetext{BMBC encountered an out-of-memory error when tested on the HD dataset.} \begin{figure*}[!ht] \centering \includegraphics[width=0.85\textwidth]{new_sota_comp_short.pdf} \vspace{-10px} \caption{Qualitative comparison of our method with other state-of-the-art algorithms.} \vspace{-0.5cm} \label{sota_comp_fig1} \end{figure*} \subsection{Experiments on model configurations} \label{modelconfig} In this section, we perform comparative studies among the different choices available for NME
(UNet-2D \cite{superslomo}, UNet-3D \cite{kalluri2020flavr}, GridNet-3D) and MR (UNet-2D \cite{unet}, GridNet-2D \cite{gridnet}) modules to determine the best-performing configuration. \textbf{Choice of NME module:} We experiment with three different architectures for the NME module: 1) UNet-2D \cite{superslomo}, 2) UNet-3D \cite{kalluri2020flavr}, and 3) the novel GridNet-3D proposed in this paper. We report the quantitative performance with the different NME modules in Table \ref{nme_abl_tab}, along with the number of parameters and runtimes. We observe that the 3D-CNN versions of the NME module generally outperform UNet-2D. Further, GridNet-3D performs better than UNet-3D on the DAVIS, HD and GoPro datasets while having fewer parameters and a lower runtime. \textbf{Choice of MR modules:} We experiment with two types of motion refinement modules: UNet-2D \cite{unet} and GridNet-2D \cite{gridnet}. We use a standard encoder-decoder architecture with skip connections for UNet-2D. In GridNet-2D, encoder and decoder blocks are laid out in a grid-like fashion to carry multi-scale feature maps through to the final layer. The quantitative comparison in Table \ref{flow_ref} shows that using GridNet-2D as the MR module performs significantly better than UNet-2D. The qualitative comparison in Figure \ref{fig_ref} illustrates that GridNet-2D reduces the smudge effect in the interpolated frame compared to UNet-2D. From Table \ref{flow_ref}, we can also infer that using GridNet-2D as the MR module reduces the total number of parameters of the model while the runtime remains unchanged. Based on these experiments, we use GridNet-3D as the NME module and GridNet-2D as the MR module in the state-of-the-art comparisons and in the ablation studies unless specified otherwise. \subsection{Comparison with state-of-the-arts \label{sec:sota_comparison}} We compare our model with multiple state-of-the-art methods: TOFlow \cite{toflow}, Sepconv-$\mathcal{L}_1$ \cite{sepconv}, SuperSloMo \cite{superslomo}, CAIN \cite{cain}, BMBC \cite{park2020bmbc}, QVI \cite{xu2019quadratic}, Tridirectional \cite{tridirectional} and FLAVR \cite{kalluri2020flavr}. For the comparison with TOFlow, the official pretrained model \cite{toflow_repo} is used. All other models are trained on the Vimeo-Septuplet training set with the same learning rate schedule and batch size as ours for a fair comparison. We use the unofficial repositories of SuperSloMo \cite{ssm_unoff} and Sepconv \cite{sepconv_unoff} to train the corresponding models. Note that the official pretrained models of other methods might produce different results due to differences in training data and training settings. During evaluation, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) \cite{ssim} are used as evaluation metrics to compare performance. Quantitative comparisons with state-of-the-art methods on the Vimeo, DAVIS, HD and GoPro datasets are shown in Table \ref{sota_comp}. The number of parameters and the average runtime to produce a frame of resolution $256 \times 448$ on an NVIDIA 1080Ti GPU are also reported for each model. Our method achieves the best PSNR and SSIM scores on the Vimeo, HD and GoPro datasets, and the best PSNR and second-best SSIM on the DAVIS dataset. Qualitative comparison with other methods is shown in Figure \ref{sota_comp_fig1}.
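For concreteness, we also include a minimal Python sketch of the PSNR computation used in our evaluation; the helper name and the assumption of 8-bit frames are ours, and SSIM follows the standard definition of \cite{ssim}.
\begin{verbatim}
import numpy as np

def psnr(gt, pred, data_range=255.0):
    # Peak Signal-to-Noise Ratio between a ground-truth frame `gt` and
    # an interpolated frame `pred`, both H x W x 3 arrays in [0, 255].
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
\end{verbatim}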
\begin{table*}[!ht] \vspace{-0.3cm} \caption{Effect of different input features to 3D CNN.} \vspace{-0.3cm} \centering \small \begin{tabular}{|c|cc|cc|cc|cc|c|c|} \hline \multirow{2}{*}{Input} & \multicolumn{2}{c|}{Vimeo Septuplet} & \multicolumn{2}{c|}{DAVIS} & \multicolumn{2}{c|}{HD} & \multicolumn{2}{c|}{GoPro} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Params\\ (M)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Runtime\\ (s)\end{tabular}} \\ \cline{2-9} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & & \\ \hline RGB & \multicolumn{1}{c|}{34.12} & 0.9474 & \multicolumn{1}{c|}{26.34} & 0.7883 & \multicolumn{1}{c|}{30.80} & 0.8854 & \multicolumn{1}{c|}{28.34} & 0.8642 & \textbf{61.89} & \textbf{0.23} \\ \hline Flow + Occlusion & \multicolumn{1}{c|}{\textbf{34.70}} & \textbf{0.9532} & \multicolumn{1}{c|}{\textbf{27.32}} & \textbf{0.8260} & \multicolumn{1}{c|}{\textbf{31.02}} & \textbf{0.8944} & \multicolumn{1}{c|}{\textbf{28.81}} & \textbf{0.8798} & 78.11 & 0.37 \\ \hline \end{tabular} \label{input_abl} \vspace{-0.3cm} \end{table*} \begin{figure*}[!ht] \centering \includegraphics[width=0.65\textwidth]{rgb.png} \vspace{-10px} \caption{Qualitative comparison between RGB and Flow+Occlusion as input to 3D CNN.} \label{rgb_fig} \vspace{-0.1cm} \end{figure*} \begin{table*}[!ht] \caption{Quantitative significance of BFE, MR and BME modules.} \vspace{-0.3cm} \centering \small \begin{tabular}{|c|cc|cc|cc|cc|c|c|} \hline \multirow{2}{*}{} & \multicolumn{2}{c|}{Vimeo Septuplet} & \multicolumn{2}{c|}{DAVIS} & \multicolumn{2}{c|}{HD} & \multicolumn{2}{c|}{GoPro} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Params\\ (M)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Runtime\\ (s)\end{tabular}} \\ \cline{2-9} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & \multicolumn{1}{c|}{\textbf{PSNR}} & \textbf{SSIM} & & \\ \hline without BFE, MR and BME & \multicolumn{1}{c|}{33.91} & 0.9443 & \multicolumn{1}{c|}{26.05} & 0.7686 & \multicolumn{1}{c|}{30.72} & 0.8811 & \multicolumn{1}{c|}{28.12} & 0.8583 & \textbf{42.07} & \textbf{0.20} \\ \hline with BFE, MR and BME & \multicolumn{1}{c|}{\textbf{34.12}} & \textbf{0.9474} & \multicolumn{1}{c|}{\textbf{26.34}} & \textbf{0.7883} & \multicolumn{1}{c|}{\textbf{30.80}} & \textbf{0.8854} & \multicolumn{1}{c|}{\textbf{28.34}} & \textbf{0.8642} & 61.89 & 0.23 \\ \hline \end{tabular} \label{rev_abl_tab} \vspace{-0.3cm} \centering \end{table*} \begin{figure*}[!ht] \centering \includegraphics[width=0.65\textwidth]{rev_v2.pdf} \vspace{-10px} \caption{Qualitative comparison between intermediate flowmap and blending mask estimation with and without BFE, MR and BME modules.} \label{rev_fig} \vspace{-0.5cm} \end{figure*} \begin{figure*}[!ht] \centering \includegraphics[width=0.8\textwidth]{flovis_v9.pdf} \vspace{-0.8cm} \caption{Intermediate flow visualization between QVI and our approach.} \vspace{-0.5cm} \label{flovis_abl} \end{figure*} \subsection{Ablation studies} \label{ablation_sec} \textbf{Choice of input features (RGB vs. Flow+Occlusion):} To demonstrate the importance of flow and occlusion maps, we perform an experiment where we use RGB frames as input to the 3D CNN. 
Quantitative comparison between these two approaches is shown in Table \ref{input_abl}, along with the number of parameters and runtimes. Both experiments in Table \ref{input_abl} use UNet-2D as the MR module. We observe that using Flow+Occlusion maps as input performs better than using RGB frames. Qualitative comparison in Figure \ref{rgb_fig} shows that the interpolated results are more accurate when Flow+Occlusion maps are used instead of RGB. Note that our model with RGB input already performs better than FLAVR \cite{kalluri2020flavr} (refer to Table \ref{sota_comp}). This signifies that frame generation by hallucinating pixels from scratch \cite{kalluri2020flavr} is harder for neural networks than frame generation by warping neighboring frames. \textbf{Importance of BFE, MR and BME modules: } To understand the importance of the BFE, MR and BME modules, we re-purpose the NME module to directly predict the non-linear backward flows $F_{t\rightarrow0}$, $F_{t\rightarrow1}$ and blending mask $M$. In this experiment, we use RGB frames as input to the NME module. The quantitative comparison in Table \ref{rev_abl_tab} illustrates that directly estimating the backward flows and mask (without BFE, MR and BME) performs worse than estimating them with the BFE, MR (UNet-2D) and BME modules. In addition, the qualitative comparison in Figure \ref{rev_fig} shows that direct estimation of backward flows may lead to ghosting artifacts due to inaccurate flow estimation near motion boundaries. \textbf{Intermediate flow visualizations:} We visualize the backward flow $F^r_{t\rightarrow0}$ estimated by QVI \cite{xu2019quadratic} and our approach in Figure \ref{flovis_abl}. We notice that the erroneous results in QVI's \cite{xu2019quadratic} interpolated frame are caused by incorrect estimation of the backward flow. However, our method remedies this by accurately estimating the backward flow, as visualized in the absolute flow difference map in Figure \ref{flovis_abl}. \vspace{-0.3cm} \section{Conclusion} \label{sec:conclusion} In this paper, we presented a 3D CNN based frame interpolation algorithm in which the bi-directional flow and occlusion maps between neighboring frames are passed as input to a 3D CNN to predict per-pixel non-linear motion. This makes our network flexible enough to choose between linear and quadratic motion models instead of relying on a fixed motion model as in prior work. Our method achieves state-of-the-art results on multiple datasets. Since the flow and occlusion estimates from \textit{PWCNet-Bi-Occ} are often inaccurate and can therefore create a performance bottleneck in the interpolation task, further research can explore whether including RGB frames as an additional input to the 3D CNN improves performance. Finally, flow representation estimation for cubic motion modeling can also be investigated in the future. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} With the proliferation of online social networks, the problem of optimally influencing the opinions of individuals in a population has garnered tremendous attention \cite{Domingos-01, Richardson-01, Kempe-01}. The prevailing paradigm treats marketing as a viral process, whereby the advertiser is given a budget of seed infections and chooses the subset of individuals to infect such that the spread of the ensuing contagion is maximized. The development of algorithmic methods for influence maximization under the viral paradigm has been the subject of vigorous study, resulting in a number of efficient techniques for identifying meaningful marketing strategies in real-world settings \cite{Mossel-01,Goyal-01, Gomez-Rodriguez-01}. While the viral paradigm accurately describes out-of-equilibrium phenomena, such as the introduction of new ideas or products to a system, these models fail to capture reverberant opinion dynamics wherein repeated interactions between individuals in the network give rise to complex macroscopic opinion patterns, as, for example, is the case in the formation of political opinions \cite{Galam-03,Isenberg-01,Mas-01,Moussaid-01}. In this context, rather than maximizing the spread of a viral advertisement, the marketer is interested in optimally shifting the equilibrium opinions of individuals in the network. To describe complex macroscopic opinion patterns resulting from repeated microscopic interactions, we naturally employ the language of statistical mechanics, treating individual opinions as spins in an Ising system at dynamic equilibrium and modeling marketing as the addition of an external magnetic field. The resulting problem, which we call \textit{Ising influence maximization (IIM)}, has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external field. While a number of models have been proposed for describing reverberant opinion dynamics \cite{De-01}, our use of the Ising model follows a vibrant interdisciplinary literature \cite{Montanari-01,Castellano-01}, and is closely related to models in game theory \cite{Blume-01,McKelvey-01} and sociophysics \cite{Galam-02,Sznajd-01}. Furthermore, complex Ising models have found widespread use in machine learning, and our model is formally equivalent to a pair-wise Markov random field or a Boltzmann machine \cite{Kindermann-01, Tanaka-01, Nishimori-01}. Our main contributions are as follows: \begin{enumerate} \item We formalize the influence maximization problem in the context of the Ising model, which we call the \textit{Ising influence maximization (IIM)} problem. We also propose the \textit{mean-field Ising influence maximization (MF-IIM)} problem as an approximation to IIM (Section 2). \item We find sufficient conditions under which the MF-IIM objective is smooth and concave, and we present a gradient ascent algorithm that guarantees an $\epsilon$-approximation to MF-IIM (Section 4). \item We present numerical simulations that probe the structure and performance of MF optimal marketing strategies. We find that at high temperatures, it is optimal to focus influence on high-degree individuals, while at low temperatures, it is optimal to spread influence among low-degree individuals (Sections 5 and 6). \item Throughout the paper we present a number of novel results concerning the structure of steady-states in the ferromagnetic MF Ising model on general (weighted, directed) strongly-connected graphs, which are of independent interest. 
We name two highlights: \begin{itemize} \item The well-known pitchfork bifurcation structure for the ferromagnetic MF Ising model on a lattice extends exactly to general strongly-connected graphs, and the critical temperature is equal to the spectral radius of the adjacency matrix (Theorem 3). \item There can exist at most one stable steady-state with non-negative (non-positive) components, and it is smooth and concave (convex) in the external field (Theorem 4). \end{itemize} \end{enumerate} \section{The Ising influence maximization problem} We consider a weighted, directed social network consisting of a set of individuals $N=\{1,\hdots,n\}$, each of which is assigned an opinion $\sigma_i\in\{\pm 1\}$ that captures its current state. By analogy with the Ising model, we refer to $\bm{\sigma}=(\sigma_i)$ as a spin configuration of the system. Individuals in the network interact via a non-negative weighted coupling matrix $J\in \mathbb{R}^{n\times n}_{\ge 0}$, where $J_{ij}\ge 0$ represents the amount of influence that individual $j$ holds over the opinion of individual $i$, and the non-negativity of $J$ represents the assumption that opinions of neighboring individuals tend to align, known in physics as a ferromagnetic interaction. Each individual also interacts with forces external to the network via an external field $\bm{h}\in\mathbb{R}^n$. For example, if the spins represent the political opinions of individuals in a social network, then $J_{ij}$ represents the influence that $j$ holds over $i$'s opinion and $h_i$ represents the political bias of node $i$ due to external forces such as campaign advertisements and news articles. The opinions of individuals in the network evolve according to asynchronous Glauber dynamics. At each time $t$, an individual $i$ is selected uniformly at random and her opinion is updated in response to the external field $\bm{h}$ and the opinions of others in the network $\bm{\sigma}(t)$ by sampling from \begin{eqnarray} P\left(\sigma_i(t+1) = 1|\bm{\sigma}(t)\right) = \frac{e^{\beta\left(\sum_j J_{ij}\sigma_j(t) + h_i\right)}}{\sum_{\sigma'_i=\pm 1} e^{\beta \sigma'_i\left(\sum_j J_{ij}\sigma_j(t) + h_i\right)}}, \label{Glauber} \end{eqnarray} where $\beta$ is the inverse temperature, which we refer to as the \textit{interaction strength}, and unless otherwise specified, sums are assumed over $N$. Together, the quadruple $(N,J, \bm{h},\beta)$ defines our system. We refer to the total expected opinion, $M=\sum_i\left< \sigma_i\right>$, as the \textit{magnetization}, where $\left<\cdot\right>$ denotes an average over the dynamics in Eq. (\ref{Glauber}), and we often consider the magnetization as a function of the external field, denoted $M(\bm{h})$. Another important concept is the \textit{susceptibility} matrix, $\chi_{ij} = \frac{\partial \left<\sigma_i\right>}{\partial h_j}$, which quantifies the response of individual $i$ to a change in the external field on node $j$. We study the problem of maximizing the magnetization of an Ising system with respect to the external field. We assume that an external field $\bm{h}$ can be added to the system, subject to the constraints $\bm{h}\ge 0$ and $\sum_i h_i\le H$, where $H>0$ is the \textit{external field budget}. Since adding external field never decreases the magnetizations considered in this paper, we may assume that the budget is spent in full, and we denote the set of feasible external fields by $\mathcal{F}_H= \left\{\bm{h}\in\mathbb{R}^n : \bm{h}\ge 0, \sum_i h_i = H \right\}$. In general, we also assume that the system experiences an initial external field $\bm{b}\in \mathbb{R}^n$, which cannot be controlled.
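As an illustration, the asynchronous dynamics in Eq. (\ref{Glauber}) can be simulated directly. The following Python sketch is a minimal implementation of our own and is not part of the formal development; all function and variable names are illustrative.
\begin{verbatim}
import numpy as np

def glauber_step(sigma, J, h, beta, rng):
    # One asynchronous Glauber update: choose a node uniformly at
    # random and resample its opinion from the conditional distribution.
    i = rng.integers(len(sigma))
    x = beta * (J[i] @ sigma + h[i])       # scaled local field at node i
    p_up = 1.0 / (1.0 + np.exp(-2.0 * x))  # P(sigma_i = +1 | rest)
    sigma[i] = 1 if rng.random() < p_up else -1
    return sigma

# Usage sketch:
# rng = np.random.default_rng(0)
# sigma = rng.choice(np.array([-1, 1]), size=J.shape[0])
\end{verbatim}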
\textbf{Definition 1.} (\textit{Ising influence maximization (IIM)}) Given a system $(N, J, \bm{b},\beta)$ and a budget $H$, find a feasible external field $\bm{h}\in \mathcal{F}_H$ that maximizes the magnetization; that is, find an optimal external field $\bm{h}^*$ such that \begin{equation} \bm{h}^* = \arg\max_{\bm{h}\in\mathcal{F}_H} M(\bm{b}+\bm{h}). \end{equation} \paragraph{Notation.} Unless otherwise specified, bold symbols represent column vectors with the appropriate number of components, while non-bold symbols with subscripts represent individual components. We often abuse notation and write relations such as $\bm{m}\ge 0$ to mean $m_i\ge 0$ for all components $i$. \subsection{The mean-field approximation} In general, calculating expectations over the dynamics in Eq. (\ref{Glauber}) requires Monte-Carlo simulations or other numerical approximation techniques. To make analytic progress, we employ the variational mean-field approximation, which has roots in statistical physics and has long been used to tackle inference problems in Boltzmann machines and Markov random fields \cite{Yedidia-01,Jordan-01,Opper-01,Saul-01}. The mean-field approximation replaces the intractable task of calculating exact averages over Eq. (\ref{Glauber}) with the problem of solving the following set of self-consistency equations: \begin{equation} \label{MF} m_i = \tanh\left[\beta\left(\sum_j J_{ij}m_j + h_i\right)\right], \end{equation} for all $i\in N$, where $m_i$ approximates $\left<\sigma_i\right>$. We refer to the right-hand side of Eq. (\ref{MF}) as the \textit{mean-field map}, $\bm{f}(\bm{m})=\tanh\left[\beta(J\bm{m}+\bm{h})\right]$, where $\tanh(\cdot)$ is applied component-wise. In this way, a fixed point of the mean-field map is a solution to Eq. (\ref{MF}), which we call a \textit{steady-state}. In general, there may be many solutions to Eq. (\ref{MF}), and we denote by $\mathcal{M}_{\bm{h}}$ the set of steady-states for a system $(N,J,\bm{h},\beta)$. We say that a steady-state $\bm{m}$ is \textit{stable} if $\rho(\bm{f}'(\bm{m}))<1$, where $\rho(\cdot)$ denotes the spectral radius and \begin{equation} \bm{f}'(\bm{m})_{ij} = \left.\frac{\partial f_i}{\partial m_j}\right\vert_{\bm{m}} = \beta\left(1-m_i^2\right)J_{ij} \quad \Rightarrow \quad \bm{f}'(\bm{m}) = \beta D(\bm{m}) J, \end{equation} where $D(\bm{m})_{ij} = (1-m_i^2)\delta_{ij}$. Furthermore, under the mean-field approximation, given a stable steady-state $\bm{m}$, the susceptibility has a particularly nice form: \begin{equation} \chi_{ij}^{MF} = \beta\left(1-m_i^2\right)\left(\sum_k J_{ik}\chi^{MF}_{kj} + \delta_{ij}\right) \quad \Rightarrow \quad \chi^{MF} = \beta\left(I-\beta D(\bm{m})J\right)^{-1}D(\bm{m}), \end{equation} where $I$ is the $n\times n$ identity matrix. For the purpose of uniquely defining our objective, we optimistically choose to maximize the maximum magnetization among the set of steady-states, defined by \begin{equation} M^{MF}(\bm{h}) = \max_{\bm{m}\in\mathcal{M}_{\bm{h}}} \sum_i m_i. \end{equation} We note that the pessimistic framework of maximizing the minimum magnetization yields an equally valid objective. We also note that simply choosing a steady-state to optimize does not yield a well-defined objective since, as $\bm{h}$ increases, steady-states can pop in and out of existence.
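Numerically, a steady-state can be located by iterating the mean-field map to a fixed point. The sketch below is a minimal implementation of ours; the tolerance and the all-ones starting point are our choices, and Appendix B shows that iterating from $\bm{1}$ converges to the maximal stable non-negative steady-state whenever one exists.
\begin{verbatim}
import numpy as np

def mf_steady_state(J, h, beta, tol=1e-10, max_iter=100000):
    # Fixed-point iteration of the mean-field map
    # f(m) = tanh(beta * (J m + h)), started from the all-ones vector.
    m = np.ones(J.shape[0])
    for _ in range(max_iter):
        m_next = np.tanh(beta * (J @ m + h))
        if np.max(np.abs(m_next - m)) < tol:
            return m_next
        m = m_next
    return m
\end{verbatim}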
\textbf{Definition 2.} (\textit{Mean-field Ising influence maximization (MF-IIM)}) Given a system $(N, J, \bm{b},\beta)$ and a budget $H$, find an optimal external field $\bm{h}^*$ such that \begin{equation} \bm{h}^* = \arg\max_{\bm{h}\in\mathcal{F}_H} M^{MF}(\bm{b}+\bm{h}). \end{equation} \section{The structure of steady-states in the MF Ising model} Before proceeding further, we must prove an important result concerning the existence and structure of solutions to Eq. (\ref{MF}), for if there exists a system that does not admit a steady-state, then our objective is ill-defined. Furthermore, if there exists a unique steady-state $\bm{m}$, then $M^{MF}=\sum_i m_i$, and there is no ambiguity in our choice of objective. Theorem 3 establishes that every system admits a steady-state and that the well-known pitchfork bifurcation structure for steady-states of the ferromagnetic MF Ising model on a lattice extends exactly to general (weighted, directed) strongly-connected graphs. In particular, for any strongly-connected graph described by $J$, there is a \textit{critical interaction strength} $\beta_c$ below which there exists a unique and stable steady-state. For $\bm{h}=0$, as $\beta$ crosses $\beta_c$ from below, two new stable steady-states appear, one with all-positive components and one with all-negative components. Interestingly, the critical interaction strength is equal to the inverse of the spectral radius of $J$, denoted $\beta_c=1/\rho(J)$. \textbf{Theorem 3.} \textit{Any system $(N,J,\bm{h},\beta)$ exhibits a steady-state. Furthermore, if its network is strongly-connected, then, for $\beta< \beta_c$, there exists a unique and stable steady-state. For $\bm{h}=0$, as $\beta$ crosses $\beta_c$ from below, the unique steady-state gives rise to two stable steady-states, one with all-positive components and one with all-negative components.} \textit{Proof sketch.} The existence of a steady-state follows directly by applying Brouwer's fixed-point theorem to $\bm{f}$. For $\beta<\beta_c$, it can be shown that $\bm{f}$ is a contraction mapping, and hence admits a unique and stable steady-state by Banach's fixed point theorem. For $\bm{h}=0$ and $\beta<\beta_c$, $\bm{m}=0$ is the unique steady-state and $\bm{f}'(\bm{m})=\beta J$. Because $J$ is strongly-connected, the Perron-Frobenius theorem guarantees a simple eigenvalue equal to $\rho(J)$ and a corresponding all-positive eigenvector. Thus, when $\beta$ crosses $1/\rho(J)$ from below, the Perron-Frobenius eigenvalue of $\bm{f}'(\bm{m})$ crosses 1 from below, giving rise to a supercritical pitchfork bifurcation with two new stable steady-states corresponding to the Perron-Frobenius eigenvector. \textit{Remark.} Some of our results assume $J$ is strongly-connected in order to use the Perron-Frobenius theorem. We note that this assumption is not restrictive, since any graph can be efficiently decomposed into strongly-connected components on which our results apply independently. Theorem 3 shows that the objective $M^{MF}(\bm{b}+\bm{h})$ is well-defined. Furthermore, for $\beta<\beta_c$, Theorem 3 guarantees a unique and stable steady-state $\bm{m}$ for all $\bm{b}+\bm{h}$. In this case, MF-IIM reduces to maximizing $M^{MF}=\sum_i m_i$, and because $\bm{m}$ is stable, $M^{MF}(\bm{b}+\bm{h})$ is smooth for all $\bm{h}$ by the implicit function theorem. Thus, for $\beta<\beta_c$, we can use standard gradient ascent techniques to efficiently calculate locally-optimal solutions to MF-IIM.
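Theorem 3 also makes the critical interaction strength, as well as the stability condition $\rho(\bm{f}'(\bm{m}))<1$, directly computable. A short sketch of ours follows; dense eigensolvers suffice at the network sizes considered later.
\begin{verbatim}
import numpy as np

def critical_beta(J):
    # beta_c = 1 / rho(J) for a strongly-connected J (Theorem 3).
    return 1.0 / np.max(np.abs(np.linalg.eigvals(J)))

def is_stable(m, J, beta):
    # A steady-state m is stable iff rho(beta * D(m) J) < 1,
    # where D(m) = diag(1 - m_i^2).
    jac = beta * (1.0 - m**2)[:, None] * J
    return np.max(np.abs(np.linalg.eigvals(jac))) < 1.0
\end{verbatim}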
In general, $M^{MF}$ is not necessarily smooth in $\bm{h}$ since the topological structure of steady-states may change as $\bm{h}$ varies. However, in the next section we show that if there exists a stable and entry-wise non-negative steady-state, and if $J$ is strongly-connected, then $M^{MF}(\bm{b}+\bm{h})$ is both smooth and concave in $\bm{h}$, regardless of the interaction strength. \section{Sufficient conditions for when MF-IIM is concave} We consider conditions for which MF-IIM is smooth and concave, and hence exactly solvable by efficient techniques. The case under consideration is when $J$ is strongly-connected and there exists a stable non-negative steady-state. \textbf{Theorem 4.} \textit{Let $(N,J,\bm{b},\beta)$ describe a system with a strongly-connected graph for which there exists a stable non-negative steady-state $\bm{m}(\bm{b})$. Then, for any $H$, $M^{MF}(\bm{b}+\bm{h})=\sum_i m_i(\bm{b}+\bm{h})$, $M^{MF}(\bm{b}+\bm{h})$ is smooth in $\bm{h}$, and $M^{MF}(\bm{b}+\bm{h})$ is concave in $\bm{h}$ for all $\bm{h}\in\mathcal{F}_H$.} \textit{Proof sketch.} Our argument proceeds in three steps. We first show that $\bm{m}(\bm{b})$ is the unique stable non-negative steady-state and that it attains the maximum total opinion among steady-states. This guarantees that $M^{MF}(\bm{b})=\sum_i m_i(\bm{b})$. Furthermore, $\bm{m}(\bm{b})$ gives rise to a unique and smooth branch of stable non-negative steady-states for additional $\bm{h}$, and hence $M^{MF}(\bm{b}+\bm{h})=\sum_i m_i(\bm{b}+\bm{h})$ for all $\bm{h}\ge 0$. Finally, one can directly show that $M^{MF}(\bm{b}+\bm{h})$ is concave in $\bm{h}$. \textit{Remark.} By arguments similar to those in Theorem 4, it can be shown that any stable non-positive steady-state is unique, attains the minimum total opinion among steady-states, and is smooth and convex for decreasing $\bm{h}$. The above result paints a significantly simplified picture of the MF-IIM problem when $J$ is strongly-connected and there exists a stable non-negative steady-state $\bm{m}(\bm{b})$. Given a budget $H$, for any feasible marketing strategy $\bm{h}\in \mathcal{F}_H$, $\bm{m}(\bm{b}+\bm{h})$ is the unique stable non-negative steady-state, attains the maximum total opinion among steady-states, and is smooth in $\bm{h}$. Thus, the objective $M^{MF}(\bm{b}+\bm{h})=\sum_i m_i(\bm{b}+\bm{h})$ is smooth, allowing us to write down a gradient ascent algorithm that approximates a local maximum. Furthermore, since $M^{MF}(\bm{b}+\bm{h})$ is concave in $\bm{h}$, any local maximum of $M^{MF}$ on $\mathcal{F}_H$ is a global maximum, and we can apply efficient gradient ascent techniques to solve MF-IIM. Our algorithm, summarized in Algorithm 1, is initialized at a feasible external field. At each iteration, we calculate the susceptibility of the system, namely $\frac{\partial M^{MF}}{\partial h_j} = \sum_i \chi^{MF}_{ij}$, and project this gradient onto $\mathcal{F}_H$ (the projection operator $P_{\mathcal{F}_H}$ is well-defined since $\mathcal{F}_H$ is convex). Stepping along the direction of the projected gradient with step size $\alpha\in (0,\frac{1}{L})$, where $L$ is a Lipschitz constant of the gradient $\nabla_{\bm{h}} M^{MF}$, Algorithm 1 converges to an $\epsilon$-approximation to MF-IIM in $O(1/\epsilon)$ iterations \cite{Teboulle-01}.
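A minimal Python rendering of Algorithm 1 is given below, reusing the sketches above. The sorting-based simplex projection and the fixed step size (in place of the Lipschitz-based step) are our implementation choices, not prescriptions of the algorithm.
\begin{verbatim}
import numpy as np

def project_feasible(v, H):
    # Euclidean projection onto F_H = {h >= 0, sum(h) = H},
    # via the standard sorting-based simplex projection.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - H
    k = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[k] / (k + 1.0), 0.0)

def mf_iim(J, b, beta, H, alpha=0.1, n_steps=500):
    # Projected gradient ascent on the mean-field magnetization.
    n = J.shape[0]
    h = np.full(n, H / n)                 # feasible initialization
    for _ in range(n_steps):
        m = mf_steady_state(J, b + h, beta)
        D = np.diag(1.0 - m**2)
        chi = beta * np.linalg.solve(np.eye(n) - beta * D @ J, D)
        grad = chi.sum(axis=0)            # dM/dh_j = sum_i chi_ij
        h = project_feasible(h + alpha * grad, H)
    return h
\end{verbatim}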
\begin{algorithm}[t] \KwIn{System $(N,J,\bm{b},\beta)$ for which there exists a stable non-negative steady-state, budget $H$, accuracy parameter $\epsilon>0$} \KwOut{External field $\bm{h}$ that approximates an MF optimal external field $\bm{h}^*$} $t=0$; $\bm{h}(0)\in\mathcal{F}_H$; $\alpha \in (0,\frac{1}{L})$ \; \Repeat{$M^{MF}(\bm{b}+\bm{h}^*) - M^{MF}(\bm{b}+\bm{h}(t)) \le \epsilon$}{ $\frac{\partial M^{MF}(\bm{b}+\bm{h}(t))}{\partial h_j} = \sum_i \chi_{ij}^{MF}(\bm{b}+\bm{h}(t))$\; $\bm{h}(t+1) = P_{\mathcal{F}_H}\left[\bm{h}(t) + \alpha \triangledown_{\bm{h}}M^{MF}(\bm{b}+\bm{h}(t))\right]$\; $t$++\; } $\bm{h}= \bm{h}(t)$\; \caption{An $\epsilon$-approximation to MF-IIM} \label{alg:one} \end{algorithm} \subsection{Sufficient conditions for the existence of a stable non-negative steady-state} In the previous section we found that MF-IIM is efficiently solvable if there exists a stable non-negative steady-state. While this assumption may seem restrictive, we show, to the contrary, that the appearance of a stable non-negative steady-state is a fairly general phenomenon. We first show, for $J$ strongly-connected, that the existence of a stable non-negative steady-state is robust to increases in $\bm{h}$ and that the existence of a stable positive steady-state is robust to increases in $\beta$. \textbf{Theorem 5.} \textit{Let $(N,J,\bm{h},\beta)$ describe a system with a strongly-connected graph for which there exists a stable non-negative steady-state $\bm{m}$. If $\bm{m}\ge 0$, then as $\bm{h}$ increases, $\bm{m}$ gives rise to a unique and smooth branch of stable non-negative steady-states. If $\bm{m}> 0$, then as $\beta$ increases, $\bm{m}$ gives rise to a unique and smooth branch of stable positive steady-states.} \textit{Proof sketch.} By the implicit function theorem, any stable steady-state can be locally defined as a function of both $\bm{h}$ and $\beta$. Using the susceptibility, one can directly show that any stable non-negative steady-state remains stable and non-negative as $\bm{h}$ increases and that any stable positive steady-state remains stable and positive as $\beta$ increases. The intuition behind Theorem 5 is that increasing the external field will never destroy a steady-state in which all of the opinions are already non-negative. Furthermore, as the interaction strength increases, each individual reacts more strongly to the positive influence of her neighbors, creating a positive feedback loop that results in an even more positive magnetization. We conclude by showing for $J$ strongly-connected that if $\bm{h}\ge 0$, then there exists a stable non-negative steady-state. \textbf{Theorem 6.} \textit{Let $(N,J,\bm{h},\beta)$ describe any system with a strongly-connected network. If $\bm{h}\ge 0$, then there exists a stable non-negative steady-state.} \textit{Proof sketch.} For $\bm{h}>0$ and $\beta<\beta_c$, it can be shown that the unique steady-state is positive, and hence Theorem 5 guarantees the result for all $\beta'>\beta$. For $\bm{h}=0$, Theorem 3 provides the result. Altogether, the results of this section provide a number of sufficient conditions under which MF-IIM is exactly and efficiently solvable by Algorithm 1. \section{A shift in the structure of solutions to MF-IIM} The structure of solutions to MF-IIM is of fundamental theoretical and practical interest. We demonstrate, remarkably, that solutions to MF-IIM shift from focusing on nodes of high degree at low interaction strengths to focusing on nodes of low degree at high interaction strengths.
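Before deriving the two limits analytically, we note that the shift is easy to reproduce numerically. The toy script below, which reuses the sketches above on an arbitrary sparse random graph of our choosing (illustrative only; the graph is assumed strongly-connected), prints the budget-weighted mean degree of the MF optimal field at low and high interaction strengths, and the reported value drops as $\beta$ increases.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 50
upper = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
J = upper + upper.T                  # undirected random graph
b, H = np.zeros(n), 5.0
bc = critical_beta(J)

for beta in (0.2 * bc, 5.0 * bc):
    h = mf_iim(J, b, beta, H)
    deg = J.sum(axis=0)
    print(beta, deg @ h / H)         # budget-weighted mean degree
\end{verbatim}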
Consider an Ising system described by $(N,J,\bm{h},\beta)$ in the limit $\beta\ll \beta_c$. To first order in $\beta$, the self-consistency equations (\ref{MF}) take the form: \begin{equation} \bm{m} = \beta\left(J\bm{m}+\bm{h}\right) \quad \Rightarrow\quad \bm{m} = \beta(I-\beta J)^{-1}\bm{h}. \end{equation} Since $\beta<\beta_c$, we have $\rho(\beta J) < 1$, allowing us to expand $(I-\beta J)^{-1}$ in a geometric series: \begin{equation} \bm{m} = \beta\bm{h} + \beta^2J\bm{h} + O(\beta^3)\quad \Rightarrow \quad M^{MF}(\bm{h}) = \beta\sum_i h_i + \beta^2\sum_i d^{out}_i h_i + O(\beta^3), \end{equation} where $d^{out}_i = \sum_j J_{ji}$ is the out-degree of node $i$. Thus, for low interaction strengths, the MF magnetization is maximized by focusing the external field on the nodes of highest out-degree in the network, independent of $\bm{b}$ and $H$. To study the structure of solutions to MF-IIM at high interaction strengths, we make the simplifying assumptions that $J$ is strongly-connected and $\bm{b}\ge 0$ so that Theorem 6 guarantees a stable non-negative steady-state $\bm{m}$. For large $\beta$ and an additional external field $\bm{h}\in\mathcal{F}_H$, $\bm{m}$ takes the form \begin{equation} m_i \approx \tanh\left[\beta\left(\sum_j J_{ij} + b_i + h_i\right)\right] \approx 1 - 2e^{-2\beta(d^{in}_i + b_i + h_i)}, \end{equation} where $d^{in}_i=\sum_j J_{ij}$ is the in-degree of node $i$. Thus, in the high-$\beta$ limit, we have: \begin{equation} M^{MF}(\bm{b}+\bm{h}) \approx \sum_i \left( 1 - 2e^{-2\beta(d^{in}_i + b_i + h_i)}\right) \approx n - 2e^{-2\beta(d^{in}_{i^*} + b_{i^*} + h_{i^*})}, \end{equation} where $i^*=\argmin_i(d^{in}_i + b_i + h_i)$. Thus, for high interaction strengths, the solutions to MF-IIM for an external field budget $H$ are given by: \begin{equation} \label{lowT} \bm{h}^* = \arg\max_{\bm{h}\in\mathcal{F}_H} \left(n - 2e^{-2\beta(d^{in}_{i^*} + b_{i^*} + h_{i^*})}\right) \equiv \arg\max_{\bm{h}\in\mathcal{F}_H} \min_i\left( d^{in}_i + b_i + h_i \right). \end{equation} Eq. (\ref{lowT}) reveals that the high-$\beta$ solutions to MF-IIM focus on the nodes for which $ d^{in}_i + b_i + h_i$ is smallest. Thus, if $\bm{b}$ is uniform, the MF magnetization is maximized by focusing the external field on the nodes of smallest in-degree in the network. We emphasize the strength and novelty of the above results. In the context of reverberant opinion dynamics, the optimal control strategy has a highly non-trivial dependence on the strength of interactions in the system, a feature not captured by viral models. Thus, when controlling a social system, accurately determining the strength of interactions is of critical importance. \section{Numerical simulations} We present numerical experiments to probe the structure and performance of MF optimal external fields. We verify that the solutions to MF-IIM undergo a shift from focusing on high-degree nodes at low interaction strengths to focusing on low-degree nodes at high interaction strengths. We also find that for sufficiently high and low interaction strengths, the MF optimal external field achieves the maximum exact magnetization, while admitting performance losses near $\beta_c$. However, even at $\beta_c$, we demonstrate that solutions to MF-IIM significantly outperform common node-selection heuristics based on node degree and centrality. We first consider an undirected hub-and-spoke network, shown in Figure 1, where $J_{ij}\in \{0,1\}$ and we set $\bm{b}=0$ for simplicity.
Since $\bm{b}\ge 0$, Algorithm 1 is guaranteed to achieve a globally optimal MF magnetization. Furthermore, because the network is small, we can calculate exact solutions to IIM by brute-force search. The left plot in Figure 1 compares the average degree of the MF and exact optimal external fields over a range of temperatures for an external field budget $H=1$, verifying that the solution to MF-IIM shifts from focusing on high-degree nodes at low interaction strengths to low-degree nodes at high interaction strengths. Furthermore, we find that the shift in the MF optimal external field occurs near the critical interaction strength $\beta_c = .5$. The performance of the MF optimal strategy (measured as the ratio of the magnetization achieved by the MF solution to that achieved by the exact solution) is shown in the right plot in Figure 1. For low and high interaction strengths, the MF optimal external field achieves the maximum magnetization, while near $\beta_c$, it incurs significant performance losses, a phenomenon well-studied in the literature \cite{Yedidia-01}. \begin{figure}[t] \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{HS_Combo2.png} \end{subfigure}% \begin{subfigure}{0.05\textwidth} \includegraphics[width=\linewidth]{SPACE.png} \end{subfigure}% \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{HS_plot_Ms_comp.jpg} \end{subfigure} \caption{Left: A comparison of the structure of the MF and exact optimal external fields, denoted $\bm{h}^*_{MF}$ and $\bm{h}^*$, in a hub-and-spoke network. Right: The relative performance of $\bm{h}^*_{MF}$ compared to $\bm{h}^*$; i.e., $M(\bm{h}^*_{MF})/M(\bm{h}^*)$, where $M$ denotes the exact magnetization.} \end{figure} We now consider a stochastic block network consisting of 100 nodes split into two blocks of 50 nodes each, shown in Figure 2. An undirected edge of weight 1 is placed between each pair of nodes in Block 1 with probability .2, between each pair in Block 2 with probability .05, and between nodes in different blocks with probability .05, resulting in a highly-connected community (Block 1) surrounded by a sparsely-connected community (Block 2). For $\bm{b}=0$ and $H=20$, the center plot in Figure 2 demonstrates that the solution to MF-IIM shifts from focusing on Block 1 at low $\beta$ to focusing on Block 2 at high $\beta$ and that the shift occurs near $\beta_c$. \begin{figure}[t] \centering \begin{subfigure}{0.2\textwidth} \includegraphics[width=\linewidth]{BlockNetwork.png} \end{subfigure}% \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{Block_degs.jpg} \end{subfigure}% \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{Block_plot_Tc_True.jpg} \end{subfigure} \caption{Left: A stochastic block network consisting of a highly-connected community (Block 1) and a sparsely-connected community (Block 2). Center: The solution to MF-IIM shifts from focusing on Block 1 to Block 2 as $\beta$ increases. Right: Even at $\beta_c$, the MF solution outperforms common node-selection heuristics.} \end{figure} The stochastic block network is sufficiently large that exact calculation of the optimal external fields is infeasible.
Thus, we resort to comparing the MF solutions with three node-selection heuristics: one that distributes the budget in amounts proportional to nodes' degrees, one that distributes the budget proportional to nodes' centralities (the inverse of a node's average shortest path length to all other nodes), and one that distributes the budget randomly. The magnetizations are approximated via Monte Carlo simulations of the Glauber dynamics, and we consider the system at $\beta=\beta_c$ to represent the worst-case scenario for the MF optimal external fields. The right plot in Figure 2 shows that, even at $\beta_c$, the solutions to MF-IIM outperform common node-selection heuristics. We consider a real-world collaboration network (Figure 3) composed of 904 individuals, where each edge is unweighted and represents the co-authorship of a paper on the arXiv \cite{SNAP-01}. We note that co-authorship networks are known to capture many of the key structural features of social networks \cite{Newman-01}. For $\bm{b}=0$ and $H=40$, the center plot in Figure 3 illustrates the sharp shift in the solution to MF-IIM at $\beta_c= 0.05$ from high- to low-degree nodes. Furthermore, the right plot in Figure 3 compares the performance of the MF optimal external field with the node-selection heuristics described above, where we again consider the system at $\beta_c$ as a worst-case scenario, demonstrating that Algorithm 1 is scalable and performs well on real-world networks. \begin{figure}[t] \centering \begin{subfigure}{0.2\textwidth} \includegraphics[width=\linewidth]{CollaborationNetwork.png} \end{subfigure}% \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{plot_Degs.jpg} \end{subfigure}% \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{plot_Tc_True.jpg} \end{subfigure} \caption{Left: A collaboration network of 904 physicists where each edge represents the co-authorship of a paper on the arXiv. Center: The solution to MF-IIM shifts from high- to low-degree nodes as $\beta$ increases. Right: The MF solution outperforms common node-selection heuristics, even at $\beta_c$.} \end{figure} \section{Conclusions} We study influence maximization, one of the fundamental problems in network science, in the context of the Ising model, wherein repeated interactions between individuals give rise to complex macroscopic patterns. The resulting problem, which we call Ising influence maximization, has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Under the mean-field approximation, we develop a number of sufficient conditions for when the problem is concave, and we provide a gradient ascent algorithm that uses the susceptibility to efficiently calculate locally-optimal external fields. Furthermore, we demonstrate that the MF optimal external fields shift from focusing on high-degree individuals at low interaction strengths to focusing on low-degree individuals at high interaction strengths, a phenomenon not observed in viral models. We apply our algorithm on random and real-world networks, numerically demonstrating shifts in the solution structure and showing that our algorithm outperforms common node-selection heuristics. It would be interesting to study the exact Ising model on an undirected network, in which case the spin statistics are governed by the Boltzmann distribution. Using this elegant steady-state description, one might be able to derive analytic results for the exact IIM problem.
Our work establishes a fruitful connection between influence maximization and statistical physics, paving the way for exciting cross-disciplinary research. For example, one could apply advanced mean-field techniques, such as those in \cite{Yedidia-01}, to generate efficient algorithms of increasing accuracy. Furthermore, because our model is equivalent to a Boltzmann machine, one could propose a framework for data-based influence maximization based on well-known Boltzmann machine learning techniques. \textbf{Acknowledgements.} We thank Michael Kearns and Eric Horsley for enlightening discussions, and we acknowledge support from the U.S. National Science Foundation, the Air Force Office of Scientific Research, and the Department of Transportation. \bibliographystyle{unsrt} \section{Appendix A: Preliminaries} We establish a number of preliminary results that aid the proofs of the theorems in the main text. \subsection{Perron-Frobenius} Many of the results in the paper rely on the Perron-Frobenius theorem, which we state here. \textbf{Theorem 7. (Perron-Frobenius)} \textit{Let $J$ be an irreducible non-negative matrix with spectral radius $\rho(J)=r$. Then the following statements hold: \begin{enumerate} \item $J$ has a real, positive, and simple eigenvalue equal to $r$. \item The corresponding eigenvector of $J$ has all-positive components. \item If $0\le J \le A$ for some matrix $A$, then $\rho(J)\le \rho(A)$. \end{enumerate} } It is known that the adjacency matrix of a strongly-connected graph is irreducible, and hence, all of the results of the Perron-Frobenius theorem carry over. \subsection{The existence of a unique and stable steady-state for $\beta<\beta_c$} We first show that if our network is strongly-connected, then for $\beta < 1/\rho(J)$, the system exhibits a unique steady-state that is stable under $\bm{f}$. This result will aid in the proof of Theorem 3, and similar arguments are used in the proof of Theorem 4. The proof in this section is based on Banach's fixed point theorem, but instead of directly showing that $\bm{f}$ is a contraction mapping on $X=[-1,1]^n$, we use the spectral properties of $\bm{f}'$. The following two lemmas relate the contraction mapping property to the spectral radius of $\bm{f}'$. We note that throughout the proofs, we use the variable $\bm{x}$ instead of $\bm{m}$ to indicate a point in $X$ that is not necessarily a steady-state of $\bm{f}$. \textbf{Lemma 8.} \textit{Let $X$ be a convex subset of Euclidean space and let the function $\bm{f}:X\rightarrow X$ have continuous partial derivatives on $X$. If the Jacobian satisfies \begin{equation} |\bm{f}'(\bm{x})|<1, \end{equation} for all $\bm{x}\in X$ and some matrix norm $|\cdot|$, then $\bm{f}$ satisfies the contraction mapping property on $X$.} \textbf{Lemma 9.} \textit{Given a square matrix $A$ and $\epsilon > 0$, there exists a matrix norm $|\cdot|$ such that \begin{equation} |A|\le \rho(A) + \epsilon. \end{equation}} We are now prepared to show that if $J$ is strongly-connected, then for $\beta<1/\rho(J)$, $\bm{f}$ is a contraction mapping on $X$, and hence $\bm{f}$ admits a unique and stable fixed point on $X$. \textbf{Lemma 10.} \textit{Let $(N,J,\bm{h},\beta)$ describe a system with a strongly-connected graph.
For $\beta<\beta_c$, there exists a unique and stable steady-state that can be found by iteratively applying $\bm{f}$ to any point $\bm{x}\in X$.} \textit{Proof.} Consider the Jacobian, \begin{equation} \bm{f}'(\bm{m})_{ij} = \frac{\partial}{\partial m_j} \tanh\left[\beta(J\bm{m}+\bm{h})\right]_i = \beta \text{sech}^2 \left[\beta(J\bm{m}+\bm{h})\right]_i J_{ij}. \end{equation} Since $|\text{sech}(\cdot)|\le 1$, $|\bm{f}'(\bm{m})_{ij}|\le \beta J_{ij}$ for all $i,j\in N$. For $\beta<1/\rho(J)$ and for all $\bm{m}\in X$, we have \begin{equation} \rho(\bm{f}'(\bm{m})) \le \rho(\beta J) = \beta \rho(J) < 1, \end{equation} where the first inequality follows from the Perron-Frobenius theorem and the equality follows from the positive homogeneity of $\rho(\cdot)$. Because the above inequality is strict, there exists an $\epsilon>0$ such that $\rho(\bm{f}'(\bm{x}))+\epsilon < 1$ for all $\bm{x}\in X$. By Lemma 9 we can choose a matrix norm $|\cdot|$ such that \begin{equation} |\bm{f}'(\bm{m})|\le \rho(\bm{f}'(\bm{m})) + \epsilon < 1. \end{equation} Since $X$ is a convex subset of Euclidean space, Lemma 8 implies that $\bm{f}$ satisfies the contraction mapping property on $X$. Since $X$ is closed and bounded, it is also a compact metric space, and we can apply Banach's theorem on compact metric spaces to attain the desired result. \hfill $\square$ \subsection{The smoothness of stable non-negative steady-states for increasing $h$} We show that any stable non-negative steady-state $\bm{m}$ gives rise to a unique and stable branch of steady-states that is smooth and non-decreasing as $\bm{h}$ increases. We note that this result represents half of the progress toward proving Theorem 5. \textbf{Lemma 11.} \textit{Let $(N,J,\bm{h},\beta)$ describe a system with a strongly-connected graph for which there exists a stable steady-state $\bm{m}$. If $\bm{m}\ge 0$, then as $\bm{h}$ increases, $\bm{m}$ gives rise to a unique and smooth branch of stable non-negative steady-states.} \textit{Proof.} We first show that any stable steady-state is locally non-decreasing in $\bm{h}$. Consider the susceptibility from Sec. 2: \begin{equation} \chi^{MF} = \beta(I-\beta D(\bm{m})J)^{-1}D(\bm{m}) = \beta(I-\bm{f}'(\bm{m}))^{-1}D(\bm{m}). \end{equation} Since $\rho\left(\bm{f}'(\bm{m})\right) < 1$, Theorem 4.3 of \cite{Fielder-01} guarantees that the matrix $(I-\bm{f}'(\bm{m}))^{-1}$ is non-negative. Furthermore, since $D(\bm{m})$ is non-negative, we find that $\chi^{MF}$ is non-negative, and hence $\frac{\partial m_i}{\partial h_j}\ge 0$ for all $i,j\in N$. We now argue that $\bm{m}$ remains stable as $\bm{h}$ increases, which, by the implicit function theorem, guarantees that $\bm{m}$ branches uniquely and smoothly. It is sufficient to show that $\rho(\bm{f}'(\bm{m})) = \rho(\beta D(\bm{m})J)$ is non-increasing in $\bm{h}$. Since $\bm{m}$ is non-negative and non-decreasing in $\bm{h}$, we find that $\bm{f}'(\bm{m})$ is entry-wise non-increasing in $\bm{h}$, and hence Perron-Frobenius guarantees that $\rho(\bm{f}'(\bm{m}))$ is non-increasing. \hfill $\square$ \section{Appendix B: Proofs} We now present proofs of the theorems in the main text, noting that we do not present the results in the order that they appear in the text since some earlier results depend on later ones. \subsection{Theorem 4} We split Theorem 4 into three separate results. We first show that any stable non-negative steady-state is the unique stable non-negative steady-state and that it attains the maximum total opinion among all steady-states.
Secondly, we note that Lemma 11 guarantees that any stable non-negative steady-state is smooth and remains stable and non-negative for increasing $\bm{h}$. Finally, we show that any stable non-negative steady-state is concave in $\bm{h}$. Together, these results prove Theorem 4. \subsubsection{Uniqueness of stable non-negative steady-states} We show that any stable non-negative steady-state is the unique stable non-negative steady-state and achieves the largest total opinion among steady-states. First consider the following lemma. \textbf{Lemma 12.} \textit{Let $(N,J,\bm{h},\beta)$ describe an arbitrary system and consider any point $\bm{x}\in X$. Then \begin{equation} \bm{f}^{\ell}(\bm{1}) \ge \bm{f}^{\ell}(\bm{x}), \end{equation} for any positive integer $\ell$, where $\bm{f}^{\ell}(\cdot)$ denotes the $\ell^{th}$ iterative application of $\bm{f}$ and $\bm{1}$ is the vector of ones of length $n$.} \textit{Proof.} We proceed by induction. The base case is trivially satisfied. For the inductive step, assume $f^{\ell}(\bm{1})_i\ge f^{\ell}(\bm{x})_i$ for some $\ell$ and all $i\in N$. Since $J\ge 0$, $\beta\ge 0$ and $\tanh(\cdot)$ is increasing, we have \begin{equation} f^{\ell+1}(\bm{1})_i = \tanh\left[\beta\left(Jf^{\ell}(\bm{1})+\bm{h}\right)\right]_i \ge \tanh\left[\beta\left(Jf^{\ell}(\bm{x})+\bm{h}\right)\right]_i = f^{\ell+1}(\bm{x})_i, \end{equation} for all $\bm{x}\in X$ and all $i\in N$. \hfill $\square$ We now establish the uniqueness of stable non-negative steady-states. \textbf{Lemma 13.} \textit{Let $(N,J,\bm{h},\beta)$ describe a system with a strongly-connected network for which there exists a stable non-negative steady-state $\bm{m}$. Then $\bm{m}$ is the unique stable non-negative steady-state and can be found by iteratively applying $\bm{f}$ to $\bm{1}$.} \textit{Proof.} Assume there exists a stable non-negative steady-state $\bm{m}$. By Lemma 12, \begin{equation} \bm{f}^{\ell}(\bm{1}) \ge \bm{f}^{\ell}(\bm{m}) = \bm{m}, \end{equation} for any $\ell$. This indicates that the sequence $\{ \bm{f}^{\ell}(\bm{1})\}$ is contained in the closed region $X_{\bm{m}}=\{\bm{x}\in X : \bm{x}\ge \bm{m}\}$. By an argument similar to that in the proof of Lemma 11, we have $\rho(\bm{f}'(\bm{x}))\le \rho(\bm{f}'(\bm{m})) < 1$ for all $\bm{x}\in X_{\bm{m}}$. By an argument similar to that in the proof of Lemma 10, this indicates that $\bm{f}$ is a contraction mapping on $X_{\bm{m}}$. Following the proof in \cite{Palais-01}, we show that the sequence $\{ \bm{f}^{\ell}(\bm{1})\}$ is Cauchy. Since $|\bm{f}(\bm{x}')-\bm{f}(\bm{x})|< |\bm{x}' - \bm{x}|$ for all $\bm{x},\bm{x}'\in X_{\bm{m}}$, there exists a number $q\in(0,1)$ such that $|\bm{f}(\bm{x}') - \bm{f}(\bm{x})|\le q |\bm{x}' -\bm{x}|$. By the triangle inequality, \begin{equation} |\bm{x}' - \bm{x}| \le |\bm{x}' - \bm{f}(\bm{x}')| + q|\bm{x}' - \bm{x}| + |\bm{f}(\bm{x}) - \bm{x}|, \end{equation} which yields \begin{equation} |\bm{x}' - \bm{x}|\le \frac{|\bm{f}(\bm{x}') - \bm{x}'| + |\bm{f}(\bm{x}) - \bm{x}|}{1-q}. \end{equation} Replacing $\bm{x}$ and $\bm{x}'$ with $\bm{f}^{\ell}(\bm{1})$ and $\bm{f}^k(\bm{1})$, respectively, we find \begin{align} |\bm{f}^k(\bm{1}) - \bm{f}^{\ell}(\bm{1})| &\le \frac{|\bm{f}^{k+1}(\bm{1}) - \bm{f}^k(\bm{1})| + |\bm{f}^{\ell+1}(\bm{1}) - \bm{f}^{\ell}(\bm{1})|}{1-q} \\ &\le \frac{q^k + q^{\ell}}{1-q}|\bm{f}(\bm{1}) - \bm{1}|.
\nonumber \end{align} Since $q<1$, the last expression goes to zero as $\ell,k\rightarrow\infty$, proving that $\{ \bm{f}^{\ell}(\bm{1})\}$ is Cauchy and hence converges to a limit $\bm{m}^*\in X_{\bm{m}}$. Furthermore, the limit $\bm{m}^*$ is a fixed point of $\bm{f}$, and hence a steady-state of the system, since \begin{equation} \bm{m}^* = \lim_{\ell\rightarrow\infty} \bm{f}^{\ell}(\bm{1}) = \lim_{\ell\rightarrow\infty} \bm{f}(\bm{f}^{\ell-1}(\bm{1})) = \bm{f}\left(\lim_{\ell\rightarrow\infty} \bm{f}^{\ell-1}(\bm{1})\right) = \bm{f}(\bm{m}^*). \end{equation} Suppose for contradiction that $\bm{m}^*\neq \bm{m}$, and consider the line $(1-t)\bm{m} + t\bm{m}^*$ between $\bm{m}$ and $\bm{m}^*$ for $t\in[0,1]$. All points along this line lie in $X_{\bm{m}}$ and hence $\bm{f}$ is contractive along the line. We have, \begin{align} \left| \bm{f}(\bm{m}^*)-\bm{f}(\bm{m})\right| &= \left| \int_{\bm{m}}^{\bm{m}^*} \bm{f}'(\bm{x})\cdot d\bm{x}\right| \\ &\le \int_0^1\left| \bm{f}'\left((1-t)\bm{m} + t\bm{m}^*\right)\right| \left| \bm{m}^*-\bm{m} \right| dt, \nonumber \end{align} where $|\bm{f}'(\cdot)|$ represents any matrix norm. Because $\bm{f}$ is contractive along the line, we can choose a matrix norm for which $\left|\bm{f}'\left((1-t)\bm{m} + t\bm{m}^*\right)\right|$ is strictly less than 1 for all $t\in[0,1]$. Thus, \begin{equation} \left| \bm{f}(\bm{m}^*)-\bm{f}(\bm{m})\right|< \int_0^1 \left| \bm{m}^*-\bm{m} \right| dt = \left| \bm{m}^*-\bm{m}\right|, \end{equation} which is a contradiction. Thus $\bm{m}^*=\bm{m}$ and the stable non-negative steady-state is unique. Furthermore, this shows that $\{ \bm{f}^{\ell}(\bm{1})\}$ converges to $\bm{m}$. \hfill $\square$ As a corollary, we find that any stable non-negative steady-state attains the maximum total opinion among all steady-states. \textbf{Corollary 14.} \textit{Let $(N,J,\bm{h},\beta)$ describe a system for which there exists a stable non-negative steady-state $\bm{m}$, and let $\bm{m}'$ be another steady-state. Then $\bm{m}\ge \bm{m}'$.} \textit{Proof.} By Lemmas 12 and 13 we have \begin{equation} \bm{m} = \lim_{\ell\rightarrow\infty} \bm{f}^{\ell}(\bm{1}) \ge \lim_{\ell\rightarrow\infty} \bm{f}^{\ell}(\bm{m}')=\bm{m}', \end{equation} for any steady-state $\bm{m}'$. \hfill $\square$ \subsubsection{The concavity of stable non-negative steady-states in $h$} We show for $J$ strongly-connected that any stable non-negative steady-state is concave in $\bm{h}$. \textbf{Lemma 15.} \textit{Let $(N,J,\bm{h},\beta)$ describe a system with a strongly-connected graph for which there exists a stable non-negative steady-state $\bm{m}$. Then $\bm{m}$ is concave in $\bm{h}$.} \textit{Proof.} We want to show that the Hessian of $m_i$ with respect to $\bm{h}$ is negative semidefinite for all $i\in N$. The Hessian of $m_i$ with respect to $\bm{h}$ is given by \begin{equation} C_{jk}^{(i)}\equiv \frac{\partial^2 m_i}{\partial h_j \partial h_k} = \frac{\partial}{\partial h_k}\chi_{ij}^{MF}. \end{equation} Taking partial derivatives and rearranging, we are left with \begin{equation} C_{jk}^{(i)} = -2\sum_{\ell\in N} {\chi_{j\ell}^{MF}}^T\left(\frac{m_{\ell}}{(1-m_{\ell}^2)^2}\chi_{i\ell}^{MF}\right)\chi_{\ell k}^{MF} = -\sum_{\ell\in N} Z_{j\ell}^{(i)T} Z_{\ell k}^{(i)}, \end{equation} where $Z_{jk}^{(i)}= \chi_{jk}^{MF}\sqrt{\frac{2m_j}{(1-m_j^2)^2}\chi_{ij}^{MF}}$. Since $m_j,\chi_{ij}^{MF}\ge 0$ for all $i,j\in N$, $Z^{(i)}$ is a real matrix. Thus $C^{(i)}$ is negative semidefinite for all $i\in N$.
\hfill $\square$ \subsection{Theorem 5} We show that any stable non-negative steady-state $\bm{m}$ gives rise to a unique and stable branch of steady-states that is smooth and non-decreasing as $\bm{h}$ increases. We also show that if $\bm{m}>0$, then $\bm{m}$ gives rise to a unique and stable branch of steady-states that is smooth and non-decreasing as $\beta$ increases. We note that the first result is given by Lemma 11. To prove the second result, we first show that any stable positive steady-state is locally non-decreasing in $\beta$. \textbf{Lemma 16.} \textit{Let $(N,J,\bm{h},\beta)$ describe any system for which there exists a stable positive steady-state $\bm{m}$. Then $\bm{m}$ is locally non-decreasing in $\beta$.} \textit{Proof.} We want to show $\frac{d m_i}{d \beta}$ is non-negative for all $i\in N$. By assumption, $\rho\left(\bm{f}'(\bm{m})\right) < 1$, allowing us to apply the implicit function theorem, giving \begin{align} \frac{d m_i}{d \beta} &= \sum_{j\in N} \left[(I - \bm{f}'(\bm{m}))^{-1}\right]_{ij} \frac{\partial f_j}{\partial \beta} \\ &= \sum_{j\in N} \left[(I - \bm{f}'(\bm{m}))^{-1}\right]_{ij} \text{sech}^2\left[\beta(J\bm{m}+\bm{h})\right]_j \left(J\bm{m}+\bm{h}\right)_j. \nonumber \end{align} In vector form, \begin{equation} \label{DmDb} \frac{d \bm{m}}{d\beta} = (I - \bm{f}'(\bm{m}))^{-1}D(\bm{m})(J\bm{m}+\bm{h}). \end{equation} Theorem 4.3 of \cite{Fielder-01} guarantees that the matrix $(I-\bm{f}'(\bm{m}))^{-1}$ is non-negative, and $D(\bm{m})$ is also non-negative. Because $\bm{m}=\tanh\left[\beta(J\bm{m}+\bm{h})\right]> 0$, we have $J\bm{m}+\bm{h}> 0$, and hence Eq. (\ref{DmDb}) is non-negative. \hfill $\square$ We now complete the proof of Theorem 5, showing that any stable positive steady-state gives rise to a unique and stable branch of steady-states as $\beta$ increases. \textit{Proof (Theorem 5).} For contradiction, assume that increasing $\beta$ causes $\bm{m}$ to lose stability. Because the network is strongly-connected, $\bm{f}'(\bm{m}) = \beta D(\bm{m})J$ is also strongly-connected. Thus, Perron-Frobenius guarantees that $\bm{f}'(\bm{m})$ has a simple largest eigenvalue equal to its spectral radius. When $\bm{m}$ loses stability, this simple eigenvalue crosses one from below. By the Crandall-Rabinowitz theorem \cite{Crandall-01} and the principle of exchange of stability, the crossing of the simple eigenvalue gives rise to two new stable steady-states. However, Lemma 16 guarantees that $\bm{m}$ remains positive as we increase $\beta$, which necessitates that both of the new stable steady-states are also initially positive, contradicting Theorem 4. Thus, $\bm{m}$ cannot lose stability as $\beta$ increases, and hence $\bm{m}$ gives rise to a unique and smooth branch of stable and positive steady-states. \hfill $\square$ \subsection{Theorem 3} We show that every system exhibits a steady-state and that the well-known pitchfork bifurcation structure for steady-states of the ferromagnetic MF Ising model on a lattice extends exactly to general (weighted, directed) strongly-connected graphs. In particular, for any strongly-connected graph $J$, there is a critical interaction strength $\beta_c = 1/\rho(J)$ below which there exists a unique and stable steady-state. For $\bm{h}=0$, as $\beta$ crosses $\beta_c$ from below, two new stable steady-states appear, one with all-positive components and one with all-negative components.
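Before turning to the remaining proofs, we illustrate the constructive content of Lemma 13 and of this bifurcation picture numerically. The snippet below is a minimal sketch of our own (the random network \texttt{J}, the seed and the tolerance are arbitrary illustrative choices, and the drawn network is assumed to be strongly-connected): it computes the steady-state by iterating $\bm{f}$ from $\bm{1}$ and exhibits the threshold $\beta_c=1/\rho(J)$.
\begin{verbatim}
import numpy as np

def f(x, J, h, beta):
    # One application of the mean-field map f(x) = tanh[beta (J x + h)].
    return np.tanh(beta * (J @ x + h))

def steady_state(J, h, beta, tol=1e-12, max_iter=100000):
    # Lemma 13: if a stable non-negative steady-state exists, the
    # sequence f^l(1) converges to it.
    x = np.ones(J.shape[0])
    for _ in range(max_iter):
        x_new = f(x, J, h, beta)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(0)
n = 20
J = rng.random((n, n)) * (rng.random((n, n)) < 0.3)  # non-negative weights
np.fill_diagonal(J, 0.0)
beta_c = 1.0 / np.max(np.abs(np.linalg.eigvals(J)))  # beta_c = 1 / rho(J)

h = np.zeros(n)
# Below beta_c only the trivial steady-state survives; above beta_c an
# all-positive steady-state appears (Theorem 3).  Both expected: True.
print(np.allclose(steady_state(J, h, 0.9 * beta_c), 0.0, atol=1e-8))
print(np.all(steady_state(J, h, 1.1 * beta_c) > 0.0))
\end{verbatim}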
\textit{Proof (Theorem 3).} We first note that the existence of a steady-state is guaranteed for any system by applying Brouwer's fixed point theorem to $\bm{f}$. Furthermore, Lemma 10 establishes that for $\beta<1/\rho(J)$, there is a unique and stable steady-state. In the case $\bm{h}=0$, any system has a steady-state at $\bm{m}^*=0$, which we refer to as the \textit{trivial steady-state}. Lemma 10 guarantees that $\bm{m}^*$ is stable and unique for $\beta<\beta_c$. The implicit function theorem guarantees that we can continue to write $\bm{m}^*$ uniquely as a function of $\beta$ so long as $\rho(\bm{f}'(\bm{m}^*))=\beta\rho(J)<1$. If our network is strongly-connected, then the Perron-Frobenius theorem guarantees that as we increase $\beta$, an eigenvalue of $\bm{f}'$ will first cross 1 when $\beta=1/\rho(J)$. Furthermore, the largest eigenvalue is simple, which, by the Crandall-Rabinowitz theorem \cite{Crandall-01}, guarantees the appearance of two new steady-states. Moreover, the new solutions locally lie in the subspace spanned by the eigenvector corresponding to the largest eigenvalue of $\bm{f}'$, which by the Perron-Frobenius theorem has all positive components. Thus, at $\beta=\beta_c$, a branch of steady-states appears, giving rise to an all-positive steady-state and an all-negative steady-state. By the principle of exchange of stability, the new steady-states adopt the stability of the trivial steady-state, while the trivial steady-state becomes unstable. As we continue to increase $\beta$, Theorem 5 guarantees that the positive (negative) steady-state remains positive (negative) and stable. \hfill $\square$ \subsection{Theorem 6} We conclude by showing for $J$ strongly-connected that if $\bm{h} \ge 0$, then there exists a stable non-negative steady-state. \textit{Proof (Theorem 6).} We first consider $\bm{h}>0$. Lemma 10 guarantees that for any $\beta<\beta_c$ there exists a unique and stable steady-state $\bm{m}$ and that iterative application of $\bm{f}$ to any $\bm{x}\in X$ converges to $\bm{m}$. For induction, choose $\bm{x}=\bm{1}$ and assume $\bm{f}^{\ell}(\bm{1})>0$. Then \begin{equation} \bm{f}^{\ell+1}(\bm{1})=\tanh\left[\beta(J\bm{f}^{\ell}(\bm{1})+\bm{h})\right]>0. \end{equation} Thus, $\bm{m} = \lim_{\ell\rightarrow\infty} \bm{f}^{\ell}(\bm{1})\ge 0$; since $J\ge 0$, $\bm{h}>0$ and $\bm{m}$ is a fixed point, we have $\bm{m}=\tanh\left[\beta(J\bm{m}+\bm{h})\right]\ge\tanh\left[\beta\bm{h}\right]>0$, so $\bm{m}$ is in fact positive at $\beta$. By Theorem 5, the unique branch $\bm{m}(\beta')$ remains stable and positive for all $\beta'>\beta$. To complete the proof, we note that Theorem 3 covers the case $\bm{h}=0$. \hfill $\square$ \bibliographystyle{abbrvnat}
\section{Introduction} \label{sec:intro} Thus far, end-to-end automatic speech recognition (ASR) models, which use neural networks to transduce audio into word sequences, have demonstrated state-of-the-art results compared to conventional hybrid speech recognizers. Specifically, recurrent neural network transducer (RNN-T) originally presented in \cite{graves2012sequence} has shown competitive ASR performance on various benchmarks \cite{chiu2019comparison, li2020comparison, zhang2021benchmarking}. Based on token emission latency, ASR models are typically categorized into: (i) streaming recognizers \cite{sainath2020streaming, mahadeokar2021flexi} that emit hypothesized words in real time, with low latency measured in milliseconds, and (ii) non-streaming models \cite{gulati2020conformer, zhang2020pushing} that only emit word hypotheses after processing the complete speech utterance. The latest streaming recognizers often employ a transformer/conformer encoder \cite{zhang2020transformer, li2021better}, and may use a limited future audio context (also referred to as look-ahead audio frames) \cite{shi2021emformer, shi2022streaming}. A non-streaming recognizer takes the entire speech utterance as input, and scaling up the model size can often improve model accuracy \cite{zhang2020pushing}. Recently it has been shown favorable to unify the streaming and non-streaming models, either through a single shared encoder \cite{zhang2020transformer, yu2020dual, yao2021wenet}, or through cascaded streaming and non-streaming encoders \cite{li2021better, narayanan2021cascaded}. One benefit of such unified or cascaded encoders is that the two previously separate development and deployment workflows can be simplified into a single process. Note that in the two-pass cascaded encoders, input acoustic features are typically first processed by a streaming encoder, and a non-streaming encoder processes the streaming encoder outputs and aims to recover the first-pass accuracy loss. For the unified dual-mode encoder, by contrast, the non-streaming mode directly processes the entire utterance and is immune to the accuracy degradation of the streaming encoder; additionally, the accuracy and latency of the streaming encoder can benefit from the weight sharing, or in-place knowledge distillation from the more performant non-streaming encoder \cite{yu2020dual}. This work also focuses on the one-pass dual-mode encoder. In practice, streaming ASR models typically run on devices under tight resource constraints, like disk size and memory footprint, while most non-streaming models run on servers with fewer constraints. Therefore, instead of developing equally sized encoders, it is preferable to jointly build a compact streaming model and a large non-streaming model for real-world ASR applications. We note that even though a single encoder is shared for both modes, we can substantially prune it into a featherweight streaming model, e.g., about 30M parameters, and use the original copy as a performant non-streaming encoder. Given the recent progress made in neural network pruning \cite{frankle2018lottery, wu2021dynamic, yang2022omni, ding2021audio}, we can specify a target sparsity level during model training, prune the model weights accordingly before inference, and finally obtain a model of the target model size. Meanwhile, we also aim to maintain the unpruned encoder's performance such that we can keep a copy of the original dense encoder and use it as a competitive non-streaming encoder.
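As a preview of this pruning mechanism (the iterative schedule is detailed in Section \ref{ssec:supernet}), the following is a minimal one-shot sketch of our own; the tensor shapes, the helper name and the 50/50 mode sampling shown here are illustrative assumptions rather than the actual training code. It builds a layer-wise binary mask at a target sparsity by zeroing the $8\times1$ weight blocks of smallest magnitude, and applies the mask only when the streaming mode is sampled.
\begin{verbatim}
import torch

def block_magnitude_mask(weight: torch.Tensor, sparsity: float,
                         block: int = 8) -> torch.Tensor:
    # Score each 8x1 block of a (out_dim, in_dim) weight matrix by its
    # L2 norm and zero out the `sparsity` fraction with smallest norms.
    out_dim, in_dim = weight.shape
    assert out_dim % block == 0
    blocks = weight.reshape(out_dim // block, block, in_dim)
    scores = blocks.norm(dim=1)                    # (out_dim/block, in_dim)
    k = int(sparsity * scores.numel())
    threshold = scores.flatten().kthvalue(k).values
    keep = (scores > threshold).to(weight.dtype)   # 1 = keep, 0 = prune
    return keep.unsqueeze(1).expand_as(blocks).reshape(out_dim, in_dim)

# One training step of the dual-mode supernet: the mask is applied
# only when the streaming mode is sampled on this GPU; the dense
# non-streaming mode uses the unpruned weights.
weight = torch.randn(512, 512)
mask = block_magnitude_mask(weight, sparsity=0.67)
streaming = torch.rand(()).item() < 0.5
effective_weight = weight * mask if streaming else weight
\end{verbatim}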
Prior work \cite{yang2022omni} has shown success in jointly training ASR models of varying sparsities in a single model, also known as supernet training. A supernet is a shared-weight backbone network, where a subnetwork is extracted given each target sparsity level, and all the subnetworks are jointly optimized during supernet training. While it can facilitate ASR training of various model sizes, each sub-model in \cite{yang2022omni} operates with the same inference latency. Instead, this work focuses on two sparsity levels and two latency conditions: a high sparsity and low latency for the streaming model, and a zero sparsity (i.e., dense or unpruned) and full-utterance latency for the non-streaming model. Thus, in this case, the dual modes refer to the pruned/sparse streaming mode and the unpruned/dense non-streaming mode. Next, it has been widely shown that the self-supervised acoustic model pre-training based on wav2vec 2.0 \cite{baevski2020wav2vec} can substantially improve large non-streaming models; given sufficient unlabeled data, the potential accuracy gain can be proportional to the growing model size \cite{zhang2020pushing}. Conversely, achieving accuracy gains from pre-training is difficult given a compact model size. Also, very few works \cite{sainath2022improving} have shown the pre-training efficacy in streaming models. In this paper, we show that with the dual-mode supernet training, pre-training is able not only to substantially improve the large non-streaming model, but also to improve the compact sparse streaming model. \section{Supernet training of a dual-mode ASR Model} \label{sec:supernet_asr} \subsection{RNN-T with Emformer encoder} In this work we focus on the RNN-T based ASR models with the efficient memory transformer (Emformer) encoder \cite{shi2021emformer}. \subsubsection{RNN-T} Each speech utterance is parameterized as an input acoustic feature vector sequence $\textbf{x} = \{\textbf{x}_1 \ldots \textbf{x}_T\} = \textbf{x}_{1:T} $, where $\textbf{x}_t \in \mathbb{R}^{d}$ and $T$ is the number of frames. Denote a grapheme set or a wordpiece inventory as $\mathcal{Y}$, and the corresponding output sequence of length $U$ as $\textbf{y} = \{y_1 \ldots y_U\} = \textbf{y}_{1:U} $, where $y_u \in \mathcal{Y}$. We define $\bar{\mathcal{Y}}$ as $ \mathcal{Y} \cup \{ \emptyset \}$, where $\emptyset$ is the blank label. Denote $\bar{\mathcal{Y}}^{*}$ as the set of all sequences over output space $\bar{\mathcal{Y}}$, and the element $\textbf{a} \in \bar{\mathcal{Y}}^*$ as an alignment sequence. Then we have the posterior probability: \begin{equation} P( \textbf{y} | \textbf{x}) = \sum\limits_{ \textbf{a} \in \mathcal{B}^{-1}(\textbf{y} ) } P( \textbf{a} | \textbf{x}) \label{eq:posterior} \end{equation} \noindent where $\mathcal{B}: \bar{\mathcal{Y}}^* \rightarrow \mathcal{Y}^{*} $ is a function that removes blank symbols from an alignment $\textbf{a}$. An RNN-T model, $f(\textbf{x}; \theta)$, parameterizes the alignment probability $P(\textbf{a} | \textbf{x})$ with an encoder, a prediction network (predictor) and a joint network.
The encoder $f^{\text{enc}}$ performs a mapping operation that converts $\textbf{x}$ into another sequence of representations $\textbf{h}^{\text{enc}}_{1:T} = \{\textbf{h}_1^{\text{enc}} \ldots \textbf{h}^{\text{enc}}_{T}\}$: \begin{equation} \textbf{h}^{\text{enc}}_{1:T} = f^{\text{enc}}(\textbf{x}; \theta^{\text{enc}}) \label{eq:encoder} \end{equation} \noindent A prediction network $f^{\text{pred}}$ produces the representation $\textbf{h}^{\text{pred}}_u$: \begin{equation} \textbf{h}^{\text{pred}}_{1:u} = f^{\text{pred}}(y_{0:(u-1)}; \theta^{\text{pred}}) \end{equation} \noindent where $u$ is the output label index and $y_0 = \emptyset$. The joint network $f^{\text{join}}$ combines encoder output $\textbf{h}^{\text{enc}}_t$ and prediction network output $\textbf{h}^{\text{pred}}_u$ to compute logits $\textbf{z}_{t,u}$: \begin{equation} \textbf{z}_{t,u} = f^{\text{join}}(\textbf{h}^{\text{enc}}_t, \textbf{h}^{\text{pred}}_u; \theta^{\text{join}}) \end{equation} \begin{equation} \begin{split} P(y_u| \textbf{x}_{1:t}, y_{1:(u-1)}) = \text{Softmax}(\textbf{z}_{t,u}) \end{split} \label{eq:posterior_1} \end{equation} \noindent such that the logits go through a softmax function and produce a posterior distribution of the next output label $y_u$ over $\bar{\mathcal{Y}}$. Note that the posterior distribution in Eq. \ref{eq:posterior_1} is written as $P(y_u|\textbf{x}_{1:T}, y_{1:(u-1)})$, if it uses a non-streaming encoder and takes each full-context utterance as input. \subsubsection{Emformer encoder for streaming ASR} \label{ssec:emformer} Chunk-based methods \cite{chen2021developing, yao2021wenet} have been widely applied to streaming ASR, and in this work, we use the block processing method with transformer encoder layers \cite{shi2021emformer}. The block processing chunks each whole utterance into a sequence of non-overlapping segments, $\textbf{x} = \{\textbf{C}_1 \ldots \textbf{C}_I\}$, where $i$ is the index of a segment. To leverage the context information around each truncated segment, we concatenate a left contextual block $\textbf{L}_i$ (e.g., 20 acoustic frames or 120ms audio) and a respective right context block $\textbf{R}_i$ (look-ahead context, e.g., 1 frame or 60ms) to each center block $\textbf{C}_i$, to form a contextual segment $\hat{\textbf{C}}_i = \{ \textbf{L}_i , \textbf{C}_i , \textbf{R}_i \}$. Then during inference, a transformer encoder sequentially takes each $ \hat{\textbf{C}}_i$ as input, generates an output corresponding to each $\textbf{C}_i$, and forms a sequence of streaming outputs $\textbf{h}^{\text{enc}}_{1:t}$ (Eq. \ref{eq:encoder}). \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth,scale=0.9]{fig1.png} \caption{\it Illustration of the proposed dual-mode ASR supernet training. When the encoder operates in the streaming mode, it is pruned by the binary mask (marked in purple). The predictor is pruned during the streaming mode in a similar way, while left intact during the non-streaming mode.} \label{fig:fig1} \end{figure} \subsection{Dual-mode ASR training via dynamic Emformer segment sampling} \label{ssec:dual_mode} As in Section \ref{ssec:emformer}, we note that the ASR latency depends on the length of the center block $\textbf{C}_i$, and changing the length of $\textbf{C}_i$ can effectively achieve the target latency. For example, when demanding an ultra-low latency, we can decrease $\textbf{C}_i$ to 100-200ms and use a minimal $\textbf{R}_i$ like 60ms or 0.
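To make the role of $|\textbf{C}_i|$ concrete, the following is a minimal sketch of our own (the helper name and frame counts are illustrative assumptions; this is not the actual fairseq implementation) of chunking a feature sequence into the contextual segments $\hat{\textbf{C}}_i=\{\textbf{L}_i,\textbf{C}_i,\textbf{R}_i\}$ of Section \ref{ssec:emformer}; passing a center length at least as long as the utterance recovers the non-streaming mode discussed next.
\begin{verbatim}
import torch

def contextual_segments(x, center, left=20, right=1):
    # x: (T, d) encoder-frame sequence.  Split it into non-overlapping
    # center blocks C_i of `center` frames, each concatenated with up
    # to `left` frames of left context L_i and `right` look-ahead
    # frames R_i.
    T = x.size(0)
    segments = []
    for start in range(0, T, center):
        end = min(start + center, T)
        l = x[max(0, start - left):start]      # L_i (shorter at t = 0)
        c = x[start:end]                       # C_i
        r = x[end:min(T, end + right)]         # R_i (empty at the end)
        segments.append(torch.cat([l, c, r], dim=0))
    return segments

x = torch.randn(600, 512)                     # 36 s utterance, 60 ms/frame
streaming = contextual_segments(x, center=3)  # 180 ms center blocks
non_streaming = contextual_segments(x, center=600)  # one full-utterance block
\end{verbatim}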
To implement non-streaming ASR, by contrast, we increase $\textbf{C}_i$ to a size as long as the full speech utterance and pad $\textbf{R}_i$ with zeros. Thus, to learn a dual-mode ASR model with both competitive streaming and non-streaming modes, at each training step we randomly sample an Emformer segment length $|\textbf{C}_i|$, with equal probability, between a length $\tau_0$ matching the target latency and a length $\tau_1$ equal to the longest utterance. Then the input utterances will be chunked differently based on the varying $|\textbf{C}_i|$. Both modes still use the same shared encoder, and only the query/key/value lengths vary according to $|\textbf{C}_i|$ in multi-head self-attention computations. The RNN-T decoder is also fully shared. This is similar to the domain-specific Emformer segment used in \cite{mahadeokar2021flexi}, where a different segment length is applied to the data of each domain, though the models of different domains in \cite{mahadeokar2021flexi} are all low-latency streaming. We implement it with the distributed data-parallel training across multiple GPUs \cite{ott2019fairseq}. Thus each GPU has a copy of the model, samples a $|\textbf{C}_i|$ between $\tau_0$ and $\tau_1$, and processes a sub-batch of data, after which gradients are synchronized between GPUs for each model update, and the model learns both modes simultaneously. \subsection{Dual-mode ASR supernet training} \label{ssec:supernet} As above, prior work \cite{zhang2020transformer, yu2020dual} and Section \ref{ssec:dual_mode} have described the joint training of a streaming and full-context model, in which both modes fully share the same parameters. Next, we aim to jointly learn a sparse streaming encoder and a dense full-context encoder. As in Figure \ref{fig:fig1}, during training both modes still share the same parameters, except that the pruning masks are only applied to the streaming mode. In this case it is a simplified supernet compared to \cite{yang2022omni}, as it contains only one performant sub-model for the streaming encoder. We denote a dense RNN-T model as $f(\textbf{x}; \theta)$, and a sub-model can be derived as $f(\textbf{x}; m \odot \theta )$ with a binary pruning mask $m \in \{0, 1\}^{|\theta|}$, where $\odot$ is the element-wise product. We perform layer-wise pruning \cite{yang2022omni} and prune the encoder Emformer layers and the predictor LSTM layer. A sparsity level $s$ denotes the percentage of weights pruned in each layer. We use an iterative magnitude pruning approach similar to \cite{frankle2018lottery}, following the steps: \begin{itemize} \item[(i)] Training an unpruned dense model up to a certain number of training updates $t_0$ (optionally with group lasso weight decay introduced in Section \ref{ssec:block_wd} below). As in Section \ref{ssec:dual_mode}, at each training step, we dynamically sample either a streaming or a non-streaming mode, and set the Emformer segment length $|\textbf{C}_i|$ accordingly. \item[(ii)] Starting from $t_0$: \begin{itemize} \item[(a)] in each layer, for every $\Delta T$ training steps (i.e., pruning interval), prune $p$ (e.g., $p = 20\%$) of the weight parameters that have the smallest magnitudes. Pruning is done by setting the corresponding elements in the binary mask $m$ to $0$, and $m$ is updated every pruning interval $\Delta T$.
\item[(b)] at each training step, when the streaming mode is sampled, the pruning masks are applied to the model weights during the forward-backward pass: gradients of the masked weights will be zero, and those of the unmasked weights nonzero. When the non-streaming mode is sampled, pruning masks are not applied. \end{itemize} \item[(iii)] After $n$ pruning intervals, i.e., $t_0 + n\Delta T $ training updates, $(1 - p)^n$ of the weight parameters remain. Once the target sparsity level $s$ has been reached, $s = 1 - (1 - p)^n$, the mask $m$ is no longer updated as in (ii, a) but is fixed from then on. The dual-mode training proceeds as in (ii, b). \end{itemize} \noindent Note that again the mode sampling (ii, b) is done on each GPU, and the gradients of each sub-batch are aggregated from all machines for each optimization step. Also, to realize the sparsity speed-up on device hardware, all of this work uses structured pruning with block size $8\times1$, as in \cite{yang2022omni}. \section{Self-pruning via self-supervised learning} \label{ssec:self_pruning} \subsection{Pre-training for ASR supernet} \label{ssec:pretrain} Prior works on self-supervised acoustic model pre-training are mostly focused on pre-training a non-streaming dense encoder with a self-supervised criterion, and fine-tuning it with a supervised ASR criterion. In this work we examine ways in which the encoder pre-training can improve the dual-mode ASR supernet, and in which the pruning masks learned during self-supervised training can be effective for the downstream ASR task. We employ the wav2vec 2.0 pre-training criterion. During pre-training we either use a standard non-streaming encoder as in prior works \cite{baevski2020wav2vec, zhang2020pushing}, or use the dual-mode encoder as in Section \ref{ssec:dual_mode}, after which the pretrained model is fine-tuned with the RNN-T criterion; the encoder is then always dual-mode to enable dual-mode ASR. Note that the encoder pruning, $(t_0, t_0 + n\Delta T)$ in Section \ref{ssec:supernet}, can be performed either during pre-training, or during RNN-T fine-tuning. In practice, we find pruning during RNN-T fine-tuning significantly underperforms pruning during pre-training. Note that the learning rate in RNN-T fine-tuning has to be small to maintain the pre-training effect, and we conjecture it is too small to adapt the encoder to the large sparsity changes. Since the predictor is only used in RNN-T training, its LSTM layer is pruned during fine-tuning. \subsection{Pre-training with group lasso weight decay} \label{ssec:block_wd} Given sufficient unlabeled data, it can be more helpful to prune a converged model than to prune from scratch, so we consider increasing the $t_0$ of Section \ref{ssec:supernet}. However, the model weights learned during the dense model training may not follow the $8\times1$ block structure that we use for the subsequent structured pruning, which results in performance degradation. Therefore, we develop a block regularization technique below specifically to fit the structured pruning. In $8\times1$ block-wise pruning, essentially we would like the weights in each $8\times1$ block to be pruned or kept together. \emph{Group lasso}~\cite{yuan2006model} is a regularization method which selects grouped variables by penalizing the sum of the $\ell_2$-norms of the groups.
In our case, we define each $8\times1$ block as a group, and specifically add a regularization term to the loss function $\mathcal{L}$: \begin{equation} \min_{W} \mathcal{L} + \sum_{i=1}^l \lambda_i \sum_{g\in\mathcal{G}} \|W_g^{(i)}\|_2, \label{eq:block_lasso} \end{equation} where $l$ is the number of layers, $W_g^{(i)}$ is a certain $8\times1$ block in the $i$-th layer, and $\lambda_i$ is a hyper-parameter of penalty strength. The subgradient with respect to $W_g^{(i)}$ in the block lasso term of Eq.~\ref{eq:block_lasso} is \begin{equation} {\lambda_i \over \|W_g^{(i)}\|_2} W_g^{(i)} \label{eq:block_lasso_grad}, \end{equation} \noindent and the gradient descent direction pushes $W_g^{(i)}$ toward zero like a weight decay, with strength $\lambda_i/\|W_g^{(i)}\|_2$. Thus the block regularization can push some weight blocks close to zero, and keep other blocks almost unchanged. As in many other regularizations, tuning $\lambda_i$ could be nontrivial. We propose to set it dynamically by the average value of the block $\ell_2$-norms in the $i$-th layer, i.e., $ \lambda_i = \lambda \sum_{g\in\mathcal{G}} \|W_g^{(i)}\|_2 / |\mathcal{G}| $, where $\lambda$ is a global hyper-parameter shared for all layers, e.g., $\lambda=1$. In this way, we can greatly simplify the hyper-parameter tuning for such block regularization. Finally, we apply such group weight decay to the wav2vec 2.0 pre-training between $(0, t_0 + n\Delta T)$ training updates, and turn it off afterwards. \section{Experiments} \label{sec:exp} \subsection{Experimental setup} \label{ssec:setup} \subsubsection{Data} \label{sssec:data} We use the public LibriSpeech (LS) dataset \cite{panayotov2015librispeech} for all the supervised ASR experiments. We apply speed perturbation \cite{ko2015audio} to the LS training data and produce three versions of each audio with speed factors $0.9$, $1.0$ and $1.1$. We use the complete unlabeled Libri-Light dataset \cite{kahn2020libri} for self-supervised pre-training. We do not use the additional LibriSpeech language model (LM) corpus, and LM fusion is not applied in this work. \subsubsection{System implementation details} \label{sssec:system} Input acoustic features are 80-dimensional log-mel filterbank coefficients with 25 ms window size, and with mean and variance normalization. For all supervised ASR training, we use the RNN-T criterion with alignment restrictions to improve training throughput \cite{mahadeokar2021alignment}, and apply the frequency and time masking as in SpecAugment \cite{park2019specaugment}. RNN-T output labels consist of a blank label and 4096 wordpieces generated by the unigram language model algorithm from the SentencePiece toolkit \cite{kudo2018sentencepiece}, and the joint network has 1024 hidden units, and a softmax layer of 4097 units. The RNN-T predictor is a 1-layer LSTM of 512 hidden units, with dropout rate $0.3$. Six 80-dimensional log-mel features are concatenated with stride 6 to form a 480-dimensional vector, followed by a linear layer mapping it to the encoder input. For differing RNN-T model sizes, we vary the Emformer encoder parameters as in Table \ref{tab:encoder}. All encoders use relative positional embeddings with clipping distance 64 (3.84s) in self-attention \cite{shaw2018self}, dropout $0.1$, and the hybrid layer norm configurations \footnote{ Following \cite{wang2019transformer}, we find that including the additional third layer norm in each transformer layer - which prevents the features from bypassing the transformer layer entirely - noticeably improves ASR accuracies.
} \cite{wang2019transformer}. Given the input feature stride 6, in streaming mode, Emformer left/center/right context lengths are 1.2s, 180ms, 60ms, i.e., $\textbf{L}_i=20, \textbf{C}_i =3, \textbf{R}_i =1$ (Section \ref{ssec:emformer}). In non-streaming mode, we set the center segment length as 36s, longer than any training utterance, to use the full context. For all neural network implementations, we use an in-house extension of the PyTorch-based \emph{fairseq} \cite{ott2019fairseq} toolkit. All experiments use multi-GPU training, the AdamW optimizer with decoupled weight decay 0.01 \cite{loshchilov2017decoupled}, $\beta_1=0.9$, $\beta_2=0.98$, and a tri-stage \cite{park2019specaugment} learning rate schedule. The peak learning rate is 1e-3 for RNN-T training from scratch, 6e-4 for wav2vec 2.0 pre-training, and tuned over $\{$2e-5, 5e-5$\}$ for RNN-T fine-tuning. For RNN-T, all ASR training uses global batch size 2560, up to 300 epochs on LibriSpeech. For wav2vec 2.0, pre-training on Libri-Light randomly crops each utterance into a max length of 15s on-the-fly, and the 181M dense models use global batch size 3072, for 300K training updates; since for supernet training each training step samples the sparse sub-model with 50\% probability on each GPU, in which case only a subset of the parameters have nonzero gradients, we use a larger global batch size of 3840 and a longer training schedule of 400-450K updates. As in Section \ref{ssec:supernet}, we prune all the encoder Emformer and predictor LSTM layers, with the following layer-wise sparsity level $s$ and pruning interval $\Delta T$: \begin{itemize} \item $s=0.67, \Delta T=10K$ for training the 73M RNN-T model, \item $s=0.87, \Delta T=6K$ for training the 181M RNN-T model, \item $s=0.87, \Delta T=6K$ for pre-training the 181M model, \end{itemize} \noindent such that the final sparse models after pruning have about 30M parameters in all cases. In each pruning interval, we prune out $20\%$ of the remaining weights, i.e., $p = 20\%$ as in \cite{ding2021audio}. \setlength{\tabcolsep}{0.16cm} \begin{table}[H] \caption{\label{tab:encoder} {\it Emformer parameters for differing RNN-T model sizes.}} \centering \begin{tabular}{ c | c c c c } \hline \hline RNN-T & \# layers & embedding dim & FFN dim & attn heads \\ \hline \hline 35M & 18 & 384 & 1024 & 4 \\ 73M & 20 & 512 & 2048 & 8 \\ 181M & 24 & 768 & 3072 & 8 \\ \hline \hline \end{tabular} \end{table} \setlength{\tabcolsep}{0.01cm} \begin{table}[H] \caption{\label{tab:result1} {\it WER results on LibriSpeech test-other. For system \textbf{D2}, we use a non-streaming encoder for the first 50K updates, then switch to the dual-mode encoder and train the same as \textbf{D1}. } } \centerline{ \begin{tabular}{ l | c | c } \hline \hline & unpruned/pruned & unpruned \\ & streaming & non-streaming \\ \hline \textbf{B1} \ 35M, streaming dense & 11.2 & - \\ \textbf{B2} \ 35M, dual-mode dense & 10.9 & 8.7 \\ \cdashline{1-3}[1.0pt/0.5pt] \textbf{C1} \ 73M, streaming sparsity 0.67 & 10.9 & - \\ \textbf{C2} \ 73M, non-streaming dense & - & 6.4 \\ \cdashline{1-3}[1.0pt/0.5pt] \textbf{D1} \ 73M, streaming sparsity 0.67, & \multirow{2}{*}{ 10.6 } & \multirow{2}{*}{ 7.0 } \\ non-streaming dense & & \\ \textbf{D2} \ non-streaming dense, 50K + \textbf{D1} & 10.4 & 6.6 \\ \hline \hline \end{tabular}} \end{table} \setlength{\tabcolsep}{0.127cm} \begin{table*}[h] \caption{\label{tab:result3}{\it WER results of 181M dense models on LibriSpeech (LS) test sets.
Pre-training randomly crops each utterance on-the-fly into max length 15s for systems \textbf{B1} and \textbf{B2}, and 30s for \textbf{B3}. All streaming ASR uses center context 180ms, right context 60ms, and 240ms latency in total (Section \ref{sssec:system}). LM fusion is not used.}} \centerline{ \begin{tabular}{ c | l | c c | c c } \hline \hline \multirow{2}{*}{\textbf{dataset}} & \multirow{2}{*}{\textbf{system} } & test-clean & test-other & test-clean & test-other \\ \cdashline{3-6}[1.0pt/0.5pt] & & \multicolumn{2}{c|}{unpruned streaming} & \multicolumn{2}{c}{unpruned non-streaming} \\ \hline \hline LS & \textbf{B0} \ 181M, dual-mode, dense & 4.1 & 9.7 & 3.1 & 7.1 \\ \hline \multirow{3}{*}{ Libri-Light + LS } & \textbf{B1} \ dual-mode wav2vec, pretrain on 15s segment, dense + \textbf{B0} & 3.3 & 8.3 & 2.3 & 5.3 \\ & \textbf{B2} \ non-streaming wav2vec, pretrain on 15s segment, dense + \textbf{B0} & 3.1 & 7.8 & 2.1 & 4.3 \\ & \textbf{B3} \ non-streaming wav2vec, pretrain on 30s segment, dense + \textbf{B0} & 3.0 & 7.4 & 2.1 & 4.3 \\ \hline \hline \end{tabular}} \end{table*} \setlength{\tabcolsep}{0.1cm} \begin{table*}[h] \caption{\label{tab:result4}{\it WER results of 181M supernet models. Pre-training randomly crops each utterance into max length 15s in all systems below. As in Section \ref{ssec:pretrain}, supernet training refers to using sparsity 0.87 to learn a sparse sub-model, and using the unpruned model to learn a dense encoder. All streaming ASR uses center context 180ms, right context 60ms, and has about 32M parameters after pruning. LM fusion is not used.}} \centerline{ \begin{tabular}{ c | l | c c | c c } \hline \hline \multirow{2}{*}{\textbf{dataset}} & \multirow{2}{*}{\textbf{system} } & test-clean & test-other & test-clean & test-other \\ \cdashline{3-6}[1.0pt/0.5pt] & & \multicolumn{2}{c|}{pruned streaming} & \multicolumn{2}{c}{unpruned non-streaming} \\ \hline \hline LS & \textbf{C1} \quad 181M, streaming sparsity 0.87, non-streaming dense & 4.4 & 11.2 & 2.7 & 6.4 \\ \hline \multirow{5}{*}{Libri-Light + LS} & \textbf{C2} \quad non-streaming wav2vec supernet training, 400K updates + \textbf{C1} & 4.6 & 12.1 & 2.5 & 5.3 \\ \cdashline{2-6}[1.0pt/0.5pt] &\textbf{C3.1} \ non-streaming wav2vec with group lasso, 50K updates + &\multirow{2}{*}{ 3.9 } &\multirow{2}{*}{ 10.2 } & \multirow{2}{*}{ 2.3 } & \multirow{2}{*}{ 4.5 } \\ & \qquad \ \ non-streaming wav2vec supernet training, 350K updates + \textbf{C1} & & & & \\ \cdashline{2-6}[1.0pt/0.5pt] & \textbf{C3.2} \ non-streaming wav2vec with group lasso, 150K updates + & \multirow{2}{*}{ 3.9 } & \multirow{2}{*}{ 9.4 } & \multirow{2}{*}{ 2.2 } & \multirow{2}{*}{ 4.3 } \\ & \qquad \ \ non-streaming wav2vec supernet training, 300K updates + \textbf{C1} & & & & \\ \hline \hline \end{tabular}} \end{table*} \subsection{Results of the dual-mode ASR supernet} \label{ssec:results_1} We first build a pair of 35M dense model baselines: a streaming single-mode dense model B1, and a streaming and non-streaming dual-mode model B2. ASR word error rate (WER) results on LibriSpeech \emph{test-other} are shown in Table \ref{tab:result1}. We find B2 moderately improves over B1, similar to the observation in \cite{yu2020dual}.
Then we build a pair of 73M models: \begin{itemize} \item[(i)] a single-mode sparse streaming model C1 with sparsity 0.67, so after pruning it has about 29M parameters, less than B1 and B2, \item[(ii)] a single-mode dense non-streaming model C2, \end{itemize} \noindent so that the separate single-mode models C1 and C2, respectively, use the same number of parameters as the proposed dual-mode supernet model D1. We find the sparse streaming mode of D1 outperforms both dense models B1, B2 and the single-mode C1, but the D1 unpruned non-streaming mode falls behind C2. D1 uses $t_0 = \Delta T = 10K$ above (Section \ref{ssec:supernet}), and we find simply increasing $t_0$ is not helpful. Then we try a two-step approach in system D2: \begin{itemize} \item[1.] increase $t_0=50K$, and use a single-mode non-streaming encoder, i.e., always use the full context between $(0, t_0)$, \item[2.] then after $t_0$, switch it to the dual-mode encoder, and perform training the same as D1. \end{itemize} We then find D2 provides non-streaming performance on a par with C2. Overall, we demonstrate the efficacy of jointly learning a sparse streaming sub-model and a dense non-streaming model in a single supernet. \subsection{Results of the pre-training efficacy on dual-mode ASR } \label{ssec:results_2} Then we scale up the model size to 181M, as in Table \ref{tab:result3}\footnote{We note that training large transformer/Emformer models like system B0 from scratch - without additional regularization techniques - significantly underperforms. While applying auxiliary training criteria \cite{liu2021improving} would substantially improve baseline B0, such criteria could be applied to the competing systems like B1 and B2 as well, so we leave this to future work.}, and first examine the pre-training effects on dense models. As in Section \ref{ssec:pretrain}, we perform the wav2vec 2.0 pre-training on Libri-Light, and afterwards use the dual-mode encoder during RNN-T fine-tuning, to enable the dual-mode ASR. We also try using the dual-mode encoder during wav2vec pre-training, referred to as the dual-mode wav2vec in B1 (see Table \ref{tab:result3}). However, by comparing B1 and B2, we find pre-training with just the non-streaming encoder instead is much more effective for both non-streaming and streaming ASR. Note that systems B1 and B2 are pre-trained on audio segments cropped up to 15s, and we further increase the max segment length to 30s for system B3. We find B3 produces further improved streaming results compared to B2. In all cases above, we show that pre-training can not only substantially improve the non-streaming ASR results, as widely shown in prior works, but also noticeably improve streaming ASR performance, which is one of the contributions of this work. The proposed dynamic Emformer segment sampling (Section \ref{ssec:dual_mode}) allows using a non-streaming encoder to maximize the pre-training benefits, while enabling the high-performing dual-mode ASR afterwards. \subsection{Results of supernet training with both self-supervised and supervised criteria} \label{ssec:results_3} Next, as in Table \ref{tab:result4}, we first build a dual-mode supernet model C1 with labeled data only, and then start to use unlabeled data and examine the pre-training effects on both the sparse streaming mode and the dense non-streaming mode. As discussed in Section \ref{ssec:pretrain}, we find any encoder pruning during RNN-T fine-tuning results in severe streaming ASR degradation, significantly falling behind the baseline C1.
Thus instead we prune the encoder during pre-training. Note that for the ASR supernet training (Section \ref{ssec:supernet}), we will sample between streaming and non-streaming modes; however, given the result comparison between B1 and B2, we always use the non-streaming mode during pre-training - we sample between the sub-model and the whole model (i.e., apply the mask or not), and both operate in the non-streaming mode. Thus the encoder pruning mask is learned completely on the unlabeled data without supervision, and the encoder mask is fixed during RNN-T fine-tuning, so we refer to such a process as self-pruning. The predictor is also pruned for streaming ASR, and the predictor mask is learned during RNN-T fine-tuning. Additionally, after such supernet training, the identified sparse sub-model will go through different post-processing and specialized hardware for storage and run-time optimization; therefore, we can choose separate best checkpoints across epochs for the sparse streaming sub-model and the dense non-streaming model respectively, based on the ASR accuracies on the LS \emph{dev-other} subset. Following such a training algorithm, although system C2 gives higher non-streaming accuracies than the baseline C1 without pre-training, C2 still trails C1 on streaming accuracy\footnote{Although a comparison of the dense models B2 and B3 (Table \ref{tab:result3}) shows that pretraining on 30s audio segments is more effective for streaming ASR than on 15s, we find this observation does not hold for supernet training like system C3.1. We conjecture that pretraining on longer segments for a highly sparse model results in a more difficult neural network optimization problem; e.g., the training diverges using the same learning rate 6e-4, and we have to use 4e-4. Thus, systems C2, C3.1 and C3.2 (Table \ref{tab:result4}) are all pre-trained on segments of up to 15s.}. Then we note that C2 performs iterative pruning from scratch, i.e., using a small $t_0$, $t_0 = \Delta T = 6K$ updates (Section \ref{sssec:system}). Instead, we can increase $t_0$ and prune a better converged model, assuming that the weights will be better initialized for the pruning criterion (i.e., weight magnitude). However, we find simply increasing $t_0$ can only produce results similar to C2, since as discussed in Section \ref{ssec:block_wd}, weights learned during $(0, t_0)$ do not follow the $8\times1$ block structure, and the structured sparsity may prune out important weights in each block. Therefore, next, we not only increase $t_0$ but also apply the additional group lasso weight decay during $(0, t_0 + n \Delta T)$. We find the resulting system C3.1 with $t_0 = 50K$ outperforms both the baseline C1 and C2. Finally, we increase $t_0$ to $150K$ in system C3.2, and find that (i) compared to the dense model B2 without any sparsity (Table \ref{tab:result3}), C3.2 can match the topline non-streaming performance, and (ii) compared to baseline C1, C3.2 can effectively leverage self-supervised learning and provide a significantly improved sparse streaming model, with 11-16\% WER reductions. \section{Conclusions} Overall, we first present a dynamic Emformer segment sampling framework to enable a dual-mode encoder. We demonstrate that jointly learning a featherweight sparse streaming ASR model and a large dense non-streaming model - in a single supernet - can provide competitive accuracies compared to learning each individually.
Second, the proposed dual-mode encoder can dynamically use the non-streaming mode during the wav2vec 2.0 pre-training and perform dual-mode ASR thereafter, which allows self-supervised learning to be fully effective for the non-streaming mode and also to substantially improve the streaming ASR. Next, we show that the proposed group lasso weight decay can effectively enforce the block patterns required in structured pruning, such that the self-supervised pre-training is able to identify a performant and robust sub-model for the downstream task. Finally, we conclude that for both self-supervised and supervised learning, the proposed supernet training of a sparse sub-model and a dense model jointly can provide an equally competitive non-streaming ASR model and also a noticeably improved sparse streaming model. \bibliographystyle{IEEEbib}
\section{Introduction} In this article we consider a random walk in a balanced uniformly-elliptic time-dependent random environment on $\Z^d, d\ge 2$. For $x,y\in\Z^d$, we write $x\sim y$ if $|x-y|=1$ and $x\not\sim y$ otherwise. Denote by $\mc P$ the set of {\it nearest-neighbor transition rates on $\Z^d$} \[ \mc P:=\left\{v: \Z^d\times\Z^d\to[0,\infty)\bigg|v(x,y)=0 \text{ if }x\nsim y\right\}. \] Equip $\mc P$ with the product topology and the corresponding Borel $\sigma$-field. We denote by $\Omega\subset \mc P^{\R}$ the set of all measurable functions $\omega_\cdot: \R\to\mc P$ and call every element $\omega\in\Omega$ a time-dependent {\it environment}. Given an environment $\omega$, we define the parabolic difference operator \begin{align*} \mc L_\omega u(x,t) &=\sum_{y:y\sim x}\omega_t(x,y)(u(y,t)-u(x,t))+\partial_t u(x,t) \end{align*} for every bounded function $u:\Z^d\times\R\to\R$ which is differentiable in $t$. Let $(\hat X_t)_{t\ge 0}=(X_t,T_t)_{t\ge 0}$ denote the continuous-time Markov chain on $\Z^d\times\R$ with generator $\mc L_\omega$. Note that almost surely, $T_t=T_0+t$. We say that $(X_t)_{t\ge 0}$ is a {\it continuous-time random walk in the time-dependent environment }$\omega$ and denote by $P_\omega^{x,t}$ the law (called the {\it quenched law}) of the process $\hat X_\cdot$ with initial state $(x,t)\in\Z^d\times\R$. For any $\hat x=(x,t)$ and $\hat y=(y,s)$ in $\Z^d\times\R$ with $s>t$, we also write \begin{equation}\label{kernel} p^\omega(\hat x, \hat y):=P_\omega^{x,t}(X_{s-t}=y). \end{equation} We equip $\Omega\subset\mc P^\R$ with the induced product topology and let $\mb P$ be a probability measure on the Borel $\sigma$-field $\mc B(\Omega)$ of $\Omega$. We say that $\mb P$ is {\it uniformly elliptic} if there exists a constant $\kappa\in(0,1)$ such that for any $t\in\R$ and $x\sim y$, \[ \mb P\left( \omega_t(x,y)\in [\kappa,\tfrac{1}{\kappa}] \right)=1. \] We say that $\mb P$ is {\it balanced} if $\mb P$-almost surely, \[ \sum_{y}\omega_t(x,y)(y-x)=0, \quad \forall t\in\R. \] For each $(x,t)\in\Z^d\times\R$ we define the space-time shift $\theta_{x,t}\omega:\Omega\to\Omega$ by \[ (\theta_{x,t}\omega)_s(y,z):=\omega_{s+t}(y+x,z+x). \] We assume that the law $\mb P$ of the environment is translation-invariant and {\it ergodic} under the space-time shifts $\{\theta_{x,t}:x\in\Z^d, t\ge 0\}$. I.e., $\mb P(A)\in\{0,1\}$ for any $A\in\mc B(\Omega)$ such that $\mb P(A\Delta \theta_{\hat x}^{-1}A)=0$ for all $\hat x\in\Z^d\times[0,\infty)$. Given $\omega$, the process \[ \bar\omega_t:=\theta_{\hat X_t}\omega, \qquad t\ge 0, \] with initial state $\bar\omega_0=\omega$ is a Markov process on $\Omega$, called the {\it environment viewed from the point of view of the particle}. With an abuse of notation, we use $P_\omega^{0,0}$ to denote the quenched law of $(\bar\omega_t)_{t\ge 0}$. Note that in this paper, the environment $\omega$ is allowed to depend on both space and time. In the special case when the environment is time-independent, i.e., $\omega_t=\omega_s$ for all $t,s\in\R$ and $\mb P$-almost all $\omega$, we say that the environment is {\it static}. We recall the quenched central limit theorem (QCLT) in \cite{DGR15}. \begin{theorem}\cite[Theorem 1.2]{DGR15}\label{thm:recall} Assume that $\mb P$ is balanced, uniformly elliptic and ergodic with respect to the shifts $\{\theta_{x,t}:x\in\Z^d, t>0\}$.
Then \begin{enumerate}[(a)\,] \item there exists a unique invariant measure $\mb Q$ for the process $(\bar\omega_t)_{t\ge 0}$ such that $\mb Q$ is equivalent to $\mb P$ and $(\bar\omega_t)_{t\ge 0}$ is an ergodic flow under $\mb Q\times P_\omega^{0,0}$. Moreover, letting $\rho(\omega):=\mathrm{d}\mb Q/\mathrm{d}\mb P$, we have $\rho>0$, $\mb P$-almost surely, and \begin{equation}\label{rho-moment} E_{\mb P}[\rho^{\tfrac{d+1}{d}}]<\infty;\footnote{At the end of the proof of \cite[Theorem 1.2]{DGR15}, it is shown that $E_\mb Q[g]\le C\norm{g}_{L^{d+1}(\mb P)}$ for any bounded continuous function $g$, which immediately implies \eqref{rho-moment}.} \end{equation} \item\label{item:qclt}(QCLT) For $\mb P$-almost all $\omega$, $P_\omega^{0,0}$-almost surely, $(X_{n^2t}/n)_{t\ge 0}$ converges, as $n\to\infty$, weakly to a Brownian motion with a deterministic non-degenerate covariance matrix $\Sigma={\rm diag}\{E_{\mb Q}[\omega_0(0,e_i)], i=1,\ldots,d\}$. \end{enumerate} \end{theorem} \begin{remark} For balanced random walks in a static, uniformly-elliptic, ergodic random environment on $\Z^d$, the QCLT was first shown by Lawler \cite{Lawl82}. It was then generalized to static random environments with weaker ellipticity assumptions in \cite{GZ, BD14}. We remark that in $\R^d $, a balanced random walk in a static environment corresponds to the diffusion generated by non-divergence form elliptic operators \[ L_\omega f(x)=\sum_{i,j=1}^d a_{ij}^\omega(x)\partial_{ij}f(x), \] where $(a_{ij}^\omega(x))_{1\le i,j\le d}$ is a positive-definite symmetric matrix for each $x\in\R^d$. In this setting, the QCLT was proved by Papanicolaou-Varadhan \cite{PV82}. For more general fully non-linear operators on a continuous space, quantitative homogenization estimates (which control the error between solutions of the random operator and the deterministic limiting operator) are obtained in \cite{A-S14} and \cite{Lin15}, in the static and time-dependent settings, respectively. \end{remark} \begin{remark}\label{rm3} For $(x,t)\in\Z^d\times\R$, set \[ \rho_\omega(x,t):=\rho(\theta_{x,t}\omega). \] Note that by the definition of $\Omega$ and the fact that it is equipped with a product $\sigma$-field, for any fixed $\omega\in\Omega$, the map $\R\to\Omega$ defined by $t\mapsto \theta_{0,t}\omega$ is measurable. Hence for almost all $\omega$, the function $\rho_\omega(x,t)$ is measurable in $t$. Moreover, $\rho_\omega$ possesses the following properties. For $\mb P$-almost all $\omega$, \begin{enumerate}[(i)] \item $\rho_\omega(x,t)\delta_x\mathrm{d} t$ is an invariant measure for the process $\hat X_t$ under $P_\omega$; \item $\rho_\omega(x,t)>0$ is the unique density (with respect to $\delta_x\mathrm{d} t$) for an invariant measure of $\hat X$ that satisfies $E_\mb P[\rho_\omega(0,0)]=1$; \item for all $x\in\Z^d$, $\rho_\omega(x,t)$ is weakly differentiable in $t$ with \begin{equation}\label{rho-invariance} \dot{\rho}_\omega(x,t)=\sum_{y} \rho_\omega(y,t)\omega_t(y,x), \end{equation} where $\dot{\rho}_\omega$ denotes the weak derivative of $\rho_\omega$ with respect to $t$ and $\omega_t(x,x):=-\sum_{y:y\sim x}\omega_t(x,y)$. \end{enumerate} The proof of properties (i)-(iii) is an easy exercise and will be included in the appendix of the arXiv version. \begin{remark}\label{rm:11} The weak differentiability of $\rho_\omega$ in $t$ implies that it has an absolutely continuous version (as a function of $t$).
Since $\rho_\omega(x,t)$ is only used as a density, from now on, we always assume that $\mb P$-almost surely, $\rho_\omega(x,\cdot)$ is continuous and almost-everywhere differentiable in $t$. \end{remark} \end{remark} As a main result of our paper, we will present the following {\it local limit theorem} (LLT), which is a finer characterization of the local behavior of the random walk than the QCLT in a large space-time scale. \begin{theorem} [LLT]\label{thm:llt} Let $K, t_0>0$. For $\mb P$-almost all $\omega$, \[ \lim_{n\to\infty}\sup_{|x|\le K, t\ge t_0} \Abs{ n^d\frac{P_\omega^{0,0}(X_{n^2t}=\floor{nx})}{\rho_\omega(\floor{nx}, n^2t)} -p_t^\Sigma(0,x) }=0. \] Here $\floor{x}:=(\floor{x_1},\ldots, \floor{x_d})\in\Z^d$ for $x\in\R^d$ and $p_t^\Sigma$ is the transition kernel of the Brownian motion with covariance matrix $\Sigma$ and starting point $0$. \end{theorem} Recall the definition of the transition function $p^\omega$ in \eqref{kernel}. For $\hat x=(x,t),\hat y=(y,s)\in \Z^d\times\R$, $s<t$, define the {\it heat kernel} \begin{equation}\label{eq:def-hk} q^{\omega}(\hat y,\hat x):=\dfrac{p^\omega(\hat y,\hat x)}{\rho_\omega(\hat x)}. \end{equation} Note that for fixed $\hat x=(x,t)\in\Z^d\times\R$, the function $\hat y\mapsto u(\hat y):=q^\omega(\hat y,\hat x)$ solves the parabolic equation \[ \mc L_\omega u(\hat y)=0 \quad \mbox{for all }\hat y\in\Z^d\times(-\infty,t),\] and regularity estimates of the function $u$ follow from the {\it parabolic Harnack inequality} (PHI) for the balanced operator $\mc L_\omega$. For $r>0$, let \[ B_r=\{x\in\Z^d: |x|_2\le r\} \] and write $B_r(x):=x+B_r$. \begin{theorem}[PHI for $\mc L_\omega$]\label{Harnack} Assume $\omega\in\Omega_\kappa$. Let $u$ be a non-negative solution of $\mc L_\omega u= 0$ in $B_{2R}\times(0, R^2)$. Then, for any constants $0<\theta_1<\theta_2<\theta_3<1$, there exists a constant $C=C(\kappa,d,\theta_1,\theta_2,\theta_3)$ such that \[ \sup_{B_{R/2}\times(\theta_3 R^2,R^2)}u\le C\inf_{B_{R/2}\times(\theta_1R^2, \theta_2 R^2)}u.\tag{PHI} \] \end{theorem} Theorem~\ref{Harnack} is the lattice analogue of the classical Harnack inequality of Krylov and Safonov \cite{KS80} for {\it parabolic} differential operators of non-divergence form. In the discrete space and discrete time setting, (PHI) was obtained by Kuo and Trudinger for the so-called parabolic difference operators of {\it implicit form}, see \cite[(1.16)]{KT98}. Our proof of Theorem~\ref{Harnack} mimics the classical proof of the Krylov-Safonov PHI (see \cite{Lieb}). For simplicity of presentation, we will provide the proof of Theorem~\ref{Harnack} only in the Appendix of the arXiv version of this paper. In order to prove the LLT (Theorem~\ref{thm:llt}), we need good regularity properties of the heat kernel $\hat x\mapsto v_\omega(\hat x):=q^\omega(\hat 0,\hat x)$ which solves the {\it adjoint equation} \begin{equation}\label{e27} \mc L_\omega^*v(\hat x):=\sum_{y:y\sim x}\omega_t^*(x,y)(v(y,t)-v(\hat x))-\dot{v}(\hat x)=0, \end{equation} for all $\hat x\in\Z^d\times(0,\infty)$, where $\dot{v}$ denotes the weak derivative of $v(x,t)$ with respect to $t$ and \[ \omega_t^*(x,y):=\frac{\rho_\omega(y,t)\omega_t(y,x)}{\rho_\omega(x,t)} \quad\mbox{ for }x\sim y\in\Z^d. \] To this end, we need to prove, instead of the PHI for $\mc L_\omega$, the PHI for the {\it adjoint operator} $\mc L^*_\omega$. Note that $\omega^*$ is not necessarily a balanced environment anymore.
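To illustrate the relation between $\rho_\omega$, the rates $\omega^*$ and the adjoint operator, the following is a small numerical sketch of our own (for simplicity it uses a static balanced environment on a discrete torus, in which case $\rho_\omega$ reduces to the left null vector of the generator, normalized to have mean one; the torus size and the rates are arbitrary illustrative choices). It checks that the operator built from $\omega^*$ is the adjoint of $\mc L_\omega$ in $L^2(\rho_\omega)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, n = 5, 25                        # 5 x 5 torus (Z/5Z)^2, 25 sites

def idx(x, y):
    return (x % L) * L + (y % L)

# Balanced static rates: omega(x, x + e_i) = omega(x, x - e_i).
G = np.zeros((n, n))                # generator L_omega as a matrix
for x in range(L):
    for y in range(L):
        a1, a2 = rng.uniform(0.5, 2.0, size=2)   # rates for +-e_1, +-e_2
        for j, a in [(idx(x + 1, y), a1), (idx(x - 1, y), a1),
                     (idx(x, y + 1), a2), (idx(x, y - 1), a2)]:
            G[idx(x, y), j] += a
G -= np.diag(G.sum(axis=1))

# Invariant density: rho G = 0, normalized to mean one.  By
# Perron-Frobenius the null vector of G^T has a constant sign.
w, V = np.linalg.eig(G.T)
rho = np.real(V[:, np.argmin(np.abs(w))])
rho *= n / rho.sum()

# Adjoint rates omega*(x, y) = rho(y) omega(y, x) / rho(x).
Gstar = G.T * rho[None, :] / rho[:, None]
np.fill_diagonal(Gstar, 0.0)
np.fill_diagonal(Gstar, -Gstar.sum(axis=1))

f, g = rng.standard_normal(n), rng.standard_normal(n)
print(np.allclose(rho @ G, 0.0))                  # invariance of rho
print(np.isclose(np.sum(rho * f * (G @ g)),       # adjointness in
                 np.sum(rho * g * (Gstar @ f))))  # L^2(rho)
\end{verbatim}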
Our main result is \begin{theorem}[PHI for $\mc L^*_\omega$]\label{thm-ah} Let $\mb P$ satisfy the same conditions as in Theorem~\ref{thm:recall}. Then for $\mb P$-almost all $\omega$, any non-negative solution $v$ of the adjoint equation \begin{equation}\label{e26} \mc L_\omega^* v(x,t)=0 \qquad \forall (x,t)\in B_{2R}\times(0, 4R^2] \end{equation} satisfies \[ \sup_{B_R\times(R^2,2R^2)}v\le C\inf_{B_R\times (3R^2,4R^2]}v. \] \end{theorem} The Harnack inequality for the adjoint of non-divergence form differential operators was first proved by Bauman \cite{Baum84} for uniformly-elliptic operators, and was generalized to the parabolic setting by Escauriaza \cite{Esc00}. Our proof of Theorem~\ref{thm-ah} follows the main idea of \cite{Esc00}. For a static discrete time environment, Theorem~\ref{thm-ah} was obtained by Mustapha \cite{Mustapha06}. His argument basically follows \cite{Esc00}, and uses the PHI \cite[Theorem~4.4]{KT98} of Kuo and Trudinger for operators under the {\it explicit} scheme (see definition \cite[(1.16)]{KT98}) \[ L_\omega f(x,n)=\sum_{y:y\sim x}\omega(x,y)(f(y,n)-f(x,n))-(f(x,n+1)-f(x,n)), \] $\forall (x,n)\in\Z^d\times\Z$. However, Kuo and Trudinger \cite[pp.607]{KT98} state that their result for the implicit scheme is not valid under the explicit scheme, so we believe that there is a gap in the proof of \cite{Mustapha06}. Moreover, the volume-doubling property of the invariant distribution, which is the essential part of the proof of Theorem~\ref{thm-ah}, is much simpler in the static case, see \cite{FS84}. In our dynamical setting, a {\it parabolic} volume-doubling property (Theorem~\ref{thm:vd}) is required. To this end, we need to adapt the proofs of Safonov-Yuan \cite{SY} and the results in the references therein \cite{FSY, Baum84,Garo} to our discrete space setting. The main challenge in proving Theorem~\ref{thm-ah} is that $\mc L^*_\omega$ is not balanced, and so the classical PHI for $\mc L_\omega$ (Theorem~\ref{Harnack}) is not immediately applicable. This is the main difference from the random conductance model with symmetric jump rates where \[ \omega_t(x,y)=\omega_t(y,x)=\omega^*_t(x,y), \] and thus the PHI for $\mc L_\omega$ is the same as the PHI for $\mc L_\omega^*$. See \cite{Andres14,Delm99,DD05,ACDS}, \cite{HK16}. Although our main result Theorem~\ref{thm:llt} is of a probabilistic nature, our proofs rely strongly on analytical methods. On the other hand, in our presentation we are using probabilistic tools such as martingales, optional stopping times, time-reversal and coupling arguments. This not only greatly simplifies some of the crucial steps of the proof but also sheds new light on the deep connections between the two topics. Let us explain the main idea for the proof of Theorem~\ref{thm-ah}. An important observation is that (by optional stopping) solutions of the adjoint equation can be expressed in terms of boundary values and hitting probabilities of the time-reversed process. Thus to compare values of the adjoint solution, one only needs to estimate the hitting probability (of the {\it reversed} process) at the boundary. In other words, we need to compare hitting probabilities of the {\it original} process that {\it starts from} the boundary. To this end, we will use a ``boundary Harnack inequality'' (Theorem~\ref{thm-bh}) which compares $\mc L_\omega$-harmonic functions near the boundary.
We will also need a volume-doubling inequality for the invariant measure (Theorem~\ref{thm:vd}) to control the change of probabilities due to time-reversal. Once the PHI for $\mc L^*_\omega$ is shown, by a standard argument, we get the following H\"older estimate for solutions of \eqref{e26}, which then yields our LLT. \begin{corollary}\label{cor:hoelder} Let $\mb P$ satisfy the same conditions as in Theorem~\ref{thm:recall}. Then there exists a constant $\gamma=\gamma(d,\kappa)>0$ such that for $\mb P$-almost all $\omega$, any non-negative solution $v$ of $\mc L_\omega^* v=0$ in $B_R\times(0,R^2]$, $R\ge 1$, satisfies \[ |v(x,t)-v(y,s)|\le C \left( \frac{|x-y|+|s-t|^{1/2}}{R} \right)^\gamma \sup_{ B_R\times(0,R^2]}v \] for all $(x,t),(y,s)\in B_R\times(0,R^2]$. \end{corollary} Heat kernel estimates (HKE) then follow from Theorem~\ref{thm-ah} and Corollary~\ref{cor:hoelder}. \begin{theorem} [HKE]\label{thm:hke} Let $\mb P$ satisfy the same conditions as in Theorem~\ref{thm:recall}. Then for $\mb P$-almost every $\omega$ and all $(x,t)\in\Z^d\times(0,\infty)$, \begin{align*} &P_\omega^{0,0}(X_t=x) \le \frac{C\rho_\omega(x,t)}{\rho_\omega(B_{\sqrt t}(y),s)} e^{-c(1\wedge\frac{|x|}{t})|x|} \\ \text{and }\quad &P_\omega^{0,0}(X_t=x) \ge \frac{c\rho_\omega(x,t)}{\rho_\omega(B_{\sqrt t}(y),s)} e^{-C\frac{|x|^2}{t}} \end{align*} for all $s\in[0,t]$ and $y$ with $|y|\le |x|$. Here $\rho_\omega(B_r(y),s):=\sum_{x\in B_r(y)}\rho_\omega(x,s)$. \end{theorem} By the ergodic theorem, we can obtain Gaussian bounds for large time $t$. Furthermore, we can characterize the asymptotics of the rescaled Green function of the RWRE. Recall the definitions of $\Sigma$ (in Theorem~\ref{thm:recall} \eqref{item:qclt}), $p^\Sigma_t$, $\floor{x}$ (in Theorem~\ref{thm:llt}) and the heat kernel $q^\omega(\cdot,\cdot)$ in \eqref{eq:def-hk}. \begin{corollary}\label{cor:q-estimates} Let $\mb P$ be as in Theorem~\ref{thm:recall}. We write $\hat 0:=(0,0)$. The following statements are true for $\mb P$-almost every $\omega$. \begin{enumerate}[(i)] \item\label{cor:q-hke} There exists $t_0(\omega)>0$ such that for any $\hat x=(x,t)\in\Z^d\times(t_0,\infty)$, \[ \frac{c}{t^{d/2}}e^{-\frac{C|x|^2}{t}} \le q^\omega(\hat 0,\hat x) \le \frac{C}{t^{d/2}}e^{-c(1\wedge\frac{|x|}{t})|x|}. \] As a consequence, the RWRE is recurrent when $d=2$ and transient when $d\ge 3$. \item\label{cor:green1} When $d=2$, for all $x\in \R^d\setminus\left\{0\right\}$, \[\lim_{n\to\infty}\frac{1}{\log n}\int_0^\infty \left[q^\omega(\hat 0;0,t)-q^\omega(\hat 0;\floor{nx},t)\right]\mathrm{d} t =\frac{1}{\pi\sqrt{\det\Sigma}}. \] \item\label{cor:green2} When $d\ge 3$, for all $x\in\R^d\setminus\left\{0\right\}$, \[ \lim_{n\to\infty}n^{d-2}\int_0^\infty q^\omega(\hat 0;\floor{nx},t)\mathrm{d} t =g^\Sigma(0,x), \] where $g^\Sigma(0,x):=\int_0^\infty p^\Sigma_t(0,x)\mathrm{d} t$. \end{enumerate} \end{corollary} The organization of this paper is as follows. In Section~\ref{sec:vd}, we obtain a space-time volume-doubling property for the density $\rho_\omega$. In Section~\ref{sec:near-bdry}, we establish estimates of $\mc L_\omega$-harmonic functions near the boundary, showing both the interior elliptic-type and boundary parabolic Harnack inequalities (PHI). With the volume-doubling property and the boundary PHI, we will prove the PHI for the adjoint operator (Theorem~\ref{thm-ah}) in Section~\ref{sec:proof-of-ah}.
Finally, with the adjoint PHI, we will prove Theorem~\ref{thm:llt}, Theorem~\ref{thm:hke} and Corollary~\ref{cor:q-estimates} in Section~\ref{sec:proof-llt-hke}. Section~\ref{sec:auxiliary-prob} contains probability estimates that are used in the previous sections. Throughout this paper, we assume that $\mb P$ is balanced, uniformly elliptic and ergodic with respect to the shifts $\{\theta_{x,t}, x\in\Z^d,t>0\}$. We let $\Omega_\kappa\subset\Omega$ denote the set of balanced environments $\omega$ with ellipticity constant $\kappa$. We let $C, c$ be generic constants which depend only on the dimension $d$ and $\kappa$, and which may differ from line to line. \section{Volume-doubling properties}\label{sec:vd} The purpose of this section is to prove a space-time volume-doubling property for the invariant measure $\rho_\omega$ of the process $(\hat X_t)_{t\ge 0}$. \begin{theorem}\label{thm:vd} Let $\mb P$ satisfy the same conditions as in Theorem~\ref{thm:recall} and let $\rho(x,t)=\rho_\omega(x,t)$ be as in Remark~\ref{rm3}. Then, $\mb P$-almost surely, for every $r\ge 1/2$ and $t\in[-r^2,r^2]$, \[ \rho(B_{2r},t)\le C\rho(B_r,0). \] \end{theorem} For non-divergence form differential equations, this type of estimate was first established by Fabes and Stroock \cite{FS84} for the adjoint solutions of elliptic operators, and then generalized by Escauriaza \cite{Esc00} to the parabolic setting. Note that in \cite{Esc00}, the operator is deterministic and the adjoint solutions are constructed as re-scaled limit of the Green functions. Here in our RWRE setting, we follow a different route. We constructed $\rho_\omega$ as the density of the invariant measure $\mb Q$ of the environmental process $(\bar\omega_t)_{t\ge 0}$, and we will prove the volume-doubling properties using the ergodicity of $\mb Q$. To prove Theorem~\ref{thm:vd}, a crucial estimate is the volume-doubling property (Theorem~\ref{prob_doubling}) for the hitting measure of the random walk, which is of interest in its own right. Theorem~\ref{prob_doubling} is a discrete version of \cite[Theorem 1.1]{SY} established by Safonov and Yuan in the PDE setting. \begin{theorem}\label{prob_doubling} Assume that $\omega\in\Omega_\kappa$. Let $K\ge 1$ be any fixed constant. Let $(X_t)$ be the continuous-time random walk generated by the operator $\mc L_\omega$ with ellipticity constant $\kappa>0$. For any $r>0$ and any $(y,s)\in\Z^d\times[0,\infty)$ with $|y|\le K\sqrt s$, we have \[ P_\omega^{y,0}(X_s\in B_{2r})\le C P_\omega^{y,0}(X_s\in B_r). \] Here $C$ is a constant depending on only $d,\kappa$ and $K$. \end{theorem} For a finite subgraph $D\subset\Z^d$, let \[ \partial D=\{y\in\Z^d\setminus D: y\sim x \mbox{ for some }x\in D\}, \quad \bar D:=D\cup\partial D \] and let $\partial' D=\partial(\Z^d\setminus D)$ denote the {\it inner boundary} of $D$. For an open set $\ms D\subset \Z^d\times\R$, define the {\it parabolic boundary} of $\ms D$ as \[ \ms D^\p:= \{ (x,t)\notin\setminus\ms D: \big(B_1(x)\times(t-\epsilon,t]\big)\cap\ms D\neq\emptyset \mbox{ for all }\epsilon>0 \}. \] In the special case $\ms D=D\times[0,T)$ for some finite $D\subset\Z^d$, it is easily seen that $\ms D^\p=(\partial D\times[0,T])\cup(\bar D\times\{T\})$. See figure~\ref{fig:p-bdry}. 
By the optional stopping theorem, for any $(x,t)$ in an open set $\ms D\subset\Z^d\times\R$ and any bounded integrable function $u$ on $\ms D\cup\ms D^\p$, \begin{equation}\label{representation} u(x,t)=-E_\omega^{x,t}\left[\int_0^\tau\mc L_\omega u(\hat X_r)\mathrm{d} r\right]+ E^{x,t}_\omega[u(\hat X_\tau)], \end{equation} where $\tau=\inf\{r\ge 0: (X_r,T_r)\notin \ms D\}$. \begin{figure}[H] \centering \pgfmathsetmacro{\R}{2} \begin{tikzpicture} \draw[gray!70,->, >=stealth] (-1-\R,0)--(\R+1,0) node[black,above right] {Space}; \draw[gray!70,->,>=stealth] (0,0) node[black, above right] {\small $0$}--(0,3.6) node[black,above] {time}; \draw (-\R,0) rectangle (\R,3); \draw[line width=0.6mm] (-\R,0) to (-\R,3) to (\R,3) to (\R,0); \draw[gray,decorate,decoration={brace,amplitude=10pt,mirror},yshift=-1pt] (-\R,0) -- (\R,0) node[black,below,midway, yshift=-10pt] {$D$}; \node[above right] at (0,3) {\small $T$}; \end{tikzpicture} \caption{The parabolic boundary of $D\times[0,T)$.\label{fig:p-bdry}} \end{figure} \begin{proof}[Proof of Theorem~\ref{prob_doubling}:] Note that the case $0<r<1/2$ is trivial. We only consider $r\ge 1/2$. Since $P_\omega^{y,0}=P_{\theta_{o,-s}\omega}^{y,s}$ and $\omega$ is arbitrary in Theorem~\ref{prob_doubling}, it suffices to show that for any $\omega$ and any $(x,t)\in\Z^d\times(-\infty,0]$ with $|x|\le K\sqrt{\abs{t}}$, \begin{equation}\label{doubling1} P_\omega^{x,t}(X_{|t|}\in B_{2r})\le C P_\omega^{x,t}(X_{|t|}\in B_r). \end{equation} Note that writing $\tau:=\inf\{s\ge 0: T_s=0\}$, then we have $\tau=|t|$ when $X_0=t<0$ and so \begin{equation}\label{e36} u_r(x,t):=P_\omega^{x,t}(X_{|t|}\in B_r)=P_\omega^{x,t}(X_\tau\in B_r) \end{equation} is a $\mathcal L_\omega$-harmonic function on $\Z^d\times(-\infty,0)$. Our proof of \eqref{doubling1} contains several steps. \begin{enumerate}[Step 1.] \item\label{s1} First, we show that there exists $c_0=c_0(d,\kappa)\in(0,1)$ such that \begin{equation}\label{lowerb_ur} \inf_{B_{r/2}\times[-c_0^2r^2,0)}u_r\ge \frac{1}{2}. \end{equation} Indeed, since $(X_s)_{s\ge 0}$ is a martingale, by Doob's submartingale inequality and uniform ellipticity, for any $(x,t)\in B_{r/2}\times[-\epsilon^2r^2,0)$, $\epsilon\in(0,1)$, we have \begin{align}\label{Hoeffding} P_\omega^{x,\cdot}(X_{|t|}\notin B_r) &\le P_\omega^{x,\cdot}\left(\sup_{0\le s\le \epsilon^2 r^2}|X_s-x|\ge r/2\right)\nonumber\\ &\le \frac{E_\omega^{x,\cdot}[|(X_{\epsilon^2 r^2}-x)|^2]}{r^2/4} \nonumber\\ &\le \frac{\epsilon^2 r^2/\kappa}{r^2/4}, \end{align} where in the last inequality we used the optional stopping theorem and the fact that $|X_t|^2-\tfrac{1}{\kappa}t$ is a super-martingale. Hence, taking $c_0=\sqrt\kappa/4>0$, we have \[ \sup_{(x,t)\in B_{r/2}\times[-c_0^2r^2,0)}P_\omega^{x,\cdot}(X_{|t|}\notin B_r)\le \frac{1}{2} \] and \eqref{lowerb_ur} follows. \item \label{s2} We will show that for any $k\ge 1$, there exists $\gamma_k=\gamma_k(d,\kappa,k)>0$ such that for any $\rho\ge c_0 r/k$, \begin{equation}\label{ur-decay} \inf_{B_{k\rho}\times\{-\rho^2\}}u_r\ge \frac{1}{2}\left( \frac{c_0r}{2k\rho} \right)^{\gamma_k}. \end{equation} To this end, we let $n\ge 1$ be the constant that $\frac{k\rho}{2^n}\le c_0r<\frac{k\rho}{2^{n-1}}$. 
Then, by Harnack's inequality (Theorem~\ref{Harnack}), there exists $\gamma_k=\gamma_k(d,\kappa,k)$ such that \begin{align*} \inf_{B_{k\rho}\times\{-\rho^2\}}u_r &\ge 2^{-\gamma_1} \sup_{B_{\frac{k\rho}{2}}\times\{-\frac{\rho^2}{4}\}}u_r\\ &\ge \ldots\\ &\ge 2^{-n\gamma_1}\sup_{B_{\frac{k\rho}{2^n}\times\{-(\frac{\rho}{2^n})^2\}}}u_r\\& \ge \left( \frac{c_0r}{2k\rho} \right)^{\gamma_1}\cdot\frac{1}{2}, \end{align*} where we used \eqref{lowerb_ur} in the last inequality. Display \eqref{ur-decay} is proved. Setting (c.f. figure~\ref{fig:dkr}.) \begin{equation}\label{def-D} D_{k, \rho}:=\{(x,t)\in \Z^d\times(-\infty,0]: |x|/k\le \sqrt{-t}\le \rho\}, \end{equation} we conclude from \eqref{lowerb_ur} and \eqref{ur-decay} that for $\rho\ge c_0 r/k$, \begin{equation}\label{e42} \inf_{D_{k,\rho}}u_r\ge \frac{1}{2}\left( \frac{c_0r}{2k\rho} \right)^{\gamma_k}. \end{equation} \begin{figure}[h] \centering \begin{tikzpicture} [ declare function={ f(\t) = -\t^2/(\K*\K); } \pgfmathsetmacro{\R}{2} \pgfmathsetmacro{\K}{2} \pgfmathsetmacro{\Rho0}{0.8*\R} \pgfmathsetmacro{\Rho}{0.9*\R} \begin{axis}[ xmin=-\K*\R-1, xmax=\K*\R+1, ymin=-\R*\R-0.8, ymax=1.5, ticks=none, xlabel=$x$, ylabel=$t$, axis lines=middle, unit vector ratio=1 1 ] \addplot[samples=30,domain=-\K*\R:\K*\R, name path=A] {f(x)}; \addplot[domain=-\K*\R:\K*\R,name path=D](x,-\R*\R) node[pos=0.5, below right]{\footnotesize $-R^2$} \addplot[fill=blue!20, fill opacity=0.2] fill between [of =A and D]; \draw (-\K*\R,0) rectangle (\K*\R,-\R*\R); \node[above left] at (\K*\R,0) {\footnotesize $KR$}; \fill (0,0) coordinate (o) circle (2pt); \node at (o)[above right] {\footnotesize $0$}; \node at (60:-0.8*\R*\R) {\footnotesize $D_{K,R}$}; \end{axis} \end{tikzpicture} \caption{The shaded region is $D_{K,R}$.\label{fig:dkr}} \end{figure} \item \label{s3} Next, we will define the constants $k_0,\beta,\alpha_K$ and $c_1$ which will be useful later. The reasons will be clear in the next sections. First, let \begin{equation}\label{def-betak} \beta_K:=\frac{\log 2}{\log[ K/(K+1-c_0^{-1})]}. \end{equation} We choose a constant $k_0=k_0(d)>2/c_0$ to be big enough such that \[ \beta:=\beta_{k_0}>\gamma_1. \] It is clear that we only need to prove Theorem~\ref{prob_doubling} for $K\ge 1$ large enough. In what follows we only consider $K\ge k_0$. By \eqref{e42}, there exists a constant $\alpha_K=\alpha_K(d,\kappa,K)$ such that for any $\omega$ and $\rho\ge 1$, \begin{equation} \label{e43} \inf_{x\in B_{2K\rho}}P_\omega^{x,0}(X_{3\rho^2}\in B_\rho)\ge \alpha_K. \end{equation} We choose a constant $c_1=c_1(d,\kappa,K)\ge 8$ to be big enough such that \[ \alpha_K c_0^{\gamma_1}c_1^{\beta-\gamma_1}\ge 8^\beta. \] \item For $K\ge k_0$, setting \begin{equation}\label{e45} v=4\left( \frac{2Kc_1}{c_0} \right)^{\gamma_K}u_r-u_{2r}, \end{equation} we have by \eqref{e42} \[ \inf_{D_{K,c_1 r}}v\ge 2-1=1. \] Moreover, $\inf_{B_{c_0r}\times[-c_0^2r^2,0)} v\ge 1$ by \eqref{lowerb_ur}. Put \begin{equation}\label{def-R0} R_0=R_0(K,d,\kappa)=\sup\{\rho>0: \inf_{D_{K,\rho}}v\ge 0\}. \end{equation} Clearly, $R_0\ge c_1r$. Since the parabolic boundary $D_{1,R_0}^\p$ is away from $D_{K,R_0}^\p$, by the same argument as in the proof of \eqref{ur-decay} in Step~\ref{s2} (I.e, apply the Harnack inequality consecutively) we obtain \begin{equation} \label{e44} \inf_{B_\rho\times\{-\rho^2\}}v \ge \left( \frac{c_0r}{2\rho} \right)^{\gamma_1} \qquad\text{for }\rho\in[c_0r, R_0). \end{equation} Our goal in the next steps is to prove that $R_0=\infty$. \item Put $v_-:=\max\{0,-v\}$. 
We claim that for all $\rho\in(2r,R_0)$, \begin{equation} \label{f-rho} f(\rho):= \sup_{\partial B_{K\rho}\times[-\rho^2,0]} v_- \le \left( \frac{4r}{\rho} \right)^{\beta_K}, \end{equation} where $\beta_K$ was defined in \eqref{def-betak}. Note that by the definition of $u_r$ we have $v_-=0$ in $\{(x,0):|x|>2r\}$ and $v_-=0$ in $D_{K,\rho}$ by \eqref{def-R0}. Hence, by formula \eqref{representation}, $f(\rho)$ is a decreasing function of $\rho$ for $\rho\in(2r,R_0)$. Further, set \[ q=q(K)=\frac{K+1-c_0^{-1}}{K}. \] Since $v_-(X_s,T_s), s\ge 0$ is a sub-martingale in $\Z^d\times(-\infty,0)$, by the optional stopping lemma we have for any $(x,t)\in \partial B_{K\rho}\times[-\rho^2,0]$, \begin{align*} v_-(x,t) &\le P_\omega^{x,t}(X_\cdot\text{ visits $\partial B_{(K+1-c_0^{-1})\rho}$ before times $\tau=|t|$})f(q\rho)\\ &\le P_\omega^{x,t}\left(\sup_{0\le s\le\rho^2}|X_s-x|\ge(c_0^{-1}-1)\rho \right)f(q\rho)\\ &\le f(q\rho)/2, \end{align*} where in the last inequality we used Doob's submartingale inequality as in \eqref{Hoeffding}. Let $n\ge 0$ be the integer such that $q^{n+1}\rho<2r\le q^n\rho$. We conclude that for $\rho\in(2r,R_0)$, \[ f(\rho)\le 2^{-n}f(q^n\rho)\le \left( \frac{4r}{\rho} \right)^{-\log 2/\log q}f(2r). \] Inequality \eqref{f-rho} then follows from the fact that $v_-\le 1$. \item Recall $D_{K,\rho}$ and $R_0$ in \eqref{def-D} and \eqref{def-R0}. Finally, we will prove that $R_0=\infty$ for $K\ge k_0$. Indeed, otherwise, $R_0<\infty$ and there exists $(x_0,\rho_0^2)$ with $|x_0|\le K\rho_0$, $\rho_0\in(R_0,2R_0)$ such that $v(x_0,\rho_0^2)<0$. Define a set $ S=B_{K\rho_0/2}\times[-\rho_0^2/4,0] $ and a stopping time \[ \tau_S:=\inf\{s\ge 0: (X_s,T_s)\in S\}. \] Notice that $(X_{\tau_S},T_{\tau_S})$ belongs to one of the two sets \[ S_1:=B_{K\rho_0/2}\times\{-\rho_0^2/4\}, \qquad S_2:=\partial' B_{K\rho_0/2}\times[-\rho_0^2/4,0]. \] Note also that $\inf_{S_1}v\ge 0$ by the definition of $R_0$. Hence, for $K\ge k_0$, \begin{align*} v(x_0,\rho_0^2) &= E_\omega^{x_0,\rho_0^2}\left[v(X_{\tau_S},T_{\tau_S})\right]\\ &\ge P_\omega^{x_0,\rho_0^2}\left( X_{\tau_S}\in B_{\rho_0/2} \right) \inf_{B_{\rho_0/2}\times\{-\rho_0^2/4\}}v +\inf_{S_2}v\\ &\ge \alpha_K \left( \frac{c_0r}{\rho_0} \right)^{\gamma_1}- \left( \frac{8r}{\rho_0} \right)^\beta, \end{align*} where we used \eqref{e43}, \eqref{e44}, \eqref{f-rho} and $\beta_K\ge \beta$ in the last inequality. It then follows from the fact that $\rho_0>R_0\ge c_1r$ and the choice of $c_1$ in Step~\ref{s3} that $v(x_0,\rho_0^2)\ge 0$, which contradicts with our assumption. \end{enumerate} Therefore, $v\ge 0$ holds on $D_{K,\infty}$ and the theorem is proved. \end{proof} \begin{corollary} \label{cor:space-time-vd} Given $K\ge 1$ and $\omega\in\Omega_\kappa$. For any $r\ge 1/2$, $t>0$, $|x|\le K\sqrt t$ and $s\in[0,r^2]$, we have \begin{enumerate}[(i)] \item $P_\omega^{x,0}(X_{t-s}\in B_{2r})\le C P_\omega^{x,0}(X_t\in B_r)$ if $t\ge s$. \item $P_\omega^{x,0}(X_{t+s}\in B_{2r})\le CP_\omega^{x,0}(X_t\in B_r)$. \end{enumerate} Here the constant $C$ depends only on $d,\kappa$ and $K$. \end{corollary} \begin{proof} \begin{enumerate}[(i)] \item First, by Theorem~\ref{prob_doubling}, \[ P_\omega^{x,0}(X_t\in B_r) \ge CP_\omega^{x,0}(X_t\in B_{4r}). \] On the other hand, notice that by \eqref{lowerb_ur} and \eqref{e42} (and recall the definition of $u_r$ in \eqref{e36}), we have \[ \inf_{(y,s)\in B_{2r}\times[0,r^2]}P_\omega^{y,\cdot}(X_s\in B_{4r})\ge C. 
\] Hence, by the Markov property, \begin{align*} P_\omega^{x,0}(X_t\in B_{4r}) &\ge \sum_{y\in B_{2r}}P_\omega^{x,0}(X_{t-s}=y)P_\omega^{y,t-s}(X_s\in B_{4r})\\ &\ge C P_\omega^{x,0}(X_{t-s}\in B_{2r}). \end{align*} We proved (i). \item By the Markov property, \begin{align*} P_\omega^{x,0}(X_{t+s}\in B_{2r}) &=\sum_y P_\omega^{x,0}(X_t=y) P_\omega^{y,t}(X_s\in B_{2r})\\ &=\sum_{n=0}^\infty\sum_{y:|y|\in[2^nr,2^{n+1}r)}P_\omega^{x,0}(X_t=y) P_\omega^{y,t}(X_s\in B_{2r}) \\ &\stackrel{\eqref{prob-upperb}}{\le } C\sum_{n=0}^\infty P_\omega^{x,0}(X_t\in B_{2^n r})(e^{-2^nr}+ e^{-c 4^n}). \end{align*} The second inequality then follows by observing that (cf. Theorem~\ref{prob_doubling}) \[ P_\omega^{x,0}(X_t\in B_{2^n r})\le C^n P_\omega^{x,0}(X_t\in B_r). \] \end{enumerate} Our proof is complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:vd}:] First, we will obtain for any non-negative bounded measurable function $f\ge 0$ on $\Omega$ and $t\in\R, r\ge 1$, \begin{equation}\label{formula1} E_\mb P[\rho_\omega(B_r,t)f] = \lim_{T\to\infty}\sum_{z}\frac{1}{T}\int_0^T E_\mb P\left[ P_\omega^{z,0}(X_s\in B_r)f(\theta_{0,s-t}\omega) \right]\mathrm{d} s. \end{equation} Indeed, by the translation-invariance of $\mb P$ we have \begin{align}\label{e48} E_\mb P[\rho_\omega(B_r,t)f(\omega)] &= \sum_{x\in B_r}E_\mb P\left[\rho(\omega)f(\theta_{x,-t}\omega)\right]\nonumber\\ &=E_\mb Q \left[ \sum_{x\in B_r}f(\theta_{x,-t}\omega) \right]. \end{align} Further, by the ergodic theorem and dominated convergence theorem, \begin{equation}\label{e47} E_\mb Q \left[ \sum_{x\in B_r}f(\theta_{x,-t}\omega) \right] = \lim_{T\to\infty}\frac{1}{T} \int_0^T E_\mb P E_\omega^{0,0} \left[ \sum_{x\in B_r}f(\theta_{x,-t}\bar\omega_s) \right]\mathrm{d} s. \end{equation} Note that \begin{align} \label{e46} E_\omega^{0,0} \left[ \sum_{x\in B_r}f(\theta_{x,-t}\bar\omega_s) \right] &= \sum_{y\in\Z^d,x\in B_r}P_\omega^{0,0}(X_s=y)f(\theta_{x+y,-t+s}\omega)\nonumber\\ &= \sum_{y,z}P_\omega^{0,0}(X_s=y)f(\theta_{z,-t+s}\omega)\mathbbm{1}_{|y-z|\le r}\nonumber\\ &= \sum_{z}P_\omega^{0,0}(X_s\in B_r(z))f(\theta_{z,-t+s}\omega). \end{align} Hence \eqref{formula1} follows from \eqref{e48}, \eqref{e47} and \eqref{e46}. Next, from \eqref{formula1} we obtain \begin{align*} E_\mb P[\rho_\omega(B_r,t)f] &= \lim_{T\to\infty}\sum_{z}\frac{2}{T}\int_{T/2}^T E_\mb P\left[ P_\omega^{z,0}(X_s\in B_r)f(\theta_{0,s-t}\omega) \right]\mathrm{d} s\\ &= \lim_{T\to\infty}\sum_{z:|z|\le\sqrt T}\frac{2}{T}\int_{T/2}^T E_\mb P\left[ P_\omega^{z,0}(X_s\in B_r)f(\theta_{0,s-t}\omega) \right]\mathrm{d} s. \end{align*} Using this formula and Corollary~\ref{cor:space-time-vd}, for any $t\in[-r^2,r^2]$ we get \begin{align*} E_\mb P[\rho_\omega(B_r,0)f] \ge &C\lim_{T\to\infty}\sum_{z:|z|\le\sqrt T}\frac{2}{T}\int_{T/2}^T E_\mb P\left[ P_\omega^{z,0}(X_{s+t}\in B_{2r})f(\theta_{0,s}\omega) \right]\mathrm{d} s\\ &= CE_\mb P[\rho_\omega(B_{2r},t)f]. \end{align*} Since the measurable function $f\ge 0$ is arbitrary, Theorem~\ref{thm:vd} follows. \end{proof} \begin{remark} By Theorem~\ref{thm:vd}, for any $r\ge 1$, \[ \frac{c}{r^2|B_r|}\int_0^{r^2} \rho_\omega(B_r, s)\mathrm{d} s \le \frac{1}{|B_r|}\rho_\omega(B_r,0) \le \frac{C}{r^2|B_r|}\int_0^{r^2} \rho_\omega(B_r, s)\mathrm{d} s. \] Hence, by the ergodic theorem, for $\mb P$-almost every $\omega$, \begin{equation}\label{eq:rho-ergodic} c\le \varliminf_{r\to\infty}\frac{1}{|B_r|}\rho_\omega(B_r,0) \le \varlimsup_{r\to\infty}\frac{1}{|B_r|}\rho_\omega(B_r,0) \le C. 
\end{equation} \end{remark} \section{Estimates of solutions near the boundary}\label{sec:near-bdry} Throughout this section, we let \[ Q_r:=B_r\times[0,r^2). \] \subsection{An elliptic-type Harnack inequality} The purpose of this subsection is to establish an elliptic-type PHI, which is a discrete version of \cite[Theorem 2.6]{Garo}. Unlike the usual PHI, which compares values in the same spatial ball but different time coordinates, the elliptic-type PHI states that values of a $\mc L_\omega$-harmonic function are comparable for any space-time coordinates away from the parabolic boundary, as long as it takes zero-value in the lateral boundary. See figure~\ref{fig:etharnack}. \begin{theorem}[Interior elliptic-type Harnack inequality]\label{thm-eh} Assume $\omega\in\Omega_\kappa$. Let $R\ge 1$ and $u\ge 0$ satisfies \[ \left\{ \begin{array}{rl} &\mc L_\omega u=0 \quad\mbox{ in }Q_R:=B_R\times[0,R^2) \\ & u=0 \quad\mbox{ in }\partial B_R\times[0,R^2). \end{array} \right. \] Then for $0<\delta\le \tfrac{1}{4}$, letting $Q^\delta_R:=B_{(1-\delta) R}\times[0, (1-\delta^2)R^2)$, there exists a constant $C=C(d,\kappa, \delta)$ such that \[ \sup_{Q^\delta_R}u\le C\inf_{Q_R^\delta}u. \] \end{theorem} \begin{figure}[H] \centering \pgfmathsetmacro{\R}{2.5} \pgfmathsetmacro{\r}{1} \begin{tikzpicture} \begin{axis}[ xmin=-\R-.7, xmax=\R+1, ymin=-1, ymax=1.8+\R*\R, ticks=none, xlabel=$x$, ylabel=$t$, axis lines=middle, unit vector ratio=2 1 ] \draw (-\R, 0) rectangle (\R,\R*\R); \draw[fill=blue!20, fill opacity=0.2] (-\R+\r,0) rectangle (\R-\r,\R*\R-2*\r*\r); \node at (0,0) [below right]{$0$}; \node at (\R,0) [below]{\footnotesize $R$}; \node at (0,\R*\R)[above right]{\footnotesize $R^2$}; \node at (\R/3,\R*\R/3) {$Q_R^\delta$}; \draw[<->, >=stealth] (\R-\r,\R*\R/2)--(\R,\R*\R/2) node[above, midway]{\footnotesize $\delta R$}; \draw[<->, >=stealth] (-\R/2,\R*\R-2*\r*\r)--(-\R/2,\R*\R) node[midway,right]{\footnotesize $(\delta R)^2$}; \end{axis} \end{tikzpicture} \caption{The values of $u$ are comparable inside the region $Q_R^\delta$. \label{fig:etharnack}} \end{figure} To prove Theorem~\ref{thm-eh}, we need a so-called Carlson-type estimate. For parabolic differential operators in non-divergence form, this kind of estimate was first proved by Garofalo \cite{Garo} (see also \cite[Theorem 3.3]{FSY}). We use the convention $\inf\emptyset=\infty$. \begin{theorem}\label{thm-Carl} Assume $\omega\in\Omega_\kappa$, $R\ge 2, r\in[1,R/2]$. Suppose $0\in\partial B_R(z)$ for some $z\in\Z^d$.Then for a function $u\ge 0$ that satisfies \begin{align*} \left\{ \begin{array}{rl} \mc L_\omega u=0 &\text{ in } D:=B_R(z)\times[-2r^2,4r^2)\\ u=0 &\text{ in } (B_{2r}\cap \partial B_R(z))\times[0,4r^2), \end{array} \right. \end{align*} we have \begin{equation}\label{e25} \sup_{Q_r\cap D}u\le C\inf_{y\in B_{r/2}(z_r)}u(y,-r^2), \end{equation} where $z_r:=\frac{r}{R}z\in\R^d$. See figure~\ref{fig:carl}. 
\end{theorem} \begin{figure}[H] \centering \begin{tikzpicture} \pgfmathsetmacro{\R}{3} \pgfmathsetmacro{\r}{1.2} \begin{axis}[ xmin=-1.3, xmax=2*\R+0.8, ymin=-2*\r*\r-0.5, ymax=4*\r*\r+1.2, ticks=none, xlabel=$x$, ylabel=$t$, axis lines=middle, unit vector ratio=1 0.8 ] \draw (0,-2*\r*\r) node[left] {\footnotesize $-2r^2$} rectangle (2*\R,4*\r*\r); \draw[fill=blue!10, fill opacity=0.2] (0, 0) rectangle (2*\r,4*\r*\r); \draw[fill=gray!20] (0,0) rectangle (\r,\r*\r); \node at (0.5*\r,0.5*\r*\r) {\tiny $Q_r\cap D$}; \node at (\r,2*\r*\r) {\footnotesize $Q_{2r}\cap D$}; \draw[very thick] (\r/2,-\r*\r) coordinate (a)--(1.5*\r,-\r*\r) node[below, midway] {\scriptsize $B_{r/2}(z_r)$}; \node[left] at (0,4*\r*\r) {\footnotesize $4r^2$}; \fill (0,0) node[above left] {\footnotesize $0$} circle (1.3pt); \fill (\R,0) node[below] {\footnotesize $z$} circle (1.3pt); \draw[dashed] (\r, 0)--(\r,-\r*\r) coordinate (zr); \draw[dashed] (0,-\r*\r) node[left] {$-r^2$}--(a); \fill (zr) circle (1.3pt); \draw[very thick] (0,0)--(0,4*\r*\r); \end{axis} \end{tikzpicture} \caption{Theorem~\ref{thm-Carl}. Values in $Q_r\cap D$ are controlled by values in $B_{r/2}(z_r)\times\{-r^2\}$. Here the region $D=B_R(z)\times[-2r^2,4r^2)$ is the biggest box. \label{fig:carl}} \end{figure} \begin{proof} We will prove the theorem by showing the following stronger estimate than \eqref{e25} \begin{equation}\label{e49} \sup_{\hat x\in Q_{2r}\cap D}\left( d_0(\hat x)/r \right)^\gamma u(\hat x)\le C\inf_{y\in B_{r/2}(z_r)}u(y,-r^2), \end{equation} where $d_0(\hat x):=\sup\left\{\rho\ge 0: Q_\rho(\hat x)\subset Q_{2r}\right\}$ and $\gamma=\gamma(d,\kappa)>0$ is a constant. Our proof of \eqref{e49} consists of two steps. \begin{enumerate}[Step 1.] \item We claim that the supremum \[ M:=\sup_{\hat x\in Q_{2r}\cap D}\left( d_0(\hat x)/r \right)^\gamma u(\hat x) \] of the left side of \eqref{e49} could only be achieved by those $\hat x\in Q_{2r}\cap D$ with $\epsilon d_0(\hat x)\le d_1(\hat x)$, where $\epsilon=\epsilon(\gamma)\in(0,\tfrac{1}{3})$ is a constant to be determined and \[ d_1(\hat x):=\sup\{\rho\ge 0: Q_\rho(\hat x)\in Q_{2r}\cap D\}. \] (Clearly, $d_1\le d_0$.) Indeed, if $\hat x:=(x,t)\in Q_{2r}\cap D$ satisfies $\epsilon d_0(\hat x)> d_1(\hat x)$, then there exists $\hat x_0:=(x_0,t)\in Q_{2r}\cap D$ such that $x_0\in\partial B_R$ and $d_1(\hat x)=|x_0-x|\ge 1$. Then for any $\hat y=(y,s)\in Q_{2d_1(\hat x)}(\hat x_0)$, \begin{align*} d_0(\hat x)&\le d_0(\hat y)+|x-y|+|t-s|^{1/2}\\ &\le d_0(\hat y)+|x-x_0|+|x_0-y|+|t-s|^{1/2}\\ &\le d_0(\hat y)+d_1(\hat x)+2d_1(\hat x)+2d_1(\hat x)\\ &\le d_0(\hat y)+5\epsilon d_0(\hat x). \end{align*} So for any $\hat y\in Q_{2d_1(\hat x)}(\hat x_0)$, we have$(1-5\epsilon)d_0(\hat x)\le d_0(\hat y)$ and \begin{equation}\label{e24} d_0(\hat x)^\gamma u(\hat x) \le (1-5\epsilon)^{-\gamma}d_0(\hat y)^\gamma\sup_{Q_{d_1(\hat x)}(\hat x_0)}u. 
\end{equation} Next, notice that $Q_{2d_1(\hat x)}(\hat x_0)\subset Q_{3d_1(\hat x)}(\hat x)\subset Q_{d_0(\hat x)}(\hat x)\subset Q_{2r}$, by the boundary condition of $u$ we get for $\hat y\in Q_{d_1(\hat x)}(\hat x_0)$, \begin{align}\label{e50} & u(\hat y)\nonumber\\ &\stackrel{\eqref{representation}}{\le} P_\omega^{\hat y}(\hat X_\cdot \text{ exits $Q_{2d_1(\hat x)}(\hat x_0)\cap D$ not from }\partial B_R) \sup_{Q_{2d_1(\hat x)}(\hat x_0)\cap D}u\nonumber\\ &\le \left(1-P_\omega^{\hat y} (\hat X_\cdot \text{ exits $B_{2d_1(\hat x)}(\hat x_0)\cap B_{2R}$ from $\partial B_R$ before time }4d_1(\hat x)^2) \right)\nonumber\\ &\qquad\times\sup_{Q_{2d_1(\hat x)}(\hat x_0)\cap D}u \end{align} By \eqref{e24}, \eqref{e50} and Lemma~\ref{lem:prob1}, we have for $\hat x\in Q_{2r}\cap D$ with $\epsilon d_0(\hat x)\le d_1(\hat x)$, \[ \left( d_0(\hat x)/r \right)^\gamma u(\hat x) \le (1-5\epsilon)^{-\gamma}(1-\theta)M, \] where $\theta\in(0,1)$ is the constant in Lemma~\ref{lem:prob1}. Now taking $\epsilon>0$ small enough such that $(1-5\epsilon)^{-\gamma}(1-\theta)<1/2$, our claim is proved. \item By Step 1, to prove \eqref{e49} it suffices to show that \begin{equation}\label{e51} \sup_{\hat x\in Q_{2r}\cap D}\left( d_1(\hat x)/r \right)^\gamma u(\hat x)\le C\inf_{y\in B_{r/2}(z_r)}u(y,-r^2). \end{equation} We will prove \eqref{e51} by consecutive applications of the parabolic Harnack inequality to a chain of parabolic cubes that links $\hat x\in Q_{2r}\cap D$ to $(\bar y_r,s+r^2)$. To be specific, take any fixed $\hat x:=(x,t)\in Q_{2r}\cap D$ . observe that we can construct a sequence of $n\le c\log(r/d_1(\hat x))$ balls $B^i:=B_{r_i}(x_i)\subset B_{2r_i}(x_i)\subset B_R(z)\cap B_{2r}$, $i=1,\ldots, n$, such that \begin{itemize} \item $x_1=x$, $x_n=z_r$; \item $r_i=\frac{d_1(\hat x)}{4} (\sqrt 2)^i$, and $r_{n-1}<\frac{r}{2}\le r_n$, $i=1,\ldots, n$; \item $B^i\cap B^{i+1}\neq\emptyset$ for $i=1,\ldots,n-1$. \end{itemize} We let $\theta>0$ be the constant such that $\theta (r_n^2-r_1^2)=r^2+t$. Then, applying the Harnack inequality (Theorem~\ref{Harnack}) to the pairs of cubes \begin{align*} &Q^i_+:=B^i\times[t-\theta (r_{i+1}^2-r_1^2),t-\theta(\frac{5}{3}r_i^2-r_1^2)]\\ &Q^i_-:=B^i\times[t-\theta(\frac{4}{3}r_i^2-r_1^2),t-\theta (r_i^2-r_1^2)], \end{align*} for $i=1,\ldots n-1$, we get \[ u(\hat x)\le C^{\log (r/d_1(\hat x))}\inf_{y\in B_{r/2}(z_r)}u(y,-r^2). \] Display \eqref{e51} is proved. \end{enumerate} Our proof is complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm-eh}:] It suffices to consider the case $R\ge 4$. Then \begin{align*} \sup_{Q_R^\delta}u &\stackrel{\eqref{representation}}{\le} \sup_{B_R\times\{R^2-\tfrac{1}{4}(\delta R)^2\}}u\\ &\le C\sup_{B_{(1-\delta)R}\times\{R^2-\tfrac{1}{2}(\delta R)^2\}}u\\ &\le C(d,\kappa,\delta)\inf_{Q_R^\delta }u, \end{align*} where we applied the boundary Harnack inequality \eqref{e25} and the Harnack inequality (Theorem~\ref{Harnack}) in the second and third inequalities. \end{proof} \subsection{A boundary Harnack inequality} For positive harmonic functions that take value zero on the spatial boundary, the following boundary Harnack inequality compares the values near the spatial boundary and values inside the ball, with time coordinate appropriately shifted. \begin{theorem}[Boundary PHI]\label{thm-bh} Let $R\ge 1$. Suppose $u$ is a nonnegative solution to $\mc L_\omega u=0$ in $(B_{4R}\setminus B_{2R})\times(-2R^2,2R^2)$, and $u|_{\partial B_{4R}\times\R}=0$. 
Then for any $(x,t)\in (B_{4R}\setminus B_{3R})\times(-R^2,R^2)$, we have \[ C\frac{\dist(x,\partial B_{4R})}{R}\sup_{y\in\partial B_{3R}}u(y, t+R^2) \le u(x,t) \le C\frac{\dist(x,\partial B_{4R})}{R}\inf_{y\in\partial B_{3R}}u(y,t-R^2). \] \end{theorem} Theorem~\ref{thm-bh} is a discrete version of inequality (3.9) in \cite{Garo} (for non-divergence form parabolic differential operators). In what follows we offer a proof with probabilistic flavor. \begin{proof}[Proof of Theorem~\ref{thm-bh}:] We only need to consider the case when $R>0$ is large enough. For the lower bound, by the Carlson-type estimate~\eqref{e25}, it suffices to consider the case $x\in B_{4R}\setminus B_{4\beta R}$ and $R>(1-\beta)^{-1}$, where $1-\beta>0$ is small enough so that the equality in Corollary~\ref{cor2} holds. Let $\tau_\beta:=\inf\{s>0: X_s\notin B_{4R}\setminus B_{4\beta R}\}$. Note that for $t\in(-R^2,R^2)$, \begin{align*} u(x,t) &\stackrel{\eqref{representation}}{\ge }\sup_{\hat z\in\partial' B_{4\beta R}\times[t,t+0.5R^2]} u(\hat z)P_\omega^{x,t}(\tau_\beta\le R^2/2,X_{\tau_\beta}\in B_{4\beta R})\\ &\ge C\sup_{y\in\partial B_{3R}}u(y,t+R^2)P_\omega^{x,t}(\tau_\beta\le R^2/2,X_{\tau_\beta}\in B_{4\beta R}) \end{align*} where we applied the Harnack principle (to a chain of balls that covers $\partial B_{3R}$) in the last inequality. By Corollary~\ref{cor2}, the lower bound is obtained. To obtain the upper bound, let $\tau=\inf\{s>0:X_s\notin B_{4R}\setminus B_{3R}\}$. Note that for $(x,t)\in (B_{4R}\setminus B_{3R})\times(-R^2,R^2)$, \begin{align*} u(x,t) &\stackrel{\eqref{representation}}{\le} \bigg[\max_{z\in B_{4R}\setminus B_{3R}}u(z,t+R^2/2)+\max_{\partial' B_{3R}\times(t,t+0.5R^2]}u\bigg] P_\omega^{x,t}(X_{\tau\wedge 0.5R^2}\notin \partial B_{4R})\\ &\stackrel{\eqref{e25}}{\le} C\bigg[\max_{z\in B_{3.5R}\setminus B_{3R}}u(z,t-R^2/2)+\max_{\partial' B_{3R}\times(t,t+0.5R^2]}u\bigg] P_\omega^{x,t}(X_{\tau\wedge 0.5R^2}\notin \partial B_{4R})\\ &\le C\inf_{z\in\partial B_{3R}}u(z,t-R^2)\dist(x,\partial B_{4R})/R, \end{align*} where we applied the Harnack inequality (to a chain of balls that covers $\partial' B_{3R}$) and Lemma~\ref{lem7} in the last inequality. \end{proof} \section{Proof of PHI for the adjoint operator (Theorem~\ref{thm-ah})}\label{sec:proof-of-ah} We define $\hat Y_t=(Y_t,S_t)$ to be the continuous-time Markov chain on $\Z^d\times\R$ with generator $\mc L_\omega^*$. The process $\hat Y_t$ can be interpreted as the time-reversal of $\hat X_t$. We denote by $P_{\omega^*}^{y,s}$ the quenched law of $\hat Y_\cdot$ starting from $\hat Y_0=(y,s)$ and by $E_{\omega^*}^{y,s}$ the corresponding expectation. Note that $P_{\omega^*}^{\cdot,\cdot}$-almost surely, $S_t=S_0-t$. For $R>0$, let \begin{equation}\label{e22} \tau_R(\hat X):=\inf\{t\ge 0: X_t\notin B_R\} \end{equation} and define $\tau_R(\hat Y)$ similarly. For any $\hat x=(x,t)$ and $\hat y=(y,s)$ in $B_R\times\R$ with $s>t$, set \[ \begin{array}{rl} & p_R^\omega(\hat x;\hat y)=P_\omega^{x,t}(X_{s-t}=y,s-t<\tau_R(\hat X)),\\ &p_R^{*\omega}(\hat y;\hat x)=P_{\omega^*}^{y,s}(Y_{s-t}=x, s-t<\tau_R(\hat Y)). \end{array} \] Note that \[ p_R^{*\omega}(\hat y;\hat x)=\frac{\rho_\omega(\hat x)}{\rho_\omega(\hat y)}p_R^\omega(\hat x;\hat y). \] First, we need a representation theorem for solutions to the adjoint equation. 
\begin{lemma}\label{lem8} For any non-negative solution $v$ to the adjoint operator $\mc L^*_\omega$ in $B_R\times(0,T]$ and $\hat y=(y,s)\in B_R\times(0,T]$, \begin{align*} \MoveEqLeft v(\hat y) = \sum_{x\in\partial B_R,z\in B_R,x\sim z}\int_0^s \frac{\rho_\omega(x,t)}{\rho_\omega(\hat y)}\omega_t(z,x)p^{\omega}_R(z,t;\hat y) v(x,t) \mathrm{d} t\\ &+ \sum_{x\in B_R}\frac{\rho(x,0)}{\rho(y,s)}p_R^\omega(x,0;y,s)v(x,0). \end{align*} \end{lemma} \begin{proof} Since $(v(\hat Y_t))_{t\ge 0}$ is a martingale, by the optional stopping theorem, \begin{align*} v(y,s) &=E_{\omega^*}^{y,s}[v(\hat Y_{\tau_R\wedge s})] =E_{\omega^*}^{y,s}[v(\hat Y_{\tau_R})1_{\tau_R\le s}] +E_{\omega^*}^{y,s}[v(\hat Y_s)1_{\tau_R>s}]. \nonumber \end{align*} Note that \begin{align*} E_{\omega^*}^{y,s}[v(\hat Y_s)1_{\tau_R>s}] &= \sum_{x\in B_R}p^{*\omega}_{R}(\hat y; x,0)v(x,0)= \sum_{x\in B_R}\frac{\rho_\omega(x,0)}{\rho_\omega(y,s)}p_R^\omega(x,0;y,s)v(x,0). \end{align*} It remains to show that \begin{equation}\label{e28} E_{\omega^*}^{y,s}[v(\hat Y_{\tau_R})1_{\tau_R\le s}] = \sum_{x\in\partial B_R,z\in B_R,x\sim z}\int_0^s \frac{\rho_\omega(x,t)}{\rho_\omega(\hat y)}\omega_t(z,x)p^{\omega}_R(z,t;\hat y) v(x,t) \mathrm{d} t. \end{equation} First, we will show that for any $x\in\partial B_R$, \begin{equation}\label{e30} P_{\omega*}^{\hat y}(Y_{\tau_R}=x,\tau_R\in\mathrm{d} t) = \sum_{z\in B_R,z\sim x}\frac{\rho_\omega(x,s-t)}{\rho_\omega(\hat y)}\omega_{s-t}(z,x)p_R^\omega(z,s-t;\hat y)\mathrm{d} t. \end{equation} Indeed, for $h>0$ small enough, $x\in\partial B_R$ and almost every $t\in(0,s)$, \begin{align*} &P_{\omega^*}^{y,s}(Y_{\tau_R}=x,\tau_R\in (t-h,t+h))\\ &=\sum_{z\in B_R:z\sim x}P_{\omega^*}^{\hat y}(Y_{t-h}=z,\tau_R>t-h)P_{\omega^*}^{z,s-t+h}(Y_{2h}=x)+o(h)\\ &=\sum_{z\in B_R:z\sim x}p_R^{\omega^*}(\hat y;z,s-t) \int_{-h}^h\omega^*_{s-t+r}(z,x)\mathrm{d} r+o(h)\\ &= \sum_{z\in B_R:z\sim x}\frac{\rho_\omega(z,s-t)}{\rho_\omega(\hat y)}p_R^{\omega}(z,s-t;\hat y) \int_{-h}^h\frac{\rho_\omega(x,s-t+r)}{\rho_\omega(z,s-t+r)}\omega_{s-t+r}(x,z)\mathrm{d} r+o(h). \end{align*} Dividing both sides by $2h$ and taking $h\to 0$, display \eqref{e30} follows by Lebesgue's differentiation theorem. Finally, display \eqref{e28} is obtained by applying \eqref{e30} to \[ E_{\omega^*}^{y,s}[v(\hat Y_{\tau_R})1_{\tau_R\le s}] = \sum_{x\in\partial B_R}\int_0^s v(x,s-t)P_{\omega*}^{\hat y}(Y_{\tau_R}=x,\tau_R\in\mathrm{d} t) \] and a change of variable. \end{proof} For fixed \[\hat y:=(y,s)\in B_{R}\times\R,\] set $u_{\hat y}(\hat x):=p^\omega_{2R}(\hat x,\hat y)$. Then \[ \begin{cases} \mc L_\omega u_{\hat y}(x,t)=0 &\mbox{ for }(x,t)\in B_{2R}\times(-\infty,s)\cup (B_{2R}\setminus B_R)\times\R,\\ u_{\hat y}(x,t)=0 &\mbox{ when }x\in\partial B_{2R}\mbox{ or }t>s. \end{cases} \] By Theorem~\ref{thm-eh}, for $(x,t)\in B_{3R/2}\times(s-4R^2,s-\tfrac{R^2}{2})$, we have $u_{\hat y}(x,t)\ge C u_{\hat y}(o, s-R^2)$. Moreover, for $(x,t)\in (B_{2R}\setminus B_{3R/2})\times(s-4R^2,s-\tfrac{R^2}{2})$, by Theorem~\ref{thm-bh} and Theorem~\ref{thm-eh}, \begin{align*} u_{\hat y}(x,t) &\ge C\sup_{z\in\partial B_{3R/2}}u_{\hat y}(z,t+R^2/4)\dist(x,\partial B_{2R})/R\\ &\ge Cu_{\hat y}(o,s-R^2)\dist(x,\partial B_{2R})/R. \end{align*} Hence we conclude that for any $(x,t)\in B_{2R}\times(s-4R^2,s-\tfrac{R^2}{2})$, \begin{align}\label{e34} u_{\hat y}(x,t) \ge Cu_{\hat y}(o,s-R^2)\dist(x,\partial B_{2R})/R. 
\end{align} Similarly, for any $(x,t)\in B_{2R}\times(s-4R^2,s)$, \begin{align}\label{e35} u_{\hat y}(x,t) &\le C\inf_{z\in\partial B_{3R/2}}u_{\hat y}(z,t-R^2/4)\dist(x,\partial B_{2R})/R\nonumber\\ &\le Cu_{\hat y}(o,s-R^2)\dist(x,\partial B_{2R})/R. \end{align} \begin{lemma}\label{lem9} Let $v\ge 0$ satisfies $\mc L_\omega^* v=0$ in $B_{2R}\times(0,4R^2]$, then for any $\bar Y=(\bar y,\bar s)\in B_R\times(3R^2,4R^2]$ and $\lbar Y=(\lbar y,\lbar s)\in B_R\times(R^2,2R^2)$, we have \[ \frac{v(\bar Y)}{v(\lbar Y)} \ge C\dfrac{\sum_{x\in\partial B_{2R}}\int_{0}^{R^2}\rho_\omega(x,t)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})} {\sum_{x\in\partial B_{2R}}\int_{0}^{4R^2}\rho_\omega(x,t)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})}. \] \end{lemma} \begin{proof} Write $\hat x:=(x,t)$ and set \[ \bar u(\hat x):=p^\omega_{2R}(\hat x;\bar Y), \quad \lbar u(\hat x):=p^\omega_{2R}(\hat x;\lbar Y). \] By Lemma~\ref{lem8} and \eqref{e34}, \begin{align}\label{e32} v(\bar Y) &\ge C\sum_{x\in\partial B_{2R},z\in B_{2R},x\sim z}\int_0^{\lbar s} \frac{\rho_\omega(\hat x)}{\rho_\omega(\bar Y)}\bar u(z,t)v(\hat x)\mathrm{d} t \nonumber\\ &\qquad+ C\sum_{x\in B_{2R}}\frac{\rho_\omega(x,0)}{\rho_\omega(\bar Y)}\bar u(0,\bar s-R^2)\frac{\dist(x,\partial B_{2R})}{R}v(x,0)\nonumber\\ &\ge C\frac{\bar u(0,\bar s-R^2)}{R\rho_\omega(\bar Y)}\bigg[\sum_{x\in\partial B_{2R}}\int_0^{\lbar s}\rho_\omega(\hat x)v(\hat x)\mathrm{d} t\nonumber\\ &\qquad+\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})v(x,0)\bigg] \end{align} Similarly, by Lemma~\ref{lem8} and \eqref{e35}, we have \begin{align}\label{e33} v(\lbar Y) &\le C\frac{\lbar u(0,\lbar s-R^2)}{R\rho_\omega(\lbar Y)}\bigg[\sum_{x\in\partial B_{2R}}\int_0^{\lbar s}\rho_\omega(\hat x)v(\hat x)\mathrm{d} t\nonumber\\ &\qquad+\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})v(x,0)\bigg]. \end{align} Combining \eqref{e32} and \eqref{e33} we get \begin{align*} \frac{v(\bar Y)}{v(\lbar Y)} &\ge C\frac{\bar u(o,\bar s-R^2)/\rho_\omega(\bar Y)}{\lbar u(o, \lbar s-R^2)/\rho_\omega(\lbar Y)}. \end{align*} Next, taking $v\equiv 1$, by Lemma~\ref{lem8} and \eqref{e35}, \begin{align*} 1 &= \sum_{x\in\partial B_{2R},z\in B_{2R},z\sim x}\int_{0}^{\bar s} \frac{\rho_\omega(\hat x)}{\rho_\omega(\bar Y)} \omega_t(z,x)\bar u(z,t)\mathrm{d} t + \sum_{x\in B_{2R}}\frac{\rho_\omega(x,0)}{\rho_\omega(\bar Y)}\bar u(x,0)\\ &\le C\frac{\bar u(o,\bar s-R^2)}{R\rho_\omega(\bar Y)} \big[ \sum_{x\in\partial B_{2R}}\int_{0}^{\bar s}\rho_\omega(\hat x)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R}) \big]. \end{align*} Similarly, by Lemma~\ref{lem8} and \eqref{e34}, \begin{align*} 1 &= \sum_{x\in\partial B_R,z\in B_{2R},x\sim z}\int_{0}^{\lbar s} \frac{\rho_\omega(\hat x)}{\rho_\omega(\lbar Y)} \omega_t(z,x)\lbar u(z,t)\mathrm{d} t + \sum_{x\in B_{2R}}\frac{\rho_\omega(x,0)}{\rho_\omega(\lbar Y)}\lbar u(x,0)\\ &\ge C\frac{\lbar u(o,\lbar s-R^2)}{R\rho_\omega(\lbar Y)} \big[ \sum_{x\in\partial B_{2R}}\int_{0}^{\lbar s/2}\rho_\omega(\hat x)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_R) \big]. 
\end{align*} Hence we obtain \[ \frac{\bar u(o,\bar s-R^2)/\rho_\omega(\bar Y)}{\lbar u(o, \lbar s-R^2)/\rho_\omega(\lbar Y)} \ge C\dfrac{\sum_{x\in\partial B_{2R}}\int_{0}^{\lbar s/2}\rho_\omega(x,t)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})} {\sum_{x\in\partial B_{2R}}\int_{0}^{\bar s}\rho_\omega(x,t)\mathrm{d} t +\sum_{x\in B_{2R}}\rho_\omega(x,0)\dist(x,\partial B_{2R})}. \] \end{proof} \begin{remark} It is clear that for a static environment (i.e, the case considered in \cite{Mustapha06}), the conclusion of the adjoint Harnack inequality (Theorem~\ref{thm-ah}) follows immediately from Lemma~\ref{lem9}. However, in our time-dependent case, to prove Theorem~\ref{thm-ah} we need the space-time volume-doubing property of adjoint solutions. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm-ah}] First, we will show that for all $R>0$, \begin{align}\label{e41} &\sum_{x\in\partial B_R}\int_0^s\rho_\omega(x,t)\mathrm{d} t+\sum_{x\in B_R}\rho_\omega(x,0)\dist(x,\partial B_R)\\ &\asymp \frac{1}{R}\int_0^s\rho_\omega(B_R,t)\mathrm{d} t+\sum_{x\in B_R}\rho_\omega(x,s)\dist(x,\partial B_R),\nonumber \end{align} where $A\asymp B$ means $cB\le A\le CB$ for some constants $c,C>0$. Recall the definition of $\tau_R$ at \eqref{e22} and set $g(x,t)=E_\omega^{x,t}[\tau_R(\hat X)]$. Then $g(x,\cdot)=0$ for $x\notin B_R$ and \begin{equation}\label{e38} \mc L_\omega g(x,t)=\left\{ \begin{array}{rl} -1 \qquad&\mbox{if } x\in B_R\\ \sum_{y\in B_R}\omega_t(x,y)g(y,t) \qquad&\mbox{if }x\in \partial B_R\\ 0\qquad &\mbox{ if }x\in\Z^d\setminus\bar B_R. \end{array} \right. \end{equation} Recalling \eqref{rho-invariance}, we have for any $s>0$, \begin{align}\label{e39} &0=\sum_{x\in\Z^d}\int_0^s g(x,t)[\sum_y\rho_\omega(y,t)\omega_t(y,x)-\partial_t\rho_\omega(x,t)]\mathrm{d} t\nonumber\\ &=\sum_x\int_0^s\rho_\omega(x,t)\mc L_\omega g(x,t)\mathrm{d} t-\sum_x[g(x,s)\rho(x,s)-g(x,0)\rho_\omega(x,0)] \end{align} By \eqref{e38} and \eqref{e39}, we get \begin{align}\label{e40} &\sum_{x\in \partial B_R,y\in B_R}\int_0^s\rho(x,t)\omega_t(x,y)g(y,t)\mathrm{d} t+\sum_{x\in B_R}g(x,0)\rho(x,0)\nonumber\\ &= \sum_{x\in B_R}\int_0^s\rho(x,t)\mathrm{d} t+\sum_{x\in B_R}g(x,s)\rho(x,s). \end{align} Note that $|X_t|^2-\frac{1}{\kappa}t$ and $|X_t|^2-\kappa t$ are super-martingale and sub-martingale, respectively. By the optional-stopping theorem, there exists a constant $c_\kappa\in[\kappa,1/\kappa]$ such that $|x|^2=E_\omega^{x,t}[|X_{\tau_R}|^2-c_\kappa\tau_R]$. Hence for any $(x,t)\in B_R\times\R$, \[ g(x,t)\asymp E_\omega^{x,t}[|X_{\tau_R}|^2-|x|^2] \asymp R\dist(x,\partial B_R). \] This, together with \eqref{e40}, yields \eqref{e41}. Combining \eqref{e41} and Lemma~\ref{lem9}, we obtain \[ \frac{v(\bar Y)}{v(\lbar Y)} \ge C\frac{\int_0^{R^2}\rho(B_{2R},t)\mathrm{d} t+R\sum_{x\in B_{2R}}\rho(x,R^2)\dist(x,\partial B_{2R})} {\int_0^{4R^2}\rho(B_{2R},t)\mathrm{d} t+R\sum_{x\in B_{2R}}\rho(x,4R^2)\dist(x,\partial B_{2R})}. \] Finally, Theorem~\ref{thm-ah} follows by Theorem~\ref{thm:vd} and the above inequality. \end{proof} \section{Proof of Theorem~\ref{thm:hke}, Corollary~\ref{cor:q-estimates} and Theorem~\ref{thm:llt}} \label{sec:proof-llt-hke} \subsection{Proof of Theorem~\ref{thm:hke}} \begin{proof} We will first prove the upper bound. Recall that $v(\hat x):=q^\omega(\hat 0,\hat x)$ satisfies $\mc L_\omega^* v=0$ in $\Z^d\times(0,\infty)$. 
By Theorem~\ref{thm-ah}, for $\hat x=(x,t)\in \Z^d\times(0,\infty)$, we have $v(\hat x)\le C\inf_{y\in B_{\sqrt t}(x)}v(y,3t)$ and so \begin{align*} v(x,t) &\le \frac{C}{\rho(B_{\sqrt t}(x), 3t)}\sum_{y\in B_{\sqrt t}(x)}\rho(y,3t)v(y,3t)\\ &= \frac{C}{\rho(B_{\sqrt t}(x), 3t)}P_\omega^{0,0}(X_{3t}\in B_{\sqrt t}(x)) \end{align*} Hence, by \eqref{prob-upperb}, \[ v(x,t)\le \frac{C}{\rho(B_{\sqrt t}(x), 3t)}(e^{-c|x|^2/(t\vee 1)}+e^{-c|x|}). \] Moreover, for any $s\in[0,t]$, $|y|\le|x|$, by Theorem~\ref{thm:vd} and iteration we get \[ \rho(B_{\sqrt t}(x),3t)\ge C \rho(B_{\sqrt {t\vee1}}(x),s)\ge C\left(\tfrac{|x|}{\sqrt{t\vee1}}+1\right)^{-c} \rho(B_{\sqrt{t\vee1}}(y),s). \] The upper bound is proved. To obtain the lower bound, by similar argument as above, we apply Theorem~\ref{thm-ah} and get $v(x,t)\ge C\sup_{y\in B_{\sqrt t/2}(x)}v(y,t/4)$ for any $(x,t)\in\Z^d\times(0,\infty)$. Hence \begin{equation}\label{e52} v(x,t) \ge \frac{C}{\rho(B_{\sqrt t/2}(x), t/4)} P_\omega^{0,0}(X_{t/4}\in B_{\sqrt t/2}(x)). \end{equation} We claim that for any $(y,s)\in \Z^d\times(0,\infty)$, \begin{equation}\label{hitting-upperb} P_\omega^{y,0}(X_s\in B_{\sqrt s})\ge Ce^{-c|y|^2/s}. \end{equation} Indeed, when $|y|/\sqrt s\le 1$, this follows from \eqref{lowerb_ur} and \eqref{e42}. (Recall the definition of $u_r$ in \eqref{e36}, where one may replace $\omega$ by $\theta^{0,-t}\omega$.) When $|y|/\sqrt s>1$, let $n\in\N$ be the integer such that \[ n-1<2|y|^2/s\le n. \] Set $\tau=\inf\{t\ge 0: T_t=s\}$, then $u(x,t):=P_\omega^{x,t}(X_\tau\in B_{\sqrt s})$ is a $\mc L_\omega$-harmonic function on $\Z^d\times(-\infty,s)$. Taking a sequence of points $y_i\in\Z^d$, $i=0,\ldots,n$ such that $y_0=y, y_n=0$ and $|y_i-y_{i-1}|\le 2|y|/n$, by the Harnack inequality Theorem~\ref{Harnack}, we get \begin{align*} u(y,0)&\ge C u(y_1,\tfrac{|y|^2}{n^2})\\ &\ge C^2 u(y_2,2\tfrac{|y|^2}{n^2})\\ &\ge \cdots\\ &\ge C^n u(0, \tfrac{|y|^2}{n})\ge cC^{|y|^2/s}. \end{align*} Inequality \eqref{hitting-upperb} is proved and then by \eqref{e52} \[ v(x,t)\ge \frac{C}{\rho(B_{\sqrt t/2}(x), t/4)}e^{-c|x|^2/t}. \] Moreover, by Theorem~\ref{thm:vd}, we have for any $s\in[0,t], |y|\le |x|$, \[ \rho(B_{\sqrt t/2}(x), t/4) \le C\rho(B_{\sqrt t/2}(x),s) \le C(\tfrac{|x|}{\sqrt t}+1)^c\rho(B_{\sqrt t}(y),s). \] The lower bound is proved. \end{proof} With the heat kernel bounds, we will prove the local limit theorem. \subsection{Proof of Theorem~\ref{thm:llt}:}\label{subsec:pf-thm-llt} \begin{proof} For any $\hat x=(x,t)\in\R^d\times\R$, define \[ v(\hat x):=q^\omega(\hat 0;\floor{x},t), \] where $\floor{x}$ is as in Theorem~\ref{thm:llt}. For $\epsilon>0$, let \[ A_{n,\epsilon}(x,t) =\frac{1}{\epsilon}\int_t^{t+\epsilon} \Abs{ P_\omega^{0,0}(X_{n^2s}\in B_{n\epsilon}(nx))-\rho(B_{n\epsilon}(nx), n^2s)v(nx,n^2s) }\mathrm{d} s. \] First, we will show that for any $\epsilon\in(0,\sqrt t_0), t\ge t_0$ and $|x|\le K$, \begin{align} \label{e53} \varlimsup_{n\to\infty} A_{n,\epsilon}(x,t) \le C_{K,t_0}\epsilon^{d+\gamma}, \end{align} where $\gamma>0$ is the constant in Corollary~\ref{cor:hoelder}. 
By the H\"older estimate in Corollary~\ref{cor:hoelder}, \begin{align*} A_{n,\epsilon}(x,t) &\le \frac{1}{n^2\epsilon}\sum_{y\in B_{n\epsilon}(nx)}\int_{n^2t}^{n^2(t+\epsilon)} \rho(y,s)|v(y,s)-v(nx,s)|\mathrm{d} s\\ &\le \frac{C}{n^2\epsilon}\sum_{y\in B_{n\epsilon}(nx)}\int_{n^2t}^{n^2(t+\epsilon)} \rho(y,s)\left( \frac{n\epsilon}{n\sqrt t} \right)^\gamma \sup_{B_{n\sqrt t}(nx)\times[n^2t/2,2n^2t]}v \mathrm{d} s\\ & \le C\left(\frac{\epsilon}{\sqrt t} \right)^\gamma \frac{\int_{n^2t}^{n^2(t+\epsilon)}\rho(B_{n\epsilon}(nx),s)\mathrm{d} s} {\int_{n^2t}^{n^2(t+\epsilon)}\rho(B_{n\sqrt t}(nx),s)\mathrm{d} s}, \end{align*} where we used Theorem~\ref{thm:hke} and Theorem~\ref{thm:vd} in the last inequality. Apply the ergodic theorem \cite[Theorem~2.8, Chapter 6]{Krengel}, we have \[ \varlimsup_{n\to\infty}A_{n,\epsilon}(x,t) \le C_K \left(\frac{\epsilon}{\sqrt t} \right)^{\gamma+d}. \] Display \eqref{e53} is proved. Next, Corollary~\ref{cor:hoelder}, Theorem~\ref{thm:hke} and Theorem~\ref{thm:vd} imply that for any $s\in(t,t+\epsilon)$, \[ |v(nx,n^2s)-v(nx,n^2t)| \le (\frac{n\epsilon}{n\sqrt t})^\gamma \frac{Cn^2t}{\int_0^{n^2 t}\rho(B_{n\sqrt t}(nx),s)\mathrm{d} s}. \] Then, by the ergodic theorem and $E_{\mb P}[\rho]=1$, we conclude that \[ \varlimsup_{n\to\infty} \frac{1}{\epsilon}\int_t^{t+\epsilon} \Abs{\rho(B_{n\epsilon}(nx), n^2s)v(nx,n^2s) -|B_{n\epsilon}|v(nx,n^2t)}\mathrm{d} s \le C_{t_0}\epsilon^{d+\gamma}. \] This, together with \eqref{e53} and Theorem~\ref{thm:recall}(b) yields \[ \lim_{n\to\infty} \Abs{ p^\Sigma_t(0,\mathcal O_{\epsilon}(x))\mathrm{d} x-|\mathcal O_{n\epsilon}| v(nx,n^2t) } \le C_{K,t_0}\epsilon^{d+\gamma}, \] where $\mathcal O_r$ denotes the ball of radius $r$ in $\R^d$. Theorem~\ref{thm:llt} now follows. \end{proof} \subsection{Proof of Corollary~\ref{cor:q-estimates}} \begin{proof} \eqref{cor:q-hke} This follows from Theorem~\ref{thm:hke} and \eqref{eq:rho-ergodic}. \eqref{cor:green1} For any $\hat x=(x,t)\in\R^d\times[0,\infty)$ and $\omega\in\Omega_\kappa$, set \[ v(\hat x)=q^\omega(\hat 0; \floor{x},t) \quad\mbox{ and }\quad a^\omega(x):=\int_0^\infty(v(0,t)-v(x,t))\mathrm{d} t. \] When $d=2$, it suffices to consider $x\in\R^2$ with $0<|x|<1$. We fix a small number $\epsilon\in(0,1)$ and split the integral $a^\omega(nx)$ into four parts: \begin{align*} a^\omega(nx)=\int_0^{n^\epsilon}+\int_{n^\epsilon}^{n^2} +\int_{n^2}^\infty=:\rom{1}+\rom{2}+\rom{3}, \end{align*} where it is understood that the integrand is $(v(0,t)-v(x,t))\mathrm{d} t$. First, we will show that $\mb P$-almost surely, \begin{equation}\label{eq:green-1} \varlimsup_{n\to\infty}|\rom{1}|/\log n\le \epsilon. \end{equation} By Theorem~\ref{thm:hke}, for any $t\in(0,n^\epsilon)$, $x\in\Z^2\setminus\left\{0\right\}$ and all $n$ large enough, $v(nx,t) \le Ce^{-cn|x|}/\rho_\omega(B_{\sqrt t},0). $ Thus \[ \int_0^{n^\epsilon}v(nx,t)\mathrm{d} t \le \frac{n^\epsilon}{\rho_\omega(\hat 0)} e^{-cn|x|}. \] By \eqref{cor:q-hke}, there exists $t_0(\omega)>0$ such that for $n$ big enough with $n^\epsilon>t_0$, \begin{align*} \int_0^{n^\epsilon}v(0,t)\mathrm{d} t \le \frac{Ct_0}{\rho_\omega(\hat 0)}+\int_{t_0}^{n^\epsilon}\frac{C}{t}\mathrm{d} t \le \frac{Ct_0}{\rho_\omega(\hat 0)}+C\epsilon\log n, \end{align*} Display \eqref{eq:green-1} follows immediately. 
In the second step, we will show that (note that $2p^\Sigma_1(0,0)=1/\pi\sqrt{\det\Sigma}$) \begin{equation}\label{eq:green-2} \limsup_{n\to\infty}|\rom{2}-2p_1^\Sigma(0,0)\log n|/\log n\le C\epsilon, \quad \mb P\mbox{-a.s.} \end{equation} Indeed, by Theorem~\ref{thm:llt}, there exists $C(\omega,\epsilon)>0$ such that $|tv(0,t)-p_1^\Sigma(0,0)|\le\epsilon$ whenever $t\ge C(\omega,\epsilon)$. Now, taking $n$ large enough such that $n^{\epsilon}>C(\omega,\epsilon)$, \begin{align}\label{eq:green-21} &\Abs{\int_{n^\epsilon}^{n^2}v(0,t)\mathrm{d} t-(2-\epsilon)p_1^\Sigma(0,0)\log n}\nonumber\\ &\le \int_{n^\epsilon}^{n^2}\Abs{\frac{tv(0,t)-p^\Sigma_1(0,0)}{t}}\mathrm{d} t\nonumber\\ &\le \epsilon\int_{n^\epsilon}^{n^2}\frac{\mathrm{d} t}{t}<2\epsilon\log n. \end{align} On the other hand, for $t\ge n^\epsilon>t_0(\omega)$, by \eqref{cor:q-hke}, $v(nx,t)\le \frac{C}{t}(e^{-cn|x|}+e^{-cn^2|x|^2/t})$. Thus \begin{equation}\label{eq:green-22} \int_{n^\epsilon}^{n^2}v(nx,t)\mathrm{d} t\le \int_{n^\epsilon}^{n^{2-\epsilon}}\frac{C}{t}e^{-cn^\epsilon|x|^2}\mathrm{d} t +\int_{n^{2-\epsilon}}^{n^2}\frac{C}{t}\mathrm{d} t \le C\epsilon\log n. \end{equation} Displays \eqref{eq:green-21} and \eqref{eq:green-22} imply \eqref{eq:green-2}. Finally, we will prove that for $\mb P$-almost every $\omega$, \begin{equation}\label{eq:green-3} \limsup_{n\to\infty}|\rom{3}|/\log n=0. \end{equation} By Corollary~\ref{cor:hoelder}, for any $t\ge n^2$, \[ \Abs{v(0,t)-v(nx,t)}\le C\left( \frac{|nx|}{\sqrt t} \right)^\gamma\sup_{B_{\sqrt t}\times(\tfrac{t}{2},\tfrac{3t}{2}]}v. \] Further, by Theorem~\ref{thm:hke}, Theorem~\ref{thm:vd} and \eqref{eq:rho-ergodic}, for all $t>t_0(\omega)$, \[ \sup_{B_{\sqrt t}\times(\tfrac{t}{2},\tfrac{3t}{2}]}v \le \frac{C}{\rho_\omega(B_{\sqrt t},0)}\le \frac{C}{t}. \] Therefore, $\mb P$-almost surely, when $n^2>t_0(\omega)$, \begin{align*} \Abs{\int_{n^2}^\infty v(0,t)-v(nx,t)\mathrm{d} t} &\le Cn^\gamma\int_{n^2}^\infty \frac{1}{t^{\gamma/2+1}}\mathrm{d} t\le C. \end{align*} Display \eqref{eq:green-3} follows. Combining \eqref{eq:green-1}, \eqref{eq:green-2} and \eqref{eq:green-3}, we have for $d=2$, \[ \varlimsup_{n\to\infty}\Abs{\frac{a^\omega(nx)}{\log n}-2p_1^\Sigma(0,0)}\le C\epsilon, \] Noting that $\epsilon>0$ is arbitrary, we obtain Corollary~\ref{cor:q-estimates}\eqref{cor:green1}. \eqref{cor:green2} We fix a small constant $\epsilon\in(0,1)$. Note that \[ n^{d-2}\int_0^\infty q^\omega(\hat 0;\floor{nx},t)\mathrm{d} t =\int_0^\infty n^d v(nx,n^2s)\mathrm{d} s. \] For any fixed $x\in\R^d$, write \begin{align*} \int_0^\infty n^d v(nx,n^2s)\mathrm{d} s =\int_0^{n^{-\epsilon}}+\int_{n^{-\epsilon}}^\epsilon+\int_\epsilon^{1/\sqrt \epsilon}+\int_{1/\sqrt\epsilon}^\infty =:\rom{1}+\rom{2}+\rom{3}+\rom{4}. \end{align*} First, by Theorem~\ref{thm:hke}, for $s\in(0,n^{-\epsilon})$, we have $v(nx,n^2s)\le Ce^{-cn^\epsilon|x|^2}/\rho_\omega(\hat 0)$, hence \begin{equation}\label{eq:green2-1} \varlimsup_{n\to\infty}\rom{1}\le C\lim_{n\to\infty}n^{d-\epsilon}e^{-cn^\epsilon|x|^2}/\rho_\omega(\hat 0)=0. \end{equation} Second, by \eqref{cor:q-hke}, when $n$ is large enough, then for all $t\ge n^{2-\epsilon}$, we have $v(nx,t)\le Ct^{-d/2}e^{-cn^2|x|^2/t}$ . Hence \begin{equation}\label{eq:green2-2} \varlimsup_{n\to\infty}\rom{2}\le \varlimsup_{n\to\infty} Cn^d\int_{n^{-\epsilon}}^\epsilon (n^2s)^{-d/2}e^{-c|x|^2/s}\mathrm{d} s\le C\epsilon. 
\end{equation} Moreover, by Theorem~\ref{thm:llt}, there exists $N(\omega,\epsilon)$ such that for $n\ge N(\omega,\epsilon)$, we have $\sup_{|s|\ge \epsilon}|v(nx,n^2s)-p_s^\Sigma(0,x)|\le\epsilon$. Hence \begin{equation}\label{eq:green2-3} \varlimsup_{n\to\infty}\Abs{\rom{3}-\int_{\epsilon}^{1/\sqrt \epsilon}p_s^\Sigma(0,x)\mathrm{d} s}\le \sqrt\epsilon. \end{equation} Further, by \eqref{cor:q-hke}, for $d\ge 3$, \begin{equation}\label{eq:green2-4} \varlimsup_{n\to\infty}\rom{4} \le C\int_{1/\sqrt\epsilon}^\infty \frac{n^d}{(n^2s)^{d/2}}\mathrm{d} s=C\epsilon^{(d-2)/4}. \end{equation} Finally, combining \eqref{eq:green2-1},\eqref{eq:green2-2}, \eqref{eq:green2-3} and \eqref{eq:green2-4}, we get \[ \varlimsup_{n\to\infty} \Abs{\int_0^\infty n^d v(nx,n^2s)\mathrm{d} s-\int_{\epsilon}^{1/\sqrt \epsilon}p_s^\Sigma(0,x)\mathrm{d} s} \le C\epsilon^{1/4}. \] Letting $\epsilon\to 0$, \eqref{cor:green2} is proved. \end{proof} \section{Auxiliary probability estimates}\label{sec:auxiliary-prob} In this section we will obtain probability estimates which are useful in the previous sections. Recall that $(X_t,T_t)_{t\ge 0}$ is a Markov process on $\Z^d\times\R$ with generator $\mc L_\omega$. \begin{theorem} Assume $\omega\in\Omega_\kappa$. Then for $t>0$, $r\ge 0$, \begin{equation}\label{fluctuation} P_\omega^{0,0}(\sup_{0\le s\le t}|X_s|>r)\le Ce^{-cr}+Ce^{-cr^2/t}. \end{equation} \end{theorem} \begin{proof} There is nothing to prove when $r\le 1$. When $r\ge 1$ and $t\in(0,1)$, the right side of \eqref{fluctuation} becomes $Ce^{-cr}$, which should follow from the case $t\ge 1$. Hence it suffices to prove \eqref{fluctuation} for $r\ge 1$ and $t\ge 1$. Let $x(i), i=1,\ldots,d,$ denotes the $i$-th coordinate of $x\in\R^d$. Then \[ P_\omega^{0,0}(\sup_{0\le s\le t}|X_s|>r) \le \sum_{i=1}^d P_\omega^{0,0}(\sup_{0\le s\le t}|X_s(i)|>r/d). \] It suffices to show that for $i=1,\ldots,d$ and any $r\ge 1$, $t\ge 1$, \[ P_\omega^{0,0}(\sup_{0\le s\le t}|X_s(i)|>r) \le C\exp\left(-cr(\tfrac{r}{t}\wedge1)\right). \] We will prove the statement for $i=1$ {\it Case I.} When $r\ge \tfrac{9}{2d\kappa}t$, we let $\tilde N_t:=\#\{0\le s\le t: X_s\neq X_{s^-}\}$ be the number of jumps before time $t$. Let $(S_n)$ be a simple random walk on $\Z$ and notice that $X_t(1)\stackrel{d}{=}S_{\tilde N_t}$. On the other hand, By uniform ellipticity, we can construct a Poisson process $N_t$ with rate $1/(2d\kappa)$ such that $\tilde N_t$ is stochastically controlled by $N_t$. Hence, setting $\theta:=\log(2d\kappa r/t)\ge \log 9\ge 2$, \begin{align*} P_\omega^{0,0}(\sup_{0\le s\le t}|X_s(1)|>r) &\le P(N_t\ge r)+P_\omega^{0,0}(\tilde N_t\le r, \sup_{0\le s\le t}|X_s(1)|\ge r)\\ &\le e^{-\theta r} E[\exp(\theta N_t)]+P(\max_{0\le m\le r}|S_m|\ge r)\\ &\le \exp(-\theta r+re^{-\theta}(e^\theta-1))+Ce^{-cr}\\ &\le C e^{-cr}, \end{align*} where we in the second to last inequality we used the moment generating function of a Poisson variable and applied the Azuma-Hoeffding's inequality to the simple random walk $(S_n)$. {\it Case II. } When $r\le \tfrac{9}{2d\kappa}t$, let $\alpha:=\tfrac{9}{2d\kappa}$. We claim that there exists a constant $c_\alpha> 1$ such that for any $\beta\in(0,\alpha)$, $\exp(\beta X_t(1)-\tfrac{c_\alpha}{2}\beta^2t)$ is a super-martingale. 
Indeed, setting $u(x,t):=\exp(\beta x(1)-\tfrac{c_\alpha}{2}\beta^2t)$, then \begin{align*} L_\omega u(x,t)&=u(x,t)\left[\omega_t(x,e_1)(e^\beta+e^{-\beta}-2)-\tfrac{c_\alpha}{2}\beta^2\right]\\ &\le u(x,t)[(e^\beta+e^{-\beta}-2-c_\alpha\beta^2)/2]\le 0 \end{align*} for all $\beta\in(0,\alpha)$, if we take $c_\alpha>1$ large enough. Hence $\exp(\beta X_t(1)-\tfrac{c_\alpha}{2}\beta^2t)$ is a super-martingale. Now, taking $\beta:=\tfrac{r}{c_\alpha t}<\alpha$ and using the optional stopping theorem and Doob's inequality, we have \begin{align*} P_\omega^{0,0}(\sup_{0\le s\le t}X_s(1)>r) &\le e^{-\beta r}E_\omega^{0,0}[e^{\beta X_t}]\\ &\le \exp(-\beta r+\tfrac{c_\alpha}{2}\beta^2t)\\ &=\exp(-r^2/2c_\alpha t). \end{align*} Our proof is complete. \end{proof} \begin{corollary} Assume $\omega\in\Omega_\kappa$ For any $r>0, \theta\ge 4$, there exists constants $C,c$ depending on $d,\kappa$ and $\theta$ such that for all $t\in[0,\theta r^2]$ and $x\in\Z^d$, \begin{equation}\label{prob-upperb} P_\omega^{0,0}(X_{t}\in B_r(x))\le Ce^{-c|x|}+Ce^{-c|x|^2/\theta (r^2\vee 1)}. \end{equation} \end{corollary} \begin{proof} The inequality is trivial when $|x|\le 2r$. It suffices to consider the case $|x|>2r$ and $r\ge 1$. Then, by \eqref{fluctuation}, \begin{align*} P_\omega^{0,0}(X_t\in B_r(x)) &\le P_\omega^{0,0}(\sup_{0\le s\le\theta r^2}|X_s|>|x|/2)\\ &\le Ce^{-c|x|}+Ce^{-c|x|^2/\theta r^2}. \end{align*} \end{proof} \begin{lemma}\label{lem:prob1} Assume that $\omega\in\Omega_\kappa$, $R/2>r>1$ and $y\in\partial B_R$. There exists a constant $\theta=\theta(d,\kappa)\in(0,1)$ such that for any $x\in B_r(y)$, \[ P_\omega^{x,0}\left(X_\cdot \mbox{ exits $B_r(y)\cap B_R$ from $\partial B_R$ before time } 4r^2\right)>\theta. \] \end{lemma} \begin{figure} \centering \begin{tikzpicture} \tikzstyle{ann} = [fill=white,font=\footnotesize,inner sep=1pt] \pgfmathsetmacro{\R}{3} \pgfmathsetmacro{\r}{1.1} \pgfmathsetmacro{\angle}{110} \pgfmathsetmacro{\ex}{0} \pgfmathsetmacro{\ey}{0} \draw[very thick] (\ex,\ey) +(\angle:\R) arc (\angle:360-\angle:\R); \draw (\ex,\ey) ++(180:\R) coordinate (Y3) ++(90:2*\r) arc (90:-90:2*\r); \draw[dashed] (Y3) ++(90:2*\r) arc (90:270:2*\r); \draw[dashed,gray] (Y3)++(90:\r) arc (90:270:\r); \begin{scope} \clip (\ex,\ey) circle (\R); \fill[blue!20, fill opacity=0.2] (Y3) circle (2*\r); \end{scope} \fill (Y3) circle (2pt); \fill (\ex,\ey) circle (1pt); \fill (Y3)+(15:\r*0.7) coordinate (x) circle (1pt); \fill (Y3)+(0:-1.7*\r) coordinate (y') circle (1.5pt); \node [above left] at (Y3) {$y$}; \node[right] at (\ex,\ey) {$0$}; \node[left] at (x) {$x$}; \node[above right] at (y') {$y''$}; \draw[dashed] (y') to (\ex,\ey); \draw[gray] (Y3)+(90:.2+2*\r) coordinate(l2)--+(-90:.3+2*\r) node[below, black]{$\ell_2$}; \draw[gray] (Y3)++(0:-2*\r)+(90:.2+2*\r) coordinate (l1)--+(-90:.3+2*\r)node[below, black]{$\ell_1$}; \draw[gray] (Y3)+(90:\r) arc (90:-90: \r); \draw[<->, >=stealth] (\ex,\ey)++(-\angle-10:\R)--(\ex,\ey) node[fill=white,midway,sloped]{$R$}; \draw[<->,>=stealth] (l1)--(l2) node[fill=white,midway]{$2r$}; \end{tikzpicture} \caption{The shaded area is $B_{2r}(y)\cap B_R$.\label{fig:3}} \end{figure} \begin{proof} By uniform ellipticity, it suffices to prove the lemma for all $r\ge 3/C_1$, where $C_1\in(0,1)$ is a small constant to be determined. As figure~\ref{fig:3} shows, we let $\ell_2$ denote the hyper-plane through $y$ which is perpendicular to the vector $y$. Let $\ell_1$ be the tangent-plane of $B_{2r}(y)$ that is to the left of $\ell_2$ and parallel to $\ell_2$. 
The plane $\ell_2$ divides the boundary set $\partial B_{2r}(y)$ into the left and right parts, which we denote by $S_1$ and $S_2$, respectively. Set \[ \begin{array}{rl} &v_1(x,t)=P_\omega^{x,t}(X_\cdot \mbox{ visits $S_1$ before $S_2$ and before $T_\cdot=4r^2$}) ,\\ &v_2(x,t)=P_\omega^{x,t}(X_\cdot\mbox{ exits $\ell_1$ before $\ell_2$ and before $T_\cdot=4r^2$}). \end{array} \] Since the projection of $X_\cdot$ in the $y$ direction is a simple random walk, by estimates for simple random walk and uniform ellipticity, we have for any $x$ that lies in between $\ell_1, \ell_2$, there exists a constant $C_1\in(0,1)$ such that \[ P_\omega^{x,2r^2}(X_\cdot\mbox{ exits $\ell_2$ or $\ell_1$ before $T_\cdot=4r^2$})\ge C_1. \] Moreover, letting $y'\in\R^d$ be a point with $|y'-y|<2r$ in the direction of $y$ whose distance to $\ell_1$ is $C_1 r$. Denote by $y''\in\Z^d$ a point within distance 1 from $y'$ who is closer to $\ell_1$ than $y'$ is. See Figure~\ref{fig:3}. By estimate for simple random walk, we obtain \[ P_\omega^{y'',\cdot}(\text{$X_\cdot$ visits $\ell_2$ before $\ell_1$}) \le \frac{C_1r}{2r}=C_1/2. \] Hence \[ v_2(y'',2r^2)\ge C_1/2. \] Note that $\mc L_\omega v_1=0$ in $B_{2r}(y)\times(-\infty,4r^2)$. Therefore, for any $x\in B_r(y)\cap B_R$ and $t\in[0,r^2)$, letting $\tau$ denote the exit time from $B_r(y)\cap B_R$, \[ P_\omega^{x,0}(X_\tau\in\partial B_R, \tau<4r^2) \ge v_1(x,0) \ge C v_1(y'', 2r^2) \ge C v_2(y'',2r^2)\ge C, \] where we applied the parabolic Harnack inequality (to $v_1$) in the second inequality. \end{proof} The following lemma gives a lower bound of the probability that starting in the annulus, the random walk will exit the annulus from the outer circle. \begin{lemma}\label{lem6} Assume $\omega\in\Omega_\kappa$, $\beta\in(0.5,1)$ and $R> (1-\beta)^{-1}$. Let $\tau_\beta=\tau_\beta(R)=\inf\{t\ge 0: X_t\notin B_R\setminus B_{\beta R}\}$. Then, for any $y\in B_R\setminus B_{\beta R}$ and any balanced environment $\omega$, \begin{equation}\label{e31} P^{y,0}_\omega(X_{\tau_\beta}\in B_{\beta R})\ge \frac{C}{e^{(1-\beta^2)/\kappa}-1}\dfrac{\dist(y,\partial B_R)}{R}. \end{equation} \end{lemma} \begin{proof} Set $\alpha=\alpha(\kappa)=3/\kappa$ and \[ D_\beta=D_{\beta}(R):=B_R\setminus B_{\beta R}. \] It suffices to prove the lemma for $R>6\alpha^2$ large enough. By uniform-ellipticity, it is enough to prove the lemma for $y$ with $R-|y|\ge\sqrt d$. Noting that in this case, $R-|y|\le \dist(y,\partial B_R)\le C (R-|y|)$, we only need to prove the inequality \eqref{e31} with $\dist(y,\partial B_R)$ replaced by $R-|y|$. For $(x,t)\in\R^d\times\R$, put \[ \quad g(x,t):=\exp(-\tfrac{\alpha}{R^2}|x|^2). \] Note that $g(x,t)$ is only a function of $x$. We claim that $g(\hat X_t)$ is a submartingale for $0\le t\le \tau_\beta$. Indeed, by Taylor expansion, for any $x\in D_\beta$ and $|e|=1$, \[ \Abs{e^{-\tfrac{\alpha}{R^2}(1+2x\cdot e)}-[1-\tfrac{\alpha}{R^2}(1+2x\cdot e)+\tfrac{\alpha^2}{2R^4}(1+2x\cdot e)^2]} \le \frac{C_\alpha}{R^3}. 
\] Hence for any $(x,t)\in D_\beta\times\R$, when $R>0$ is large enough, \begin{align*} \mc L_\omega g(x,t) &= e^{-\tfrac{\alpha}{R^2}|x|^2}\sum_{e:|e|=1}\omega_t(x,x+e) [e^{-\tfrac{\alpha}{R^2}(1+2x\cdot e)}-1]\\ &\ge e^{-\tfrac{\alpha}{R^2}|x|^2}\left(\sum_{e:|e|=1}\omega_t(x,x+e) [-\tfrac{\alpha}{R^2}(1+2x\cdot e)+\tfrac{\alpha^2}{2R^4}(1+2x\cdot e)^2]-\frac{C_\kappa}{R^3}\right)\\ &=\frac{\alpha}{R^2} e^{-\tfrac{\alpha}{R^2}|x|^2}\left(\sum_{e:|e|=1}\omega_t(x,x+e)[\tfrac{2\alpha(x\cdot e)^2}{R^2}-1+\tfrac{\alpha}{2R^2}]-\frac{C_\kappa}{R^3} \right)\\ &\ge \frac{\alpha}{R^2} e^{-\tfrac{\alpha}{R^2}|x|^2} (2\kappa\alpha\beta^2-\frac 1\kappa-\frac{C_\kappa}{R^3})>0, \end{align*} which implies that $g(\hat X_t)$ is a submartingale for $t\le \tau_\beta$ (for $R>0$ large). Now, setting \[ v(x,t):=\frac{g(x)-e^{-\alpha}}{e^{-\alpha\beta^2}-e^{-\alpha}} \quad \mbox{ and } \quad w(x,t):=P_\omega^{x,t}(X_{\tau_\beta}\in B_{\beta R})-v(x,t), \] since $P_\omega^{\hat X_t}(X_{\tau_\beta}\in B_{\beta R})$ is a martingale for $t\le \tau_\beta$, we obtain that the process $w(\hat X_{t\wedge\tau_\beta})$ is a super-martingale. Moreover, noting that $w|_{\partial D_\beta}\ge 0$, by the optional stopping theorem, we conclude that \[ P_\omega^{x,t}(X_{\tau_\beta}\in B_{\beta R})\ge v(x,t) \quad \mbox{ on } D_\beta\times\R. \] The lemma follows by observing that \[ v(x,t)\ge \frac{\alpha(R^2-|x|^2)}{[e^{\alpha(1-\beta^2)}-1]R^2} \quad\mbox{ for $x\in D_\beta$}. \] \end{proof} \begin{corollary}\label{cor2} Let $R, \tau_\beta$ be as in Lemma~\ref{lem6}. Then, when $1-\beta>0$ is small enough, we have for any $x\in B_R\setminus B_{\beta R}$, \[ P_\omega^{x,0}(\tau_\beta\le R^2/32, X_{\tau_\beta}\in B_{\beta R})>C\frac{\dist(x,\partial B_R)}{R}. \] \end{corollary} \begin{proof} By the previous lemma, it suffices to show that when $1-\beta>0$ is small enough, \[ P_\omega^{x,0}(\tau_\beta>R^2/32\mid X_{\tau_\beta}\in B_{\beta R})\le 0.5. \] First, note that $\kappa t-|X_t|^2$ is a super-martingale. Hence \[ \kappa E^{x,0}_\omega[\tau_\beta]-E^{x,0}_\omega[|X_{\tau_\beta}|^2]\le -|x|^2 \] and so \[ E^{x,0}_\omega[\tau_\beta] \le \tfrac{1}{\kappa}(R^2-|x|^2). \] Therefore, by Lemma~\ref{lem6}, \begin{align*} P_\omega^{x,0}(\tau_\beta>R^2/32\mid X_{\tau_\beta}\in B_{\beta R}) &\le \frac{32E^{x,0}_\omega[\tau_\beta] /R^2}{P_\omega^{x,0}(X_{\tau_\beta}\in B_{\beta R})}\\ &\le C[e^{(1-\beta^2)/\kappa}-1]. \end{align*} Taking $\beta$ so that $1-\beta>0$ is small enough, our proof is complete. \end{proof} \begin{lemma}\label{lem7} Let $\theta\in (0,1)$. Recall the definition of $\tau_\beta(R)$ from Lemma~\ref{lem6}. Let $R>2$ and $\tau=\tau_{\theta}(R)$. There exists a constant $C=C(\theta,\kappa,d)$ such that for any $x\in B_R\setminus B_{\theta R}$, \[ P^{x,0}_\omega(X_{(R^2/(1-\theta))\wedge\tau}\notin\partial B_R)\le C\dist(x,\partial B_R)/R. \] \end{lemma} \begin{proof} For simplicity, we only prove the case $\theta=1/2$. The proof of the general case $\theta\in (0,1)$ is similar. Set $D:=B_R\setminus B_{R/2}$. Noting that $\dist(x,\partial B_R)\ge 1$ for $x\in B_R$, it suffices to consider the case $R>k^2$, where $k>4$ is a large constant to be determined. Let \[h(x,t)=2-|x|^2/(R+1)^2+2t/R^2.\] First, we will show that the process $h(\hat X_t)^{-k}$ is a submartingale inside \[ \ms D:=(B_R\setminus B_{R/2})\times[0,R^2/2).
\] Indeed, for any $i=1,\ldots, d$ and $(x,t)\in \ms D$, note that $1\le h\le 3$ and (for $R>k^2$) by Taylor expansion, \begin{align*} &|h^{-k}(x+e_i,t)+h^{-k}(x-e_i,t)-2h^{-k}(x,t)-\partial_{ii}(h^{-k})(x,t)|\\ &\le \sup_{\gamma\in[-1,1]}|\partial_{iii}(h^{-k})(x+\gamma e_i,t)|\\ &\le Ck^3R^{-3}h^{-k}(x,t). \end{align*} Hence for any $(x,t)\in \ms D$, \begin{align*} &\mc L_\omega (h^{-k})(x,t)\\ &= \sum_{i=1}^d\omega_t(x,x+e_i)[h(x+e_i,t)^{-k}+h(x-e_i,t)^{-k}-2h(x,t)^{-k}]+\partial_t (h^{-k})(x,t)\\ &\ge C\sum_{i=1}^d \partial_{ii}(h^{-k})(x,t)-Ck^3R^{-3}h(x,t)^{-k}+\partial_t (h^{-k})(x,t)\\ &= C\sum_{i=1}^d[4k(k+1)\tfrac{x_i^2}{(R+1)^4}h^{-k-2}+\tfrac{2k}{(R+1)^2}h^{-k-1}]-Ck^3R^{-3}h^{-k}-\tfrac{2k}{R^2}h^{-k-1}\\ &\ge Ckh^{-k}R^{-2}[k-C-Ck^2R^{-1}]>0 \end{align*} when $k>0$ is sufficiently large. This implies that the process $h(\hat X_t)^{-k}$ is a submartingale inside the region $\ms D$. Next, set \[ u(x,t)=P_\omega^{x,t}(X_{\tau\wedge 0.5R^2}\notin\partial B_R). \] Then $u(\hat X_t)+2h(\hat X_t)^{-k}$ is a submartingale in $\ms D$. Noticing that \[ \left\{ \begin{array}{rl} &h^{-k}|_{x\in \partial B_R}\le (2-1+0)^{-k}=1\\ &h^{-k}|_{x\in \partial' B_{R/2}}\le (2-1/4+0)^{-k}<1/2\\ & h^{-k}|_{t=R^2/2}\le (2-1+1)^{-k}<1/2, \end{array} \right. \] by the optional stopping theorem, we have for $x\in B_R\setminus B_{R/2}$, \begin{align*} u(x,0)+2h(x,0)^{-k}\le \sup_{\ms D^\p} (u+2h^{-k})\le 2. \end{align*} Therefore, for any $x\in B_R\setminus B_{R/2}$, \begin{align*} u(x,0)&\le 2(1-h(x,0)^{-k})\\ &\le C(h(x,0)-1)\\ &= C[1-|x|^2/(R+1)^2]\\ &\le C\dist(x,\partial B_R)/R. \end{align*} Our proof of Lemma~\ref{lem7} is complete. \end{proof}
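\medskip \noindent {\em Editorial note.} As a numerical illustration of the sub-Gaussian bound \eqref{fluctuation} used above (this sketch is an editorial addition, not part of the original argument), the following minimal Python script estimates $P_\omega^{0,0}(\sup_{0\le s\le t}X_s(1)>r)$ by Monte Carlo in the simplest balanced environment, $\omega_t(x,\pm e_i)\equiv 1/(2d)$, i.e.\ a continuous-time simple random walk with total jump rate $1$. The comparison curve takes $c_\alpha=1$ purely for illustration; the lemma only guarantees the bound with some constant $c_\alpha>1$.
\begin{verbatim}
# Monte Carlo check (illustrative): P( sup_{s<=t} X_s(1) > r ) versus
# exp(-r^2/(2 c t)) with c = 1, for the continuous-time simple random
# walk on Z^d, the simplest balanced environment. Parameters arbitrary.
import numpy as np

rng = np.random.default_rng(0)
d, t, trials = 2, 100.0, 50_000

def max_first_coordinate():
    n_jumps = rng.poisson(t)                  # rate-1 Poisson clock on [0, t]
    steps = rng.integers(0, 2 * d, n_jumps)   # uniform choice among 2d moves
    incr = np.where(steps == 0, 1, 0) - np.where(steps == 1, 1, 0)
    return np.cumsum(incr).max(initial=0)     # sup of X_s(1) over s <= t

maxima = np.array([max_first_coordinate() for _ in range(trials)])
for r in (10, 15, 20, 25):
    print(r, (maxima > r).mean(), np.exp(-r**2 / (2.0 * t)))
\end{verbatim}
For these illustrative parameters the empirical tail probabilities fall below the displayed Gaussian envelope, as the lemma predicts.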
\section{Introduction} Hadrons with high transverse momentum provide a good probe of the high energy density matter created at RHIC, since the production of high $p_T$ particles is dominated by the initial hard parton-parton scatterings with large momentum transfer $Q^{2}$. After hard scattering, partons traverse a medium with a high density of color charges, where they interact strongly, emit gluon radiation, and lose energy before fragmenting into hadrons. The production of hadrons depends on the initial parton distributions in the colliding nuclei, the elementary parton-parton cross section and the hadronization process of partons into hadrons. It is also important to distinguish initial state effects, such as those described by shadowing and/or color glass condensate models, from final state effects. Disentangling all these effects requires a very comprehensive data set. The BRAHMS experiment\cite{brahms_nim,brahms_white} has studied p+p, d+Au, and Au+Au collisions over a broad range of rapidity and transverse momentum. We will discuss these data in the context of the above processes. \section{Results} High $p_T$ suppression has been observed in central Au+Au collisions at RHIC\cite{phenix,star,phobos} and is attributed to final-state interactions, based on the absence of such suppression in d+Au collisions\cite{brahms_highpt,phenix_da,star_da,phobos_da}. The suppression is quantified by use of the nuclear modification factors, which are defined as $R_{AA}$ or $R_{CP}$: \begin{equation} R_{AA} \equiv \frac{1}{\langle N_{coll} \rangle} \frac{d^2N^{AA}/dp_Tdy}{d^2N^{pp}_{inel}/dp_Tdy}, \qquad R_{CP} \equiv \frac{\frac{1}{\langle N^{C}_{coll} \rangle}{d^2N^{C}/dp_Tdy}} {\frac{1}{\langle N^{P}_{coll} \rangle}{d^2N^{P}/dp_Tdy}} \label{eq:NMF} \end{equation} $R_{AA}$ gives the deviation of yields in AA collisions relative to the scaled yields from nucleon-nucleon collisions. $R_{CP}$ can provide similar information based on the relative yields in central (C) and peripheral (P) collisions, each scaled by the mean number of binary collisions, but does not depend on the reference nucleon-nucleon system. Figure~1 shows the rapidity (a) and particle-type (b) dependence of $R_{CP}$ in Au+Au collisions at $\sqrt{s_{NN}} = $ 200~GeV. The observed suppression is similar at forward rapidities~($\eta \sim$ 2.2, 3.2) as compared to midrapidity. This result may indicate that quenching extends in the longitudinal direction. $R_{CP}$ for protons reaches unity around $p_T \sim$ 1.5~GeV/$c$, but $R_{CP}$ for pions is suppressed at higher $p_T$. The difference between baryon and meson behavior is discussed later. \begin{figure}[ht] \centerline{\epsfxsize=4.1in\epsfbox{fig1.eps}} \caption{(a) Nuclear modification factor for the most central and peripheral collisions at pseudorapidities $\eta = 0,~2.2,~3.2$. The values for $\eta = 0,~2.2$ are from a BRAHMS publication$^{6}$, and the one for $\eta = 3.2$ is a preliminary result. (b) Central (0-10\%) to peripheral (60-90\%) ratios, $R_{CP}$, as a function of $p_T$ for identified hadrons at midrapidity. (a) and (b) are from Au+Au collisions at $\sqrt{s_{NN}} = $200~GeV. Error bars are statistical only.} \end{figure} \begin{figure}[ht] \centerline{\epsfxsize=5.12in\epsfbox{fig2.eps}} \caption{Top row: Nuclear modification factor for charged hadrons at pseudorapidities $\eta = 0,~1.0,~2.2,~3.2$. Systematic errors are shown with shaded boxes with widths set by the bin sizes.
Bottom row: central (filled circles) and semi-central (open circles) $R_{CP}$ ratios in d+Au collisions at $\sqrt{s_{NN}} = $ 200~GeV. Shaded bands indicate the uncertainty in the calculation of $\langle N_{coll} \rangle$ in the peripheral collisions~(12\%).} \end{figure} The rapidity dependence of $R_{dA}$ and $R_{CP}$ for d+Au collisions~\cite{brahms_rda} is shown in Fig.~2. At midrapidity, $R_{dA}$($p_T > $ 2~GeV/$c$) shows a Cronin-type enhancement compared to the binary scaling limit. At higher rapidity, this enhancement is followed by a suppression which becomes stronger at forward rapidity. Along the bottom row, the $R_{CP}$ for two different centrality ranges is shown as a function of pseudorapidity. The more central $R_{CP}$ exhibits greater suppression as the rapidity increases. This is consistent with the picture of parton saturation in the Au wave function\cite{cgc_da}. However, the suppression of $R_{CP}$ at forward rapidity can also be reproduced in the framework of parton recombination in the final state\cite{reco_da}, without invoking multiple scattering and gluon saturation in the initial state. \begin{figure}[ht] \centerline{\epsfxsize=4.1in\epsfbox{fig3.eps}} \caption{(left panel) $R_{AuAu}$ for $\pi^{-}$ and $\overline{p}$ at midrapidity and forward rapidity for 0-10\% central Au+Au collisions at $\sqrt{s_{NN}} = $ 200~GeV. (right panel) $R_{dAu}$ of $\pi^{-}$ and $\overline{p}$ at forward rapidity, $\eta =$ 2.2 and 3.2, for d+Au collisions at $\sqrt{s_{NN}} = $ 200~GeV. No weak-decay feed-down correction applied.} \end{figure} Figure~3 shows the dependence of the high $p_T$ behavior on the type of particle in d+Au and Au+Au collisions. Results in Au+Au collisions show that $\pi^{-}$ yields are suppressed at midrapidity and forward rapidity. At forward rapidity, the suppression is stronger for $\pi^{-}$, while the $\overline{p}$ yields are enhanced at both rapidities. In d+Au collisions, the $\pi^{-}$ yields are more suppressed at $\eta \sim$ 3.2, while, again, the $\overline{p}$ yields are enhanced at forward $\eta$. This different behavior of $\pi^{-}$ and $\overline{p}$ is not consistent with standard fragmentation functions, and indicates that pions experience high $p_T$ suppression while protons do not. This is not yet fully understood. The proton excess might arise from hydrodynamic expansion, or from parton recombination\cite{reco_ratio} and/or quark coalescence\cite{coal_ratio} processes that enhance the yield of baryons containing three quarks by pulling them from the medium rather than relying on a simple fragmentation origin. The measured $p/\pi^{+}$ and $\pbar/\pi^{-}$ ratios as a function of $p_T$ for central Au+Au collisions at different rapidities are shown in Fig.~4. There is a clear increase of the $p/\pi$ ratios at intermediate $p_T$ ($2< p_T < 5$~GeV/$c$) relative to the level seen in nucleon-nucleon collisions\cite{ratioinpp,ratioinee}. There is no significant difference between the ratios at rapidity $y=0$ and $y \sim 1$, and the $\pbar/\pi^{-}$ ratio shows a similar tendency up to $p_T \sim 1.5$ GeV/$c$ at $\eta \sim 2.2$. \begin{figure}[ht] \centerline{\epsfxsize=3.91in\epsfbox{fig4.eps}} \caption{$p/\pi^{+}$ (a) and $\pbar/\pi^{-}$ (b) ratios at rapidity $y = 0,~1.0$ and $\eta = 2.2$ for 0-10\% central Au+Au collisions at $\sqrt{s_{NN}} = $ 200~GeV. Feed-down corrections applied.
Comparisons with model calculations$^{13,14}$ are shown.} \end{figure} \section{Summary} BRAHMS has measured rapidity-dependent nuclear modification factors and particle ratios in different colliding systems. The evolution of the nuclear modification factors in d+Au collisions may indicate parton saturation in the initial state. The high $p_T$ suppression in Au+Au collisions at midrapidity also exists at forward rapidity, and depends on particle type. The recombination/coalescence models seem to give a reasonable explanation of the observed baryon-meson production mechanism at intermediate $p_T$. \section{Acknowledgments} This work was supported by the Division of Nuclear Physics of the Office of Science of the U.S. DOE, the Danish Natural Science Research Council, the Research Council of Norway, the Polish State Committee for Scientific Research and the Romanian Ministry of Education and Research.
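\medskip \noindent {\em Editorial note.} As a practical complement to Eq.~(\ref{eq:NMF}) (this sketch is an editorial addition; the numbers below are placeholders, not BRAHMS data), the nuclear modification factors reduce to simple per-bin ratios once the yields and $\langle N_{coll} \rangle$ values are in hand.
\begin{verbatim}
# Python sketch of Eq. (1); yield arrays and <N_coll> values are
# illustrative placeholders, not measured data.
import numpy as np

def R_AA(dN_AA, dN_pp_inel, n_coll):
    """R_AA per p_T bin: AA yield over the <N_coll>-scaled p+p yield."""
    return dN_AA / (n_coll * dN_pp_inel)

def R_CP(dN_C, dN_P, n_coll_C, n_coll_P):
    """R_CP per p_T bin: central over peripheral, each scaled by <N_coll>."""
    return (dN_C / n_coll_C) / (dN_P / n_coll_P)

dN_C = np.array([2.1e-2, 3.4e-3, 6.5e-4])   # d^2N/dp_T dy, central bins
dN_P = np.array([4.0e-4, 6.6e-5, 1.3e-5])   # d^2N/dp_T dy, peripheral bins
print(R_CP(dN_C, dN_P, n_coll_C=900.0, n_coll_P=15.0))
\end{verbatim}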
\section{Introduction} A preponderance of evidence indicates that galaxies are embedded in massive, extended dark matter (DM) {\em halos}. Simulations of structure formation in the hierarchical cold dark matter (CDM) paradigm predict that CDM halos are generally triaxial\cite{triaxial,dubinski_carlberg91} and that they teem with self-bound {\em subhalos}\cite{dsp}. The structure of halos is an important ingredient in modeling DM direct detection signals\cite{ddm}, and halo shapes have recently received attention for testing the CDM paradigm as new and improved probes of halo shape have been applied\cite{hs,sag}. {\em Dissipationless} simulations predict that Milky Way (MW)-size halos have a mean minor-to-major axis ratio of $c/a \approx 0.6-0.7$ with a dispersion of $\sim 0.1$\cite{triaxial}, while dynamical studies suggest that the observed coherence of the Sagittarius tidal stream constrains the inner MW halo to $c/a \lower0.6ex\vbox{\hbox{$ \buildrel{\textstyle >}\over{\sim}\ $}} 0.8$\cite{sag}. In \S~\ref{sec:shapes}, we present recent results on the effect of baryonic dissipation on halo shapes in high-resolution, cosmological simulations. In \S~\ref{sec:subs}, we turn to halo substructure. In the MW and M31, there are more than an order of magnitude fewer observed satellites than the predicted number of subhalos of comparable size\cite{dsp}. Several explanations have been offered, including alternative DM properties\cite{sdm} and inefficient galaxy formation in the shallow potentials of small subhalos\cite{bkwl}. We study the sensitivity of the dwarf satellite population to the primordial power spectrum (PPS) of density fluctuations on small, sub-galactic scales and demonstrate that our interpretation of the missing satellite problem is a function of the amount of small-scale power. If the lack of luminous MW satellites is due to inefficient galaxy formation, the MW halo should contain $\lower0.6ex\vbox{\hbox{$ \buildrel{\textstyle >}\over{\sim}\ $}} 10^{2}$ otherwise dark subhalos. Strong lensing will be one probe of dark subhalos\cite{dk}. More speculatively, the annihilation of DM particles in these dense substructures may result in numerous $\gamma$-ray sources in the MW halo. We assess the potential for instruments like VERITAS\cite{veritas} and GLAST\cite{glast} to detect such sources in favorable models of supersymmetric (SUSY) DM in \S~\ref{sec:idds}. \section{Halo Shapes} \label{sec:shapes} \begin{figure}[t] \centerline{\epsfxsize=5.0in\epsfbox{shpc.eps}} \label{fig:schp} \caption{ The effect of gas cooling on halo shapes. {\em Left}: Minor-to-major axis ratio $c/a$, as a function of major axis length for a cluster-size halo. The {\em dashed} line shows the shape profile of the DM in the adiabatic simulation. The {\em thick, solid} line shows the shape profile of DM in the cooling run, while the {\em thin, solid} line shows the shape profile for DM and baryons in the cooling run. {\em Right}: Same as the left panel, but for a MW-size galaxy progenitor (see text). } \end{figure} We studied the effect of gas cooling on the shapes of DM halos using high-resolution cosmological simulations of cluster and galaxy formation in a concordance $\Lambda$CDM cosmology. The simulations were performed with the ART $N$-body plus Eulerian gasdynamics code\cite{art}. We refer the reader to Kazantzidis et al.\cite{gas} for further details. Briefly, we analyzed simulations of $8$ cluster-size objects of mass $10^{13} \ h^{-1}\mathrm{M}_{\odot}$ to $3 \times 10^{14} \ h^{-1}\mathrm{M}_{\odot}$.
The cluster simulations had a peak force resolution of $\simeq 2.4 h^{-1}\mathrm{kpc}$ and a DM particle mass of $m_{\mathrm{p}} \simeq 2.7 \times 10^{8} \ h^{-1}\mathrm{M}_{\odot}$. We also analyzed a simulation of the early evolution ($z \lower0.6ex\vbox{\hbox{$ \buildrel{\textstyle >}\over{\sim}\ $}} 4$) of a galaxy that becomes MW-size at $z=0$, described by Kravtsov\cite{kravtsov03}. This simulation had $m_{\mathrm{p}} \simeq 9.2 \times 10^{5} \ h^{-1}\mathrm{M}_{\odot}$ and peak resolution $\simeq 183 h^{-1}\mathrm{pc}$. The mass and force resolution are adequate to study the inner regions of halos reliably. For each object, we analyzed two sets of simulations started from the same set of initial conditions, but including different physical processes. In one set, the gas dynamics were treated adiabatically, without any radiative cooling, and the results agreed well with those of $N$-body simulations with no baryonic component. The second set of simulations included radiative cooling and star formation. We measured halo shapes by diagonalizing the moment of inertia tensor\cite{dubinski_carlberg91}. We used ``differential'' shape measurements because this makes the axis ratios measured at each radial bin nearly independent. Our main results are summarized in Figure~1. In the left panel, we show the profile of $c/a$ as a function of major axis length for a representative cluster-size halo. On the right, we show results for the galaxy progenitor. The net effect of baryon dissipation is striking. At small radii, the axis ratios in the cooling simulations are greater by $\Delta(c/a) \lower0.6ex\vbox{\hbox{$ \buildrel{\textstyle >}\over{\sim}\ $}} 0.3$ and the systematic difference persists out to $\sim R_{\mathrm{vir}}$, where $\Delta(c/a) \sim 0.1$. The baryons in the cluster are mostly in a massive, central, elliptical galaxy, while in the galaxy formation simulation $\sim 90\%$ of the baryons are in a flattened, gaseous disk. In both cases the effect of cooling is weakly dependent upon radius, implying that the effect of baryonic dissipation on halo shapes is not critically sensitive to the detailed morphology of the baryonic component. In addition, the axis ratios change with radius in a manner that is not generally monotonic, indicating that different regions of a system may be flattened to different degrees. \section{Halo Substructure} \label{sec:subs} The most accurate technique for studying halo substructure is numerical simulation; however, the computational expense of simulations limits their dynamic range and their applicability in explorations of cosmological parameter space. To overcome this, Zentner and Bullock (ZB)\cite{zb03} developed an approximate, analytic model for subhalo populations, and an updated model has recently been successfully tested against a suite of $N$-body simulations\cite{z05}. The model approximately accounts for the merger statistics of subhalos, dynamical friction, and mass loss and redistribution due to tidal forces. The model allows one to generate hundreds of realizations of MW-like halos and thereby explore the distribution of possible subhalo populations. \begin{figure}[t] \centerline{\epsfxsize=3.5in\epsfbox{vcomp.eps}} \label{fig:dsp} \caption{ Dwarf satellites and the power spectrum. We show the observed satellite velocity functions ({\em squares}) and the predicted satellite velocity functions ({\em thick lines}) for $6$ different power spectra.
Clockwise from the top left: standard $n=1$, $\sigma_{8} = 0.95$; $n=0.94$, $\sigma_{8} = 0.83$; WMAP best-fit $n=1.03$, $\mathrm{d} n/\mathrm{d} \ln k = -0.03$, $\sigma_{8} = 0.84$; BSI; $n=0.84$, $\sigma_{8} = 0.65$; and $n=0.90$, $\sigma_{8} = 0.75$. The models are labeled by $\sigma_{8}$. Lines are the means of $100$ model realizations and error bars represent the $1\sigma$ scatter. Observational data are from the review of Mateo$^{21}$. } \end{figure} In the standard paradigm, structure forms from primordial density fluctuations characterized by a nearly scale-invariant PPS, $P(k) \propto k^{n}$ with $n \simeq 1$. This basic picture has significant observational support\cite{wmap}. However, cosmic microwave background anisotropy constrains the PPS on large scales, $k \sim 10^{-2} \ h\mathrm{Mpc}^{-1}$, while halo substructure is sensitive to small-scale power, $k \sim 10-100 \ h\mathrm{Mpc}^{-1}$. ZB studied the effect of variant power spectra on the MW dwarf satellites. They took several PPS with various motivations, all normalized to COBE: (1) standard $n=1$, $\sigma_{8} = 0.95$; (2) $n=0.94$, $\sigma_{8} = 0.83$; (3) $n=0.9$, $\sigma_{8} = 0.75$; (4) running mass inflation $n=0.84$, $\sigma_{8} = 0.65$; (5) broken scale-invariance (BSI) with a power cut-off at $k_c = 1 \ h\mathrm{Mpc}^{-1}$\cite{kl}; and (6) the best-fit running spectrum from WMAP $n=1.03$, $\mathrm{d} n/\mathrm{d} \ln k = -0.03$, $\sigma_{8} = 0.84$. The steps in the calculation are first to generate MW halo substructure realizations for each PPS and then to model the velocity dispersions of the embedded stellar components to determine the appropriate subhalo size (labelled by maximum circular velocity $V_{\mathrm{max}}$) in which the observed satellites may be embedded. In this way, one constructs predicted and observed cumulative velocity functions. Figure~2 summarizes the results. First, one sees that the severity of the dwarf satellite problem is greatly reduced in the WMAP best-fit cosmology. The level at which inefficient galaxy formation or a critical mass scale for galaxy formation must be invoked to solve the satellite scarcity problem is degenerate with the PPS on small scales. Second, the MW satellite population by itself provides independent evidence against extreme models, such as the low-normalization $\sigma_{8} = 0.65$ model, which under-predicts substructure. \section{$\gamma$-rays from Dark Substructure} \label{sec:idds} One way of probing the distribution and properties of substructure, as well as the particle nature of the DM, is through the detection of gamma-rays from annihilations of the dark matter particle in the dense, inner regions of subhalos. The currently favored DM particle is provided by supersymmetry (SUSY): the lightest of the neutralinos ($\chi$). The uncertainties involved in trying to deduce information about the distribution and properties of substructure indirectly via the detection of $\gamma$-rays are twofold. First, there are uncertainties that stem from the underlying cosmological model and the details of formation of very small-scale structures\cite{zb02,zb03}, and second, uncertainties that arise from the lack of knowledge of the mass and couplings of the dark matter particle. Using the analytic substructure model of \S~\ref{sec:subs}, we can assess the ability of experiments like VERITAS and GLAST to detect $\gamma$-ray fluxes from DM annihilations.
Koushiappas et al.\cite{kzb04} adopted this approach and assumed the most optimistic SUSY parameters consistent with constraints on $\Omega_{M}$\cite{wmap} to determine the number of expected detections at a significance $S > 3$, as a function of subhalo mass $M$. In order to project counts of detectable subhalos beyond the masses of the dwarf galaxies, several physically-motivated extrapolations are necessary; however, the recent simulations of ``mini-halos'' at $z \sim 26$ are a first step toward justifying these extrapolations with explicit numerical simulations\cite{mini}. Our results are summarized in Figure~3. The figure shows that for $\chi$ masses $M_\chi \lower0.6ex\vbox{\hbox{$ \buildrel{\textstyle <}\over{\sim}\ $}} 100 \mathrm{GeV}$, the large field of view of GLAST and the energy sensitivity of VERITAS will allow them to detect substructure when operated in concert. For example, if $M_\chi \sim 75 \mathrm{GeV}$, then in the case of optimal coupling to photons there will be on average $\sim 1$ detectable subhalo per GLAST field of view. In this case, subsequent direct observations with VERITAS should be able to confirm the line emission feature at an energy of $\sim M_\chi$ after an exposure time of $\sim 450$ hours. For $100 \mathrm{GeV} \lower0.6ex\vbox{\hbox{$ \buildrel{\textstyle <}\over{\sim}\ $}} M_\chi \lower0.6ex\vbox{\hbox{$ \buildrel{\textstyle <}\over{\sim}\ $}} 500 \mathrm{GeV}$, detection requires an instrument with a large effective area, like VERITAS; however, such a detection must rely on serendipity due to the small number of potentially detectable objects in VERITAS' comparatively small field of view. For neutralino masses in excess of $M_\chi \lower0.6ex\vbox{\hbox{$ \buildrel{\textstyle >}\over{\sim}\ $}} 500 \mathrm{GeV}$, substructure detection via the $\gamma$-ray signal is unlikely with either GLAST or VERITAS\cite{kzb04}. \begin{figure}[t] \centerline{\epsfxsize=5.in\epsfbox{idd.ps}} \label{fig:gray} \caption{ The cumulative number of subhalos of mass $M \ge M_{\min}$ detectable at $S > 3$ on the sky. Results are based on $100$ realizations of a MW-size halo. Error bars indicate the $68 \%$ range and down arrows indicate that $> 16 \%$ of realizations have zero subhalos at that mass. {\em Left}: The number detectable by VERITAS. The {\em solid} line shows the highest detection efficiency case of $M_{\chi}=500 \mathrm{GeV}$. For comparison, the {\em dotted} line shows results for $M_{\chi} = 200 \mathrm{GeV}$ and the {\em dashed} line for $M_{\chi} = 5 \mathrm{TeV}$. {\em Middle}: The {\em solid} line shows our standard result for a $\Lambda$CDM cosmology with $n=1$ and $\sigma_{8} = 0.95$. The {\em dashed} line shows the detectable number of subhalos with the WMAP best-fit running power spectrum, $\mathrm{d} n /\mathrm{d} \ln k = -0.03$. {\em Right}: The number detectable with GLAST. The {\em dashed} line represents the best case of $M_{\chi} = 50 \mathrm{GeV}$ in a standard $\Lambda$CDM cosmology. The {\em dot-dashed} line shows the potential number of detections for a $M_{\chi} = 100 \mathrm{GeV}$ neutralino. } \end{figure} \section*{Acknowledgments} These results are based on several collaborative works. We thank B.~A. Allgood, J.~S. Bullock, A.~V. Kravtsov, B. Moore, D. Nagai, and T.~P. Walker for their invaluable contributions and for allowing us to present our results here. We thank Von Freeman and Risa Wechsler for stimulating discussions.
ARZ and SK are funded by the Kavli Institute for Cosmological Physics at The University of Chicago and The National Science Foundation through grant NSF PHY 0114422. SMK is funded by the Swiss National Science Foundation.
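\medskip \noindent {\em Editorial note.} For a single model realization, the cumulative velocity functions compared in Figure~2 amount to sorting and ranking the subhalo $V_{\mathrm{max}}$ values; the sketch below (an editorial addition, with placeholder inputs) makes this bookkeeping explicit.
\begin{verbatim}
# Python sketch: cumulative velocity function N(>=V_max) for one
# realization; input velocities (km/s) are placeholders. Averaging
# many realizations gives the mean curves and 1-sigma scatter of Fig. 2.
import numpy as np

def cumulative_velocity_function(vmax):
    v = np.sort(np.asarray(vmax, dtype=float))[::-1]   # descending
    return v, np.arange(1, v.size + 1)                 # N(>= v_i)

v, N = cumulative_velocity_function([55.0, 40.2, 33.1, 18.7, 12.4, 11.0])
\end{verbatim}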
\section{Introduction} BaTiO$_{3}$ is often given as a classic example of a proper ferroelectric where, due to the second-order Jahn-Teller effect, an off-centering of the Ti$^{4+}$ cation from its TiO$_6$ octahedron results in a net polarisation\cite{Bersuker1966}. The resulting ferroelectric properties and high dielectric constant make BaTiO$_{3}$ a very attractive material for use in devices such as capacitors\cite{Acosta2017}, and the perovskite-structured material (shown in Figure \ref{f1}a) has become the prototypical ferroelectric, intensively studied to understand the link between ferroelectricity and crystal structure. Despite many decades of investigation, there remains an ongoing debate about the nature of the ferroelectric phase transition. Above its Curie temperature (\textit{T}{$_\textup{C}$}), BaTiO$_{3}$ adopts a cubic structure. Below \textit{T}{$_\textup{C}$}, the structure is reduced to a tetragonal symmetry and, on decreasing temperature further, BaTiO$_{3}$ transforms to an orthorhombic and, finally, rhombohedral structure\cite{Hippel1946,Megaw1947,Cochran1960,Hayward2002}. A popular theory, suggested by Cochran et al.\cite{Cochran1960}, describes a displacive model whereby Ti$^{4+}$ cations are displaced microscopically along 〈100〉, 〈110〉, and 〈111〉 directions for the tetragonal, orthorhombic and rhombohedral phases, respectively. This model, however, fails to address key observations such as the strong diffuse X-ray scattering in all but the rhombohedral phase\cite{Comes1968,Ravy2007,Pasciak2018} and the presence of first-order Raman excitations in the cubic phase\cite{Quittet1973}. In 1968, Com\'es et al.\cite{Comes1968} proposed an order-disorder (OD) model, also commonly referred to as the `eight-site' model, where the crystallographically-rich phase diagram of BaTiO$_{3}$ is rationalised by correlations of local Ti displacements along the eight 〈111〉 directions. Correlated displacements of the Ti atom in successive 〈100〉 directions give rise to the observed average symmetry, and it is this underlying disorder that appears to simultaneously reconcile the perceived average symmetry with the anomalous experimental results discussed above. Since the first proposal of these two contending models, a multitude of experimental and computational studies have favoured one or the other of the two scenarios. Local probes tend to support an OD model\cite{Ravel1998,Zalar2003,Laulhe2009}; for example, our symmetry-motivated analyses of pair distribution functions (PDFs) of BaTiO$_{3}$ have shown that Ti displacements are rhombohedral-like across all known phases\cite{Senn2016}. However, the observation of heavily-damped modes\cite{Luspin1980,Harada1971,Yamada1969} appears at odds with an OD model, and supports the soft-mode explanation. Furthermore, there is not yet consensus---within the OD interpretation---on the exact nature of the disordered local arrangements of Ti cations, where some reports (\textit{via} solid state NMR\cite{Zalar2003}) suggest a local tetragonal distortion and others support a rhombohedral\cite{Ravel1998,Stern2004} distortion. More recently, additional work has come out in support of the soft mode model \cite{Pasciak2018}, where the diffuse scattering is attributed to the overdamped anharmonic soft phonon branch. This results in a local probability distribution for the Ti atoms that has a minimum coinciding with the average crystallographic position and a maximum along 〈111〉 directions with an average magnitude of ca.~0.15 \mbox{\normalfont\AA}.
It seems that a wealth of experimental and computational observations can either be explained by invoking an OD scenario or by considering highly over-damped, anharmonic, soft phonon modes that imply the Ti atoms spend a substantial amount of time off-centre. Regardless of the perspective adopted, it is clear that the local symmetry deviates substantially from the average crystallographic symmetry over short length scales and long time periods, indicating a significant departure from the harmonic soft mode/displacive picture. Consideration of the long-range ordering of dynamic 〈111〉 Ti displacements projected onto the 〈100〉 directions appears to reconcile these two models\cite{Senn2016}. Clearly, the investigation of the temperature-induced phase transitions of BaTiO$_{3}$ has been extensive, and a wide range of techniques have been utilised to investigate the average and local structure of the perovskite material \cite{Culbertson2020}. However, challenges associated with \textit{in situ} high pressure measurements have perhaps limited investigation of the local structure of BaTiO$_{3}$ in other regions of the phase diagram. It is predicted that modest hydrostatic pressure will initially act to suppress ferroelectric distortions in ABO$_{3}$ perovskites due to the increasing influence of short-range electronic repulsions over the long-range Coulomb ionic interactions which favour polar distortions\cite{Kornev2005}. This is borne out by the well-established average structure phase diagram of BaTiO$_{3}$, which indicates that, at ambient temperature, there is a tetragonal-to-cubic phase transition at ca.~2 GPa \cite{Hayward2002,Ishidate1997,Bull2021}. However, high-pressure Raman studies show evidence for persistent disorder within the cubic phase, with the suggestion that this disorder results from off-centre Ti atoms and grain boundary/intergrain stress \cite{Venkateswaran1998}. X-ray absorption spectroscopy (XAS) of the Ti \textit{K} edge also suggests that Ti remains displaced until 10 GPa, above which the Ti is centred, and local and average symmetries are reconciled\cite{Itie2006}. Together, these results might imply that the high temperature and high pressure behaviour mimic each other from both an average (crystallographic) and local structure perspective. However, neither of these studies appears to have allowed for robust refinement of models with competing symmetries against the local probe data. On the other hand, PDFs generated from total scattering experiments, with their sensitivity to short-range atom--atom correlations, are well suited to this kind of modelling, which interrogates the precise local symmetry-breaking behaviour in BaTiO$_{3}$. Whilst X-ray PDF work has been carried out\cite{Ehm2011}, the relative insensitivity of X-rays to the lighter oxygen atoms means that such studies often fail to resolve the level of detail available to neutron measurements. The lack of high-pressure neutron PDF studies of BaTiO$_{3}$, and indeed of crystalline materials more generally, can be attributed to the often opposing requirements of high pressure and PDF experiments. It is only relatively recently that high-pressure neutron PDF measurements have been achieved for crystalline materials\cite{Playford2017,Herlihy2021}. With this in mind, we undertake the first analysis of neutron total scattering measurements of BaTiO$_{3}$ at pressures up to 4.2 GPa, in order to directly investigate the nature of the pressure-induced tetragonal-to-cubic phase transition of BaTiO$_{3}$ over a range of length scales.
Building on our recently developed symmetry-adapted PDF analysis (SAPA) \cite{Bird2021} technique, whereby distortion modes grouped by irreducible representation are refined against local structure measurements, we analyse the high-pressure PDF data, revealing pressure-induced suppression of the local Ti off-centerings. We apply the same modelling approach to previously-published variable temperature PDFs\cite{Senn2016} in order to determine how the departure of local from average symmetry compares for pressure \textit{vs} temperature. Our analysis of ambient temperature, variable pressure PDFs points toward a gradual pressure-induced suppression of the anharmonic potential implicit in describing the OD behaviour of BaTiO$_{3}$, towards a more harmonic-like potential, more consistent with a soft-mode picture. \section{Experimental Details and Data Analysis} Polycrystalline BaTiO$_{3}$ (also used for the variable-temperature study described in reference \citenum{Senn2016}) was measured\cite{DATA} on the high-pressure instrument PEARL\cite{Bull2016}, at the ISIS Neutron and Muon Facility. The powder sample was loaded into a null-scattering Ti--Zr single-toroidal gasket, and a gas loader\cite{Klotz2013} was used to fill the remaining gasket volume with an argon gas pressure-transmitting medium (PTM). A Paris--Edinburgh (PE) press, equipped with zirconia-toughened alumina (ZTA) anvils, was used to apply loads of 3, 25, 40 and 50 tonnes to the sample. The lattice parameters of BaTiO$_{3}$ were determined from Rietveld refinement against the Bragg data, and the known equation of state\cite{Bull2021} was used to calculate sample pressures of 0.24(2), 1.19(2), 2.55(6) and 4.18(8) GPa. Neutron powder diffraction patterns were collected for a minimum of 11 hours each to ensure a sufficient signal-to-noise ratio at high $Q$ (where $Q=(4\pi\sin\theta)/\lambda$). Stacked vanadium discs were measured in the same way with an argon PTM, and analogous data collections were performed at loads of 8, 20, 30 and 45 tonnes, corresponding to pressures roughly equivalent to those of the measured BaTiO$_{3}$ data. Total scattering data were collected and treated using the same procedure described in references \citenum{Playford2017} and \citenum{Herlihy2021}, without the added complication of needing to model the PTM, since argon gas is a relatively weak neutron scatterer. That being said, scattering due to the PTM was observed, with the presence of the (111) Bragg reflection in the diffraction pattern at 4.18 GPa suggesting the PTM had crystallised. However, the absence of any significant sample peak broadening indicated that hydrostatic conditions remained, and there was no evidence of an argon contribution to the PDF (i.e.~no misfits in regions where an Ar--Ar peak would be expected). Data were reduced using the MANTID software package \cite{Arnold2014} to correct for the effects of attenuation by the ZTA anvils and normalised by a vanadium standard to account for flux profile and detector efficiencies. Scattering from the gasket and anvils was accounted for by subtracting data from the vanadium measurements, and total scattering patterns (\textit{S}(\textit{Q})s) were produced by applying a scaling and offset value such that \textit{S}(\textit{Q})\textrightarrow1 at \textit{Q}\textsubscript{max}. PDFs (shown in Figure \ref{f1}c) were obtained \textit{via} Fourier transform of the \textit{S}(\textit{Q}) function using the program StoG, distributed with the RMCProfile package \cite{Tucker2007}. \begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{Figure1.png} \caption{\label{f1} a) The average structure unit cell of cubic BaTiO$_{3}$, with an arrow indicating the shortest atom--atom correlation within the structure (Ti--O). b) The longer-range structure and circles with radii of 4 and 30 \mbox{\normalfont\AA}~indicating the minimum and maximum range of atom--atom correlations probed by our variable range PDF refinements. c) Variable-pressure PDFs measured on PEARL (offset in the y-direction with increasing pressure for clarity). The yellow arrow indicates the features arising due to the Ti--O correlations and the horizontal arrows correspond to the probe distances shown in Figure 1b.} \end{figure} PDF modelling and Rietveld refinements were carried out using TOPAS Academic software v6\cite{Coelho2018}. We performed small-box variable range PDF refinements\cite{Smith2008,Culbertson2020}, with the minimum of the fitting range ($r_{\textup{min}}$) kept constant at 1.2 Å and the maximum ($r_{\textup{max}}$) varied from 4 to 30 Å in steps of 1 Å. Therefore, the overall fitting range was varied between 2.8 and 28.8 Å, such that increasingly large length-scale atom--atom correlations were probed with increasingly large $r_{\textup{max}}$ values, as depicted in Figure \ref{f1}b. This is in contrast to so-called `box-car' refinements\cite{Usher2016,Hou2018}, where the fitting range is held constant and shifted along the PDF, resulting in the progressively reduced influence of the immediate local structure on the refined small-box model. We used a $P1$ unit cell, refining only the polar distortion modes associated with Ti and O which transform as the Γ$_4^-$ irreducible representation (irrep.), and fixing Ba modes to zero to avoid a floating origin of the unit cell. The most general order parameter direction (OPD) associated with this irrep is three dimensional, (a,b,c). The Ti($T_{1u}$), O($A_{2u}$) and O($E_{u}$) modes, which form a basis of this irrep, thus have three branches each, where particular constraints on the branched mode amplitudes correspond to higher symmetry OPDs. Rather than allowing distortion modes to refine freely, we constrained the OPD to be consistent with cubic (0,0,0), tetragonal (a,0,0) and rhombohedral (a,a,a) symmetries in order to test these three specific local distortion behaviours. We did not consider other order parameters such as (a,a,0), (a,b,0) or (a,a,b), as the aim of this work was to resolve the OD behaviour of BaTiO$_{3}$ at the tetragonal-to-cubic phase transition. We found that unconstrained refinements of the Ti($T_{1u}$), O($A_{2u}$) and O($E_{u}$) modes resulted in non-physical coupling, particularly for refinements of the PDFs measured at 2.55 and 4.18 GPa, where for $r_{\textup{max}}$ values greater than 10 Å, Ti and O atoms refined to displace in the same, rather than opposite, directions (see Supplementary Information (SI)\cite{SI}). In order to maintain the correct relative displacements associated with the modes, a ratio of 1:$-$1.6:$-$1.3 for Ti($T_{1u}$):O($A_{2u}$):O($E_{u}$) displacements, respectively, was applied. These values were calculated by averaging ratios determined by fitting rhombohedral (a,a,a) models against high quality diffraction data measured at 15 and 293 K on GEM\cite{Senn2016}. Mode amplitude values reported herein refer to the A${_\textup{P}}$ values defined in ISODISTORT\cite{Campbell2006}, i.e.~the parent-cell-normalized amplitude, and are assigned the mode-specific notation $|$Q(Γ$_4^-$)$|$.
We found that although the refined $|$Q(Γ$_4^-$)$|$ values differ slightly depending on the precise ratio used, the relative values and fitting statistics of each refinement remain essentially constant. Lattice parameters determined from Rietveld refinements of the diffraction patterns (see SI\cite{SI}) were fixed for all small-box PDF refinements, constraining the metric symmetries to those known from the average structures. The $beq\_r\_r2$ function (discussed further in reference \citenum{Bird2021}) was used to describe the correlated thermal motion that leads to \textit{r}-dependent broadening, with isotropic displacement parameters fixed to the lower limits found for the three models (see SI\cite{SI} for further details). The sensitivity of our modelling approach to the limited {\textit{Q}}\textsubscript{max} (20.32 Å$^{-1}$) available on PEARL was thoroughly investigated and is reported in Appendix A. \section{Results and Discussion} Neutron diffraction patterns indicate that the measured average structure of BaTiO$_{3}$ at variable pressure is consistent with previous literature\cite{Pruzan2002,Hayward2002,Bull2021}. The neutron diffraction patterns (see SI\cite{SI}) at 0.24 and 1.19 GPa exhibited clear peak splitting (particularly of the (200)/(002) reflection), indicative of a tetragonal symmetry, and Rietveld refinements confirmed a $P4mm$ average crystal structure. Above 2 GPa, BaTiO$_{3}$ goes through a well-documented phase transition to an average cubic symmetry ($Pm\bar{3}m$), confirmed again by Rietveld refinement at 2.55 and 4.18 GPa. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{Figure2.png} \caption{\label{f4} $R_w$ and $|$Q(Γ$_4^-$)$|$ values for variable range refinements for cubic (0,0,0), tetragonal (a,0,0) and rhombohedral (a,a,a) OPDs against variable pressure (left) and temperature (right) PDFs. $|$Q(Γ$_4^-$)$|$ values for the cubic model were fixed at zero and are plotted as such. $R_w$ values for the tetragonal and rhombohedral models in the high pressure cubic data are almost exactly coincident, and cannot be visually distinguished.} \end{figure*} Refining small-box models over an increasing range of \textit{r} (Å) of the PDF provides information on the correlation length scale. This is particularly relevant for materials with OD behaviour such as BaTiO$_{3}$, where a local rhombohedral distortion may be observed over a short length scale, for example one unit cell; over longer length scales, however, the PDF will increasingly resemble the average structure. Comparisons of fitting statistics ($R_w$) and $|$Q(Γ$_4^-$)$|$ values (shown in Figure \ref{f4}) for cubic, tetragonal and rhombohedral models provide insight into the evolution of local displacements of Ti and O atoms as a function of pressure. We demonstrate that even for the limited \textit{Q}\textsubscript{max} values available on PEARL, our data are sensitive to subtle changes in the local structure (see Appendix A). We compare our findings for the local structure of BaTiO$_{3}$ at high pressure with analogous results for the thermally-induced phase transition. The same modelling approach has been applied to PDFs measured at 293, 350, 410 and 500 K using the GEM instrument at ISIS (and processed with a \textit{Q}\textsubscript{max} of 20 Å$^{-1}$ for a fairer comparison), previously published in support of persistent OD behaviour at high temperature \cite{Senn2016}.
The average structure of BaTiO$_{3}$ is tetragonal at 293 and 350 K and cubic at 410 and 500 K, inviting a comparison of the local structure of BaTiO$_{3}$ at high pressure and high temperature. At 0.24 and 1.19 GPa, consistent improvements in $R_w$ over all $r_\textup{max}$ (shown in Figure \ref{f4}) indicate that the local and medium-range structure of BaTiO$_{3}$ is best described by a rhombohedral displacement of the Ti atom. The refined $|$Q(Γ$_4^-$)$|$ values for an (a,0,0) OPD are approximately $\sqrt{3}/2$ smaller than those of an (a,a,a) OPD, suggesting that we are essentially resolving a projection of the [111] type displacement onto the [100] direction. The results for the local structure of BaTiO$_{3}$ at 1.19 GPa are very similar to those found at 0.24 GPa, and a decrease in $|$Q(Γ$_4^-$)$|$ of ca.~15\% points towards a small pressure-induced hardening of the local potential describing the off-centre displacements. These results are comparable with those of the variable temperature PDFs, measured at 293 and 350 K. $R_w$ values at 293 and 350 K again favour the rhombohedral-type displacements up to $r_\textup{max}$ = 10 \mbox{\normalfont\AA}, after which fitting statistics favour the tetragonal model, indicating sensitivity of the PDF to the average, long-range structure. $|$Q(Γ$_4^-$)$|$ values are in good agreement with the variable pressure results. Again, the relative $|$Q(Γ$_4^-$)$|$ values for (a,0,0) compared to (a,a,a) OPDs suggest the resolution of a projection of the [111] type displacement onto the [100] direction. Results for the local structure of high pressure cubic BaTiO$_{3}$ point toward a departure from the local structure behaviour of the high pressure tetragonal structure, and perhaps more interestingly, from the local structure of the high temperature cubic structure. At 2.55 and 4.18 GPa, $|$Q(Γ$_4^-$)$|$ at $r_{\textup{max}}$ = 4 \mbox{\normalfont\AA}~becomes suppressed by ca.~1/2 (cf. 0.24 GPa) and the magnitudes of $|$Q(Γ$_4^-$)$|$ with OPD (a,0,0) and (a,a,a) are approximately equal. Over all $r_\textup{max}$ there is negligible difference between the $R_w$ values for models of tetragonal and rhombohedral Ti displacements. At 2.55 GPa the difference between cubic models and models with off-centre displacements decreases approximately linearly until $r_{\textup{max}}$ = 20 \mbox{\normalfont\AA}, after which the difference in $R_w$ drops below significance, whereas at 4.18 GPa this occurs at $r_{\textup{max}}$ = 10 \mbox{\normalfont\AA}. At 4.18 GPa, by 16 \mbox{\normalfont\AA}, $|$Q(Γ$_4^-$)$|$ refines to zero, suggesting that the correlation length of the Ti displacements is below four unit cell lengths. The suppression of $|$Q(Γ$_4^-$)$|$, the isotropy of the displacement with respect to the different OPDs, and the reduction in correlation lengths are all consistent with the ferroelectric instability in BaTiO$_{3}$ being well-described by the harmonic approximation at elevated pressures. On the other hand, our results against previously-published high temperature PDF data clearly favour an (a,a,a) OPD, consistent with the model of chains of rhombohedrally displaced off-centre Ti atoms, which retain substantial correlations along 〈100〉 directions.
At 410 and 500 K, refined $|$Q(Γ$_4^-$)$|$ values over $r_\textup{max}$ = 4--10 \mbox{\normalfont\AA}~are similar to those observed at lower temperatures (at $r_{\textup{max}}$ = 4 \mbox{\normalfont\AA}, $|$Q(Γ$_4^-$)$|$ at 293 K = 0.094 \mbox{\normalfont\AA}, 350 K = 0.074 \mbox{\normalfont\AA}, 410 K = 0.071 \mbox{\normalfont\AA}, 500 K = 0.071 \mbox{\normalfont\AA}), but drop to values that are ca.~2/3 of those observed over longer $r_\textup{max}$. The persisting sensitivity to off-centre displacements in the high temperature cubic regime is consistent with the model of correlated chains of [111] displacements projected along the [100] axis and lends further support to the OD model for the temperature-induced phase transition. We find that our observed high pressure trends of the local structure agree with the work of Ravy et al.\cite{Ravy2007}, who report diminishing diffuse scattering planes at high pressure and broadening of the diffuse features indicative of a decrease in the correlation length of Ti chains, which they discuss in the wider context of pressure-induced Ti centering. Correlation lengths of ca.~six unit cell lengths (ca.~24 \mbox{\normalfont\AA}) implied by broadened diffuse features at ca.~4 GPa are also in good agreement with our results. While the reported diffuse scattering is sensitive to chain correlations, it is less sensitive to the precise nature of the local symmetry breaking. On the other hand, the method we report here for analysing our high-pressure PDFs has a higher degree of sensitivity to the local symmetry breaking at low $r_{\textup{max}}$, but will average over chain and non-chain interactions at high $r_{\textup{max}}$; thus the two approaches should be viewed as providing complementary information. We stress that although XAS measurements suggest continual off-centre Ti displacements up to 10 GPa\cite{Itie2006}, the sensitivity of the technique is limited to the immediate local environment of the probe atom, extending as far as the next-nearest neighbour only. This makes it difficult to judge how these results differ from those expected from the root mean square displacement of a harmonic oscillator---estimated to be 0.05 \mbox{\normalfont\AA}~at 4.18 GPa from our $r_{\textup{max}}$ = 4 \mbox{\normalfont\AA}~refinements (see Figure \ref{f4}). Our results not only show robustly that the OD behaviour of BaTiO$_{3}$ is suppressed at high pressure, but also add to an emerging research direction on neutron local structure measurements of crystalline materials under hydrostatic pressure\cite{Playford2017,Herlihy2021}, where local structure analysis approaches such as the symmetry-motivated approach we have used here can be applied. Such experiments would provide fundamental insight into the pressure-induced mode softening in negative thermal expansion materials like ScF${_3}$\cite{Greve2010,Bird2020}, for example, or the pressure-induced phase behaviour of framework materials such as Prussian blue analogues\cite{Chapman2006,Bostrom2021}. \section{Conclusion} Although it might be tempting to conclude from the average structures that the high temperature and high pressure tetragonal and cubic phases behave in an analogous way, our detailed high pressure PDF study shows that, in terms of the local structure, this is not the case.
Our symmetry-motivated approach of interrogating the local structure of BaTiO$_{3}$ reveals that at high pressure, the OD model provides a less satisfactory description. Already by 2.55 GPa, significant suppression of the mode amplitude over short $r_\textup{max}$, isotropy of the OPD and loss of sensitivity to correlated Ti displacements all point towards a more harmonic character of the polar mode, in contrast to the high temperature behaviour. \section*{Acknowledgements} We thank Professor David Keen for supplying the total scattering data from GEM. A. H. thanks the Science and Technology Facilities Council and the University of Warwick for a studentship. M. S. S. acknowledges the Royal Society for a University Research Fellowship (UF160265) and the EPSRC for funding (EP/S027106/1). We are grateful to STFC for the provision of neutron beam time at ISIS, supported under experiment number RB1910162\cite{DATA}.
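\medskip \noindent {\em Editorial note.} For readers unfamiliar with the total scattering workflow described in the Experimental Details section, the sketch below (an editorial addition) illustrates the kind of sine Fourier transform that programs such as StoG perform to obtain a PDF from \textit{S}(\textit{Q}). The normalisation convention and the synthetic \textit{S}(\textit{Q}) are assumptions made for illustration only; they are not taken from this work.
\begin{verbatim}
# Python sketch: S(Q) -> G(r) sine Fourier transform of the kind
# performed by StoG. The G(r) convention here is one common choice
# and is assumed; the input S(Q) is synthetic.
import numpy as np

def pdf_from_sq(Q, S, r):
    """G(r) = (2/pi) * int Q [S(Q)-1] sin(Q r) dQ, via the trapezoid rule."""
    integrand = Q * (S - 1.0) * np.sin(np.outer(r, Q))
    return (2.0 / np.pi) * np.trapz(integrand, Q, axis=1)

Q = np.linspace(0.3, 20.32, 4000)              # Qmax as on PEARL
S = 1.0 + np.exp(-Q / 6.0) * np.sin(4.0 * Q)   # placeholder S(Q)
r = np.linspace(0.5, 30.0, 600)                # cf. the 4-30 Angstrom ranges
G = pdf_from_sq(Q, S, r)
\end{verbatim}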
\section{Introduction} In this paper we are interested in the polynomials $P_n$ that are orthogonal with respect to the weight function $J_{\nu}$ on $[0,\infty)$, where $J_{\nu}$ is the Bessel function of order $\nu \geq 0$. The Bessel function is oscillatory with an amplitude that decays like $\mathcal{O}(x^{-1/2})$ as $x \to \infty$, and therefore the moments \[ \int_0^{\infty} x^j J_{\nu}(x) dx \] do not exist. It follows that the polynomials $P_n$ cannot be defined by the usual orthogonality property \begin{equation} \label{Pnx} \int_0^\infty P_n(x) x^j J_\nu(x) dx =0, \qquad j=0,1,\ldots,n-1. \end{equation} Asheim and Huybrechs \cite{AH} introduced the polynomials $P_n$ via a regularization of the weight with an exponential factor. For each $s > 0$, they consider the monic polynomial $P_n(x;s)$ of degree $n$ that is orthogonal with respect to the weight function $J_{\nu}(x) e^{-sx}$, in the following sense: \begin{equation} \label{Pnxs} \int_0^\infty P_n(x;s) x^j J_\nu(x) e^{-sx}dx=0, \qquad j=0,1,\ldots,n-1, \end{equation} and they take the limit \begin{equation} \label{Pnlimit} P_n(x) = \lim_{s \to 0+} P_n(x; s), \end{equation} provided that the limit exists. Since the weight function $J_{\nu}(x)e^{-sx}$ changes sign on the positive real axis, there is actually no guarantee of the existence or uniqueness of $P_n(x;s)$. For the limit \eqref{Pnlimit} we therefore also have to assume that $P_n(x;s)$ exists and is unique for $n$ large enough. The polynomials $P_n$ can alternatively be defined by the moments, since the limiting moments for the Bessel function of order $\nu \geq 0$ are known, namely \begin{equation} \label{moments} m_j := \lim_{s \to 0+} \int_0^{\infty} x^j J_{\nu}(x) e^{-sx} dx = 2^{j} \frac{\Gamma(\frac{1+\nu+j}{2})}{\Gamma(\frac{1+\nu-j}{2})}, \end{equation} see \cite[section 3.4]{AH}. Thus we have the determinantal formula (which is familiar from the general theory of orthogonal polynomials) \begin{equation} \label{Pndet} P_n(x) = \frac{1}{\Delta_n} \begin{vmatrix} m_0 & m_1 & \cdots & m_{n-1} & m_n \\ m_1 & m_2 & \cdots & m_{n} & m_{n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ m_{n-1} & m_{n} & \cdots & m_{2n-2} & m_{2n-1} \\ 1 & x & \cdots & x^{n-1} & x^n \end{vmatrix} \end{equation} with the Hankel determinant $\Delta_n = \det \left[ m_{i+j} \right]_{i,j=0}^{n-1}$. The polynomial $P_n$ thus exists if and only if $\Delta_n \neq 0$. Asheim and Huybrechs \cite{AH} analyze Gaussian quadrature rules with oscillatory weight functions, such as complex exponentials, Airy and Bessel functions. The nodes for the Gaussian quadrature rule are the zeros of the orthogonal polynomials. Since the weight is not real and positive on the interval of orthogonality, there is a problem of existence and uniqueness of the orthogonal polynomials. In addition, even when the orthogonal polynomial exists, its zeros may not be real, and they may distribute themselves on some curve or union of curves in the complex plane as the degree tends to infinity. Examples of this kind of behavior are known in the literature, for instance with Laguerre or Jacobi polynomials with non-standard parameters, see \cite{AMMT}, \cite{KuijMcL} and \cite{KuijMF}, and for complex exponentials \cite{Deano}.
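For a concrete illustration of \eqref{moments} and \eqref{Pndet} (this worked example is an editorial addition), repeated use of $\Gamma(z+1)=z\Gamma(z)$ gives \begin{equation*} m_0 = 1, \qquad m_1 = 2\,\frac{\Gamma(1+\tfrac{\nu}{2})}{\Gamma(\tfrac{\nu}{2})} = \nu, \qquad m_2 = 4\,\frac{\Gamma(\tfrac{3+\nu}{2})}{\Gamma(\tfrac{\nu-1}{2})} = (\nu+1)(\nu-1) = \nu^2-1. \end{equation*} In particular $\Delta_1 = m_0 = 1 \neq 0$, so that $P_1$ exists, and \eqref{Pndet} yields $P_1(x) = x - \nu$.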
\begin{figure}[t] \centering \begin{overpic}[width=.45\textwidth]{nu_025.pdf} \end{overpic} \begin{overpic}[width=.45\textwidth]{nu_05.pdf} \end{overpic} \caption{Plot of the zeros of the polynomials $P_n$ for $n=200$ and $\nu=0.25$ (left), $\nu=0.5$ (right).} \label{fig:points_plot_small} \end{figure} \begin{figure}[t] \centering \begin{overpic}[width=.45\textwidth]{nu_08.pdf} \end{overpic} \begin{overpic}[width=.45\textwidth]{nu_13.pdf} \end{overpic} \caption{Plot of the zeros of the polynomials $P_n$ for $n=200$ and $\nu=0.8$ (left), $\nu=1.3$ (right).} \label{fig:points_plot_big} \end{figure} In the present case, with orthogonality defined as \eqref{Pnxs}--\eqref{Pnlimit}, it was shown in \cite[Theorem 3.5]{AH} that the zeros of $P_n$ are on the imaginary axis in case $\nu = 0$ and $n$ is even. Namely, if $t_1, \ldots, t_{n/2}$ are the zeros of the orthogonal polynomial of degree $n/2$ (where $n$ is even) with respect to the positive weight $K_0(\sqrt{t}) t^{-1/2}$ on $[0,\infty)$, then the zeros of $P_n$ are $\pm i \sqrt{t_1}, \ldots, \pm i \sqrt{t_{n/2}}$. Here $K_0$ is the modified Bessel function of the second kind. For $\nu > 0$ the zeros of $P_n$ are not on the imaginary axis, as is clear from the illustrations given in \cite{AH}; see also Figures \ref{fig:points_plot_small} and \ref{fig:points_plot_big}. The computations have been carried out in {\sc Maple}, using extended precision. From these numerical experiments Asheim and Huybrechs \cite{AH} concluded that the zeros seem to cluster along the vertical line $\Re z = \frac{\nu \pi}{2}$. More precisely, for $\nu \leq \frac{1}{2}$, one sees in Figure \ref{fig:points_plot_small} that the vast majority of zeros are near a vertical line, which is indeed at $\Re z = \frac{\nu \pi}{2}$. For $\nu > \frac{1}{2}$ one sees in Figure \ref{fig:points_plot_big} that the zeros with large imaginary part are close to the vertical line $\Re z = \frac{\nu \pi}{2}$, although they are not as close to the vertical line as the zeros in Figure \ref{fig:points_plot_small}. We were intrigued by these figures, and the aim of this paper is to give a partial explanation of the observed behavior of the zeros. We are able to analyze the polynomials $P_n$ when $0 \leq \nu \leq \frac{1}{2}$ in the large $n$ limit by means of a Riemann-Hilbert analysis. The result is that we indeed find that the real parts of most of the zeros tend to $\frac{\nu \pi}{2}$ as $n \to \infty$. We are not able to handle the case $\nu > \frac{1}{2}$, since in this case our method to construct a local parametrix at the origin fails. This difficulty may very well be related to the different behavior of the zeros in the case $\nu > \frac{1}{2}$. It would be very interesting to analyze this case as well. From the figures it seems that there is a limiting curve for the scaled zeros, if we divide the imaginary parts of the zeros by $n$ and keep the real parts fixed. This limiting curve is a vertical line segment if $\nu \leq \frac{1}{2}$ (this will follow from our results), but we do not know the nature of this curve if $\nu > \frac{1}{2}$. \section{Statement of main results} \subsection{Convergence of zeros} Our first result is about the weak limit of zeros. \begin{theorem} \label{Th0} Let $0 < \nu \leq \frac{1}{2}$. Then the polynomials $P_n$ exist for $n$ large enough. In addition, the zeros of $P_n(in \pi z)$ all tend to the interval $[-1,1]$ and have the limiting density \begin{equation} \label{psidensity2} \psi(x)=\frac{1}{\pi}\log \frac{1+\sqrt{1-x^2}}{|x|}, \qquad x\in[-1,1].
\end{equation} \end{theorem} The convergence of zeros to the limiting density \eqref{psidensity2} is in the sense of weak convergence of normalized zero counting measures. This means that if $z_{1,n}, \ldots, z_{n,n}$ denote the $n$ zeros of $P_n$, then \[ \lim_{n \to \infty} \frac{1}{n} \sum_{j=1}^n \delta_{\frac{z_{j,n}}{i \pi n}} = \psi(x) dx \] in the sense of weak$^*$ convergence of probability measures. Equivalently, we have \[ \lim_{n \to \infty} \frac{1}{n} \sum_{j=1}^n f\left( \frac{z_{j,n}}{i \pi n}\right) = \int_{-1}^1 f(x) \psi(x) dx \] for every function $f$ that is defined and continuous in a neighborhood of $[-1,1]$ in the complex plane. The weak limit of zeros, if we rescale them by a factor $i \pi n$, exists and does not depend on the value of $\nu$. Theorem \ref{Th0} is known to hold for $\nu=0$, and we believe that it also holds true for $\nu > \frac{1}{2}$. Regarding the real parts of the zeros of $P_n$ as $n\to\infty$, we have the following result. \begin{theorem}\label{Th2} Let $0<\nu\leq 1/2$, and let $\delta>0$ be fixed. Then there exist $n_0\in\mathbb{N}$ and $C > 0$ such that for $n\geq n_0$, every zero $z_{j,n}$ of $P_n$ outside the disks $D(0,n\delta)$ and $D(\pm n\pi i, n \delta)$ satisfies \begin{equation} \label{Rezjn} \left| \Re z_{j,n} - \frac{\nu\pi}{2} \right| \leq C \epsilon_n, \end{equation} where \begin{equation} \label{epsilonn} \epsilon_n = \frac{n^{\nu-1/2}}{(\log n)^{\nu+1/2}}. \end{equation} \end{theorem} \begin{remark} For each fixed $\delta > 0$ there are approximately $\varepsilon n$ zeros of $P_n$ in the disks $D(0,n\delta)$ and $D(\pm n\pi i, n\delta)$ when $n$ is large, where \[ \varepsilon = \int_{-1}^{-1+\delta/\pi} \psi(x) dx + \int_{-\delta/\pi}^{\delta/\pi} \psi(x) dx + \int_{1-\delta/\pi}^1 \psi(x) dx. \] This is a consequence of the weak convergence of zeros, see Theorem \ref{Th0}. Clearly, $\varepsilon \to 0$ as $\delta \to 0$, and so it follows from Theorem \ref{Th2}, by taking $\delta$ arbitrarily small, that the real parts of all but $o(n)$ of the zeros tend to $\frac{\nu \pi}{2}$ as $n \to \infty$. \end{remark} \begin{remark} We do not have information about the zeros in the disk $D(0,n \delta)$. In our Riemann-Hilbert analysis we prove the existence of a local parametrix around the origin, but we do not have an explicit construction with special functions. Therefore we cannot analyze the zeros near the origin. On the other hand, we do have access, in principle, to the extreme zeros in the disks $D(\pm n \pi i, n \delta)$, since the asymptotics of the polynomials $P_n(in \pi z)$ is given in terms of Airy functions. From the figures it seems that the result \eqref{Rezjn} also holds for the extreme zeros, but we omit this asymptotic result from Theorem \ref{Th2}, since it does not follow clearly from the construction of the local parametrices in this case. \end{remark} \subsection{Orthogonality of $P_n(in \pi z)$ and discussion} Theorems \ref{Th0} and \ref{Th2} follow from strong asymptotic formulas for the rescaled polynomials \begin{equation} \label{tildePn} \widetilde{P}_n(z) = (in \pi)^{-n} P_n(in\pi z). \end{equation} These polynomials are orthogonal polynomials on the real line, but with a complex weight function. \begin{proposition} \label{prop:Pntildeorthogonal} Let $0 \leq \nu < 1$.
Then the polynomial $\widetilde{P}_n$ is the monic orthogonal polynomial of degree $n$ for the weight \begin{equation} \label{eq:weightnu} \begin{cases} e^{ \nu \pi i/2} K_{\nu}(-n \pi x), & \text{ for } x < 0, \\ e^{- \nu \pi i/2} K_{\nu}(n \pi x), & \text{ for } x > 0, \end{cases} \end{equation} on the real line. That is, \begin{equation} \label{Pntildeorthogonal} \int_{-\infty}^{\infty}\widetilde{P}_n(x) x^j e^{- \sgn(x) \nu \pi i/2} K_{\nu}(n \pi |x|) dx = 0, \qquad j=0,1,\ldots, n-1. \end{equation} \end{proposition} The function $K_{\nu}$ in \eqref{eq:weightnu} is the modified Bessel function of the second kind of order $\nu$. Proposition \ref{prop:Pntildeorthogonal} is proved in Section \ref{subsec:second}. Since $K_{\nu}(x)$ behaves like a constant multiple of $x^{-\nu}$ as $x \to 0$, see for instance \cite[10.30.2]{DLMF}, the condition $\nu < 1$ is necessary for the convergence of the integral \eqref{Pntildeorthogonal} with $j=0$. In the case $\nu=0$, \eqref{eq:weightnu} is the real and positive weight function $K_{0}(n\pi |x|)$. Then $\widetilde{P}_n$ has all its zeros on the real line, and consequently the zeros of $P_n$ are on the imaginary axis. In this way we recover the result of \cite{AH}. For $\nu = 1/2$, the modified Bessel function reduces to an elementary function and the weight function \eqref{eq:weightnu} is \begin{equation} \label{weightnu12} \begin{cases} e^{\pi i/4} (2n |x|)^{-1/2} e^{-n \pi |x|}, & \quad x < 0, \\ e^{-\pi i/4} (2n |x|)^{-1/2} e^{- n \pi |x|}, & \quad x > 0. \end{cases} \end{equation} The weight \eqref{weightnu12} has three components: \begin{itemize} \item An exponentially varying weight $e^{-n \pi |x|}$ with a potential function $V(x) = \pi |x|$ that is convex but non-smooth at the origin. \item A square root singularity $|x|^{-1/2}$ at the origin. \item A complex phase factor $e^{\pm \pi i/4}$ with a jump discontinuity at the origin. \end{itemize} The exponentially varying weight determines the limiting density \eqref{psidensity2}. Indeed, we have that $\psi(x) dx$ is the minimizer of the logarithmic energy in the external field $\pi |x|$ among probability measures on the real line, see \cite{ST}, and, as is well known, the zeros of the orthogonal polynomials with the varying weight function $e^{-n \pi |x|}$ have $\psi$ as their limiting density. This continues to be the case for the weights \eqref{eq:weightnu}, as claimed in Theorem \ref{Th0}. A Riemann--Hilbert analysis for the weight $e^{-n \pi |x|}$, and for other Freud weights, can be found in \cite{KMcL}. The square root singularity and the jump discontinuity are known as Fisher-Hartwig singularities in the theory of Toeplitz determinants. There has been much recent progress in the understanding of Toeplitz and Hankel determinants with such singularities \cite{DIK2}. This is also related to the asymptotics of the corresponding orthogonal polynomials, whose local behavior near a Fisher-Hartwig singularity is described with the aid of confluent hypergeometric functions, see the works of Deift, Its and Krasovsky \cite{DIK, IK} and also \cite{FMFS,FMFS2}. We face the complication that the Fisher-Hartwig singularity is combined with a logarithmic divergence of the density $\psi$ at the origin, see \eqref{psidensity2}. In our Riemann-Hilbert analysis we were not able to construct a local parametrix with special functions, and we had to resort to an existence proof, in which we used ideas from \cite{KMcL} and \cite{BB}, although at the technical level our proof differs from both of these papers.
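For the record, the reduction to \eqref{weightnu12} mentioned above is immediate from the elementary special case (see \cite[10.39.2]{DLMF})
\[ K_{1/2}(s) = \left( \frac{\pi}{2s} \right)^{1/2} e^{-s}, \qquad s > 0, \]
which gives, for $\nu = \tfrac{1}{2}$,
\[ e^{\mp \nu \pi i/2} K_{\nu}(n\pi|x|) = e^{\mp \pi i/4} \left( \frac{\pi}{2 n\pi |x|} \right)^{1/2} e^{-n\pi|x|} = e^{\mp \pi i/4} \, (2n|x|)^{-1/2}\, e^{-n\pi|x|}. \]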
\subsection{Asymptotic behavior} Away from the region where the zeros of $P_n(z)$ lie, the asymptotic behavior is governed by the $g$ function associated with the limiting density $\psi$, that is, \begin{equation} \label{gfunction} g(z)=\int_{-1}^1 \log(z-x)\psi(x)dx, \end{equation} where the density $\psi$ is given by \eqref{psidensity2}. Then $g$ is defined and analytic for $z\in \mathbb{C} \setminus(-\infty,1]$. We prove the following asymptotic behavior of $P_n$ in the region away from the zeros. We continue to use $\epsilon_n$ as defined in \eqref{epsilonn}. \begin{theorem}\label{Th1} Let $0<\nu\leq 1/2$. Then the polynomial $P_n$ exists and is unique for sufficiently large $n$. Moreover, the polynomial $\widetilde{P}_n$ given by \eqref{tildePn} has the following behavior as $n\to\infty$: \begin{equation}\label{asymp:Pn:outer} \widetilde{P}_n(z)=e^{ng(z)} \left(\frac{z(z+(z^2-1)^{1/2})}{2(z^2-1)}\right)^{1/4}\left(\frac{(z^2-1)^{1/2}-i}{(z^2-1)^{1/2}+i}\right)^{-\nu/4} \left(1+\mathcal{O}(\epsilon_n)\right), \end{equation} uniformly for $z$ in compact subsets of $\mathbb{C}\setminus [-1,1]$. Here the branch of the function $(z^2-1)^{1/2}$ is taken which is analytic in $\mathbb{C}\setminus[-1,1]$ and positive for real $z > 1$. \end{theorem} In a neighborhood of $(-1,1)$ we find oscillatory behavior of the polynomials $\widetilde{P}_n$ as $n\to\infty$. We state the asymptotic formula \eqref{asymp:Pn:inner} for $\Re z \geq 0$ only. There is an analogous formula for $\Re z < 0$. This follows from the fact that the polynomial $P_n$ has real coefficients, as all the moments in the determinantal formula \eqref{Pndet} are real. Thus $P_n(\overline{z}) = \overline{P_n(z)}$, and so \[ \widetilde{P}_n(-\overline{z}) = \overline{\widetilde{P}_n(z)}, \qquad z \in \mathbb C. \] To describe the behavior near the interval, we need the analytic continuation of the density \eqref{psidensity2}, which we also denote by $\psi$, \begin{equation} \label{complexpsi} \psi(z) = \frac{1}{\pi} \log \frac{1+ (1-z^2)^{1/2}}{z}, \qquad \Re z > 0, \end{equation} which is defined and analytic in $\{ z \mid \Re z > 0\} \setminus [1, \infty)$. For $\Re z > 0$ with $z \not\in [1, \infty)$ we also define \begin{equation} \label{defthetan} \theta_n(z) = n \pi \int_z^1 \psi(s) ds + \frac{1}{4} \arccos z - \frac{\pi}{4}. \end{equation} \begin{theorem} \label{Th3} Let $0 < \nu \leq 1/2$. There is an open neighborhood $E$ of $(-1,1)$ such that for $z \in E \setminus \{0\}$ with $\Re z \geq 0$ we have \begin{multline} \label{asymp:Pn:inner} \widetilde{P}_n(z)= \frac{z^{1/4} e^{\frac{\nu \pi i}{4}} e^{n \pi z/2}}{2^{1/4} (2e)^n (1-z^2)^{1/4}} \left[\exp\left( \frac{\nu \pi}{2} \psi(z) + i \theta_n(z)\right) \left( 1 + \mathcal{O}\left(\frac{\log n}{n}\right)\right) \right. \\ \left. + \exp\left( - \frac{\nu \pi}{2} \psi(z) - i \theta_n(z) \right) \left( 1 + \mathcal{O}\left(\frac{\log n}{n}\right)\right) + \mathcal{O}(\epsilon_n) \right] \end{multline} as $n \to \infty$, with $\psi$ and $\theta_n$ given by \eqref{complexpsi} and \eqref{defthetan}. The asymptotic expansion \eqref{asymp:Pn:inner} is uniform for $z \in E$ with $\Re z \geq 0$ and $|z-1| > \delta$, $|z| > \delta$, for every $\delta > 0$. \end{theorem} The two terms $\exp\left( \frac{\nu \pi}{2} \psi(z) + i \theta_n(z)\right)$ and $\exp\left(- \frac{\nu \pi}{2} \psi(z) - i \theta_n(z)\right)$ in \eqref{asymp:Pn:inner} describe the oscillatory behavior near the interval as well as the leading order behavior of the zeros. 
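The following formal computation locates the set where the two exponentials have equal modulus (the precise statement is proved in Section \ref{section44}). Write $A_n(z) = \frac{\nu \pi}{2} \psi(z) + i \theta_n(z)$, so that the two terms are $e^{A_n(z)}$ and $e^{-A_n(z)}$, and they have equal modulus precisely when $\Re A_n(z) = 0$. By \eqref{defthetan} we have $\theta_n'(z) = -n\pi \psi(z) - \frac{1}{4}(1-z^2)^{-1/2}$. Since $\psi$ and $\theta_n$ are real on $(0,1)$, a Taylor expansion at a fixed $x \in (0,1)$ gives, for $z = x + iv$ with real $v = \mathcal{O}(n^{-1})$,
\[ \Re A_n(x+iv) = \frac{\nu\pi}{2}\psi(x) - \theta_n'(x)\, v + \mathcal{O}(n^{-2}) = \pi \psi(x) \left( \frac{\nu}{2} + n v \right) + \mathcal{O}(n^{-1}). \]
As $\psi(x) > 0$ on $(-1,1)$, the right-hand side vanishes to leading order exactly when $v = -\frac{\nu}{2n}$.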
Zeros can only occur where these two terms are of comparable absolute value, so that cancellations can take place. When $\nu = 0$ this happens for real $z \in E$. However, for $\nu > 0$ this does not happen for real $z$, but near the line $\Im z = -\frac{\nu}{2n}$, in accordance with the formal computation above, as we will show in Section \ref{section44}. This leads to Theorem \ref{Th2}. \subsection{Outline of the paper} The structure of the rest of the paper is as follows. In Section \ref{Section_RH} we state the Riemann--Hilbert problem $Y^{(s)}$ for $P_n(x;s)$ with $s > 0$, and we make an initial transformation \begin{equation*} Y^{(s)} \mapsto X^{(s)}. \end{equation*} In the RH problem for $X^{(s)}$ we can take the limit $s \to 0+$, which leads to a RH problem for $X$ that characterizes the polynomial $P_n(x)$. Then we carry out the further transformations \begin{equation*} X \mapsto U \mapsto T \mapsto S \mapsto Q \mapsto R \end{equation*} of the Deift--Zhou nonlinear steepest descent method \cite{Deift,DKMVZ}. The step $X\mapsto U$ is a rotation and scaling that translates the problem to the interval $[-1,1]$. This leads to the polynomials $\widetilde{P}_n$ and the proof of Proposition \ref{prop:Pntildeorthogonal}. The normalization at $\infty$ in the $U\mapsto T$ step is carried out using an equilibrium problem with a Freud weight $w(x)=e^{-n V(x)}$, where $V(x)=\pi|x|$ is the pointwise limit as $n\to\infty$ of the varying potential \begin{equation*} V_n(x)=-\frac{1}{n}\log K_{\nu}(n\pi |x|). \end{equation*} The construction of the global parametrix $N$ on the interval $[-1,1]$ involves two Szeg\H{o} functions $D_1(z)$ and $D_2(z)$, which correspond respectively to an algebraic singularity of the weight function at the origin and to a complex phase factor. The local parametrices near the endpoints $\pm 1$ involve Airy functions, since the density $\psi(x)$ in \eqref{psidensity2} behaves like a square root in a neighborhood of these endpoints. The main difficulty of the analysis is the construction of a local parametrix in a neighborhood of the origin, and the reason is the lack of analyticity of $V_n(x)$ in that neighborhood. In this paper, we reduce the jump matrices in that local analysis to almost constant ones in a disk around $0$ and then use a small norm argument in $L^2\cap L^{\infty}$ to prove the existence of a solution to this local RH problem. In this respect, the analysis is similar to the one presented by Kriecherbauer and McLaughlin in \cite{KMcL}. Also, the same limiting potential $V(x)$ appears in the work of Bleher and Bothner \cite{BB}. Another example of a non--analytic weight function was considered in the work of Foulqui\'e, Mart\'inez--Finkelshtein and Sousa, see \cite{FMFS} and \cite{FMFS2}, although in that case the local parametrix at the origin is explicitly given in terms of confluent hypergeometric functions. Finally, in Section \ref{proofs} we follow the transformations both outside and inside the lens, but away from the origin, to get the asymptotic information about $P_n(z)$ and its zeros. This proves Theorems \ref{Th1} and \ref{Th3}. Theorem \ref{Th0} follows from Theorem \ref{Th1}, and Theorem \ref{Th2} is a consequence of Theorem \ref{Th3}. \section{Riemann--Hilbert problem}\label{Section_RH} \subsection{RH problem for polynomials $P_n(x;s)$} We let $\nu > 0$ and $s > 0$. Orthogonal polynomials are characterized by a matrix valued Riemann-Hilbert problem, as was first shown by Fokas, Its, and Kitaev \cite{FIK}, see also \cite{Deift}.
This characterization does not use the fact that the orthogonality weight is non-negative, and it therefore also applies to oscillatory weights. Thus the polynomial $P_n(x;s)$ satisfying \eqref{Pnxs} is characterized by the following Riemann-Hilbert problem: \begin{rhp}\label{RHforY} $Y^{(s)} :\mathbb{C}\setminus [0,\infty) \to \mathbb{C}^{2\times 2}$ is a $2 \times 2$ matrix valued function that satisfies: \begin{itemize} \item[1)] $Y^{(s)}$ is analytic in $\mathbb{C}\setminus [0,\infty)$. \item[2)] $Y^{(s)}$ satisfies the jump condition \begin{equation*} Y^{(s)}_{+}(x)= Y^{(s)}_{-}(x) \begin{pmatrix} 1 & J_{\nu}(x)e^{-sx} \\ 0 & 1 \end{pmatrix} \quad \text{on } (0,\infty). \end{equation*} \item[3)] As $z \to \infty$, \begin{equation}\label{asymp:Y} Y^{(s)}(z)=(I+\mathcal{O}(1/z))\begin{pmatrix} z^{n} & 0 \\ 0 & z^{-n} \end{pmatrix}, \end{equation} where $I$ denotes the $2\times 2$ identity matrix. \item[4)] $Y^{(s)}(z)$ remains bounded as $z \to 0$. \end{itemize} \end{rhp} The polynomial $P_n(x;s)$ exists and is unique if and only if the RH problem has a unique solution. In that case we have \begin{equation} \label{Pn-and-Y11} P_n(x;s) = Y^{(s)}_{11}(x). \end{equation} \subsection{First transformation} In the first transformation we use the following connection formula between $J_{\nu}$ and the modified Bessel function $K_{\nu}$ of the second kind: \begin{equation}\label{connection} J_\nu(z)=\frac{1}{\pi i}\left(e^{-\frac{\nu \pi i}{2}}K_\nu(-iz)-e^{\frac{\nu \pi i}{2}}K_\nu(iz)\right), \qquad |\arg z|\leq \frac{\pi}{2}, \end{equation} see for instance \cite[formula 10.27.9]{DLMF}. Alternatively, the Bessel function can be written in terms of Hankel functions as in \cite[formula 10.4.4]{DLMF}. The formula \eqref{connection} leads to the following factorization of the jump matrix: \begin{equation} \label{factorization} \begin{pmatrix} 1 & J_{\nu}(x)e^{-sx} \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & -\frac{e^{\frac{\nu \pi i}{2}}}{\pi i}K_\nu(ix)e^{-sx} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & \frac{e^{-\frac{\nu \pi i}{2}}}{\pi i}K_\nu(-ix)e^{-sx} \\ 0 & 1 \end{pmatrix}. \end{equation} We define the new matrix valued function $X^{(s)}$ by \begin{equation}\label{Xs} X^{(s)}(z)=\begin{cases} \begin{pmatrix} 1 & 0 \\ 0 & (\pi i)^{-1} \end{pmatrix} Y^{(s)}(z)\begin{pmatrix} 1 & - e^{-\frac{\nu \pi i}{2}} K_\nu(-iz)e^{-sz} \\ 0 & \pi i \end{pmatrix},\quad &\text{if } 0 <\arg z < \frac{\pi}{2}, \\ \begin{pmatrix} 1 & 0 \\ 0 & (\pi i)^{-1} \end{pmatrix} Y^{(s)}(z)\begin{pmatrix} 1 & - e^{\frac{\nu \pi i}{2}} K_\nu(iz)e^{-sz} \\ 0 & \pi i \end{pmatrix},\quad &\text{if } -\frac{\pi}{2} <\arg z < 0, \\ \begin{pmatrix} 1 & 0 \\ 0 & (\pi i)^{-1} \end{pmatrix} Y^{(s)}(z) \begin{pmatrix} 1 & 0 \\ 0 & \pi i \end{pmatrix}, & \text{elsewhere}. \end{cases} \end{equation} Then $X^{(s)}$ has an analytic continuation across the positive real axis, due to the factorization \eqref{factorization}. Thus $X^{(s)}$ is defined and analytic in the complex plane except for the imaginary axis, and it satisfies the following RH problem: \begin{rhp}\label{RHforX0} \begin{itemize} \item[1)] $X^{(s)}$ is analytic in $\mathbb{C}\setminus i\mathbb{R}$. 
\item[2)] $X^{(s)}$ satisfies the jump condition (the imaginary axis is oriented from bottom to top) \begin{equation} \label{jumps:X1} X^{(s)}_{+}(x)=X^{(s)}_{-}(x) \begin{cases} \begin{pmatrix} 1 & e^{-\frac{\nu \pi i}{2}} K_\nu(-ix) e^{-sx} \\ 0 & 1 \end{pmatrix}, & \text{ for } x \in (0,+i\infty),\\ \begin{pmatrix} 1 & e^{\frac{\nu \pi i}{2}} K_\nu(ix) e^{-sx} \\ 0 & 1 \end{pmatrix}, & \text{ for }x \in (-i\infty,0). \end{cases} \end{equation} \item[3)] As $z\rightarrow\infty$, \begin{equation} \label{asymp:X1} X^{(s)}(z)=(I+\mathcal{O}(1/z))\begin{pmatrix} z^{n} & 0 \\ 0 & z^{-n} \end{pmatrix}. \end{equation} \item[4)] $X^{(s)}(z)$ remains bounded as $z \to 0$ with $\Re z < 0$, and \begin{equation} \label{near0:X1} X^{(s)}(z)=\begin{pmatrix} \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \\ \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \end{pmatrix}, \quad \text{ as } z \to 0 \text{ with } \Re z > 0. \end{equation} \end{itemize} \end{rhp} The asymptotic condition \eqref{asymp:X1} follows from \eqref{asymp:Y}, the definition \eqref{Xs} and the fact that \begin{equation}\label{asympKv} K_{\nu}(z)=\left(\frac{\pi}{2z}\right)^{1/2} e^{-z}\left(1+\mathcal{O}(1/z)\right), \qquad \text{as } z\to\infty, \quad |\arg z|<\frac{3\pi}{2}, \end{equation} see \cite[formula 10.40.2]{DLMF}. The $\mathcal{O}(z^{-\nu})$ terms in \eqref{near0:X1} appear because of the behavior \begin{equation} \label{near0Kv} K_{\nu}(z) \sim \frac{\Gamma(\nu)}{2^{1-\nu}} z^{-\nu} \end{equation} as $z \to 0$ for $\nu>0$, see for instance \cite[formula 10.30.2]{DLMF}. Note that by \eqref{Pn-and-Y11} and \eqref{Xs} \begin{equation} \label{Pn-and-X11} P_n(x;s) = X^{(s)}_{11}(x). \end{equation} In the RH problem for $X^{(s)}$ we can take $s \to 0+$. Indeed, after setting $s=0$ in \eqref{jumps:X1}, the off-diagonal entries in the jump matrices still tend to $0$ as $|x| \to \infty$ because of \eqref{asympKv}. We put $s=0$ and we consider the following RH problem. \begin{rhp} \label{RHforX} We seek a function $X:\mathbb{C}\setminus i\mathbb{R} \to \mathbb{C}^{2\times 2}$ satisfying: \begin{itemize} \item[1)] $X$ is analytic in $\mathbb{C}\setminus i\mathbb{R}$. \item[2)] $X$ satisfies the jump condition (the imaginary axis is oriented from bottom to top) \begin{equation*} X_{+}(x)=X_{-}(x) \begin{cases} \begin{pmatrix} 1 & e^{-\frac{\nu \pi i}{2}} K_\nu(-ix) \\ 0 & 1 \end{pmatrix}, & \text{ for } x \in (0,+i\infty),\\ \begin{pmatrix} 1 & e^{\frac{\nu \pi i}{2}} K_\nu(ix) \\ 0 & 1 \end{pmatrix}, & \text{ for }x \in (-i\infty,0). \end{cases} \end{equation*} \item[3)] As $z\rightarrow\infty$, \begin{equation*} X(z)=(I+\mathcal{O}(1/z))\begin{pmatrix} z^{n} & 0 \\ 0 & z^{-n} \end{pmatrix}. \end{equation*} \item[4)] $X(z)$ remains bounded as $z \to 0$ with $\Re z < 0$, and \begin{equation*} X(z)=\begin{pmatrix} \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \\ \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \end{pmatrix}, \quad \text{ as } z \to 0 \text{ with } \Re z > 0. \end{equation*} \end{itemize} \end{rhp} If there is a unique solution then the $11$-entry is a monic polynomial of degree $n$, say $P_n$, and \begin{equation} \label{PnX} P_n(z) = X_{11}(z) = \lim_{s \to 0+} X^{(s)}_{11}(z) = \lim_{s\to 0+} P_n(z;s), \end{equation} see \eqref{Pn-and-X11}. Thus $P_n$ is the polynomial that we are interested in. \subsection{Second transformation} \label{subsec:second} We introduce the scaling and rotation $z\mapsto i\pi n z$; our main interest is in the rescaled polynomials $P_n(in\pi z)$, whose zeros will accumulate on the interval $[-1,1]$ as $n\to \infty$.
More precisely, we define $U$ as \begin{equation}\label{U} U(z)=\begin{pmatrix} (in\pi)^{-n} & 0 \\ 0 & (in\pi)^{n} \end{pmatrix} X(in\pi z). \end{equation} From \eqref{U} and the RH problem \ref{RHforX}, we immediately obtain the following RH problem for $U(z)$: \begin{rhp}\label{RHforU} \begin{itemize} \item[1)] $U$ is analytic in $\mathbb{C}\setminus \mathbb{R}$. \item[2)] $U$ satisfies the jump condition \begin{equation*} U_{+}(x)=U_{-}(x) \begin{cases} \begin{pmatrix} 1 & e^{\nu\pi i/2} K_{\nu}(n\pi|x|) \\ 0 & 1 \end{pmatrix}, \quad x \in (-\infty,0), \\ \begin{pmatrix} 1 & e^{-\nu\pi i/2} K_{\nu}(n\pi|x|) \\ 0 & 1 \end{pmatrix}, \quad x \in (0,\infty). \end{cases} \end{equation*} \item[3)] As $z\rightarrow\infty$, \begin{equation*} U(z)=(I+\mathcal{O}(1/z))\begin{pmatrix} z^{n} & 0 \\ 0 & z^{-n} \end{pmatrix}. \end{equation*} \item[4)] $U(z)$ remains bounded as $z \to 0 $ with $\Im z > 0$, and \begin{equation*} U(z)= \begin{pmatrix} \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \\ \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \end{pmatrix} \quad \text{ as } z \to 0 \text{ with } \Im z < 0. \end{equation*} \end{itemize} \end{rhp} Note that by \eqref{PnX}, \eqref{U}, and \eqref{tildePn} \begin{equation} \label{UnX} U_{11}(z) = (i n \pi)^{-n} X_{11}(in \pi z) = (in \pi)^{-n} P_{n}(in \pi z) = \widetilde{P}_n(z) \end{equation} which is a monic polynomial of degree $n$. The zeros of $U_{11}(z)$ are obtained from the zeros of $P_n$ by rotation over $90$ degrees in the clockwise direction and by dividing by a factor $\pi n$. We can now prove Proposition \ref{prop:Pntildeorthogonal}. \begin{proof}[Proof of Proposition \ref{prop:Pntildeorthogonal}] The RH problem for $U$ is the RH problem for orthogonal polynomials on the real line for the varying weight function $e^{\mp \nu\pi i/2} K_{\nu}(n \pi |x|)$ for $x \in \mathbb R^{\pm}$, see \cite{Deift,DKMVZ,FIK}. Because of the $e^{\mp \nu\pi i/2}$ factor, the weight function is not real on the real line, and it has a singularity at the origin because of the behavior \eqref{near0Kv} of the $K_{\nu}$ function near $0$. The singularity is integrable since $\nu < 1$, and so $U_{11} = \widetilde{P}_n$ is the monic polynomial of degree $n$ satisfying \eqref{Pntildeorthogonal}. \end{proof} \subsection{Equilibrium problem and third transformation} In order to normalize the RH problem at infinity we make use of an equilibrium problem with external field $V(x)=\pi|x|$. The equilibrium measure $\mu$ minimizes the energy functional \[ I(\mu) = \iint \log \frac{1}{|x-y|} d\mu(x)d\mu(y) + \int \pi |x| d\mu(x) \] among all probability measures on $\mathbb{R}$. The minimizer is supported on $[-1,1]$. It is absolutely continuous with respect to the Lebesgue measure, $d\mu(x)=\psi(x)dx$, and has density \begin{equation*} \psi(x)=\frac1\pi \int_{|x|}^1 \frac{1}{\sqrt{s^2-x^2}}ds, \end{equation*} which corresponds to the case $\beta=1$ in \cite{KMcL}. The integral can be evaluated explicitly and it gives the formula \eqref{psidensity2}. Note that $\psi(x)$ grows like a logarithm at $x = 0$. The $g$ function is defined in \eqref{gfunction}. The boundary values $g_+(x)$ and $g_-(x)$ on the real axis satisfy \begin{equation}\label{gpgm} g_+(x)-g_-(x)=\begin{cases} 2\pi i,\quad &x\leq -1, \\ 2\pi i \displaystyle \int_x^1 \psi(s)ds,\quad &-1 < x < 1,\\ 0,\quad & x\geq 1. 
\end{cases} \end{equation} The Euler-Lagrange equations for the equilibrium problem imply that we have (see e.g.~\cite{Deift} or \cite{ST}) \begin{equation}\label{var2} g_{+}(x)+g_{-}(x)- \pi |x| \begin{cases} = \ell, & \quad x\in[-1,1], \\ <\ell, & \quad x\in(-\infty,-1)\cup(1,\infty). \end{cases} \end{equation} with the constant $\ell$ (see Theorem IV.5.1 in \cite{ST} or formula (3.5) in \cite{KMcL}) \begin{equation} \label{ell} \ell = - 2 - 2 \log 2. \end{equation} A related function is \begin{equation} \label{phifunction} \varphi(z) = g(z) - \frac{V(z)}{2} - \frac{\ell}{2} \end{equation} where \begin{equation} \label{Vfunction} V(z)=\begin{cases} \pi z, & \qquad \Re z >0,\\ -\pi z, & \qquad \Re z <0. \end{cases} \end{equation} The $\varphi$-function is analytic in $\mathbb{C}\setminus\left((-\infty,1]\cup i\mathbb{R}\right)$. For $x\in[-1,1]$ we have from the variational equation \eqref{var2} \begin{equation}\label{phig} \begin{aligned} \varphi_+(x) & = g_+(x)-\frac{V(x)}{2}-\frac{\ell}{2} = \frac{1}{2}(g_+(x)-g_-(x)), \\ \varphi_-(x) & =-\varphi_+(x). \end{aligned} \end{equation} Thus $2\varphi$ gives an analytic extension of $g_+(x)-g_-(x)$ from $[-1,1]$ into the upper half plane minus the imaginary axis, and of $g_-(x) - g_+(x)$ into the lower half plane minus the imaginary axis. Note that $\varphi_{\pm}(x)$ is purely imaginary on $[-1,1]$, because of \eqref{gpgm}. On the imaginary axis, the function $\varphi(z)$ is not analytic because of the discontinuity in $V(z)$. The boundary values of this weight function satisfy \begin{equation*} V_-(z)=V_+(z)+2\pi z, \end{equation*} and as a consequence, \begin{equation*} \varphi_-(z)=\varphi_+(z)-\pi z, \qquad z \in i \mathbb R. \end{equation*} Here we take the orientation of the imaginary axis from bottom to top. Now we are ready for the third transformation of the RH problem and we define the matrix valued function \begin{equation} \label{T} T(z)=e^{-n\ell\sigma_3/2} (2n)^{\sigma_3/4} U(z)e^{-n(g(z)-\ell/2) \sigma_3} (2n)^{-\sigma_3/4}, \end{equation} where $\sigma_3=\begin{pmatrix} 1 & 0\\0 &-1\end{pmatrix}$ is the third Pauli matrix. We also write \begin{equation} \label{eq:definition-W} W_n(x)= \sqrt{2n} K_{\nu}(n\pi |x|)e^{n\pi|x|}, \qquad x \in \mathbb R. \end{equation} Then from the above definitions and properties and from the RH problem \ref{RHforU} for $U$ we find that $T$ satisfies the following Riemann--Hilbert problem. \begin{rhp}\label{RHforT} \begin{itemize} \item[1)] $T$ is analytic in $\mathbb{C}\setminus \mathbb{R}$. \item[2)] $T$ satisfies the jump conditions \begin{equation*} T_{+}(x)=T_{-}(x) \begin{cases} \begin{pmatrix} 1 & \, e^{\nu\pi i/2} W_n(x) e^{2n \varphi_+(x)} \\ 0 & 1\end{pmatrix}, \quad x\in(-\infty,-1),\\ \begin{pmatrix} e^{-2n\varphi_{+}(x)} & e^{\nu\pi i/2} W_n(x) \\ 0 & e^{-2n\varphi_{-}(x)}\end{pmatrix}, \quad x\in(-1,0),\\ \begin{pmatrix} e^{-2n\varphi_{+}(x)} & e^{-\nu\pi i/2} W_n(x) \\ 0 & e^{-2n\varphi_{-}(x)}\end{pmatrix}, \quad x\in(0,1),\\ \begin{pmatrix} 1 & e^{-\nu\pi i/2} W_n(x) e^{2n \varphi_+(x)} \\ 0 & 1\end{pmatrix}, \quad x\in(1,\infty), \end{cases} \end{equation*} where $W_n$ is given in \eqref{eq:definition-W}. \item[3)] As $z\rightarrow\infty$, \begin{equation*} T(z)=I+\mathcal{O}(1/z). \end{equation*} \item[4)] $T(z)$ remains bounded as $z \to 0$ with $\Im z > 0$, and \begin{equation} \label{at0:Tgeneral} T(z)=\begin{pmatrix} \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \\ \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \end{pmatrix},\quad \text{ as } z \to 0 \text{ with } \Im z < 0. 
\end{equation} \end{itemize} \end{rhp} The off--diagonal elements in the jump matrices on $(-\infty,-1)$ and $(1,\infty)$ tend to $0$ at an exponential rate, because of the Euler--Lagrange condition \eqref{var2}. \subsection{Fourth transformation} The jump matrix on the interval $(-1,0)$ has a factorization \begin{multline*} \begin{pmatrix} e^{-2n\varphi_{+}(x)} & e^{\nu\pi i/2} W_n(x) \\ 0 & e^{-2n\varphi_{-}(x)}\end{pmatrix} \\ = \begin{pmatrix} 1 & 0\\ \frac{e^{-\nu\pi i/ 2}}{W_n(x)}e^{-2n\varphi_-(x)} & 1\end{pmatrix} \begin{pmatrix} 0& e^{\nu\pi i/2}W_n(x)\\ -\frac{e^{-\nu\pi i/2}}{W_n(x)} & 0\end{pmatrix} \begin{pmatrix} 1 & 0\\ \frac{e^{-\nu\pi i/2}}{W_n(x)}e^{-2n\varphi_{+}(x)} &1\end{pmatrix}, \end{multline*} while the jump matrix on $(0,1)$ factorizes as \begin{multline*} \begin{pmatrix} e^{-2n\varphi_{+}(x)} & e^{-\nu\pi i/2}W_n(x) \\ 0 & e^{-2n\varphi_{-}(x)}\end{pmatrix} \\ = \begin{pmatrix} 1 & 0\\ \frac{e^{\nu\pi i/2}}{W_n(x)} e^{-2n\varphi_{-}(x)}& 1\end{pmatrix} \begin{pmatrix} 0& e^{-\nu\pi i/2}W_n(x)\\ -\frac{e^{\nu\pi i/2}}{W_n(x)} & 0\end{pmatrix} \begin{pmatrix} 1 & 0\\ \frac{e^{\nu\pi i/2}}{W_n(x)} e^{-2n\varphi_{+}(x)} &1\end{pmatrix}. \end{multline*} In order to open the lens around $(-1,1)$, we need the analytic extension of the function $W_n$ from \eqref{eq:definition-W} to $\mathbb C \setminus i \mathbb R$, which we also denote by $W_n$, \begin{equation}\label{analyticW} W_n(z)=\begin{cases} \sqrt{2n} K_{\nu}(n\pi z)e^{n\pi z},& \qquad \Re z > 0, \\ \sqrt{2n} K_{\nu}(-n\pi z)e^{-n\pi z},& \qquad \Re z < 0. \end{cases} \end{equation} Note that as $n \to \infty$, see \eqref{asympKv} and \eqref{analyticW}, \begin{equation} \label{Wnestimate} W_n(z) = \begin{cases} z^{-1/2} \left( 1 + \mathcal{O}(1/(nz)) \right), & \Re z > 0, \\ (-z)^{-1/2}\left(1 + \mathcal{O}(1/(nz))\right), & \Re z < 0, \end{cases} \end{equation} which explains the factor $\sqrt{2n}$ that we introduced in \eqref{eq:definition-W} and \eqref{analyticW}. \begin{figure} \centerline{\includegraphics{lens2.pdf}} \caption{Opening of a lens around $[-1,1]$, and contour $\Sigma_S$ consisting of $\Sigma_1, \ldots, \Sigma_4$, the segment $(-i\rho,i\rho)$ and the real line.} \label{fig_lens2} \end{figure} Next, we fix a number $\rho>0$ and we open a lens around $[-1,1]$, which defines contours $\Sigma_j$, $j=1, \ldots, 4$ and domains $\Omega_j$, $j=1, \ldots, 4$ as indicated in Figure \ref{fig_lens2}. In the fourth transformation we define the matrix valued function $S(z)$: \begin{align} \label{S} S(z) = \begin{cases} T(z)\begin{pmatrix} 1 & 0\\ -\frac{e^{\nu\pi i/2}}{W_n(z)} e^{-2n\varphi(z)}& 1\end{pmatrix}, & \text{for } z \in \Omega_1,\\ T(z) \begin{pmatrix} 1 & 0\\ -\frac{e^{-\nu\pi i/ 2}}{W_n(z)}e^{-2n\varphi(z)} & 1\end{pmatrix}, & \text{for } z \in \Omega_2,\\ T(z) \begin{pmatrix} 1 & 0\\ \frac{e^{-\nu\pi i/ 2}}{W_n(z)}e^{-2n\varphi(z)} & 1\end{pmatrix}, & \text{for } z \in \Omega_3,\\ T(z) \begin{pmatrix} 1 & 0\\ \frac{e^{\nu\pi i/2}}{W_n(z)} e^{-2n\varphi(z)}& 1\end{pmatrix}, & \text{for } z \in \Omega_4,\\ T(z), & \textrm{elsewhere}, \end{cases} \end{align} using the analytic extension \eqref{analyticW} for the function $W_n(z)$ in each region, and $\varphi(z)$ defined in \eqref{phifunction}. \begin{remark} In order to divide by $W_n(z)$ we need to be careful with possible zeros of this function in the complex plane. Following the general theory in \cite[\S 15.7]{Watson}, the Bessel function $K_{\nu}(n\pi z)$ is free from zeros in the half--plane $|\arg z|\leq \tfrac{\pi}{2}$.
Using \eqref{analyticW}, we can conclude that $W_n(z)\neq 0$. \end{remark} From the RH problem \ref{RHforT} and \eqref{S} we find that $S(z)$ is the solution of the following RH problem: \begin{rhp}\label{RHforS} \begin{itemize} \item[1)] $S$ is analytic in $\mathbb{C}\setminus \Sigma_S$, where $\Sigma_S$ is depicted in Figure \ref{fig_lens2}. \item[2)] $S$ satisfies the jump conditions $S_+ = S_- J_S$ where \begin{align} \label{jumps:Sgeneral} J_S(z) = \begin{cases} \begin{pmatrix} 1 & 0\\ \frac{e^{\nu\pi i/2}}{W_n(z)} e^{-2n\varphi(z)}& 1\end{pmatrix}, & \quad z\in\Sigma_1\cup \Sigma_4,\\ \begin{pmatrix} 1 & 0\\ \frac{e^{-\nu\pi i/ 2}}{W_n(z)}e^{-2n\varphi(z)} & 1\end{pmatrix}, & \quad z\in\Sigma_2\cup\Sigma_3,\\ \begin{pmatrix} 0& e^{\nu\pi i/2}W_n(z)\\ -\frac{e^{-\nu\pi i/2}}{W_n(z)} & 0\end{pmatrix}, & \quad z \in (-1,0),\\ \begin{pmatrix} 0& e^{-\nu\pi i/2}W_n(z)\\ -\frac{e^{\nu\pi i/2}}{W_n(z)} & 0\end{pmatrix}, & \quad z \in (0,1),\\ \begin{pmatrix} 1 & e^{\nu\pi i/2}e^{2n \varphi(z)} W_n(z) \\ 0 & 1\end{pmatrix}, & \quad z\in(-\infty,-1),\\ \begin{pmatrix} 1 & e^{-\nu\pi i/2}e^{2n \varphi(z)} W_n(z) \\ 0 & 1\end{pmatrix}, & \quad z\in(1,\infty),\\ \begin{pmatrix} 1 & 0\\ j_1(z) & 1 \end{pmatrix}, & \quad z\in(0, i \rho),\\ \begin{pmatrix} 1 & 0\\ j_2(z) & 1 \end{pmatrix}, & \quad z \in (-i \rho, 0). \end{cases} \end{align} Here \begin{equation} \label{eq:definition-j1} j_1(z)=\frac{e^{\nu\pi i/2} e^{-2n\varphi_-(z)}}{W_{n,-}(z)}-\frac{e^{-\nu\pi i/2}e^{-2n\varphi_+(z)}}{W_{n,+}(z)}, \qquad z \in (0, i \rho), \end{equation} and \begin{equation} \label{eq:definition-j2} j_2(z)=-\frac{e^{\nu\pi i/2} e^{-2n\varphi_-(z)}}{W_{n,-}(z)}+\frac{e^{-\nu\pi i/2}e^{-2n\varphi_+(z)}}{W_{n,+}(z)}, \qquad z \in (-i \rho, 0), \end{equation} using the appropriate values of $\varphi_{\pm}(z)$ and $W_{n,\pm}(z)$ in each case. The imaginary axis is oriented upwards, and so for $z \in i\mathbb R$, we have that $\varphi_+(z)$ and $W_{n,+}(z)$ ($\varphi_-(z)$ and $W_{n,-}(z)$) denote the limiting values from the left (right) half-plane. \item[3)] As $z\rightarrow\infty$, \begin{equation*} S(z)=I+\mathcal{O}(1/z). \end{equation*} \item[4)] $S(z)$ remains bounded as $z \to 0$ with $\Im z > 0$, and \begin{equation} \label{at0:Sgeneral} S(z)=\begin{pmatrix} \mathcal{O}(z^{\nu}) & \mathcal{O}(z^{-\nu}) \\ \mathcal{O}(z^{\nu}) & \mathcal{O}(z^{-\nu}) \end{pmatrix}, \quad \text{ as } z \to 0 \text{ with } \Im z < 0. \end{equation} \end{itemize} \end{rhp} Note that as a consequence of the definition of $\varphi(z)$ in \eqref{phifunction} and formula \eqref{phig}, $\Im \varphi_+(x)$ is decreasing on $[-1,1]$. Because of the Cauchy--Riemann equations, $\Re \varphi(z)>0$ as we move away from the interval into the upper or lower half plane. We may and do assume that the lens is small enough that $\Re \varphi(z) > 0$ on the lips of the lens. Then it follows from \eqref{Wnestimate} and \eqref{jumps:Sgeneral} that the jump matrix $J_S$ on the lips of the lens tends to $I$ at an exponential rate as $n \to \infty$, if we stay away from the endpoints $\pm 1$. Also the jump matrix on $(-\infty,-1)$ and $(1, \infty)$ tends to the identity matrix. Thus for any $\delta > 0$, there is a constant $c > 0$ such that \begin{equation} \label{JSasymp} J_S(z) = I + \mathcal{O}(e^{-cn}), \qquad z \in \Sigma_S \setminus ([-1,1] \cup [-i \rho, i\rho] \cup D(\pm 1, \delta)).
\end{equation} The condition \eqref{at0:Sgeneral} needs some explanation, since \eqref{at0:Tgeneral} and \eqref{S} at first sight lead to the behavior $S(z)=\begin{pmatrix} \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \\ \mathcal{O}(1) & \mathcal{O}(z^{-\nu}) \end{pmatrix}$ as $z \to 0$ with $\Im z < 0$. However, a cancellation takes place for the entries in the first column, as can be checked from the jump conditions for $S$, see \eqref{jumps:Sgeneral} on the intervals $(-1,0)$ and $(0,1)$. Since $S$ remains bounded as $z \to 0$ with $\Im z > 0$, and \[ S_-(z) = S_+(z) \begin{pmatrix} 0 & \mathcal{O}(z^{-\nu}) \\ \mathcal{O}(z^{\nu}) & 0 \end{pmatrix}, \quad \text{ as } z \to 0, \] one finds \eqref{at0:Sgeneral}. \subsection{Global parametrix}\label{Sec_global} If we ignore the jump matrices in the RH problem for $S$ except for the one on the interval $[-1,1]$, we arrive at the following RH problem for a $2 \times 2$ matrix valued function $N$: \begin{rhp}\label{RHforNgeneral} \begin{itemize} \item[1)] $N$ is analytic in $\mathbb{C}\setminus [-1,1]$. \item[2)] $N$ satisfies the jump conditions \begin{align*} N_+(x)&=N_-(x) \begin{cases} \begin{pmatrix} 0 & e^{\nu\pi i/2}W_n(x) \\ -\frac{e^{-\nu\pi i/2}}{W_n(x)} & 0 \end{pmatrix}, \quad x\in(-1,0), \\ \begin{pmatrix} 0 & e^{-\nu\pi i/2}W_n(x) \\ -\frac{e^{\nu\pi i/2}}{W_n(x)} & 0 \end{pmatrix}, \quad x\in(0,1). \end{cases} \end{align*} \item[3)] As $z\rightarrow\infty$, \begin{equation*} N(z)=I+\mathcal{O}(1/z). \end{equation*} \end{itemize} \end{rhp} We solve the RH problem for $N$ by means of two Szeg\H{o} functions $D_{1,n}$ and $D_2$, see also \cite{KMcVV}, that are associated with $W_n$ and $e^{- \sgn(x) \nu \pi i/2}$, respectively. The first Szeg\H{o} function $D_1 = D_{1,n}$ is defined by \begin{equation} \label{eq:D1(z)} D_{1,n}(z)=\exp \left(\frac{(z^2-1)^{1/2}}{2\pi}\int_{-1}^{1}\frac{\log W_n(x)}{\sqrt{1-x^2}}\frac{dx}{z-x}\right), \end{equation} which is defined and analytic for $z \in \mathbb C \setminus [-1,1]$. It satisfies \begin{equation}\label{D1plusminus} D_{1,n+}(x)D_{1,n-}(x)=W_n(x), \qquad x \in (-1,1). \end{equation} It follows from \eqref{eq:D1(z)} that $D_{1,n}$ has no zeros in $\mathbb C \setminus [-1,1]$ and \begin{equation} \label{D1infinity} D_{\infty,n} := \lim_{z\to\infty} D_{1,n}(z) = \exp \left(\frac{1}{2\pi}\int_{-1}^1 \frac{\log W_n(x)}{\sqrt{1-x^2}} dx\right) \in (0,\infty). \end{equation} In what follows we will not indicate the $n$-dependence in the notation for $D_{1,n}$ and $D_{\infty,n}$, since the dependence on $n$ is only mild. Indeed, because of \eqref{Wnestimate} we have that $D_{1,n}$ tends to the Szeg\H{o} function for the weight $|x|^{-1/2}$ at the rate given in the following lemma. \begin{lemma} \label{lem:D1nlimit} We have \begin{align} \label{D1nlimit} D_{1,n}(z) & = \left( \frac{z + (z^2-1)^{1/2}}{z} \right)^{1/4} \left( 1 + \mathcal{O}\left(\frac{\log n}{n}\right) \right), \\ \label{Dinftylimit} D_{\infty,n} & = 2^{1/4} + \mathcal{O}\left(\frac{\log n}{n}\right), \end{align} as $n \to \infty$, with an $\mathcal{O}$-term that is uniform for $z \in \mathbb C \setminus ([-1,1] \cup D(0, \delta)\cup D(\pm 1,\delta))$ for any $\delta > 0$. \end{lemma} \begin{proof} The Szeg\H{o} function for $|x|^{-1/2}$ is \[ D(z; |x|^{-1/2}) = \exp \left(\frac{(z^2-1)^{1/2}}{2\pi} \int_{-1}^1 \frac{ \log |x|^{-1/2}}{\sqrt{1-x^2}} \frac{dx}{z-x} \right) = \left( \frac{z + (z^2-1)^{1/2}}{z} \right)^{1/4}.
\] and so \begin{equation} \label{D1nformula} \left( \frac{z + (z^2-1)^{1/2}}{z} \right)^{-1/4} D_{1,n}(z) = \exp \left(\frac{(z^2-1)^{1/2}}{2\pi} \int_{-1}^1 \frac{ \log (|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{dx}{z-x} \right). \end{equation} Because of \eqref{Wnestimate} there exist $c_0, c_1 > 0$ such that \[ \left| |x|^{1/2} W_n(x) - 1 \right| \leq \frac{c_1}{n|x|} < \frac{1}{2}, \qquad |x| \geq \frac{c_0}{n}. \] Then also for some $c_2 > 0$, \[ \left| \log(|x|^{1/2} W_n(x))\right| \leq \frac{c_2}{n|x|}, \qquad |x| \geq \frac{c_0}{n}. \] It follows that \begin{align*} \left| \int_{c_0/n}^1 \frac{ \log(|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{dx}{z-x} \right| & \leq \frac{c_2}{\dist(z, [-1,1]) n} \int_{c_0/n}^1 \frac{1}{x\sqrt{1-x^2}} dx \\ & \leq \frac{c_3}{\dist(z, [-1,1])} \frac{\log n}{n} \end{align*} with a constant $c_3$ that is independent of $n$ and $z$. By deforming the integration path into the complex plane in such a way that it stays at a certain distance from $z$, and applying similar estimates, we find \begin{equation} \label{D1nestimate1} \left| \int_{c_0/n}^1 \frac{ \log(|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{dx}{z-x} \right| \leq \frac{c_4}{|z|} \frac{\log n}{n} \end{equation} with a constant that is independent of $z \in \mathbb C \setminus ([-1,1] \cup D(0, \delta)\cup D(\pm 1,\delta))$. Similarly \begin{equation} \label{D1nestimate2} \left| \int_{-1}^{-c_0/n} \frac{ \log (|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{dx}{z-x} \right| \leq \frac{c_5}{|z|} \frac{\log n}{n}. \end{equation} Near $x=0$ we use \eqref{near0Kv} and \eqref{eq:definition-W} to find a $c_6 > 0$ such that \[ c_6 |nx|^{1/2 - \nu}\leq |x|^{1/2} W_n(x) \leq 1, \qquad |x| \leq \frac{c_0}{n}. \] The upper bound follows from the fact that $0 < K_{\nu}(s) \leq K_{1/2}(s)$ if $0\leq \nu\leq 1/2$ and $s > 0$, and from the explicit formula for $K_{1/2}(s)$, see \cite[10.37.1,10.39.2]{DLMF}. Then \[ \left| \log(|x|^{1/2} W_n(x)) \right| \leq \left|\log c_6 + \left(\tfrac{1}{2} - \nu\right) \log |nx|\right|, \qquad |x| \leq \frac{c_0}{n} \] and \begin{equation} \label{D1nestimate3} \left| \int_{-c_0/n}^{c_0/n} \frac{ \log (|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{dx}{z-x} \right| \leq \frac{2}{|z|} \int_{-c_0/n}^{c_0/n} \left| \log c_6 +\left(\tfrac{1}{2} - \nu\right) \log |nx| \right| dx \leq \frac{c_7}{|z|} \frac{1}{n} \end{equation} for some new constant $c_7 > 0$. Combining the estimates \eqref{D1nestimate1}, \eqref{D1nestimate2}, and \eqref{D1nestimate3}, we get \[ \left| \frac{(z^2-1)^{1/2}}{2\pi} \int_{-1}^1 \frac{ \log(|x|^{1/2} W_n(x))}{\sqrt{1-x^2}} \frac{dx}{z-x} \right| = \mathcal{O}\left(\frac{ \log n}{n}\right) \] with an $\mathcal{O}$-term that is uniform for $|z| > \delta$, $|z\pm 1| > \delta$, and so by \eqref{D1nformula} \[ \left( \frac{z + (z^2-1)^{1/2}}{z} \right)^{-1/4} D_{1,n}(z) = \exp\left( \mathcal{O}\left(\frac{ \log n}{n}\right)\right) = 1 + \mathcal{O}\left(\frac{ \log n}{n}\right) \] as claimed in \eqref{D1nlimit}. Since \eqref{D1nlimit} is uniform for $|z| > \delta$, $|z\pm 1| > \delta$, we can let $z \to \infty$, and obtain \eqref{Dinftylimit}. \end{proof} The second Szeg\H{o} function $D_2$ corresponds to the weight $e^{\pm\nu\pi i/2}$, and is defined as \begin{equation} \label{eq:D2(z)} D_2(z)=\left(\frac{\sqrt{z^2-1}-i}{\sqrt{z^2-1}+i}\right)^{\nu/4}, \qquad z \in \mathbb C \setminus [-1,1], \end{equation} with the branch of the square root that is positive for real $z > 1$.
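As a quick consistency check: since $(z^2-1)^{1/2} = z - \frac{1}{2z} + \mathcal{O}(z^{-3})$ as $z \to \infty$, we have
\[ D_2(z) = \left( \frac{z - i + \mathcal{O}(1/z)}{z + i + \mathcal{O}(1/z)} \right)^{\nu/4} = 1 - \frac{i \nu}{2 z} + \mathcal{O}(z^{-2}), \qquad z \to \infty, \]
so that in particular $D_2(\infty) = 1$, which is consistent with the normalization $N(z) = I + \mathcal{O}(1/z)$ in the RH problem \ref{RHforNgeneral} and with the formula \eqref{solutionNgeneral} below.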
It is not difficult to check that $z \mapsto w = D_2(z)$ is the conformal mapping from $\mathbb C \setminus [-1,1]$ onto the sector $- \frac{\nu \pi}{4} < \arg w < \frac{\nu\pi}{4}$ that maps $z=0$ to $w=0$ when approached from the upper half plane, $z=0$ to $w=\infty$ when approached from the lower half plane, $z = \pm 1$ to $w = e^{\mp \frac{\nu \pi i}{4}}$ and $z=\infty$ to $w=1$. The Szeg\H{o} function $D_2$ is related to the function $\psi$ from \eqref{complexpsi}. \begin{lemma} We have \begin{equation} \label{D2andpsi} \log D_2(z) = \begin{cases} - \frac{\nu \pi}{2} \psi(z) - \frac{\nu \pi i}{4}, & \Re z > 0, \, \Im z > 0, \\ \frac{\nu \pi}{2} \psi(z) - \frac{\nu \pi i}{4}, & \Re z > 0, \, \Im z <0, \\ - \frac{\nu \pi}{2} \psi(z) + \frac{\nu \pi i}{4}, & \Re z < 0, \, \Im z > 0, \\ \frac{\nu \pi}{2} \psi(z) + \frac{\nu \pi i}{4}, & \Re z < 0, \, \Im z < 0. \end{cases} \end{equation} \end{lemma} \begin{proof} This follows from \eqref{complexpsi} and \eqref{eq:D2(z)} by a straightforward calculation. \end{proof} It follows from \eqref{D2andpsi} that $D_2$ satisfies \begin{equation} \label{D2jump} D_{2+}(x) D_{2-}(x) = \begin{cases} e^{\nu\pi i/2}, & \quad x \in (-1,0), \\ e^{-\nu\pi i/2}, & \quad x \in (0,1), \end{cases} \end{equation} and, since $\psi(z) \sim \frac{1}{\pi} \log (1/z)$ as $z \to 0$, \begin{equation} \label{D2at0} D_{2}(z) = \begin{cases} \mathcal{O}(z^{\nu/2}) & \text{ as } z \to 0 \text{ with } \Im z > 0, \\ \mathcal{O}(z^{-\nu/2}) & \text{ as } z \to 0 \text{ with } \Im z < 0. \end{cases} \end{equation} Having $D_1$ and $D_2$ we seek $N$ in the form \begin{equation} \label{solutionNgeneral} N(z) = D_{\infty}^{\sigma_3} N_0(z) \left(D_1(z) D_2(z) \right)^{-\sigma_3}. \end{equation} Then $N$ satisfies the RH problem \ref{RHforNgeneral} if and only if $N_0$ satisfies the following standard RH problem: \begin{rhp}\label{RHforN0general} \begin{itemize} \item[1)] $N_0$ is analytic in $\mathbb{C}\setminus [-1,1]$. \item[2)] $N_0$ satisfies the jump conditions \[ N_{0+}(x)= N_{0-}(x)\begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix}, \qquad x\in(-1,1). \] \item[3)] $N_0(z) = I + \mathcal{O}(1/z)$ as $z\rightarrow\infty$. \end{itemize} \end{rhp} The RH problem for $N_0$ has the explicit solution (see for instance \cite[Section 7.3]{Deift}): \begin{equation} \label{eq:N00} N_0(z)=\begin{pmatrix} \frac{\beta(z)+\beta(z)^{-1}}{2} & \frac{\beta(z)-\beta(z)^{-1}}{2i}\\ -\frac{\beta(z)-\beta(z)^{-1}}{2i} & \frac{\beta(z)+\beta(z)^{-1}}{2} \end{pmatrix}, \quad \text{ with } \beta(z)=\left(\frac{z-1}{z+1}\right)^{1/4}, \end{equation} for $z \in \mathbb C \setminus [-1,1]$, and we take the branch of the fourth root that is analytic in $\mathbb C \setminus [-1,1]$ and that is real and positive for $z > 1$. Note that we can also write \begin{equation} \label{eq:N0} N_0(z)= \frac{1}{\sqrt{2}(z^2-1)^{1/4}} \begin{pmatrix} f(z)^{1/2} & i f(z)^{-1/2} \\ -i f(z)^{-1/2} & f(z)^{1/2} \end{pmatrix} \end{equation} where \begin{equation}\label{fz} f(z) = z + (z^2-1)^{1/2} \end{equation} is the conformal map from $\mathbb C \setminus [-1,1]$ to the exterior of the unit disk. \subsection{Fifth transformation} Around the endpoints $z=\pm 1$ we build Airy parametrices $P_{\Ai}$ in the usual way. We take $\delta > 0$ sufficiently small, and $P_{\Ai}$ is defined and analytic in $D(\pm 1, \delta) \setminus \Sigma_S$ such that it has the same jumps as $S$ on $\Sigma_S \cap D(\pm 1, \delta)$, and such that \begin{equation} \label{matching} P_{\Ai}(z) = N(z) (I + \mathcal{O}(n^{-1})), \quad \text{uniformly for } |z \pm 1| = \delta, \end{equation} as $n \to \infty$.
We refer the reader for instance to the monograph by Deift \cite[\S 7.6]{Deift} for details. In the fifth transformation we put \begin{equation} \label{Q} Q = \begin{cases} SN^{-1}, & \text{ outside the disks $D(\pm 1, \delta)$,} \\ S P_{\Ai}^{-1}, & \text{ inside the disks.} \end{cases} \end{equation} Then $Q$ is defined and analytic outside of a contour consisting of $\Sigma_S$ and two circles around $\pm 1$. The construction of the Airy parametrix is such that it has the same jump as $S$ inside the circles. As a result $Q$ is analytic inside the two disks. Also $S$ and $N$ have the same jump on $(-1,1)$ and it follows that $Q$ is analytic across $(-1,1)$. Therefore $Q$ is analytic in $\mathbb C \setminus \Sigma_Q$ where $\Sigma_Q$ consists of two circles around $\pm 1$, the parts of $(-\infty, -1)$, $\Sigma_j$, $j=1,\ldots, 4$ and $(1, \infty)$ outside of these circles, and the segment $(-i \rho, i \rho)$ on the imaginary axis. See Figure \ref{figQ}. \begin{figure} \centerline{\includegraphics{SigmaQ.pdf}} \caption{Contour $\Sigma_Q$} \label{figQ} \end{figure} From the RH problem \ref{RHforS} for $S$ and \eqref{Q} it then follows that $Q$ solves the following RH problem. \begin{rhp}\label{RHforQ} \begin{itemize} \item[1)] $Q : \mathbb C \setminus \Sigma_Q \to \mathbb C^{2 \times 2}$ is analytic. \item[2)] $Q$ satisfies the jump condition $Q_+ = Q_- J_Q$ on $\Sigma_Q$ where \begin{align*} J_Q(z) = \begin{cases} N(z) P_{\Ai}^{-1}(z), & \text{ for $z$ on the circles}, \\ N(z) \begin{pmatrix} 1 & 0\\ j_1(z) & 1 \end{pmatrix} N^{-1}(z) & \text{ for } z \in (0, i \rho),\\ N(z) \begin{pmatrix} 1 & 0\\ j_2(z) & 1 \end{pmatrix} N^{-1}(z) & \text{ for } z \in (-i \rho, 0), \\ N(z) J_S(z) N(z)^{-1}, & \text{ elsewhere on $\Sigma_Q$.} \end{cases} \end{align*} Here $j_1$ and $j_2$ are given by \eqref{eq:definition-j1} and \eqref{eq:definition-j2}. \item[3)] As $z\rightarrow\infty$, \begin{equation*} Q(z)=I+\mathcal{O}(1/z). \end{equation*} \item[4)] $Q(z) = \mathcal{O}(1)$ as $z \to 0$. \end{itemize} \end{rhp} In the behavior around $0$ there is no longer a distinction between the upper and lower half planes, and $Q$ remains bounded in all directions. We note that \begin{equation} \label{JQasymp1} J_Q(z) = I + \mathcal{O}(n^{-1}), \qquad \text{ for $z$ on the circles} \end{equation} because of the matching property \eqref{matching}. We also note that \begin{equation} \label{JQasymp2} J_Q(z) = I + \mathcal{O}(e^{-cn}), \qquad \text{ on } \Sigma_Q \setminus (\partial D(\pm 1, \delta) \cup [-i \rho, i\rho] ) \end{equation} because of \eqref{JSasymp}, \eqref{solutionNgeneral}, and Lemma \ref{lem:D1nlimit}. The jump matrix $J_Q$ on the imaginary axis can be rewritten as (we use \eqref{solutionNgeneral}): \begin{align} \label{jumpQ1} J_Q(z) = D_{\infty}^{\sigma_3} N_0(z) \begin{pmatrix} 1 & 0 \\ j_{1,2}(z) (D_1(z) D_2(z))^2 & 1 \end{pmatrix} N_0^{-1}(z) D_{\infty}^{-\sigma_3}, \qquad z \in (-i \rho, i \rho), \end{align} with $j_1$ on $(0, i\rho)$, and $j_2$ on $(-i \rho,0)$. The entry $j_{1,2}(z) (D_1(z) D_2(z))^2$ in \eqref{jumpQ1} depends on $n$, and tends to $0$ as $n \to\infty$ for every $z \in (-i \rho, 0) \cup (0, i \rho)$, but not in a uniform way. Hence, further analysis is needed in the next section. A similar situation is studied in \cite[Section 5]{BB}, where the jump on the imaginary axis has the same structure and approaches the identity matrix at a rate $1/\log(n)$ as $n\to \infty$. In that case no local parametrix near the origin is needed. 
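In our situation such a local parametrix will be needed. Before constructing it, we recall informally the small norm mechanism that the resulting estimates feed into (a standard sketch only; see e.g. \cite[Chapter 7]{Deift} for precise statements). If $R$ is analytic off a contour $\Sigma$, $R(z) \to I$ as $z \to \infty$, and $R_+ = R_-(I + \Delta)$ on $\Sigma$, then
\[ R(z) = I + \frac{1}{2 \pi i} \int_{\Sigma} \frac{R_-(s) \Delta(s)}{s - z} \, ds, \]
and if $\Delta$ is small in $L^2(\Sigma) \cap L^{\infty}(\Sigma)$, a Neumann series argument shows that $R_- - I$ is small in $L^2(\Sigma)$, whence $R(z) = I + \mathcal{O}\left( \| \Delta \|_{L^2 \cap L^{\infty}} \right)$ uniformly for $z$ at a positive distance from $\Sigma$. In view of \eqref{JQasymp1} and \eqref{JQasymp2}, the missing ingredient is a local parametrix that deals with the non-uniformity of the jump \eqref{jumpQ1} near the origin, and this is what we construct next.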
\subsection{Local parametrix near $z=0$} The construction of a local parametrix in a neighborhood of the origin follows the idea presented in \cite{KMcL}. We take $\varepsilon > 0$ with \[ \varepsilon < \min \left( \tfrac{1}{2 e}, \tfrac{\rho}{3} \right) \] and we build a local parametrix $P$ defined in a neighborhood $|z| < 3\varepsilon$ of $0$. We use a cut-off function $\chi(z)$ on $i \mathbb R$ such that \begin{enumerate} \item[(a)] $\chi : i \mathbb R \to \mathbb R$ is a $C^{\infty}$ function, \item[(b)] $0 \leq \chi(z) \leq 1$ for all $z \in i \mathbb R$, \item[(c)] $\chi(z) \equiv 1$ for $z \in (-i \varepsilon, i \varepsilon)$, \item[(d)] $\chi(z) \equiv 0$ for $z \in \left(-i \infty, -2i\varepsilon\right) \cup \left(2i\varepsilon, i \infty\right)$. \end{enumerate} Then we modify $J_Q$ by multiplying the off-diagonal entry in the middle factor of \eqref{jumpQ1} by $\chi(z)$, and in addition we use this as a jump matrix on the full imaginary axis. Thus \begin{equation} \label{jump:P} J_{P}(z) = D_{\infty}^{\sigma_3} N_0(z) \begin{pmatrix} 1 & 0 \\ j_{1,2}(z) (D_1(z) D_2(z))^2 \chi(z) & 1 \end{pmatrix} N_0^{-1}(z) D_{\infty}^{-\sigma_3}, \qquad z \in i \mathbb R, \end{equation} with $j_1$ on $i \mathbb R^+$ and $j_2$ on $i \mathbb R^-$. Then the RH problem for the local parametrix $P$ at the origin is: \begin{rhp} \label{RHforP} \begin{itemize} \item[\rm 1)] $P : \{ z\in \mathbb C \mid -1 < \Re z < 1 \} \setminus i \mathbb R \to \mathbb C^{2 \times 2}$ is analytic. \item[\rm 2)] $P$ satisfies the jump condition \begin{equation} \label{jumpP} P_+(z)= P_-(z) J_P(z), \quad z\in i \mathbb R, \end{equation} where $J_P(z)$ is given by \eqref{jump:P}. \item[\rm 3)] $P(z) = I + \mathcal{O} \left( \epsilon_n \right)$ as $n \to \infty$ uniformly for $|z| = 3 \varepsilon$, with $\epsilon_n$ given by \eqref{epsilonn}. \end{itemize} \end{rhp} \begin{proposition} \label{propo8} The RH problem \ref{RHforP} has a solution for $n$ large enough. \end{proposition} The rest of this subsection is devoted to the proof of Proposition \ref{propo8}. It takes a number of steps and it is the most technical part of the paper. \subsubsection{RH problem for $\widehat P$} We introduce a matrix $\widehat{P}(z)$ in the following way: \begin{equation} \label{Phat} P(z) = \begin{cases} D_{\infty}^{\sigma_3} N_0(z) \widehat P(z) N_0(z)^{-1} D_{\infty}^{-\sigma_3}, & \text{for } \Im z < 0, \\[5pt] D_{\infty}^{\sigma_3} N_0(z) \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \widehat P(z) \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} N_0(z)^{-1} D_{\infty}^{-\sigma_3}, & \text{for } \Im z > 0. \end{cases} \end{equation} The extra factors in \eqref{Phat} for $\Im z > 0$ are introduced in order to compensate for the jumps of $N_0$ on $[-1,1]$. Then $P$ satisfies the jump condition \eqref{jumpP} in the RH problem \ref{RHforP} if and only if $\widehat P_+ = \widehat P_- J_{\widehat P}$, where the jump is \begin{equation} \label{jump:Phat} J_{\widehat P}(z) = \begin{cases} \begin{pmatrix} 1 & - j_1(z) (D_1(z) D_2(z))^2 \chi(z) \\ 0 & 1 \end{pmatrix}, & \text{ for } z \in i \mathbb R^+, \\[5mm] \begin{pmatrix} 1 & 0 \\ j_2(z) (D_1(z) D_2(z))^2 \chi(z) & 1 \end{pmatrix}, & \text{ for } z \in i \mathbb R^-. \end{cases} \end{equation} Note the difference in the triangularity structure between the two cases. Thus we look for $\widehat P$ that solves the following RH problem: \begin{rhp} \label{RHforPhat} \begin{itemize} \item[1)] $\widehat P : \mathbb C \setminus i \mathbb R \to \mathbb C^{2 \times 2}$ is analytic.
\item[2)] $\widehat P$ satisfies the jump conditions \begin{equation} \label{jumpcondition:Phat} \widehat P_+(z)= \widehat P_-(z) J_{\widehat P}(z), \quad z\in i \mathbb R, \end{equation} where $J_{\widehat P}(z)$ is given by \eqref{jump:Phat}. \item[3)] $\widehat P(z) = I + \mathcal{O}(1/z)$ as $z \to \infty$. \end{itemize} \end{rhp} Our aim is to show that the RH problem for $\widehat P$ has a solution for $n$ sufficiently large, and that this solution satisfies in addition \begin{itemize} \item[4)] $\widehat P(z) = I + \mathcal{O} \left( \epsilon_n \right)$ as $n \to \infty$, uniformly for $|z| = 3 \varepsilon$. \end{itemize} Having $\widehat P$ we define $P$ by \eqref{Phat} in terms of $\widehat P$, and it will satisfy the requirements of the RH problem \ref{RHforP}. We prove the following result: \begin{lemma}\label{lem:Phat} If $0<\nu \leq 1/2$, then for $n$ large enough there exists $\widehat{P}(z)$ that solves the RH problem \ref{RHforPhat}, and as $n\to\infty$, \begin{equation} |\widehat{P}_{11}(z) - 1| = \mathcal{O} \left( n^{-1/2} (\log n)^{-2\nu-1/2}\right), \quad |\widehat{P}_{21}(z) | = \mathcal{O} \left( n^{\nu-1/2} (\log n)^{-\nu-1/2} \right), \nonumber \end{equation} in $\mathbb{C}\setminus[-2i\varepsilon,0]$, and \begin{equation} |\widehat{P}_{12}(z)| = \mathcal{O} \left( n^{-\nu-1/2}(\log n)^{-\nu-1/2} \right), \qquad |\widehat{P}_{22}(z) - 1| = \mathcal{O} \left( n^{-1/2} (\log n)^{-2\nu-1/2}\right), \nonumber \end{equation} in $\mathbb{C}\setminus[0,2i\varepsilon]$. \end{lemma} \begin{remark} It follows from Lemma \ref{lem:Phat} that $\widehat P(z) = I + \mathcal{O} \left( \epsilon_n \right)$ as $n \to \infty$, uniformly for $|z| = 3 \varepsilon$, and because of \eqref{Phat}, the same holds for $P(z)$. \end{remark} The proof of this lemma consists of the following steps: \begin{enumerate} \item We write the jump conditions for $\widehat{P}(z)$ componentwise, and in terms of two integral operators $K_1$ and $K_2$. \item We estimate the operator norms $\|K_1\|$ and $\|K_2\|$ as $n\to\infty$. This requires estimates for the functions $j_1(z)$, $j_2(z)$, $D_1(z)$ and $D_2(z)$, which are uniform as $n\to\infty$ for $z = iy$ with $y$ in a fixed interval around the origin. \item We show that the operators $I-K_2K_1$ and $I-K_1K_2$ are invertible for $n$ large enough, and this gives the existence and asymptotics of $\widehat{P}$. \end{enumerate} Finally, the estimates for $\widehat{P}(z)$ are used to prove that the matrix $R(z)$, which will be defined in Section \ref{finaltrans} and which solves the Riemann--Hilbert problem \ref{RHforR}, is close to the identity matrix as $n\to\infty$. \subsubsection{Integral operators} Let us write \begin{equation} \begin{aligned} \label{eta12z} \eta_1(z) & = - j_1(z) (D_1(z) D_2(z))^2 \chi(z), & z \in i \mathbb R^+,\\ \eta_2(z) & = j_2(z) (D_1(z) D_2(z))^2 \chi(z), & z \in i \mathbb R^-. \end{aligned} \end{equation} These functions depend on $n$, since $j_1$, $j_2$ and $D_1$ depend on $n$. Note, however, that $D_2$ and $\chi$ do not depend on $n$.
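In the derivation below we will repeatedly use the following standard consequence of the Sokhotski--Plemelj formula, stated here for convenience: if $F$ is analytic in $\mathbb C \setminus \gamma$ for a bounded contour $\gamma$, $F(z) \to 0$ as $z \to \infty$, and $F_+ - F_- = h$ on $\gamma$ with $h \in L^2(\gamma)$, then $F$ is recovered as the Cauchy transform of its jump,
\[ F(z) = \frac{1}{2\pi i} \int_{\gamma} \frac{h(s)}{s - z}\, ds, \qquad z \in \mathbb C \setminus \gamma. \]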
The jump condition \eqref{jump:Phat}-\eqref{jumpcondition:Phat} yields that for $j=1,2$, \begin{equation}\label{jumps_entries_Phat} \begin{aligned} \widehat{P}_{j1+}(z) &= \begin{cases} \widehat{P}_{j1-}(z), & \text{ for } z \in i \mathbb R^+, \\ \widehat{P}_{j1-}(z) + \eta_2(z) \widehat{P}_{j2-}(z), & \text{ for } z \in i \mathbb R^-, \end{cases}\\ \widehat{P}_{j2+}(z) &= \begin{cases} \widehat{P}_{j2-}(z) + \eta_1(z) \widehat{P}_{j1-}(z), & \text{ for } z \in i \mathbb R^+, \\ \widehat{P}_{j2-}(z), & \text{ for } z \in i \mathbb R^-. \end{cases} \end{aligned} \end{equation} Since $\chi(z) = 0$ for $|z| \geq 2 \varepsilon$, we find that $\widehat{P}_{j1}$ is analytic in $\mathbb C \setminus [-2i \varepsilon, 0] $, and $\widehat{P}_{j2}$ is analytic in $\mathbb C \setminus [0,2i\varepsilon]$. Then by the Sokhotski-Plemelj formula and the asymptotic condition $\widehat P(z) \to I$ as $z \to \infty$, we get \begin{equation}\label{hatP11P12} \begin{aligned} \widehat{P}_{11}(z) & = 1 + \frac{1}{2\pi i} \int_{-2i \varepsilon}^0 \frac{\eta_2(s) \widehat{P}_{12}(s)}{s-z} ds, & \widehat{P}_{12}(z) & = \frac{1}{2\pi i} \int_0^{2i \varepsilon} \frac{\eta_1(s) \widehat{P}_{11}(s)}{s-z} ds, \\ \widehat{P}_{21}(z) & = \frac{1}{2\pi i} \int_{-2i \varepsilon}^0 \frac{\eta_2(s) \widehat{P}_{22}(s)}{s-z} ds, & \widehat{P}_{22}(z) & = 1 + \frac{1}{2\pi i} \int_0^{2i \varepsilon} \frac{\eta_1(s) \widehat{P}_{21}(s)}{s-z} ds. \end{aligned} \end{equation} We can write the equations in operator form if we introduce two operators \[ K_1 : L^2([0,2i \varepsilon]) \to L^2([-2i \varepsilon,0]) \qquad \text{ and } \qquad K_2 : L^2([-2i \varepsilon,0]) \to L^2([0,2i \varepsilon]) \] by \begin{align} \label{K12} (K_1 f)(z) & = \frac{1}{2\pi i} \int_0^{2i \varepsilon} \frac{\eta_1(s) f(s)}{s-z} ds, \qquad f \in L^2([0,2i \varepsilon]), \\ (K_2 g)(z) & = \frac{1}{2\pi i} \int_{-2i \varepsilon}^0 \frac{\eta_2(s) g(s)}{s-z} ds, \qquad g \in L^2([-2i \varepsilon,0]). \end{align} Then $f_1 = \widehat{P}_{11}$, $g_1 = \widehat{P}_{12}$ should solve \begin{equation} \label{eq:formulas-f1-g1} f_1 = 1 + K_2 g_1, \quad g_1 = K_1 f_1 \end{equation} and $f_2 = \widehat{P}_{21}$, $g_2 = \widehat{P}_{22}$ should solve \begin{equation} \label{eq:formulas-f2-g2} f_2= K_2 g_2, \quad g_2= 1 + K_1 f_2. \end{equation} Both $K_1$ and $K_2$ are integral operators with square integrable kernels, hence Hilbert--Schmidt operators between Hilbert spaces, and their operator norms satisfy \begin{align*} \| K_1 \|^2 \leq \int_{-2i \varepsilon}^0 \int_0^{2i \varepsilon} \frac{|\eta_1(s)|^2}{|s-t|^2} |ds| |dt|, \\ \| K_2 \|^2 \leq \int_0^{2i \varepsilon} \int_{-2i \varepsilon}^0 \frac{|\eta_2(s)|^2}{|s-t|^2} |ds| |dt|. \end{align*} The $t$-integrals can be done explicitly. This leads to the estimates (we also change to a real integration variable by putting $s = \pm iy$) \begin{equation} \label{eq:norms-K1-K2} \| K_1 \| \leq \left( \int_0^{2\varepsilon} \frac{|\eta_1(iy)|^2}{y} dy \right)^{1/2}, \quad \| K_2 \| \leq \left( \int_0^{2\varepsilon} \frac{|\eta_2(-iy)|^2}{y} dy \right)^{1/2}. \end{equation} The next step is to show that both integrals are finite (so that $K_1$ and $K_2$ are well-defined bounded operators) and that $\| K_1 K_2 \|$ and $\| K_2 K_1 \|$ tend to $0$ as $n \to \infty$. To this end, we need to control the functions $\eta_1$ and $\eta_2$, defined in \eqref{eta12z}. \subsubsection{The functions $\eta_1(z)$ and $\eta_2(z)$} The functions $\eta_1$ and $\eta_2$ are defined in terms of $j_1$, $j_2$, $D_1$ and $D_2$, see \eqref{eta12z}. In this subsection we obtain estimates for all of these functions for large $n$.
First we write the functions $j_1(z)$ and $j_2(z)$ in terms of Bessel functions. Because of the property $K_{\nu}(\overline{z})=\overline{K_{\nu}(z)}$ for real $\nu$, see \cite[\S 10.34.7]{DLMF}, if we consider the positive imaginary axis and we write $z=iy$, with $y>0$, then the function $W_n$ (recall \eqref{analyticW}) can be written as \begin{equation} \label{eq:Wpm-imaginary} W_{n,\pm}(iy)= \sqrt{2n} K_{\nu}(\mp n\pi iy)e^{\mp n\pi iy}, \end{equation} so $W_{n,+}(iy)=\overline{W_{n,-}(iy)}$. Similarly, on the negative imaginary axis, \begin{equation} \label{eq:Wpm-imaginary2} W_{n,\pm}(-iy)= \sqrt{2n} K_{\nu}(\pm n\pi iy)e^{\mp n\pi iy}, \end{equation} so again $W_{n,+}(-iy)=\overline{W_{n,-}(-iy)}$. Additionally, we have \begin{equation}\label{WHankel} \begin{aligned} |W_{n,-}(iy)|^2 &= 2n |K_{\nu}(n\pi i y)|^2=\frac{n \pi^2}{2}|H^{(2)}_{\nu}(n\pi y)|^2= \frac{n \pi^2}{2}\left[J_{\nu}(n\pi y)^2+Y_{\nu}(n\pi y)^2\right],\\ |W_{n,-}(-iy)|^2 &= 2n |K_{\nu}(-n\pi i y)|^2=\frac{n\pi^2}{2}|H^{(1)}_{\nu}(n\pi y)|^2= \frac{n \pi^2}{2}\left[J_{\nu}(n\pi y)^2+Y_{\nu}(n\pi y)^2\right], \end{aligned} \end{equation} in terms of Hankel functions, see \cite[\S 10.27.8]{DLMF}. We have the following auxiliary result: \begin{lemma} \label{lem:j1_j2_Bessel} For $y>0$, the functions $j_1(iy)$ and $j_2(-iy)$ can be written as follows: \begin{equation*} \begin{aligned} |j_1(iy)|&=\frac{2e^{-2n\Re \varphi_-(iy)}}{\sqrt{2n} \pi} \frac{|J_{\nu}(n\pi y)\cos\nu\pi-Y_{\nu}(n\pi y)\sin \nu\pi|}{J^2_{\nu}(n\pi y)+Y^2_{\nu}(n\pi y)},\\ |j_2(-iy)|&=\frac{2e^{-2n\Re \varphi_-(-iy)}}{\sqrt{2n} \pi}\frac{|J_{\nu}(n\pi y)|}{J^2_{\nu}(n\pi y)+Y^2_{\nu}(n\pi y)}. \end{aligned} \end{equation*} \end{lemma} \begin{proof} It follows from \eqref{eq:definition-j1} that $j_1$ can be written as \begin{equation*} j_1(iy) =\frac{e^{-2n\varphi_-(iy)-n\pi iy}}{W_{n,-}(iy)W_{n,+}(iy)} \left[e^{\frac{\nu\pi i}{2}+n\pi iy}W_{n,+}(iy) -e^{-\frac{\nu\pi i}{2}-n\pi iy}W_{n,-}(iy)\right], \end{equation*} and because of $\varphi_-(z)=\varphi_+(z)-\pi z$ on the imaginary axis, and the fact that $W_{n,+}(iy)=\overline{W_{n,-}(iy)}$, the two terms on the right hand side are complex conjugates, so \begin{equation}\label{j1Im} j_1(iy)=\frac{-2i e^{-2n\varphi_-(iy)-n\pi iy}}{|W_{n,-}(iy)|^2} \Im \left[e^{-\frac{\nu\pi i}{2}-n\pi iy}W_{n,-}(iy)\right]. \end{equation} Using the formula \begin{equation*} K_{\nu}(z)=-\frac{\pi i}{2}e^{-\frac{\nu\pi i}{2}} H_{\nu}^{(2)}(ze^{-\frac{\pi i}{2}}), \qquad -\frac{\pi}{2}< \arg z\leq \pi, \end{equation*} in terms of Hankel functions, see \cite[\S 10.27.8]{DLMF} and \eqref{eq:Wpm-imaginary} we observe that \begin{equation*} e^{-\frac{\nu\pi i}{2}-n\pi iy}W_{n,-}(iy)=e^{-\frac{\nu\pi i}{2}} \sqrt{2n} K_{\nu}(n\pi iy) =-\frac{\sqrt{2n} \pi i\, e^{-\nu\pi i}}{2}\left(J_{\nu}(n\pi y)-iY_{\nu}(n\pi y)\right). \end{equation*} Hence, on the positive imaginary axis, \begin{equation*} \Im \left[e^{-\frac{\nu\pi i}{2}-n\pi iy}W_{n,-}(iy)\right] =-\frac{\sqrt{2n}\pi}{2}(J_{\nu}(n\pi y)\cos\nu\pi-Y_{\nu}(n\pi y)\sin \nu\pi). \end{equation*} Using \eqref{j1Im} and \eqref{WHankel}, this proves the first formula. Similarly, for $y>0$, \begin{equation}\label{j2Im} j_2(-iy)= \frac{2i e^{-2n\varphi_-(-iy)-n\pi iy}}{|W_{n,-}(-iy)|^2} \Im \left[e^{-\frac{\nu\pi i}{2}-n\pi iy}W_{n,-}(-iy)\right]. 
\end{equation} In this case, we use \begin{equation*} K_{\nu}(z)=\frac{\pi i}{2}e^{\frac{\nu\pi i}{2}} H_{\nu}^{(1)}(ze^{\frac{\pi i}{2}}), \qquad -\pi<\arg z\leq \frac{\pi}{2}, \end{equation*} see \cite[\S 10.27.8]{DLMF}, and \eqref{eq:Wpm-imaginary2} to obtain \begin{equation*} e^{-\frac{\nu\pi i}{2}-n\pi iy}W_{n,-}(-iy)=e^{-\nu\pi i/2} \sqrt{2n} K_{\nu}(-n\pi iy) =\frac{\sqrt{2n} \pi i}{2}\left(J_{\nu}(n\pi y)+iY_{\nu}(n\pi y)\right), \end{equation*} so \begin{equation*} \Im \left[e^{\frac{-\nu\pi i}{2}-n\pi iy}W_{n,-}(-iy)\right] =\frac{\sqrt{2n} \pi}{2}J_{\nu}(n\pi y). \end{equation*} We use \eqref{j2Im} and \eqref{WHankel}, and this completes the proof. \end{proof} Next, we obtain estimates for the functions $j_1$ and $j_2$ that are valid for large $n$. \begin{lemma}\label{lem:asymptotic_j1_j2} For $0<\nu\leq 1/2$ there exist constants $C_\nu, C'_\nu>0$ such that for all $s > 0$ we have \begin{equation*} \begin{aligned} \frac{|J_{\nu}(s)\cos\nu\pi-Y_{\nu}(s)\sin \nu\pi|}{J_{\nu}(s)^2+Y_{\nu}(s)^2} &\leq C_{\nu}\, \frac{s^{\nu}(1+s^{1-2\nu})}{1+s^{1/2-\nu}},\\ \frac{|J_{\nu}(s)|}{J_{\nu}(s)^2+Y_{\nu}(s)^2}&\leq C'_{\nu}\, \frac{s^{3\nu}(1+s^{1-2\nu})}{1+s^{1/2+\nu}}. \end{aligned} \end{equation*} \end{lemma} \begin{proof} For the proof, we consider the following expansions: as $s\to 0^+$, \begin{equation}\label{asympJ0} J_{\nu}(s)=\frac{s^{\nu}}{2^{\nu}\Gamma(\nu+1)}\left(1+\mathcal{O}\left(s^{2}\right)\right), \quad \nu\neq -1,-2,\ldots \end{equation} and for $\nu<1$ we have \begin{equation}\label{asympY0} Y_{\nu}(s)=-\frac{\Gamma(\nu)}{\pi}\left(\frac{s}{2}\right)^{-\nu} + \mathcal{O}(s^{\nu}). \end{equation} As $s\to\infty$, we have \begin{equation}\label{asympJYinf} J_{\nu}(s)= \left(\frac{2}{\pi s}\right)^{1/2}\cos\omega \, \left(1+\mathcal{O}\left(s^{-1}\right)\right), \qquad Y_{\nu}(s)= \left(\frac{2}{\pi s}\right)^{1/2}\sin\omega \, \left(1+\mathcal{O}\left(s^{-1}\right)\right), \end{equation} where $\omega=s-\frac{\nu\pi}{2}-\frac{\pi}{4}$. See for instance \cite[formulas 10.7.3--4, 10.17.3--4]{DLMF}. From this, it follows that \begin{equation}\label{asymp:Mnu} \begin{aligned} J_{\nu}(s)^2+Y_{\nu}(s)^2 & = \frac{\Gamma(\nu)^2}{\pi^2}\left(\frac{s}{2}\right)^{-2\nu} + \mathcal{O}(1), & s\to 0,\\ J_{\nu}(s)^2+Y_{\nu}(s)^2 & = \frac{2}{\pi s}+\mathcal{O}\left(s^{-2}\right), & s\to \infty. \end{aligned} \end{equation} From \eqref{asymp:Mnu}, we claim that there exist two constants $C_{1,\nu},C_{2,\nu}>0$ such that \begin{equation*} C_{1,\nu}\,\frac{s^{-2\nu}}{1+s^{1-2\nu}}\leq J_{\nu}(s)^2+Y_{\nu}(s)^2 \leq C_{2,\nu}\,\frac{s^{-2\nu}}{1+s^{1-2\nu}}, \qquad s>0. \end{equation*} Using a similar argument, we have \begin{equation*} |J_{\nu}(s)|\leq C_{3,\nu}\,\frac{s^{\nu}}{1+s^{1/2+\nu}}, \end{equation*} and also \begin{equation*} |J_{\nu}(s)\cos\nu\pi-Y_{\nu}(s)\sin \nu\pi|\leq C_{4,\nu}\,\frac{s^{-\nu}}{1+s^{1/2-\nu}}, \end{equation*} and putting all the estimates together we get the bounds in the lemma. \end{proof} As a consequence of Lemma \ref{lem:j1_j2_Bessel} and Lemma \ref{lem:asymptotic_j1_j2} we obtain the following bounds for $j_1$ and $j_2$ for $y>0$: \begin{equation} \label{eq:formulas-j1-j2} \begin{aligned} |j_1(iy)|&\leq C_{\nu}\, \frac{2e^{-2n\Re \varphi_-(iy)}}{\sqrt{2n} \pi} \frac{(n\pi y)^{\nu}(1+(n\pi y)^{1-2\nu})}{1+(n\pi y)^{1/2-\nu}},\\ |j_2(-iy)|&\leq C'_{\nu}\, \frac{2e^{-2n\Re \varphi_-(-iy)}}{\sqrt{2n} \pi} \frac{(n\pi y)^{3\nu}(1+(n\pi y)^{1-2\nu})}{1+(n\pi y)^{1/2+\nu}}.
\end{aligned} \end{equation} Next, we need an estimate for $D_1(z)$ (see formula \eqref{eq:D1(z)}), with $z = iy$, $y \in [-\rho, \rho]$, where we recall that $\pm i \rho$ is the intersection of the lens with the imaginary axis. \begin{lemma} \label{lem:lemmaD1} For $0<\nu\leq 1/2$, there exists a constant $C_\nu$ such that for all sufficiently large $n$, \begin{equation}\label{boundD1} |D_1(iy)|^2\leq C_\nu\,\frac{ n^{1/2-\nu} |y|^{-\nu}}{1+(n|y|)^{1/2-\nu}}, \qquad y \in [-\rho, \rho]. \end{equation} \end{lemma} \begin{proof} We first write $z=iy$ with $y>0$ in \eqref{eq:D1(z)} and use the parity of the function $W_n$ to get the following expression: \begin{equation}\label{D1y} D_1(iy)=\exp\left(\frac{y(y^2+1)^{1/2}}{2\pi}\int_0^1 \frac{\log W_n(x)}{\sqrt{1-x^2}}\frac{dx}{x^2+y^2}\right). \end{equation} Using the asymptotic expansions \eqref{asympJ0}, \eqref{asympY0} and \eqref{asympJYinf}, we claim that there exist two constants $C_1$ and $C_2$, depending on $\nu$, such that $W_n(x)$ satisfies \begin{equation*} W_n(x)\leq C_1 |x|^{-1/2}, \qquad |n \pi x|\geq 1, \end{equation*} and \begin{equation*} W_n(x)\leq C_2 n^{1/2 - \nu} |x|^{-\nu}, \qquad |n \pi x|\leq 1. \end{equation*} Since $\nu\leq 1/2$, both bounds hold uniformly for $n\pi x>0$. Since the integrand in \eqref{D1y} is a real function, we can bound $D_1(iy)$ from above by another Szeg\H{o} function: $$ D_1(iy)^2\leq D(iy;C_1 |\pi x|^{-1/2})^2=C_1 \pi^{-1/2} D(iy;|x|^{-1/2})^2. $$ This last Szeg\H{o} function is explicit, since for a general exponent $\alpha>-1$ we have \begin{equation} \label{eq:SzegoD} D(z;|x|^{\alpha})=\left(\frac{z}{z+\sqrt{z^2-1}}\right)^{\alpha/2}. \end{equation} As a consequence, substituting $z=iy$ with $y \in [-\rho,\rho]$, and $\alpha=-1/2$, $$ D_1(iy)^2\leq C_1 y^{-1/2}(y+\sqrt{y^2+1})^{1/2}\leq C_1\left(\rho +\sqrt{\rho^2+1}\right)^{1/2} y^{-1/2}, $$ and by the same argument with $\alpha=-\nu$, $$ D_1(iy)^2\leq C_2 n^{1/2-\nu} y^{-\nu}(y+\sqrt{y^2+1})^{\nu}\leq C_2 \left(\rho+\sqrt{\rho^2+1}\right)^{\nu} n^{1/2-\nu} y^{-\nu}. $$ The bound in the lemma follows for $y>0$ from these two estimates, for some constant $C_{\nu}$. Finally, from the definition of $D_1$, see \eqref{eq:D1(z)}, we have that if $y<0$, then $D_1(iy)=\overline{D_1(-iy)}$, so the modulus is equal and the bound holds also in this case. \end{proof} We now combine the estimates obtained above into bounds for the functions $\eta_1$ and $\eta_2$ defined in \eqref{eta12z}. \begin{lemma} \label{lem:bounds-eta1-eta2} For $0<\nu\leq 1/2$, there exist constants $C_{\nu}, C'_{\nu} > 0$ such that for $n$ large enough and $y \in [0,\rho]$, we have the bounds \begin{align} |\eta_1(iy)| &\leq \left|j_1(iy) (D_1(iy) D_2(iy))^2 \right| \leq C_\nu \, y^\nu \, e^{-2n\Re \varphi_-(iy)}, \label{eq:bounds-eta1} \\ |\eta_2(-iy)| &\leq \left|j_2(-iy) (D_1(-iy) D_2(-iy))^2 \right| \label{eq:bounds-eta2} \leq C'_\nu \, (n^{2\nu}y^\nu+ny^{1-\nu}) \, e^{-2n\Re \varphi_-(-iy)}. \end{align} \end{lemma} \begin{proof} We collect the results on $D_1$ (see formula \eqref{boundD1}), $D_2$ (we use the fact that this function does not depend on $n$ and formula \eqref{D2at0}), $j_1$ and $j_2$ (formula \eqref{eq:formulas-j1-j2}). Then for some constant $C_{1,\nu}$ we simplify the bound to $$ |\eta_1(iy)|\leq C_{1,\nu} y^{\nu} \frac{1+(ny)^{1-2\nu}}{(1+(ny)^{1/2-\nu})^2}e^{-2n\Re \varphi_-(iy)} \leq C_{\nu} y^{\nu} e^{-2n\Re \varphi_-(iy)}.
$$ Also, $$ \begin{aligned} |\eta_2(-iy)| & \leq C_{2,\nu} n^{2\nu} y^{\nu} \frac{1+(ny)^{1-2\nu}}{(1+(ny)^{1/2+\nu})(1+(ny)^{1/2-\nu})} e^{-2n\Re \varphi_-(-iy)}\\ &\leq C'_{\nu} n^{2\nu} y^{\nu}(1+(ny)^{1-2\nu}) e^{-2n\Re \varphi_-(-iy)}, \end{aligned} $$ and the result follows. \end{proof} \subsubsection{Estimates for $\|K_1\|$ and $\|K_2\|$ as $n\to\infty$} In order to estimate the norms of $K_1$ and $K_2$ we need the $\|\cdot\|_2$ norms of $\eta_1$ and $\eta_2$, see formula \eqref{eq:norms-K1-K2}. For this we use the estimate in Lemma \ref{lem:bounds-eta1-eta2} and the following bound on $\varphi(z)$: \begin{lemma} For every $s \in i \mathbb R$ we have \begin{equation} \label{estimate:Rephi} \begin{aligned} \Re \varphi_+(s) = \Re \varphi_-(s) & = -|s| \log |s| + |s| \log(1+ \sqrt{1+|s|^2}) + \log(|s| + \sqrt{1+|s|^2}) \\ & \geq |s| \log \frac{1}{|s|}. \end{aligned} \end{equation} \end{lemma} \begin{proof} We consider $\Re \varphi_-(s)$ with $s \in i \mathbb R_+$. The other cases follow by symmetry. Let $x \in (0,1)$. Then by \eqref{gpgm} and \eqref{phig}, \[ \varphi_{\pm}(x)=\pm \pi i\int_x^1 \psi(t)dt, \] and so $\varphi'_+(x) = - \pi i \psi(x)$. By analytic continuation we find \[ \varphi'(z) = -\pi i \psi(z), \qquad \Re z > 0, \, \Im z > 0. \] Then \[ \varphi_-(s) = \varphi_+(x) + \int_x^s \varphi'(z) dz = \varphi_+(x) - \pi i \int_x^s \psi(z) dz. \] Since $\varphi_+(x)$ is purely imaginary, we obtain by taking the real part and letting $x \to 0+$, \[ \Re \varphi_-(s) = \Im \pi \int_0^s \psi(z) dz = \Im \int_0^s \log \left( \frac{1+ (1-z^2)^{1/2}}{z} \right) dz, \] where we used \eqref{complexpsi} for $\psi$. The integral can be evaluated explicitly and it gives \eqref{estimate:Rephi}. \end{proof} Without loss of generality we assume in what follows that $\rho$ is small enough so that $|s|\log\frac{1}{|s|}>0$ for $s\in(-i\rho,i\rho)$, $s \neq 0$. In order to estimate integrals involving the functions $\varphi_{\pm}(z)$, we use \eqref{estimate:Rephi}, together with the following technical lemma. \begin{lemma} \label{lem:bound-integral} For any $\alpha>-1$, there exists a constant $C = C_{\alpha}$ such that for $n$ large enough \begin{equation} \label{bound-integral} \int_0^{1/e} y^{\alpha}e^{-4n y\log\frac{1}{y}}dy\leq C (n\log n)^{-\alpha-1}. \end{equation} \end{lemma} \begin{proof} We split the integral into two parts and we estimate \begin{align} \int_0^{1/e} y^{\alpha}e^{-4n y\log\frac{1}{y}} \, dy &=\int_0^{1/\sqrt{n}} y^{\alpha}e^{-4n y\log\frac{1}{y}} \,dy +\int_{1/\sqrt{n}}^{1/e} y^{\alpha}e^{-4n y\log\frac{1}{y}} \, dy \nonumber \\ &\leq \int_0^{1/\sqrt{n}} y^{\alpha}e^{-2 yn\log n} \, dy +\int_{1/\sqrt{n}}^{1/e} y^{\alpha}e^{-2 \sqrt{n}\log{n}} \, dy, \label{eq:integrals_phi} \end{align} where for the first integral we used that $\log \frac{1}{y} \geq \log \sqrt{n} = \frac{1}{2} \log n$ for $y \leq \frac{1}{\sqrt{n}}$, and for the second integral that $y \log \frac{1}{y}$ is increasing on $[0, \frac{1}{e}]$, so that $4ny \log \frac{1}{y} \geq 2 \sqrt{n} \log n$ for $y \in [\frac{1}{\sqrt{n}}, \frac{1}{e}]$. The first integral of \eqref{eq:integrals_phi} is estimated by extending the integral to $+\infty$ and the result is that it is $\mathcal{O}((n\log n)^{-\alpha-1})$ as $n \to \infty$. The second integral in \eqref{eq:integrals_phi} is $\mathcal{O}(e^{-c \sqrt{n}})$ as $n \to \infty$. This gives the result.
\end{proof} Combining the estimates in \eqref{eq:bounds-eta1}, \eqref{eq:bounds-eta2}, \eqref{estimate:Rephi} and \eqref{bound-integral} we obtain, whenever $2 \varepsilon < \frac{1}{e}$, \begin{equation}\label{estimates_eta1} \begin{aligned} \int_0^{2\varepsilon} |\eta_1(iy)|^2 dy&=\mathcal{O}(n^{-2\nu-1}(\log n)^{-2\nu-1}),\quad \int_0^{2\varepsilon} \frac{|\eta_1(iy)|^2}{y} dy&=\mathcal{O}(n^{-2\nu}(\log n)^{-2\nu}), \end{aligned} \end{equation} and \begin{equation}\label{estimates_eta2} \begin{aligned} \int_0^{2\varepsilon} |\eta_2(-iy)|^2 dy&=\mathcal{O}(n^{2\nu-1}(\log n)^{-2\nu-1}),\quad \int_0^{2\varepsilon} \frac{|\eta_2(-iy)|^2}{y} dy&=\mathcal{O} (n^{2\nu}(\log n)^{-2\nu}), \end{aligned} \end{equation} as $n \to \infty$. To obtain \eqref{estimates_eta2} one has to consider the three different integrals coming from the square of the factor $n^{2\nu}y^\nu+ny^{1-\nu}$ in \eqref{eq:bounds-eta2}, and retain the largest one. Hence, using \eqref{eq:norms-K1-K2} and \eqref{estimates_eta1}-\eqref{estimates_eta2} we have the bounds \begin{equation}\label{normsK1K2} \begin{aligned} \|K_1\|&\leq \left(\int_0^{2\varepsilon} \frac{|\eta_1(iy)|^2}{y} dy\right)^{1/2}=\mathcal{O}(n^{-\nu}(\log n)^{-\nu}),\\ \|K_2\|&\leq \left(\int_0^{2\varepsilon} \frac{|\eta_2(-iy)|^2}{y} dy\right)^{1/2}=\mathcal{O}(n^{\nu}(\log n)^{-\nu}). \end{aligned} \end{equation} Thus $K_1$ and $K_2$ are bounded operators between the Hilbert spaces $L^2([0,2i\varepsilon])$ and $L^2([-2i \varepsilon,0])$. In addition, from \eqref{normsK1K2} we get \begin{equation}\label{estimate_K1K2} \|K_1 K_2\| \leq \| K_1 \| \, \| K_2 \| = \mathcal{O}((\log n)^{-2\nu}), \qquad n \to \infty, \end{equation} and similarly \begin{equation} \label{estimate_K2K1} \|K_2 K_1\|=\mathcal{O}((\log n)^{-2\nu}), \qquad n\to\infty. \end{equation} \subsubsection{Proof of Lemma \ref{lem:Phat}} \begin{proof} It follows from \eqref{estimate_K1K2} and \eqref{estimate_K2K1} that the operators $I-K_2K_1$ and $I-K_1K_2$ are invertible for $n$ large enough, and then we can solve the equations \eqref{eq:formulas-f1-g1} and \eqref{eq:formulas-f2-g2}. Thus we define the entries of the matrix $\widehat P$ as follows: \begin{align}\label{entriesP:1} \widehat P_{11} & = (I-K_2K_1)^{-1} 1, && \widehat P_{12}=K_1\widehat P_{11}, \\ \widehat P_{21} & = K_2 \widehat P_{22}, && \widehat P_{22} =(I-K_1K_2)^{-1} 1. \label{entriesP:2} \end{align} In \eqref{entriesP:1} and \eqref{entriesP:2} we use $1$ to denote the identically-one function in $L^2([0, 2 i \varepsilon])$ and $L^2([-2i \varepsilon,0])$, respectively. Then \eqref{eq:formulas-f1-g1} and \eqref{eq:formulas-f2-g2} hold true, which means that the equations in \eqref{hatP11P12} hold. This then also means that the jump condition \eqref{jumpcondition:Phat} in the RH problem \ref{RHforPhat} is satisfied. The equations \eqref{hatP11P12} allow us to give estimates on $\widehat{P}(z)$.
First of all we obtain from \eqref{estimate_K1K2}-\eqref{estimate_K2K1}, \eqref{entriesP:1}, and \eqref{entriesP:2} that \begin{equation} \label{normestimates1} \| \widehat P_{11}\|_{L^2([0,2i\varepsilon])} = \mathcal{O}(1), \qquad \| \widehat P_{22}\|_{L^2([-2i\varepsilon,0])} = \mathcal{O}(1), \end{equation} and then by \eqref{normsK1K2} \begin{align} \label{normestimates2} \| \widehat P_{12} \|_{L^2([-2i \varepsilon,0])} & \leq \| K_1 \| \, \| \widehat P_{11}\|_{L^2([0,2i\varepsilon])} = \mathcal{O}(n^{-\nu}(\log n)^{-\nu}),\\ \| \widehat P_{21} \|_{L^2([0, 2i \varepsilon])} & \leq \| K_2 \| \, \| \widehat P_{22}\|_{L^2([-2i\varepsilon,0])} =\mathcal{O}(n^{\nu}(\log n)^{-\nu}). \end{align} For pointwise estimates we use the distances \[ d_+(z)=\dist (z,[0,2i\varepsilon]), \qquad d_-(z) =\dist(z,[-2i\varepsilon,0]). \] Then by the first equation in \eqref{hatP11P12}, we get for $z \in \mathbb C \setminus [-2i\varepsilon,0]$, \begin{align*} |\widehat{P}_{11}(z) - 1| & \leq \frac{1}{2\pi d_-(z)} \int_{-2i \varepsilon}^0 |\eta_2(s)| \, |\widehat{P}_{12}(s)| \, |ds| \leq \frac{1}{2\pi d_-(z)} \| \eta_2\|_2 \, \| \widehat{P}_{12} \|_2 \end{align*} where we used the Cauchy--Schwarz inequality, and $\| \cdot \|_2$ is the $L^2$ norm on $[-2i\varepsilon,0]$. Thus by \eqref{estimates_eta2} and \eqref{normestimates2}, \begin{equation} \label{hatP11bound} |\widehat{P}_{11}(z) - 1| = \frac{1}{d_-(z)} \, \mathcal{O} \left( n^{-1/2} (\log n)^{-2\nu-1/2}\right), \end{equation} as $n \to \infty$, uniformly for $z \in \mathbb C \setminus [-2i\varepsilon,0]$. Using similar arguments, we obtain \begin{align} \label{hatP12bound} |\widehat{P}_{12}(z)| & = \frac{1}{d_+(z)} \mathcal{O} \left( n^{-\nu-1/2}(\log n)^{-\nu-1/2} \right),\\ \label{hatP21bound} |\widehat{P}_{21}(z) | & =\frac{1}{d_-(z)} \mathcal{O} \left( n^{\nu-1/2} (\log n)^{-\nu-1/2} \right), \\ \label{hatP22bound} |\widehat{P}_{22}(z) - 1| & = \frac{1}{d_+(z)} \mathcal{O} \left( n^{-1/2} (\log n)^{-2\nu-1/2}\right), \end{align} as $n \to \infty$, and the $\mathcal{O}$ terms are uniform in $z$. Observe that all $\mathcal{O}$ terms tend to $0$ as $n \to \infty$, since $\nu \leq 1/2$. It follows from \eqref{hatP11bound}--\eqref{hatP22bound} that $\widehat{P}(z) = I + \mathcal{O}(z^{-1})$ as $z \to \infty$ and therefore $\widehat{P}$ satisfies the RH problem \ref{RHforPhat}. For $|z| = 3 \varepsilon$ we have $d_{\pm}(z) \geq \varepsilon$. From \eqref{hatP11bound}--\eqref{hatP22bound} we then immediately find that the estimates in Lemma \ref{lem:Phat} hold, and the lemma is proved. \end{proof} This also completes the proof of Proposition \ref{propo8}. \subsection{Final transformation} \label{finaltrans} Having $P$ as in Proposition \ref{propo8} we define the final transformation $Q \mapsto R$ as \begin{equation}\label{R} R(z)=\begin{cases} Q(z), & \textrm{for } |z| > 3 \varepsilon, \\ Q(z)P(z)^{-1}, & \textrm{for } |z| < 3 \varepsilon. \end{cases} \end{equation} Recall that $Q$ is the solution of the RH problem \ref{RHforQ}. Then $R$ has jumps on a contour $\Sigma_R$ that consists of $\Sigma_Q \setminus (-i \varepsilon, i \varepsilon)$ together with the circle of radius $3 \varepsilon$ around $0$, see Figure \ref{figR}. Note that the jumps of $P$ and $Q$ coincide on $(-i\varepsilon, i \varepsilon)$, so that $R$ has an analytic continuation across that interval.
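Before analyzing the RH problem for $R$, we remark that the construction of $\widehat P$ in the proof of Lemma \ref{lem:Phat} is also effective numerically: after discretizing the Cauchy kernels, the operator equation $(I-K_2K_1)\widehat P_{11} = 1$ can be solved directly. The following sketch is ours and not part of the argument; it uses toy weights modeled on the bounds of Lemma \ref{lem:bounds-eta1-eta2} in place of the actual functions $\eta_1$, $\eta_2$, and all names in it are our own.
\begin{verbatim}
import numpy as np

# Midpoint (Nystrom) discretization of the operators K1, K2 from (K12).
# On [0, 2i*eps] we parametrize s = i*y, on [-2i*eps, 0] we use s = -i*y,
# with y > 0; in both cases ds contributes a factor i*dy after taking the
# orientation into account.  The weights below are toy stand-ins modeled
# on Lemma bounds-eta1-eta2; they are NOT the eta_1, eta_2 of the paper.
eps, n, nu, m = 0.1, 50, 0.25, 400
h = 2 * eps / m
y = (np.arange(m) + 0.5) * h                      # quadrature midpoints
decay = np.exp(-2 * n * y * np.log(1 / y))
eta1 = y**nu * decay                              # model for |eta_1(iy)|
eta2 = (n**(2 * nu) * y**nu + n * y**(1 - nu)) * decay  # model for eta_2

# Kernel of (1/(2*pi*i)) int eta(s) f(s)/(s - z) ds; here s - z equals
# +-i(y_s + y_z), so the two segments touch only at the origin and the
# denominator never vanishes at the midpoints.
D = y[:, None] + y[None, :]
K1 = (h / (2 * np.pi)) * eta1[None, :] / (1j * D)
K2 = (h / (2 * np.pi)) * eta2[None, :] / (-1j * D)

# Solve (I - K2 K1) f = 1, cf. (entriesP:1); f approximates P11 on [0, 2i*eps].
f = np.linalg.solve(np.eye(m) - K2 @ K1, np.ones(m))
print("max |P11 - 1| on the segment:", np.abs(f - 1).max())
\end{verbatim}
For these toy weights the printed deviation should be small and decrease as $n$ grows, consistent with the estimates of Lemma \ref{lem:Phat}.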
\begin{figure} \centerline{\includegraphics{SigmaR.pdf}} \caption{Contour $\Sigma_R$} \label{figR} \end{figure} From RH problem \ref{RHforQ} and the definition \eqref{R} it follows that $R$ satisfies the following RH problem. \begin{rhp}\label{RHforR} \begin{itemize} \item[1)] $R : \mathbb C \setminus \Sigma_R \to \mathbb C^{2\times 2}$ is analytic. \item[2)] $R$ satisfies the jump condition $R_+ = R_- J_R$ on $\Sigma_R$ where \begin{align} \label{jump:R} J_R(z) = \begin{cases} J_Q(z) & \text{ for } z \in \Sigma_R \text{ with } |z| > 3 \varepsilon, \\ P(z)^{-1} & \text{ for } |z| = 3 \varepsilon, \\ P_-(z) J_Q(z) P^{-1}_+(z) & \text{ for } z \in (-3i \varepsilon, - i \varepsilon) \cup (i \varepsilon, 3i \varepsilon). \end{cases} \end{align} \item[3)] As $z\rightarrow\infty$, \begin{equation*} R(z)=I+\mathcal{O}(1/z). \end{equation*} \end{itemize} \end{rhp} In order to solve this RH problem asymptotically for large $n$, we need to show that the jump matrices for $R(z)$ are close to the identity matrix uniformly for $z\in\Sigma_R$, see Figure \ref{figR}. \begin{lemma} The jump matrix $J_R$ in the RH problem for $R$ satisfies for some constant $c > 0$, \begin{equation} \label{JRasymp} J_R(z) = \begin{cases} I + \mathcal{O}(\epsilon_n), & \text{ for } |z| = 3 \varepsilon, \\ I + \mathcal{O}(1/n), & \text{ for } |z\pm 1| = \delta, \\ I + \mathcal{O}(e^{-cn}), & \text{ elsewhere on $\Sigma_R$}, \end{cases} \end{equation} as $n \to \infty$, where the $\mathcal{O}$ terms are uniform. \end{lemma} \begin{proof} For $z\in\Sigma_R$ with $|z|>3\varepsilon$, we have $J_R(z)=J_Q(z)$. On the boundary of the disks around the endpoints we have $J_Q(z)=I+\mathcal{O}(n^{-1})$, see \eqref{JQasymp1}, and on the rest of $\Sigma_R$ except $(-i\rho,i\rho)$ we have $J_Q(z)=I+\mathcal{O}(e^{-cn})$ for some $c>0$, see \eqref{JQasymp2}. On the circle $|z|=3\varepsilon$, the jump is $J_R(z)=P(z)^{-1}$. We use \eqref{Phat} and the fact that $\widehat{P}(z)=I+\mathcal{O}(\epsilon_n)$, uniformly for $|z|=3\varepsilon$, to find that \[ J_R(z) = P(z)^{-1} = I+\mathcal{O}(\epsilon_n), \] as given in \eqref{JRasymp}. For $z \in (3i\varepsilon,i\rho)$ we get from \eqref{jump:R} and \eqref{jumpQ1} \begin{equation*} J_R(z)=J_Q(z)=D_{\infty}^{\sigma_3}N_0(z)\begin{pmatrix} 1 & 0\\ j_1(z)(D_1(z)D_2(z))^2 & 1\end{pmatrix} N_0^{-1}(z)D_{\infty}^{-\sigma_3}. \end{equation*} From \eqref{eq:bounds-eta1} and \eqref{estimate:Rephi}, we obtain for $y \in [0, \rho]$, \begin{equation}\label{boundj1D1D2} |j_1(iy)(D_1(iy)D_2(iy))^2|\leq C_{\nu} y^{\nu}e^{-2n y}, \qquad C_{\nu} > 0. \end{equation} We also use \eqref{Dinftylimit}, and then \eqref{JRasymp} for $z \in (3i \varepsilon, i \rho)$ follows. The case $z \in (-i \rho, -3i \varepsilon)$ can be handled in a similar way. What is left are the intervals $(i\varepsilon,3i\varepsilon)$ and $(-3i \varepsilon, -i \varepsilon)$. For $z \in (i\varepsilon, 3i \varepsilon)$ we find from \eqref{jump:R} and \eqref{Phat} that \begin{multline*} J_R(z)=D_{\infty}^{\sigma_3} N_0(z) \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix} \widehat{P}_-(z) \begin{pmatrix} 1 & -j_1(z)(D_1(z)D_2(z))^2\\ 0 & 1\end{pmatrix}\\ \times \widehat{P}^{-1}_+(z) \begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix} N_0(z)^{-1}D_{\infty}^{-\sigma_3}.
\end{multline*} Using \eqref{jump:Phat}-\eqref{jumpcondition:Phat} we rewrite this as \begin{multline} \label{JRstep2} J_R(z) =I - j_1(z)(D_1(z)D_2(z))^2(1-\chi(z)) D_{\infty}^{\sigma_3} N_0(z) \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix} \widehat{P}_+(z) \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \\ \times \widehat{P}^{-1}_+(z) \begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix} N_0(z)^{-1}D_{\infty}^{-\sigma_3}. \end{multline} Here we note that $\det \widehat{P}(z) = 1$, which follows by standard arguments from the RH problem \ref{RHforPhat}, and therefore $\widehat{P}^{-1}_+ = \begin{pmatrix} \widehat{P}_{22} & - \widehat{P}_{12} \\ -\widehat{P}_{21} & \widehat{P}_{11} \end{pmatrix}_+$. Then a little calculation shows that \eqref{JRstep2} reduces to \begin{align} \label{JRstep3} J_R(z) & =I + j_1(z)(D_1(z)D_2(z))^2(1-\chi(z)) \Lambda(z), \qquad z \in (i\varepsilon, 3 i \varepsilon), \end{align} where \begin{equation*} \Lambda(z)=D_{\infty}^{\sigma_3}N_0(z) \begin{pmatrix} -\widehat{P}_{11}(z)\widehat{P}_{21}(z) & -\widehat{P}_{21}(z)^2 \\ \widehat{P}_{11}(z)^2 & \widehat{P}_{11}(z)\widehat{P}_{21}(z) \end{pmatrix} N_0^{-1}(z)D_{\infty}^{-\sigma_3}. \end{equation*} The functions $\widehat{P}_{11}$ and $\widehat{P}_{21}$ are analytic on $(i\varepsilon, 3 i \varepsilon)$ and so we do not have to take the $+$-boundary value. Then it follows from \eqref{Dinftylimit} and the estimates in \eqref{hatP11bound} and \eqref{hatP21bound} that all entries in $\Lambda$ are uniformly bounded as $n \to \infty$. Then by \eqref{eq:bounds-eta1} and \eqref{JRstep3} we find \eqref{JRasymp} for $z \in (i \varepsilon, 3 i \varepsilon)$. A similar argument shows that $J_R(z)$ is exponentially close to the identity matrix for $z \in (-3i \varepsilon, -i \varepsilon)$ as well, and the lemma follows. \end{proof} As a consequence of \eqref{JRasymp}, the largest of these estimates for $J_R - I$ is the one on the circle $|z|=3\varepsilon$. For $0<\nu\leq 1/2$, the jump matrix therefore satisfies (recall that $\epsilon_n$ is given by \eqref{epsilonn}) \begin{equation} J_R(z) = I + \mathcal{O}(\epsilon_n), \qquad n \to \infty, \end{equation} uniformly for $z\in \Sigma_R$ where $\Sigma_R$ is the union of contours depicted in Figure \ref{figR}. Note that $J_R(z) \to I$ as $n \to \infty$, but the rate of convergence is remarkably slow. Following standard arguments, we now find that for $n$ sufficiently large, the RH problem \ref{RHforR} for $R$ is solvable, and \begin{equation} \label{eq:asymptotics-R} R(z) = I + \mathcal{O}(\epsilon_n), \qquad n\to\infty, \end{equation} uniformly for $z \in \mathbb C \setminus \Sigma_R$. The convergence rate in \eqref{eq:asymptotics-R} may not be optimal, since some of the bounds in the analysis may not be as sharp as possible. Note that for $\nu = 1/2$ we only have $R(z) = I + \mathcal{O}(\frac{1}{\log n})$, which is a very slow convergence. Since all of the transformations $X \mapsto U \mapsto T \mapsto S \mapsto Q \mapsto R$ are invertible, we then also find that the RH problem for $X$ is solvable for $n$ large enough. In particular we find that the polynomial $P_n = X_{11}$ exists for $n$ large enough. \section{Proofs of the Theorems}\label{proofs} \subsection{Proof of Theorem \ref{Th1}} \label{section41} \begin{proof} Following the transformations of the Deift--Zhou steepest descent analysis and using formula \eqref{eq:asymptotics-R}, we obtain asymptotic information about $\widetilde{P}_n(z) = U_{11}(z)$ in the complex plane, see \eqref{UnX} and \eqref{tildePn}.
Consider the region in Figure \ref{figR} which is outside the lens and outside the disks around $z=\pm1$. In this case $U_{11}(z)=T_{11}(z)e^{ng(z)}$, and by \eqref{T}, \eqref{S}, \eqref{Q}, \eqref{R}, $$ T(z)=S(z)=Q(z)N(z)=R(z)N(z), $$ which means that \begin{equation}\label{proof:outer} \begin{aligned} \widetilde{P}_n(z) e^{-ng(z)} & = T_{11}(z) = R_{11}(z)N_{11}(z)+R_{12}(z)N_{21}(z)\\ &= N_{11}(z)(1+\mathcal{O}(\epsilon_n)) + N_{21}(z) \mathcal{O}(\epsilon_n), \end{aligned} \end{equation} using \eqref{eq:asymptotics-R}. Here $\epsilon_n$ is again given by \eqref{epsilonn}. We observe that $N_{11}=D_{\infty}N_{0,11} (D_1D_2)^{-1}$, from \eqref{solutionNgeneral}, and using \eqref{D1nlimit}, \eqref{Dinftylimit}, \eqref{eq:D2(z)} and \eqref{eq:N0} we get \begin{equation}\label{N11asympexplicit} N_{11}(z)=\left(\frac{z(z+(z^2-1)^{1/2})}{2(z^2-1)}\right)^{1/4}\left(\frac{(z^2-1)^{1/2}-i}{(z^2-1)^{1/2}+i}\right)^{-\nu/4} \left(1+\mathcal{O} \left(\frac{\log n}{n}\right)\right), \end{equation} as $n\to\infty$. Similarly, we see that $N_{21}(z) = \mathcal{O}(1)$ as $n \to \infty$, and \eqref{asymp:Pn:outer} follows. Since the lens can be taken arbitrarily close to the interval $[-1,1]$ and the disks can be taken arbitrarily small, the asymptotics \eqref{asymp:Pn:outer} is valid uniformly on any compact subset of $\mathbb C \setminus [-1,1]$. This proves Theorem \ref{Th1}. \end{proof} \subsection{Proof of Theorem \ref{Th3}} \label{section42} \begin{proof} Inside the lens, but away from the endpoints and the origin, we use the relation \eqref{S} between the functions $T(z)$ and $S(z)$. Let $z$ be in the lens with $\Re z > 0$. Then we have \begin{equation*} T_{11}(z)=S_{11}(z) \pm S_{12}(z)\frac{e^{\frac{\nu \pi i}{2}-2n\varphi(z)}}{W_n(z)}, \end{equation*} for $\pm \Im z > 0$, and therefore \begin{equation*} \widetilde{P}_n(z) = e^{ng(z)} T_{11}(z) =e^{ng(z)} \left[ S_{11}(z) \pm S_{12}(z)\frac{e^{\frac{\nu \pi i}{2}-2n\varphi(z)}}{W_n(z)} \right]. \end{equation*} Since $S(z)=Q(z)N(z)$ away from the endpoints, and $Q(z)=R(z)$ away from the origin (if $|z|>3\varepsilon$), see \eqref{Q} and \eqref{R}, we obtain \begin{equation}\label{innerP} \widetilde{P}_n(z) = e^{n g(z)} \left[ N_{11}(z) \pm N_{12}(z)\frac{e^{\frac{\nu \pi i}{2}-2n\varphi(z)}}{W_n(z)}+\mathcal{O}(\epsilon_n) \right], \end{equation} for $\Re z \geq 0$, and $\pm \Im z > 0$. We are going to simplify the expression \eqref{innerP} and we do it for $\Re z > 0$, $\Im z > 0$. First we use \eqref{phifunction}, \eqref{Vfunction}, and \eqref{ell} in \eqref{innerP} to get \begin{equation}\label{innerP2} \widetilde{P}_n(z) = \frac{e^{\frac{n \pi z}{2}}}{(2e)^n W_n(z)^{1/2}} \left[ N_{11}(z) W_n(z)^{1/2} e^{n \varphi(z)} + \frac{N_{12}(z)}{W_n(z)^{1/2}} e^{\frac{\nu \pi i}{2}-n\varphi(z)} +\mathcal{O}(\epsilon_n) \right]. \end{equation} From \eqref{solutionNgeneral} we have $N_{11} = D_{\infty} N_{0,11} (D_1 D_2)^{-1}$, $N_{12} = D_{\infty} N_{0,12} D_1 D_2$ and so \begin{multline}\label{innerP3} \widetilde{P}_n(z) = \frac{D_{\infty} e^{\frac{n \pi z}{2}+\frac{\nu \pi i}{4}}}{(2e)^n W_n(z)^{1/2}} \left[ \frac{N_{0,11}(z) W_n(z)^{1/2}}{D_1(z) D_2(z)} e^{-\frac{\nu \pi i}{4} + n \varphi(z)} \right. \\ \left. + \frac{N_{0,12}(z) D_1(z) D_2(z)}{W_n(z)^{1/2}} e^{\frac{\nu \pi i}{4}-n\varphi(z)} +\mathcal{O}(\epsilon_n) \right].
\end{multline} Next we use \eqref{eq:N0} to write \begin{equation*} N_{0,11}(z)= e^{-\frac{\pi i }{4}} \frac{f(z)^{1/2}}{\sqrt{2} (1-z^2)^{1/4}}, \qquad N_{0,12}(z)= e^{\frac{\pi i}{4}} \frac{f(z)^{-1/2}}{\sqrt{2} (1-z^2)^{1/4}}, \end{equation*} where $(1-z^2)^{1/4}$ denotes the branch that is real and positive for $-1 < z < 1$ and $f(z)$ is given by \eqref{fz}. Thus \begin{multline} \label{innerP4} \widetilde{P}_n(z) = \frac{D_{\infty} e^{\frac{n \pi z}{2}+\frac{\nu \pi i}{4}}}{\sqrt{2} (2e)^n (1-z^2)^{1/4} W_n(z)^{1/2}} \\ \times \left[\left( \frac{f(z)^{1/2} W_n(z)^{1/2}}{D_1(z) D_2(z)} e^{n \varphi(z)-\frac{\nu\pi i}{4}- \frac{\pi i}{4}} + \frac{D_1(z) D_2(z)}{ f(z)^{1/2} W_n(z)^{1/2}} e^{-n\varphi(z)+\frac{\nu\pi i}{4}+ \frac{\pi i}{4}}\right) +\mathcal{O} \left(\epsilon_n \right)\right]. \end{multline} The two terms in parentheses are inverses of each other. We write all contributing factors in exponential form. We have by \eqref{phig}, \eqref{gpgm} and \eqref{D2andpsi} \begin{align} \label{term1} e^{n \varphi(z)} & = \exp(\pi i n \int_z^1 \psi(s) ds) \\ D_2(z) e^{\frac{\nu \pi i}{4}} & = \exp \left(- \frac{\nu \pi}{2} \psi(z)\right) \label{term2} \end{align} for $\Re z > 0$, $\Im z > 0$, and we note that by \eqref{Wnestimate} and \eqref{D1nlimit} \begin{align} \label{term3} \frac{W_n(z)^{1/2}}{D_1(z)} = f(z)^{-1/4} \left( 1 + \mathcal{O}\left( \frac{\log n}{n}\right) \right) \end{align} as $n \to \infty$. Finally, we write \begin{align} \label{term4} f(z)^{1/2} = e^{\frac{i}{2} \arccos z}, \qquad \Im z > 0 \end{align} and inserting \eqref{term1}--\eqref{term4} into \eqref{innerP4} we find \eqref{asymp:Pn:inner}, where we also use \eqref{Wnestimate} and \eqref{Dinftylimit} to simplify the first factor. A similar calculation leads to the same formula \eqref{asymp:Pn:inner} for $z \in E$ with $\Re z > 0$ and $\Im z < 0$. \end{proof} \subsection{Proof of Theorem \ref{Th0}} \label{section43} \begin{proof} It follows from \eqref{proof:outer} and \eqref{N11asympexplicit} that the leading factor in the outer asymptotics of $P_n(in\pi z)$ does not vanish for $z \in \mathbb C \setminus [-1,1]$. Let $\widetilde{P}_n(z) = (in\pi)^{-n} P_n(in\pi z)$ be the corresponding monic polynomial. Then we find from \eqref{gfunction} that \begin{equation} \label{eq:logPnconvergence} \lim_{n\to\infty}\frac{1}{n} \log| \widetilde{P}_n(z)| = \Re g(z) = \int_{-1}^1 \log |z-x| \psi(x) dx, \end{equation} uniformly for $z$ in compact subsets of $\mathbb C \setminus [-1,1]$. This implies that for any given compact subset $K \subset \overline{\mathbb{C}}\setminus[-1,1]$, the polynomial $\widetilde{P}_n$ does not have any zeros in $K$ for $n$ large enough. In other words, all zeros of $\widetilde{P}_n$ tend to the interval $[-1,1]$ as $n \to \infty$. In addition we find from \eqref{eq:logPnconvergence} that the zeros of $\widetilde{P}_n$ have $\psi(x)$ as limiting density. This follows from standard arguments in potential theory, see e.g.\ \cite{ST}. This proves Theorem \ref{Th0}. \end{proof} \subsection{Proof of Theorem \ref{Th2}} \label{section44} Let $E$ be the neighborhood of $(-1,1)$ as in Theorem \ref{Th3}. Theorem \ref{Th2} will follow from the asymptotic approximation \eqref{innerP} that is valid uniformly for $z$ in \[ E_{\delta} = E \setminus \left(D(-1, \delta) \cup D(0,\delta) \cup D(1, \delta)\right) \] with $\Re z \geq 0$.
\begin{lemma} \label{lem:allzeros} There is a constant $C > 0$ such that for large $n$ all zeros in $E_{\delta}$ satisfy \begin{equation} \label{Allzeros} \left| \Re \frac{\nu \pi}{2} \psi(z) - \Im \theta_n(z) \right| < C \epsilon_n. \end{equation} \end{lemma} \begin{proof} It is enough to consider $\Re z \geq 0$. Let \[ F_n(z) = \exp \left( \frac{\nu \pi}{2} \psi(z) + i \theta_n(z)\right). \] Then by \eqref{asymp:Pn:inner} we have that zeros of $\widetilde{P}_n$ in $E_{\delta}$ with $\Re z > 0$ are in the region where \[ F_n(z) \left(1+ \mathcal{O}\left(\frac{\log n}{n}\right) \right) + F_n(z)^{-1}\left(1 + \mathcal{O}\left(\frac{\log n}{n}\right)\right) = \mathcal{O}(\epsilon_n). \] This leads to \[ F_n(z) + F_n(z)^{-1}= \mathcal{O}(\epsilon_n), \] and so there is a constant $C > 0$ such that all zeros in $E_{\delta}$ satisfy \begin{equation} \label{Allzeros2} |F_n(z) + F_n(z)^{-1}| \leq C \epsilon_n \end{equation} if $n$ is large enough. Note that \[ |F_n(z)| = \exp\left(\Re \frac{\nu \pi}{2} \psi(z) - \Im \theta_n(z) \right). \] Thus if \eqref{Allzeros} is not satisfied then either $ |F_n(z)| \geq \exp(C \epsilon_n)$ or $|F_n(z)| \leq \exp(-C \epsilon_n)$. In both cases it follows that \[ |F_n(z) + F_n(z)^{-1}| \geq e^{C \epsilon_n} - e^{-C \epsilon_n} \geq 2 C \epsilon_n. \] Because of \eqref{Allzeros2} this cannot happen for zeros of $\widetilde{P}_n$ in $E_{\delta}$ if $n$ is large enough, and the lemma follows. \end{proof} The lemma is the main ingredient to prove Theorem \ref{Th2}. \begin{proof}[Proof of Theorem \ref{Th2}] In the proof we use $c_1, c_2, \ldots$ to denote positive constants that do not depend on $n$ or $z$. The constants will depend on $\delta > 0$. It is easy to see from the definition \eqref{defthetan} that $\theta_n'(x) \leq -c_1 n < 0$ for $x \in (0, 1- \delta)$. This implies that for some constant $c_2 > 0$ \begin{equation} \label{zeros1} \Im \theta_n(z) \begin{cases} \leq - c_2 n \Im z & \text{ for } z \in E_{\delta}, \Re z >0, \Im z \geq 0, \\ \geq c_2 n |\Im z| & \text{ for } z \in E_{\delta}, \Re z > 0, \Im z < 0. \end{cases} \end{equation} There are also constants $c_3, c_4 > 0$ such that \begin{equation} \label{zeros2} c_3 < \Re \frac{\nu \pi}{2} \psi(z) < c_4, \qquad z \in E_{\delta}, \Re z > 0, \end{equation} see \eqref{complexpsi}. Thus if $\Im z \geq 0$ then by \eqref{zeros1} and \eqref{zeros2} \[ \left| \Re \frac{\nu \pi}{2} \psi(z) - \Im \theta_n(z) \right| \geq c_2 n \Im z + c_3 \geq c_3 > 0\] and thus there are no zeros in $E_{\delta}$ with $\Im z \geq 0$ by Lemma \ref{lem:allzeros} if $n$ is large enough. For $\Im z \leq 0$ we have by \eqref{zeros1} and \eqref{zeros2} \[ \left| \Re \frac{\nu \pi}{2} \psi(z) - \Im \theta_n(z) \right| \geq c_2 n |\Im z| - c_4 \] It follows from this and Lemma \ref{lem:allzeros} that for large $n$, there are no zeros with $\Im z \leq -\frac{c_5}{n}$ if $c_5 > c_4/c_2$. Now assume $z\in E_{\delta}$ with $ - \frac{c_5}{n} < \Im z < 0$ and $\Re z > 0$. Write $ z = x + i y$. Then by Taylor expansion \[ \frac{\nu \pi}{2} \psi(z) = \frac{\nu \pi}{2} \psi(x) + \mathcal{O}(1/n) \] and, see also \eqref{defthetan}, \begin{align*} \theta_n(z) & = \theta_n(x) + iy \theta_n'(x) + \mathcal{O}(1/n) \\ & = \theta_n(x) - iy n \pi \psi(x) + \mathcal{O}(1/n) \end{align*} and $\mathcal{O}$ terms are uniform for $z$ in the considered region.
Then since $\psi(x)$ and $\theta_n(x)$ are real, we have \begin{align*} \Re \frac{\nu \pi}{2} \psi(z) - \Im \theta_n(z) & = \frac{\nu \pi}{2} \psi(x) + yn \pi \psi(x) + \mathcal{O}(1/n) \\ &= \left(\frac{\nu}{2} + ny \right) \pi \psi(x) + \mathcal{O}(1/n) \end{align*} Thus if $|\frac{\nu}{2 } + ny| \geq c_6 \epsilon_n$ then by the above and \eqref{zeros2} \[ \left| \Re \frac{\nu \pi}{2} \psi(z) - \Im \theta_n(z) \right| \geq \frac{2 c_6 c_3}{\nu} \epsilon_n + \mathcal{O}(1/n) \] and from Lemma \ref{lem:allzeros} it follows that $z = x + iy$ is not a zero if $c_6$ is large enough. Thus for large $n$ all zeros $z = x + i y$ of $\widetilde{P}_n$ in $E_{\delta}$ satisfy \[ \left|\frac{\nu}{2 } + n y\right| \leq c_6 \epsilon_n. \] Then $in \pi z$ is a zero of $P_n$, see \eqref{tildePn}, and the real part of this zero is $-n \pi y$ which differs from $\frac{\nu \pi}{2}$ by an amount less than $\pi c_6 \epsilon_n$. This proves Theorem \ref{Th2}. \end{proof} \section*{Acknowledgements} We thank Daan Huybrechs for suggesting the problem and for stimulating conversations. A. Dea\~{n}o gratefully acknowledges financial support from projects FWO G.0617.10 and FWO G.0641.11, funded by FWO (Fonds Wetenschappelijk Onderzoek, Research Fund Flanders, Belgium), and projects MTM2012--34787 and MTM2012-36732--C03--01, from Ministerio de Econom\'ia y Competitividad de Espa\~{n}a (Spanish Ministry of Economy and Competitivity). A.B.J. Kuijlaars is supported by KU Leuven Research Grant OT/12/073, the Belgian Interuniversity Attraction Pole P07/18, FWO Flanders projects G.0641.11 and G.0934.13, and by Grant No. MTM2011-28952-C02 of the Spanish Ministry of Science and Innovation. P. Rom\'an was supported by the Coimbra Group Scholarships Programme at KULeuven in the period February-May 2014.
\newcommand{\cref}[3]{(\ref{#1}, #2 \ref{#3})} \newcommand{\ingloss}[2]{\glossary{#1!#2}} \date{\today} \usepackage{amssymb,amsmath} \setboolean{probleme}{true} \setboolean{xlabels}{false} \usepackage{graphicx} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\FF}{\mathbb{F}} \DeclareMathOperator{\spec}{Spec} \DeclareMathOperator{\codim}{codim} \DeclareMathOperator{\rad}{rad} \newcommand{\secemail}{ \setlength{\unitlength}{1pt} bothmer \begin{picture}(0,1) \put(0,0){m} \put(-5,0){@} \end{picture} ath.uni-hannover.de} \begin{document} \title{Focal values of plane cubic centers} \address{Courant Research Centre ``Higher Order Structures''\\ Mathematisches Institut\\ University of G\"ottingen\\ Bunsenstrasse 3-5\\ D-37073 G\"ottingen } \email{\secemail} \urladdr{http://www.uni-math.gwdg.de/bothmer} \thanks{Supported by the German Research Foundation (Deutsche Forschungsgemeinschaft (DFG)) through the Institutional Strategy of the University of G\"ottingen} \author{Hans-Christian Graf v. Bothmer} \author{Jakob K\"oker} \begin{abstract} We prove that the vanishing of $11$ focal values is not sufficient to ensure that a plane cubic system has a center. \end{abstract} \maketitle \section{Introduction} \renewcommand{\thethm}{\thesection.\arabic{thm}} \setcounter{thm}{0} In 1885 Poincar\'e asked when the differential equation \[ y' = - \frac{x + p(x,y)}{y+q(x,y)} =: - \frac{P(x,y)}{Q(x,y)} \] with convergent power series $p(x,y)$ and $q(x,y)$ starting with quadratic terms, has stable solutions in the neighborhood of the equilibrium solution $(x,y)=(0,0)$. This means that in such a neighborhood the solutions of the equivalent plane autonomous system \begin{align*} \dot{x} &= y + q(x,y) = Q(x,y)\\ \dot{y} &= -x - p(x,y) = -P(x,y) \end{align*} are closed curves around $(0,0)$. Poincar\'e showed that one can iteratively find a formal power series $F = x^2+y^2+f_3(x,y)+f_4(x,y)+\dots$ such that \[ \det \begin{pmatrix} F_x & F_y \\ P & Q \end{pmatrix} = \sum_{j=1}^\infty s_j(x^{2j+2}+y^{2j+2}) \] with $s_j$ polynomials in the coefficients of $P$ and $Q$. If all $s_j$ vanish and $F$ is convergent, then $F$ is a constant of motion, i.e. $F$ is constant along the integral curves of $P\,dx+Q\,dy=0$. Since $F$ starts with $x^2+y^2$ this shows that close to the origin all integral curves are closed and the system is stable. Therefore the $s_j$'s are called the {\sl focal values} of $Pdx+Qdy$. Often also the notation $\eta_{2j} := s_j$ is used, and the $\eta_i$ are called {\sl Liapunov quantities}. Poincar\'e also showed that if an analytic constant of motion exists, the focal values must vanish.
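Although the focal values are defined by a formal construction, stability can also be probed numerically: one follows a trajectory once around the origin and compares the return point with the starting point; the displacement of this return map vanishes for a center and is governed by the first nonvanishing focal value for a focus. The following small sketch of such a check is ours and only illustrative; the quadratic terms chosen for $p$ and $q$ are arbitrary, and we assume SciPy is available.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# xdot = y + q(x,y), ydot = -(x + p(x,y)), with an arbitrary quadratic choice.
p = lambda x, y: 0.3 * x**2 + 0.1 * x * y
q = lambda x, y: -0.2 * y**2

def dr_dtheta(theta, r):
    # Radial equation dr/dtheta along trajectories, in polar coordinates.
    x, y = r * np.cos(theta), r * np.sin(theta)
    xd, yd = y + q(x, y), -(x + p(x, y))
    return r * (x * xd + y * yd) / (x * yd - y * xd)

# One clockwise turn corresponds to theta running from 0 to -2*pi.
r0 = 1e-2
sol = solve_ivp(dr_dtheta, [0.0, -2 * np.pi], [r0], rtol=1e-12, atol=1e-14)
print("return map displacement:", sol.y[0, -1] - r0)
\end{verbatim}
A displacement that is zero (up to integration error) for all small $r_0$ is the numerical signature of a center; the search described below instead works with exact arithmetic modulo a prime.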
Later Frommer \cite{Frommer} proved that the systems above are stable if and only if all focal values vanish, even without the assumption of convergence of $F$. (Frommer's proof contains a gap which can be closed \cite{vWahlGap}.) Unfortunately it is in general impossible to check this condition for a given differential equation because there are infinitely many focal values. In the case where $P$ and $Q$ are polynomials of degree at most $d$, the $s_j$ are polynomials in finitely many unknowns. Hilbert's Basis Theorem then implies that the ideal $I_\infty = (s_1,s_2,\dots)$ is finitely generated, i.e.\ there exists an integer $m := m(d)$ such that \[ s_1 = s_2 = \dots = s_{m(d)} = 0 \implies s_j = 0 \quad\forall j. \] This shows that a finite criterion for stability exists, but due to the indirect proof of Hilbert's Basis Theorem no value for $m(d)$ is obtained. In fact even today only $m(2)=3$ is known. \.Zo\l\c adek\, \cite{ZoladekEleven} and Christopher \cite{ChristopherEleven} showed that $m(3) \ge 11$. Since the number of variables for $d=2$ is six and $m(2)=6-3$, it has been conjectured that for $d=3$ with $14$ variables one has $m(3)=14-3=11$. It is the purpose of this note to prove $m(3) \ge 12$. The most naive approach to this problem is to calculate a Gr\"obner Basis of $I_{11}= (s_1,\dots,s_{11})$ and prove that $s_{12} \not\in I_{11}$ by the usual ideal membership test. Unfortunately this is not feasible, since the $s_j$ are very complicated. They involve $14$ variables and are of weighted degree $2j$. For example $s_5$ already has $5348$ terms and takes about $1.5$ hours on a Powerbook G4 to calculate. The polynomials $s_j$, $j\ge 6$, cannot at the moment be determined by computer algebra systems. \.Zo\l\c adek\, and Christopher therefore deduce their result geometrically. They exhibit a component $Y_{11} \subset X_\infty = V(I_\infty)$ that has codimension $11$ in the space of all possible $(P,Q)$ of degree at most three. Finding a component of codimension $12$ is not an easy task, and indeed we choose a different approach. We prove that there exists a codimension $11$ family of plane autonomous systems of degree $3$ with a {\sl focus} for which nevertheless the first $11$ focal values vanish, but the 12th one doesn't. For this we look at the system \begin{align*} \dot{x} &=y+3x^2 + 8xy + 5y^2+3x^3 + 25x^2y + 20xy^2 + 18y^3\\ \dot{y} &= -(x+27x^2 + 9xy + 22y^2+11x^3 + 20x^2y + 4xy^2 + 3y^3) \end{align*} and prove that for this system $s_j = 0 \mod 29$ for $j \le 11$ while $s_{12} \not= 0 \mod 29$. Checking that furthermore the Jacobian matrix of $s_1,\dots,s_{11}$ has full rank modulo $29$ for this system, we can apply a theorem of Schreyer \cite{smallFields} to show the existence of the desired family of foci over $\mathbb{C}$. From this we deduce that $s_{12} \not\in I_{11} = (s_1,\dots,s_{11})$. In fact we even prove the stronger result $s_{12} \not \in \rad I_{11}$. Since for a given system one can evaluate the $s_j$ using Frommer's algorithm \cite{martin} without knowing the complete polynomials, this approach is feasible. We found the above system by performing a random search. Heuristically each $s_i$ vanishes mod $29$ for about one of every $29$ differential equations \cite{irred}. So we expect to find an example as above after checking $29^{11} \approx 10^{16}$ random examples. By parametrizing $s_1$ and $s_2$ we can improve this to $29^9 \approx 10^{13}$ random examples. Indeed we found the example after about $8 \times 10^{12}$ trials.
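Schematically, the search loop looks as follows; this is our own illustration, and \texttt{focal\_values\_mod\_p} is a hypothetical stand-in for an implementation of Frommer's algorithm over $\mathbb{F}_{29}$.
\begin{verbatim}
import random

P = 29  # the prime used in the search

def focal_values_mod_p(coeffs, k):
    """Hypothetical: return [s_1, ..., s_k] modulo P for the cubic system
    with the given 14 coefficients, e.g. via Frommer's algorithm."""
    raise NotImplementedError

while True:
    # 14 coefficients: quadratic and cubic terms of p and q, modulo 29
    c = [random.randrange(P) for _ in range(14)]
    s = focal_values_mod_p(c, 12)
    if all(v == 0 for v in s[:11]) and s[11] != 0:
        print("candidate:", c)  # still subject to the Jacobian rank test
        break
\end{verbatim}
In practice the focal values are evaluated lazily: a candidate is discarded at the first nonzero $s_i$, so only about one in $29$ candidates ever reaches the next focal value, which is what makes the expected number of trials affordable.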
Using an improved version \cite{centerfocusweb} of the program \cite{strudelweb} this took 1246 CPU-days. Since this search is easily parallelizable we could do this calculation in about one month by distributing the work to several computers. We would like to thank the {\sl Regionales Rechenzentrum f\"ur Niedersachsen (RRZN)} and the {\sl Institut f\"ur Systems Engineering, Fachgebiet Simulation} for providing the necessary CPU time. Also we are grateful to Colin Christopher who checked our example using REDUCE \cite{reduce}. \section{The Proof} \renewcommand{\thethm}{\thesection.\arabic{thm}} \setcounter{thm}{0} \begin{notation} If $I \subset \ZZ[x_1,\dots,x_n]$ is an ideal and $X_{\mathbb{Z}}=V(I) \subset \mathbb{A}^n_\mathbb{Z}$ is the variety over $\spec \mathbb{Z}$ defined by $I$, then we denote by $X_{\FF_p}$ the fiber of $X_\mathbb{Z}$ over $\FF_p$ for any prime $p$. Furthermore we denote by $X_\mathbb{C}$ the variety defined by $I$ over $\mathbb{C}$. \end{notation} \begin{thm}[Schreyer] \label{tSchreyer} Let $I=(f_1,\dots,f_k) \subset \ZZ[x_1,\dots,x_n]$ be an ideal and $X_\mathbb{Z} = V(I)$. If $x \in X_{\FF_p}$ is a point with $\codim T_{X_{\FF_p},x} = k$ then there exists an irreducible component $Y_\mathbb{Z} \subset X_\mathbb{Z}$ with $x \in Y_\mathbb{Z}$ and $Y_\mathbb{Z} \not\subset X_{\FF_p}$. In particular $Y_\mathbb{C} \not= \emptyset$. \end{thm} \begin{proof} This is a special case of a theorem of Schreyer \cite{smallFields}. See also \cite{newFamily} for a proof. \end{proof} \begin{figure}[h] \includegraphics*[width=10cm]{SpecZZ.pdf} \caption{A variety over $\spec \mathbb{Z}$} \label{fSpecZZ} \end{figure} \begin{example} Consider $X_\mathbb{Z} = V(3x) \subset \mathbb{A}_\mathbb{Z}^1$. This variety has two components over $\mathbb{Z}$, namely $Y_\mathbb{Z}=V(x)$ and $Z_\mathbb{Z} = V(3)$. Since $3=0$ is true only in $\mathbb{F}_3$ we have $Z_\mathbb{Z} = Z_{\mathbb{F}_3}$. Furthermore $Z_\mathbb{C} = \emptyset$. On the other hand $x=0$ is possible over all $\FF_p$ and $Y_\mathbb{C} \not=\emptyset$. See Figure \ref{fSpecZZ}. Indeed, if we consider the point $x=0\in X_{\FF_p}$, $p\not=3$, then we have that the derivative $(3x)'=3 \not=0$ and the tangent space $T_{0,X_{\mathbb{F}_p}}$ has codimension $1$. Therefore the Theorem applies and the component $Y_\mathbb{Z}$ containing $x=0_{\FF_p}$ is not contained in $X_{\FF_p}$. Since $3\cdot 1 = 0 \in \mathbb{F}_3$ we can also consider the point $x=1 \in X_{\mathbb{F}_3}$. Here we have $(3x)'=3=0$ and the tangent space $T_{1,X_{\mathbb{F}_3}}$ has codimension $0$. Hence the Theorem does not apply, and indeed the component $Z_\mathbb{Z} = Z_{\mathbb{F}_3}$ containing $x=1_{\mathbb{F}_3}$ is completely contained in $X_{\mathbb{F}_3}$. \end{example} \begin{cor} \label{cNotVanish} If in the situation of Theorem \ref{tSchreyer} we have a further polynomial $g \in \ZZ[x_1,\dots,x_n]$ satisfying $g(x) \not=0 \in \FF_p$ then $g$ does not vanish on $X_\mathbb{C}$. \end{cor} \begin{proof} Assume to the contrary that $g$ vanishes on $X_\mathbb{C}$. By Theorem \ref{tSchreyer} we have a component $Y_\mathbb{Z} \subset X_\mathbb{Z}$ with $x\in Y_\mathbb{Z}$ and $Y_\mathbb{C} \not= \emptyset$. Since $g$ vanishes on $X_\mathbb{C}$ and $Y_\mathbb{C} \not= \emptyset$, it also vanishes on $Y_\mathbb{C}$ and therefore on $Y_\mathbb{Z}$ and $Y_{\FF_p}$. But this contradicts our assumption $g(x) \not=0$. \end{proof} \begin{thm} $m(3) \ge 12$.
\end{thm} \begin{proof} Use our implementation of Frommer's algorithm \cite{martin}, \cite{strudelweb}, \cite{centerfocusweb} or REDUCE \cite{reduce} to check that the example in the introduction satisfies the conditions of Corollary \ref{cNotVanish}. \end{proof} \def\cprime{$'$}
\section{Introduction} Consider the assignment problem in which $n$ agents express linear orders over $n$ objects and each agent is to be allocated one object~\citep{AbSo99a,ACMM05a,AMXY15a,BoMo01a,Gard73b, Sven94a,Sven99a}. The most famous mechanism for the problem is \emph{random serial dictatorship (RSD)}~\citep{ABB13b, BoMo01a}. In RSD, a permutation of the agents is chosen uniformly at random and then agents in the permutation are given the most preferred object that is still not allocated. The reason RSD is a compelling mechanism for the assignment problem is that it is strategyproof and also ex post efficient (the outcome can be represented as a convex combination of deterministic Pareto optimal outcomes). In fact, it has even been conjectured that RSD is the only mechanism that satisfies anonymity, strategyproofness and ex post efficiency~\citep[see e.g.,\xspace][]{LeSe11a}. Although RSD is a desirable mechanism, \citet{BoMo01a} showed that RSD is not SD-efficient (efficiency with respect to stochastic dominance).\footnote{\citet{BoMo01a} used the term ordinal efficiency for SD-efficiency. \citet{BoMo01a} also presented the probabilistic serial mechanism that is SD-efficient. However, the mechanism is not strategyproof. } The observation was surprising because SD-efficiency is a very undemanding property. To highlight this point, note that if an assignment is not SD-efficient, then there exists another assignment in which for all cardinal utilities consistent with the ordinal preferences, all agents get at least as much utility and one agent gets strictly more utility. In this note, we explore the lack of SD-efficiency of RSD and present an elementary and detailed argument for the following theorem. \begin{theorem*}[Proposition 3 of \citet{Mane09a}] For a given preference profile, the RSD assignment is not SD-efficient if and only if there exists an ex post efficient assignment that is not SD-efficient. \end{theorem*} The theorem highlights the fact that not only is RSD SD-\emph{inefficient} in general, but it is never SD-efficient whenever some ex post efficient assignment is not SD-efficient. \section{Preliminaries} The model we consider is the \emph{random assignment problem}~\citep{BoMo01a} which is a triple $(N,O,\succ)$ where $N$ is the set of $n$ agents $\{1,\ldots, n\}$, $O=\{o_1,\ldots, o_n\}$ is the set of objects, and $\succ=(\succ_1,\ldots,\succ_n)$ specifies complete, anti-symmetric and transitive preferences $\succ_i$ of agent $i$ over $O$. We will denote by $\mathcal{R}(O)$ the set of all complete and transitive relations over the set of objects $O$. A random assignment $p$ is an $n\times n$ matrix $[p(i)(o_j)]_{1\leq i\leq n, 1\leq j\leq n}$ such that for all $i\in N$, and $o_j\in O$, $ p(i)(o_j) \in [0,1]$; $\sum_{i\in N}p(i)(o_j)= 1$ for all $j\in \{1,\ldots, n\}$; and $\sum_{o_j\in O}p(i)(o_j)= 1$ for all $i\in N$. The value $p(i)(o_j)$ represents the probability of object $o_j$ being allocated to agent $i$. Each row $p(i)=(p(i)(o_1),\ldots, p(i)(o_n))$ represents the allocation of agent $i$. The columns correspond to the probability vectors of the objects $o_1,\ldots, o_n$. A feasible random assignment is \emph{discrete} if $p(i)(o)\in \{0,1\}$ for all $i\in N$ and $o\in O$. A discrete assignment $p$ is \emph{Pareto optimal} if there does not exist another discrete assignment $q$ such that each agent gets the same or a more preferred object in $q$ and at least one agent gets a more preferred object in $q$.
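Before turning to stochastic dominance, we note that all of the above is directly computable: the RSD assignment is the average of the $n!$ serial dictatorship outcomes. The following sketch (ours, in Python) reproduces the RSD matrix of the example given below.
\begin{verbatim}
from itertools import permutations
from fractions import Fraction

# Each agent's preference list, most preferred object first (objects 0..3).
# This is the 4-agent profile of the example below.
prefs = [[0, 1, 2, 3], [0, 1, 2, 3], [1, 0, 3, 2], [1, 0, 3, 2]]
n = len(prefs)

def serial_dictatorship(pi):
    taken, alloc = set(), {}
    for i in pi:  # each agent picks its best object still available
        o = next(o for o in prefs[i] if o not in taken)
        alloc[i] = o
        taken.add(o)
    return alloc

# RSD: average the serial dictatorship outcomes over all n! permutations.
p = [[Fraction(0)] * n for _ in range(n)]
perms = list(permutations(range(n)))
for pi in perms:
    for i, o in serial_dictatorship(pi).items():
        p[i][o] += Fraction(1, len(perms))
print([str(x) for x in p[0]])  # ['5/12', '1/12', '5/12', '1/12']
\end{verbatim}
Running it prints agent $1$'s allocation $(5/12, 1/12, 5/12, 1/12)$, matching the example below.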
In order to reason about preferences over random allocations, we extend preferences over objects to preferences over random allocations. One standard extension is \emph{SD (stochastic dominance)}. Given two random assignments $p$ and $q$, it holds that $p(i) \succsim_i^{\sd} q(i)$, i.e., agent $i$ \emph{$\sd$~prefers} allocation $p(i)$ to allocation $q(i)$, if for all $o\in O$, \[ \sum_{o_j\in \set{o_k\mathbin{:} o_k\succsim_i o}}p(i)(o_j) \ge \sum_{o_j\in \set{o_k\mathbin{:} o_k\succsim_i o}}q{(i)(o_j)}.\] An assignment $p$ is \emph{$\sd$-efficient} if there exists no assignment $q$ such that $q(i) \succsim_i^{\sd} p(i)$ for all $i\in N$ and $q(i) \succ_i^{\sd} p(i)$ for some $i\in N$. An assignment is \emph{ex post efficient} if it can be represented as a probability distribution over the set of $\sd$-efficient discrete assignments. Serial dictatorship (also called priority) is defined as follows. Each agent in a permutation $\pi$ of $N$ gets a turn according to the permutation. When an agent's turn comes, the agent is allocated the most preferred object that is not yet allocated. The outcome of serial dictatorship is a deterministic assignment. We will refer to the outcome of serial dictatorship with respect to permutation $\pi$ as $\mathrm{Prio}(N,A,\succ,\pi)$. RSD is a random assignment rule in which a permutation of agents is chosen uniformly at random and then serial dictatorship is run with respect to the permutation~\citep{AbSo98a,BoMo01a,ABB13b}: \begin{equation*} \textit{RSD}(N,O,\succ)=\sum_{\pi\in \Pi^N} \frac{1}{n!}(\mathrm{Prio}(N,A,\succ,\pi)) \end{equation*} where $\Pi^N$ denotes the set of all permutations of $N$. We say a random assignment $p$ is a \emph{proper} convex combination of a set of assignments $B$ if $p$ can be represented as a convex combination of assignments in $B$ such that the weight of each assignment in $B$ is non-zero. Note that the RSD assignment is a proper convex combination of the serial dictatorship outcomes. \begin{example} Consider an assignment problem in which $N=\{1,2,3,4\}$, $O=\{o_1,o_2,o_3,o_4\}$ and the preferences $\succ$ are as follows.\footnote{The profile in this example is the same one that was used by \citet{BoMo01a} to show that the RSD outcome is not SD-efficient.} \begin{align*} 1:&\quad o_1,o_2,o_3,o_4& 3:&\quad o_2,o_1,o_4,o_3\\ 2:&\quad o_1,o_2,o_3,o_4& 4:&\quad o_2,o_1,o_4,o_3 \end{align*} \[\mathrm{Prio}(N,O,\succ,1234)=\begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\\ \end{pmatrix}, ~~ RSD(N,O,\succ)=\begin{pmatrix} \nicefrac{5}{12}&\nicefrac{1}{12}&\nicefrac{5}{12}&\nicefrac{1}{12}\\ \nicefrac{5}{12}&\nicefrac{1}{12}&\nicefrac{5}{12}&\nicefrac{1}{12}\\ \nicefrac{1}{12}&\nicefrac{5}{12}&\nicefrac{1}{12}&\nicefrac{5}{12}\\ \nicefrac{1}{12}&\nicefrac{5}{12}&\nicefrac{1}{12}&\nicefrac{5}{12} \end{pmatrix}.\] In the RSD assignment, the probability of agent $1$ getting $o_1$ is $5/12$. \end{example} \section{Inefficiency of RSD} Before we proceed, we present a characterization of SD-efficiency~\citep[Lemma 3, ][]{BoMo01a}. An assignment $p$ admits a \emph{trading cycle} $o_0,i_0,o_1,i_1,\ldots, o_{k-1},i_{k-1},o_0$ in which $p(i_j)(o_j)>0$ for all $j\in \{0,\ldots, k-1\}$, and $o_{(j+1) \bmod k} \succ_{i_j} o_{j}$ for all $j\in \{0,\ldots, k-1\}$. \citet{BoMo01a} proved that an assignment is SD-efficient if and only if it does not admit a trading cycle. \begin{fact}[\citet{BoMo01a}]\label{fact:sdeff} An assignment is SD-efficient if and only if it does not admit a trading cycle.
\end{fact} We will also use the following characterization of Pareto optimal discrete assignments~\citep{AbSo98a}. Fact~\ref{fact:AbSo} also follows from Proposition 1 of \citet{BrKi05a}. \begin{fact}[\citet{AbSo98a}]\label{fact:AbSo} A discrete assignment is Pareto optimal if and only if it is an outcome of serial dictatorship. \end{fact} We first present a simple lemma. \begin{lemma}\label{lemma:convex-non-zero} Consider any assignment $p$ that is a proper convex combination of all discrete Pareto optimal assignments. Then $p(i)(o)>0$ if there exists some Pareto optimal discrete assignment $q$ such that $q(i)(o)>0$. \end{lemma} \begin{proof} Assume that there exists some Pareto optimal discrete assignment $q$ such that $q(i)(o)>0$. Since $q$ has non-zero weight when $p$ is expressed as a proper convex combination of all discrete Pareto optimal assignments, it follows that $p(i)(o)>0$. \end{proof} Next, we rely on Lemma~\ref{lemma:convex-non-zero} and Fact~\ref{fact:sdeff} to prove the following. \begin{lemma}\label{lemma:convex-expost} For a given preference profile, if there exists a random assignment that is ex post efficient but not SD-efficient, then a proper convex combination of all Pareto optimal deterministic assignments is such a random assignment as well. \end{lemma} \begin{proof} Assume that there exists some assignment $s$ that is ex post efficient but not SD-efficient. Consider any assignment $p$ that is a proper convex combination of all discrete Pareto optimal assignments. We will show that $p$ is not SD-efficient. Since $s$ is not SD-efficient, by Fact~\ref{fact:sdeff}, it admits a trading cycle $o_0,i_0,o_1,i_1,\ldots, o_{k-1},i_{k-1},o_0$ in which $s(i_j)(o_j)>0$ for all $j\in \{0,\ldots, k-1\}$, and $o_{(j+1) \bmod k} \succ_{i_j} o_{j}$ for all $j\in \{0,\ldots, k-1\}$. Since $s$ is ex post efficient, it can be represented as a convex combination of Pareto optimal discrete assignments. Therefore, if $s(i_j)(o_j)>0$, then there exists some discrete Pareto optimal assignment $r$ in this representation such that $r(i_j)(o_j)>0$. As $p$ is a proper convex combination of all discrete Pareto optimal assignments, it then follows from Lemma~\ref{lemma:convex-non-zero} that $p(i_j)(o_j)>0$ for all $j\in \{0,\ldots, k-1\}$. By this argument, it follows that $p$ admits a trading cycle $o_0,i_0,o_1,i_1,\ldots, o_{k-1},i_{k-1},o_0$ in which $p(i_j)(o_j)>0$ for all $j\in \{0,\ldots, k-1\}$, and $o_{(j+1) \bmod k} \succ_{i_j} o_{j}$ for all $j\in \{0,\ldots, k-1\}$. Hence $p$ is not SD-efficient. \end{proof} Next, we use Fact~\ref{fact:AbSo} to prove the following. \begin{lemma}\label{lemma:rsd-convex} The outcome of RSD is a proper convex combination of all Pareto optimal deterministic assignments. \end{lemma} \begin{proof} RSD can be viewed as applying serial dictatorship with respect to all the $n!$ permutations and then aggregating each of the outcomes weighted with the probability $1/n!$. By Fact~\ref{fact:AbSo}, each discrete Pareto optimal assignment is a result of serial dictatorship with respect to some permutation. Hence the RSD assignment is a proper convex combination of all Pareto optimal deterministic assignments. \end{proof} By using Lemmas~\ref{lemma:convex-expost} and \ref{lemma:rsd-convex}, we obtain the following statement.
\begin{theorem*}[\citet{Mane09a}] For a given preference profile, the RSD assignment is not SD-efficient if and only if there exists an ex post efficient assignment that is not SD-efficient. \end{theorem*}
\begin{proof} First assume that for the given preference profile, every ex post efficient assignment is SD-efficient. Since the RSD assignment is ex post efficient, it follows that the RSD assignment is SD-efficient. Now assume that there exists an ex post efficient assignment that is not SD-efficient (1). By Lemma~\ref{lemma:rsd-convex}, the RSD assignment is a proper convex combination of all Pareto optimal deterministic assignments (2). By (1) and (2), it follows from Lemma~\ref{lemma:convex-expost} that the RSD assignment is not SD-efficient. \end{proof}
\paragraph{Acknowledgments} Data61 is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program. Thanks to Mark Wilson for pointing out that the characterization was already present in a proposition in the paper by \citet{Mane09a}. \renewcommand{\bibfont}{\normalfont\small}
\section{Introduction} \label{sec:intro} In regularized risk minimization, we consider optimization problems of the form
\begin{equation} \label{eq:compfunc} \min_x F(x) \equiv f(x) + g(x), \end{equation}
where $f$ is the loss and $g$ is the regularizer. Typically, $f$ is smooth and convex (e.g., the square and logistic losses), and $g$ is convex but possibly nondifferentiable (e.g., the $\ell_1$ and nuclear norm regularizers). The proximal gradient (PG) algorithm \cite{parikh2014proximal} and its accelerated variant (APG) \cite{beck2009fast,nesterov2013gradient} have been popularly used for solving this convex problem. Their crux is the proximal step $\Px{g}{\cdot} = \arg\min_x \frac{1}{2}\NM{x - \cdot}{2}^2 + \eta g(x)$, which can often be computed easily in closed form. While convex regularizers are easy to use, the resultant predictors may be biased \cite{zhang2010analysis}. Recently, there is growing interest in the use of nonconvex regularizers, such as the log-sum-penalty \cite{candes2008enhancing} and capped $\ell_1$-norm \cite{zhang2010analysis} regularizers. It has been shown that these often lead to sparser and more accurate models \cite{gong2013gist,canyi2014,zhong2014gradient,yao2015fast}. However, the associated proximal steps become more difficult to compute analytically, and cheap closed-form solutions exist only for some simple nonconvex regularizers \cite{gong2013gist}. This is further aggravated by the fact that the state-of-the-art PG algorithm for nonconvex optimization, namely the nonmonotone accelerated proximal gradient (nmAPG) algorithm \cite{li2015accelerated}, needs more than one proximal step in each iteration. When the optimization objective is convex, one can reduce the computational complexity of the proximal step by computing it only inexactly (i.e., approximately). Significant speedup has been observed in practice, and the resultant inexact PG algorithm has the same convergence guarantee as the exact algorithm under mild conditions \cite{schmidt2011convergence}. However, on nonconvex problems, the use of inexact proximal steps has not been explored. Moreover, convergence of nmAPG hinges on the use of exact proximal steps. In this paper, we propose a new PG algorithm for nonconvex problems. Unlike nmAPG, it performs only one proximal step in each iteration. Moreover, the proximal step can be inexact. The algorithm is guaranteed to converge to a critical point of the nonconvex objective. Experimental results on nonconvex total variation models and nonconvex low-rank matrix learning show that the proposed algorithm is much faster than nmAPG and other state-of-the-art methods, while still producing solutions of comparable quality. The rest of the paper is organized as follows. Section~\ref{sec:rel} provides a brief review of the PG algorithm and its accelerated variant. The proposed algorithm is described in Section~\ref{sec:alg}, and its convergence analysis is presented in Section~\ref{sec:conv}. Experimental results are reported in Section~\ref{sec:expt}, and the last section gives some concluding remarks. \vspace{-5px}
\section{Related Work} \label{sec:rel} In this paper, we assume that $f$ in (\ref{eq:compfunc}) is $L$-Lipschitz smooth (i.e., $\NM{\nabla f(x) - \nabla f(y)}{2} \le L \NM{x - y}{2}$), and that $g$ is proper and lower semi-continuous. Besides, $F=f+g$ in (\ref{eq:compfunc}) is bounded from below, and $\lim_{\NM{x}{2} \rightarrow \infty} F(x) = \infty$. Moreover, both $f$ and $g$ can be nonconvex.
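As an illustration of the proximal step (our example, not taken from the references above), for the $\ell_1$ regularizer $g(x)=\lambda\NM{x}{1}$ the proximal step has the well-known closed-form solution of elementwise soft-thresholding:
\begin{verbatim}
import numpy as np

def prox_l1(z, eta_lambda):
    """Proximal step for g(x) = lambda * ||x||_1:
    argmin_x 0.5 * ||x - z||_2^2 + eta_lambda * ||x||_1,
    solved elementwise by soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - eta_lambda, 0.0)

z = np.array([1.5, -0.2, 0.7])
print(prox_l1(z, 0.5))   # -> [1.0, -0.0, 0.2]
\end{verbatim}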
First, we consider the case where $f$ and $g$ in (\ref{eq:compfunc}) are convex. At iteration $k$, the accelerated proximal gradient (APG) algorithm generates $x_{k + 1}$ as
\begin{eqnarray} y_k & = & x_k + \theta_k (x_k - x_{k - 1}), \label{eq:yk} \\ x_{k + 1} & = & \Px{\eta g}{y_k - \eta \nabla f(y_k)}, \label{eq:grad} \end{eqnarray}
where $\theta_k = \frac{k - 1}{k + 2}$ and $\eta$ is the stepsize \cite{beck2009fast,nesterov2013gradient}. When $\theta_k = 0$, APG reduces to the plain PG algorithm. On nonconvex problems, $y_k$ can be a bad extrapolation and the iterations in \eqref{eq:yk}, \eqref{eq:grad} may not converge \cite{beck2009tv}. Recently, a number of PG extensions have been proposed to alleviate this problem. The iPiano \cite{ochs2014ipiano}, NIPS \cite{ghadimi2015accelerated}, and UAG \cite{ghadimi2015accelerated} algorithms allow $f$ to be nonconvex, but still require $g$ to be convex. The GD algorithm \cite{attouch2013convergence} also allows $g$ to be nonconvex, but does not support acceleration. The current state-of-the-art is the nonmonotone APG (nmAPG) algorithm\footnote{A less efficient monotone APG (mAPG) algorithm is also proposed in \cite{li2015accelerated}.} \cite{li2015accelerated}, shown in Algorithm~\ref{alg:nmapg}. It allows both $f$ and $g$ to be nonconvex, and also uses acceleration. To guarantee convergence, nmAPG ensures that the objective is sufficiently reduced in each iteration:
\begin{align} F(x_{k + 1}) \le F(x_k) - \frac{\delta}{2}\NM{v_{k + 1} - x_k}{2}^2, \label{eq:cond} \end{align}
where $v_{k + 1} = \Px{\eta g}{x_k - \eta \nabla f(x_k)}$ and $\delta > 0$ is a constant. A second proximal step has to be performed (step~8) if a variant of \eqref{eq:cond} is not met (step~5).
\begin{algorithm}[ht] \caption{Nonmonotone APG (nmAPG).} \begin{algorithmic}[1] \REQUIRE choose $\eta \in (0, 1/L)$, a positive constant $\delta$, $\Delta_1 = F(x_1)$, $q_1 = 1$, and $\nu \in (0, 1)$; \STATE $x_0 = x_1 = x^a_1 = 0$ and $t_0 = t_1 = 1$; \FOR{$k = 1, \dots, K $} \STATE $y_k = x_k + \frac{t_{k-1}}{t_k}(x^a_k - x_{k - 1})+\frac{t_{k-1}-1}{t_k}(x_k - x_{k - 1})$; \STATE $x^a_{k + 1} = \Px{\eta g}{y_k - \eta \nabla f(y_k)}$; \IF{$F(x^a_{k + 1}) \le \Delta_k - \frac{\delta}{2}\NM{x^a_{k + 1} - y_k}{2}^2$} \STATE $x_{k + 1} = x^a_{k + 1}$; \ELSE \STATE $x^p_{k + 1} = \Px{\eta g}{x_k - \eta \nabla f(x_k)}$; \STATE $x_{k + 1} = \begin{cases} x^a_{k + 1} & F(x^a_{k + 1}) \le F(x^p_{k + 1}) \\ x^p_{k + 1} & \text{otherwise} \end{cases} $; \ENDIF \STATE $q_{k + 1} = \nu q_k + 1$; \STATE $t_{k + 1} = \frac{1}{2}\left((4 t_k^2 + 1)^{1/2} + 1\right)$; \STATE $\Delta_{k + 1} = \frac{1}{q_{k + 1}} ( \nu q_k \Delta_k + F(x_{k + 1}) )$; \ENDFOR \RETURN $x_{K + 1}$. \end{algorithmic} \label{alg:nmapg} \end{algorithm} \vspace{-10px}
\section{Efficient APG for Nonconvex Problems} \label{sec:alg} The proposed algorithm is shown in Algorithm~\ref{alg:ours}. Following \cite{schmidt2011convergence}, we use a simpler acceleration scheme in step~3. Efficiency of the algorithm comes from two key ideas: reducing the number of proximal steps to one in each iteration (Section~\ref{sec:reduce}), and the use of inexact proximal steps (Section~\ref{sec:iextpx}). Besides, we also allow nonmonotone updates on the objective (so $F(y_k)$ may be larger than $F(x_k)$). This helps the iterates escape narrow curved valleys and improves convergence \cite{grippo2002nonmonotone,wright2009sparse,gong2013gist}.
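To make the procedure concrete, the following is a minimal runnable Python sketch of the proposed method (Algorithm~\ref{alg:ours} below) with exact proximal steps, instantiated, purely for illustration, on a toy lasso problem with the $\ell_1$ soft-thresholding prox:
\begin{verbatim}
import numpy as np

def ni_apg(grad_f, F, prox, x0, eta, q=5, K=200):
    """Sketch of Algorithm 2: one proximal step per iteration,
    with a nonmonotone acceptance test on the extrapolation y_k."""
    x_prev, x = x0.copy(), x0.copy()
    hist = [F(x)]                       # objectives of past iterates
    for k in range(1, K + 1):
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)
        delta = max(hist[-(q + 1):])    # Delta_k over the last q+1 values
        v = y if F(y) <= delta else x   # keep y_k only if it is safe
        x_prev, x = x, prox(v - eta * grad_f(v), eta)
        hist.append(F(x))
    return x

# toy lasso: f(x) = 0.5 * ||A x - b||^2, g(x) = lam * ||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
prox = lambda z, eta: np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)
eta = 0.9 / np.linalg.norm(A.T @ A, 2)  # eta in (0, 1/L)
print(F(ni_apg(grad_f, F, prox, np.zeros(5), eta)))
\end{verbatim}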
Note that when the proximal step is inexact, nmAPG does not guarantee convergence, as its Lemma~2 no longer holds.
\begin{algorithm}[ht] \caption{Nonconvex inexact APG (niAPG) algorithm.} \begin{algorithmic}[1] \REQUIRE choose $\eta \in (0, \frac{1}{L})$ and $\delta \in (0, \frac{1}{\eta} - L)$; \STATE $x_0 = x_1 = 0$; \FOR{$k = 1, \dots, K $} \STATE $y_k = x_k + \frac{k - 1}{k + 2} (x_k - x_{k - 1})$; \STATE $\Delta_k = \max_{t = \max(1, k - q), \dots, k} F(x_t)$; \IF{$F(y_k) \le \Delta_k$} \STATE $v_k = y_k$; \ELSE \STATE $v_k = x_k$; \ENDIF \STATE $z_k = v_k - \eta \nabla f(v_k)$; \STATE $x_{k + 1} = \Px{\eta g}{z_k}$; \text{ // possibly inexact} \ENDFOR \RETURN $x_{K + 1}$. \end{algorithmic} \label{alg:ours} \end{algorithm} \vspace{-10px}
\subsection{Using Only One Proximal Step} \label{sec:reduce} Recall that in extending APG to nonconvex problems, the key is to ensure a sufficient decrease of the objective in each iteration. Let $\rho=1/\eta$. For the standard PG algorithm with exact proximal steps, the decrease in $F$ can be bounded as follows.
\begin{proposition}[\cite{gong2013gist,attouch2013convergence}] \label{pr:exactpx} $F(x_{k + 1}) \le F(x_k) - \frac{ \rho - L}{2}\NM{x_{k + 1} - x_k}{2}^2$. \end{proposition}
In nmAPG (Algorithm~\ref{alg:nmapg}), there is always a sufficient decrease after performing the (non-accelerated) proximal descent from $x_k$ to $x^p_{k + 1}$ (Proposition~\ref{pr:exactpx}), but this is not necessarily the case for the accelerated descent from $x_k$ to $x^a_{k + 1}$ (which is generated by a possibly bad extrapolation $y_k$). Hence, nmAPG needs to perform extra checking at step~5; if the condition fails, $x^p_{k + 1}$ is used instead of $x^a_{k + 1}$ in step~9. As the main problem lies with $y_k$, we propose to check $F(y_k)$ (step~5 in Algorithm~\ref{alg:ours}) {\bf before} the proximal step (step~11), instead of checking after the proximal step. Though this change is simple, the main difficulty is how to guarantee convergence while simultaneously maintaining acceleration and using only one proximal step. As will be seen in Section~\ref{sec:extprox}, existing proofs do not hold even with exact proximal steps. The following shows that a similar sufficient decrease condition can still be guaranteed after this modification.
\begin{proposition} \label{pr:suffdesc} With exact proximal steps in Algorithm~\ref{alg:ours}, $F(x_{k + 1}) \le \min\left( F(y_k), \Delta_k \right) - \frac{\rho - L}{2} \NM{x_{k + 1} - v_k}{2}^2$. \end{proposition}
In step~4, setting $q=0$ is the most straightforward choice: the value $F(y_k)$ is then checked against the most recent $F(x_k)$. The use of a larger $q$ is inspired by the Barzilai--Borwein scheme for unconstrained smooth minimization \cite{grippo2002nonmonotone}. This allows $y_k$ to occasionally increase the objective, while ensuring that $F(y_k)$ is smaller than the largest objective value from the last $q$ iterations. In the experiments, $q$ is set to $5$ as in \cite{wright2009sparse,gong2013gist}.
\subsection{Inexact Proximal Step} \label{sec:iextpx} Proposition~\ref{pr:suffdesc} requires exact proximal steps, which can be expensive. Inexact proximal steps are much cheaper, but the inexactness has to be carefully controlled to ensure convergence. We propose two such schemes, depending on whether $g$ is convex. Note that $f$ is not required to be convex. \noindent \textbf{Convex $g$.} As $g$ is convex, the optimization problem associated with the proximal step is also convex.
Let $h_{\eta g}(x) \equiv \frac{1}{2}\NM{x - z_k}{2}^2 + \eta g(x)$ be the objective in the proximal step. For any $z_k$, the dual of the proximal step at $z_k$ can be obtained as
\begin{equation} \label{eq:dual} \max_{w} \mathcal{D}_{\eta g}(w) \equiv \eta \left(z_k^{\top} w - g^{*}(w)\right) - \frac{\eta^2}{2}\NM{w}{2}^2, \end{equation}
where $g^*$ is the convex conjugate of $g$. In an inexact proximal step, the obtained dual variable $\tilde{w}_k$ only approximately maximizes (\ref{eq:dual}). The duality gap $\varepsilon_k \equiv h_{\eta g}(x_{k + 1}) - \mathcal{D}_{\eta g}(\tilde{w}_k)$, where $x_{k + 1} = z_k - \eta \tilde{w}_k$, upper-bounds the approximation error of the inexact proximal step $\epsilon_k \equiv h_{\eta g}(x_{k + 1}) - h_{\eta g}\left( \Px{\eta g}{z_k} \right)$. To ensure that the inexactness $\epsilon_k$ is smaller than a given threshold $\tau_k$, we can thus control the duality gap as $\varepsilon_k\leq\tau_k$ \cite{schmidt2011convergence}. The following shows that $x_{k + 1}$ satisfies a sufficient decrease condition similar to that in Proposition~\ref{pr:suffdesc}. Note that this cannot be derived from \cite{schmidt2011convergence}, which relies on the convexity of $f$.
\begin{proposition} \label{pr:funcvale} $ \!\! F(x_{k + 1}) \! \le \! F(v_k) - \frac{\rho - L}{2} \NM{x_{k + 1} \! - \! v_k}{2}^2+ \rho \varepsilon_k$. \end{proposition}
\noindent \textbf{Nonconvex $g$.} When $g$ is nonconvex, the GD algorithm \cite{attouch2013convergence} allows inexact proximal steps. However, it supports neither acceleration nor nonmonotone updates. Thus, its convergence proof cannot be used here. As $g$ is nonconvex, it is difficult to derive the dual of the corresponding proximal step, and the optimal duality gap may also be nonzero. Thus, we monitor the progress of $F$ instead. Inspired by Proposition~\ref{pr:exactpx}, we require $x_{k + 1}$ from an inexact proximal step to satisfy the following weaker condition:
\begin{align} F(x_{k + 1}) \le F(v_k) - \frac{\delta}{2}\NM{x_{k + 1} - v_k}{2}^2, \label{eq:sheother} \end{align}
where $\delta \in (0, \rho - L)$. This condition has also been used in the GD algorithm; however, GD additionally requires checking an extra condition which is impractical.\footnote{Specifically, the condition is: $\exists$ $w_{k + 1} \in \partial g(x_{k + 1})$ such that $\NM{w_{k + 1} + \nabla f(v_k)}{2}^2 \le b \NM{x_{k + 1} - v_k}{2}^2$ for some constant $b > 0$. However, the subdifferential $\partial g(x_k)$ is in general difficult to compute.} We could have also used condition \eqref{eq:sheother} when $g$ is convex. However, Proposition~\ref{pr:funcvale} offers more precise control, as it can recover \eqref{eq:sheother} by setting $\varepsilon_k = \frac{\rho - L - \delta}{2 \rho}\NM{x_{k + 1} - v_k}{2}^2$ (note that $\delta < \rho - L$). Besides, the duality gap $\varepsilon_k$ is readily produced by primal-dual algorithms, and is often less expensive to compute than $F$.
\section{Convergence Analysis} \label{sec:conv}
\begin{definition}[\cite{attouch2013convergence}] \label{def:subdifferential} The {\em Frechet subdifferential} of $F$ at $x$ is
\begin{align*} \hat{\partial} F(x) = \left\{ u : \liminf_{y \rightarrow x,\, y \neq x} \frac{F(y) - F(x) - u^{\top} (y - x)}{\NM{y - x}{2}} \ge 0\right\}. \end{align*}
The {\em limiting subdifferential} (or simply {\em subdifferential}) of $F$ at $x$ is $\partial F(x) = \{ u : \exists x_k \rightarrow x, F(x_k) \rightarrow F(x), u_k \in \hat{\partial} F(x_k) \!\rightarrow\! u, \text{ as } k \rightarrow \infty \}$.
\end{definition}
\begin{definition}[\cite{attouch2013convergence}] $x$ is a {\em critical point} of $F$ if $0 \in \nabla f(x) + \partial g(x)$. \end{definition}
\subsection{Exact Proximal Step} \label{sec:extprox} In this section, we show that Algorithm~\ref{alg:ours} (where both $f$ and $g$ can be nonconvex) converges with a $O(1/K)$ rate. This is the best known rate for nonconvex problems with first-order methods \cite{nesterov2004introductory}. A similar $O(1/K)$ rate for $\NM{\Gm{v_k}}{2}^2$ was recently established for APG with nonconvex $f$ but only convex $g$ \cite{ghadimi2015accelerated}. Note also that no convergence rate has been proved for nmAPG \cite{li2015accelerated} or GD \cite{attouch2013convergence} in this case. Besides, their proof techniques cannot be used here, as their nonmonotone updates are different.
\begin{theorem} \label{thm:exact} The sequence $\{x_k\}$ generated by Algorithm~\ref{alg:ours} (with exact proximal steps) has at least one limit point, and all limit points are critical points of \eqref{eq:compfunc}. \end{theorem}
Let $\Gm{v} = v - \Px{\eta g}{v - \eta \nabla f(v)}$ be the proximal mapping at $v$ \cite{parikh2014proximal}. The following lemma suggests that $\NM{\Gm{v}}{2}^2$ can be used to measure how far $v$ is from optimality \cite{ghadimi2015accelerated}.
\begin{lemma}[\cite{gong2013gist,attouch2013convergence}] \label{lem:prox} $v$ is a critical point of \eqref{eq:compfunc} if and only if $\Gm{v} = 0$. \end{lemma}
The following proposition shows that the proposed Algorithm~\ref{alg:ours} converges with a $O(1/K)$ rate.
\begin{proposition} \label{pr:exact:rate} Let $\phi(k) = \arg\min_{t = \max(k - q, 1), \dots, k}$ $\NM{x_{t + 1} - v_t}{2}^2$. (i) $\lim_{k \rightarrow \infty} \NM{ \Gm{v_{\phi(k)} }}{2}^2 = 0$; and (ii) $\min_{k = 1, \dots, K} \NM{\Gm{ v_{{\phi(k)}} }}{2}^2 \le \frac{2 (q + 1) c_1}{(\rho - L) K}$, where $c_1 = \max_{t = 1, \dots, q + 1}F(x_t) - \inf F$. \end{proposition}
\begin{table*}[ht] \centering \small \vspace{-5px} \caption{Results on the image inpainting experiment (CPU time is in seconds). } \label{tab:tvimg} \begin{tabular}{cc | c| c| c | c| c | c} \hline & & \multicolumn{2}{c|}{$\lambda=0.01$} & \multicolumn{2}{c|}{$\lambda=0.02$} & \multicolumn{2}{c}{$\lambda=0.04$} \\ & & RMSE & CPU time & RMSE & CPU time & RMSE & CPU time \\ \hline \multirow{4}{*}{(nonconvex)} & GDPAN & 0.0326$\pm$0.0001 & 212.1$\pm$50.9 & 0.0301$\pm$0.0001 & 172.6$\pm$28.4 & 0.0337$\pm$0.0001 & 151.6$\pm$57.0 \\ \cline{2-8} & nmAPG & \textbf{0.0323$\pm$0.0001} & 600.5$\pm$35.8 & \textbf{0.0299$\pm$0.0001} & 461.7$\pm$33.3 & \textbf{0.0335$\pm$0.0001} & 535.6$\pm$29.7 \\ \cline{2-8} & niAPG(exact) & \textbf{0.0323$\pm$0.0001} & 307.4$\pm$26.8 & \textbf{0.0299$\pm$0.0001} & 297.2$\pm$35.3 & \textbf{0.0335$\pm$0.0001} & 282.7$\pm$19.3 \\ \cline{2-8} & niAPG & \textbf{0.0323$\pm$0.0002} & \textbf{91.6$\pm$10.8} & \textbf{0.0299$\pm$0.0001} & \textbf{77.1$\pm$7.4} & \textbf{0.0335$\pm$0.0001} & \textbf{56.5$\pm$9.4} \\ \hline (convex) & ADMM & 0.0377$\pm$0.0001 & 55.7$\pm$5.1 & 0.0337$\pm$0.0001 & 54.7$\pm$1.4 & 0.0362$\pm$0.0001 & 33.2$\pm$1.5 \\ \hline \end{tabular} \end{table*}
\begin{table*}[ht] \centering \small \vspace{-15px} \caption{Matrix completion performance on the synthetic data (CPU time in seconds). Here, NMSE is scaled by $\times 10^{-2}$.
Group (I) is based on convex nuclear norm regularization; group (II) on factorization model; and group (III) on nonconvex model \eqref{eq:promc}.} \begin{tabular}{cc|ccc|ccc|ccc} \hline && \multicolumn{3}{c|}{$m=500$ (observed: $12.43\%$)} & \multicolumn{3}{c|}{$m=1000$ (observed: $6.91\%$)} & \multicolumn{3}{c}{$m=2000$ (observed: $3.80\%$)} \\ & & NMSE & rank & CPU time & NMSE & rank & CPU time & NMSE & rank & CPU time \\ \hline \multirow{2}{*}{(I)} & active & 4.10$\pm$0.16 & 42 & 11.8$\pm$1.1 & 4.08$\pm$0.11 & 55 & 77.6$\pm$8.4 & 3.92$\pm$0.04 & 71 & 507.3$\pm$25.4 \\ \cline{2-11} &ALT-Impute & 3.99$\pm$0.15 & 42 & 1.9$\pm$0.2 & 3.87$\pm$0.09 & 55 & 29.4$\pm$1.2 & 3.68$\pm$0.03 & 71 & 143.1$\pm$3.9 \\ \hline \multirow{2}{*}{(II)} &AltGrad & 2.99$\pm$0.45 & 5 & 0.2$\pm$0.1 & 2.73$\pm$0.21 & 5 & \textbf{0.4$\pm$0.1} & 2.67$\pm$0.27 & 5 & \textbf{1.2$\pm$0.2} \\ \cline{2-11} & R1MP & 23.04$\pm$1.27 & 45 & 0.3$\pm$0.1 & 21.39$\pm$0.94 & 54 & 0.9$\pm$0.1 & 20.11$\pm$0.28 & 71 & 2.7$\pm$0.2 \\ \hline \multirow{5}{*}{(III)} &IRNN & \textbf{1.96$\pm$0.05} & 5 & 19.2$\pm$1.2 & \textbf{1.88$\pm$0.04} & 5 & 215.1$\pm$4.3 & \textbf{1.80$\pm$0.03} & 5 & 3009.5$\pm$35.9 \\ \cline{2-11} & FaNCL & \textbf{1.96$\pm$0.05} & 5 & 0.4$\pm$0.1 & \textbf{1.88$\pm$0.04} & 5 & 1.4$\pm$0.1 & \textbf{1.80$\pm$0.03} & 5 & 5.6$\pm$0.2 \\ \cline{2-11} & nmAPG & \textbf{1.96$\pm$0.05} & 5 & 2.3$\pm$0.2 & \textbf{1.88$\pm$0.03} & 5 & 6.9$\pm$0.3 & \textbf{1.80$\pm$0.03} & 5 & 27.1$\pm$4.0 \\ \cline{2-11} & niAPG(exact) & \textbf{1.96$\pm$0.04} & 5 & 1.8$\pm$0.2 & \textbf{1.88$\pm$0.03} & 5 & 5.3$\pm$0.5 & \textbf{1.80$\pm$0.04} & 5 & 18.4$\pm$2.2 \\ \cline{2-11} & niAPG & \textbf{1.96$\pm$0.05} & 5 & \textbf{0.1$\pm$0.1} & \textbf{1.88$\pm$0.03} & 5 & \textbf{0.4$\pm$0.1} & \textbf{1.80$\pm$0.04} & 5 & \textbf{1.2$\pm$0.2} \\ \hline \end{tabular} \label{tab:sythmatcomp} \vspace{-8px} \end{table*}
\subsection{Inexact Proximal Step} \label{sec:cvx_conv} \noindent \textbf{Convex $g$.} As in \cite{schmidt2011convergence}, we assume that the duality gap $\varepsilon_k$ decays as $O(1/k^{1 + \varsigma})$ for some $\varsigma > 0$. Let $c \equiv \sum_{k = 1}^{\infty} \varepsilon_k$; note that $c < \infty$.
\begin{theorem} \label{thm:conv} The sequence $\{x_k\}$ generated by Algorithm~\ref{alg:ours} has at least one limit point, and all limit points are critical points of \eqref{eq:compfunc}. \end{theorem}
\begin{proposition} \label{pr:inextpx} Let $e_k \equiv x_{k + 1} - \Px{\eta g}{x_k - \eta \nabla f(x_k)}$ be the difference between the inexact and exact proximal step solutions at iteration $k$. Then $\NM{e_k}{2}^2 \le 2 \varepsilon_k$. \end{proposition}
Note that the proof techniques in \cite{schmidt2011convergence} cannot be used here, as $f$ is not required to be convex. As in Proposition~\ref{pr:exact:rate}, we also use $\NM{\Gm{v_{\phi(k)}}}{2}^2$ to measure how far $v_{\phi(k)}$ is from optimality.
\begin{proposition} \label{pr:ietcvx} (i) $\lim_{k \rightarrow \infty} \NM{\Gm{v_{\phi(k)}}}{2}^2 = 0$; and (ii) $\min_{k = 1, \dots, K} \NM{\Gm{v_{\phi(k)}}}{2}^2 \le \frac{2}{K} ( 4 c + \frac{(q + 1)(c_1 + \rho c)}{\rho - L} )$. \end{proposition}
When all $\varepsilon_k$'s are zero, Proposition~\ref{pr:ietcvx} reduces to Proposition~\ref{pr:exact:rate}. In general, the bound on $\min_{k = 1, \dots, K} \NM{\Gm{v_{\phi(k)}}}{2}^2$ in Proposition~\ref{pr:ietcvx} is larger due to the inexact proximal step.
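For concreteness, the following Python sketch (our illustration, not the solver used in the experiments) controls the inexactness of the $\ell_1$ proximal step exactly as described above: projected gradient ascent on the dual \eqref{eq:dual} is stopped once the duality gap falls below a threshold $\tau_k$, e.g., decaying as $O(1/k^{1.5})$.
\begin{verbatim}
import numpy as np

def inexact_prox_l1(z, eta, lam, tau):
    """Approximate prox of eta * lam * ||.||_1 at z by maximizing
    the dual (6); for g = lam * ||.||_1 the conjugate g* is the
    indicator of the box ||w||_inf <= lam.  Stop once the duality
    gap h(x) - D(w) drops below tau, so the inexactness is <= tau."""
    h = lambda x: 0.5 * np.sum((x - z) ** 2) + eta * lam * np.sum(np.abs(x))
    D = lambda w: eta * (z @ w) - 0.5 * eta ** 2 * np.sum(w ** 2)
    w = np.zeros_like(z)
    while True:
        # ascent step (step size 0.3 / eta^2), projected onto the box
        w = np.clip(w + 0.3 * (z / eta - w), -lam, lam)
        x = z - eta * w            # primal point recovered from the dual
        if h(x) - D(w) <= tau:     # duality gap bounds the error
            return x

z = np.array([2.0, -0.3, 0.8])
for k in (1, 10, 100):             # tau_k = O(1/k^1.5)
    print(inexact_prox_l1(z, eta=1.0, lam=0.5, tau=k ** -1.5))
\end{verbatim}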
\noindent \textbf{Nonconvex $g$.} With inexact proximal steps, nmAPG no longer guarantees convergence, and its proof cannot be easily extended. On the other hand, GD allows inexact proximal steps but uses a different approach to control inexactness. Moreover, it does not support acceleration. The following shows that Algorithm~\ref{alg:ours} generates a bounded sequence, and Corollary~\ref{cor:temp1} shows that the limit points are critical points.
\begin{theorem} \label{the:temp1} The sequence $\{x_k\}$ generated from Algorithm~\ref{alg:ours} has at least one limit point. \end{theorem}
\begin{corollary} \label{cor:temp1} Let $\{ x_{k_j} \}$ be a subsequence of $\{ x_k \}$ with $ \lim_{k_j \rightarrow \infty} x_{k_j} = x_* $. If (i) $x_{k + 1} \neq v_k$ unless $v_k = \Px{\eta g}{v_k - \eta \nabla f(v_k)}$, and (ii) $\lim_{k_j \rightarrow \infty} F(x_{k_j}) = F(x_*)$, then $x_*$ is a critical point of \eqref{eq:compfunc}. \end{corollary}
Assumption (i), together with Lemma~\ref{lem:prox}, ensures that the sufficient decrease condition in \eqref{eq:sheother} will not be trivially satisfied by $x_{k + 1} = v_k$, unless $v_k$ is a critical point. Assumption (ii) follows from Definition~\ref{def:subdifferential}, as the subdifferential is defined by a limiting process.
\begin{proposition}[\cite{attouch2013convergence}] \label{pr:assii} Assumption (ii) is satisfied when (i) the proximal step is exact; or (ii) $g$ is continuous or is the indicator function of a compact set. \end{proposition}
When the proximal step is exact or when $g$ is convex, $\Gm{\cdot}$ has been used to measure the distance from optimality. However, this is inappropriate when $g$ is nonconvex and the proximal step is inexact, as the inexactness can no longer be directly controlled. Instead, we will measure optimality via $a_k \equiv \NM{x_{k + 1} - v_k}{2}^2$.
\begin{proposition} \label{pr:ietncvx2} (i) $\lim_{k \rightarrow \infty} a_k = 0$; and (ii) $\min_{k = 1, \dots, K} a_{\phi(k)} \le \frac{2 (q + 1) c_1}{\delta K}$. \end{proposition}
When the proximal step is exact, $x_{k + 1} = \Px{\eta g}{v_k - \eta \nabla f(v_k)}$, and $a_k = \NM{\Gm{v_k}}{2}^2$. Proposition~\ref{pr:ietncvx2} then reduces to Proposition~\ref{pr:exact:rate} (but with a looser bound).
\begin{table*}[ht] \centering \small \vspace{-5px} \caption{Results on the \textit{MovieLens} data sets (CPU time in seconds). Here, RMSE is scaled by $\times 10^{-1}$.
Group (I) is based on convex nuclear norm regularization; group (II) on factorization model; and group (III) on nonconvex model \eqref{eq:promc}.} \begin{tabular}{c c|ccc|ccc|ccc} \hline & & \multicolumn{3}{c|}{\textit{MovieLens-1M}} & \multicolumn{3}{c|}{\textit{MovieLens-10M}} & \multicolumn{3}{c}{\textit{MovieLens-20M}} \\ & & RMSE & rank & CPU time & RMSE & rank & CPU time & RMSE & rank & CPU time \\ \hline \multirow{2}{*}{(I)} & active & 8.20$\pm$0.01 & 68 & 50.5$\pm$1.6 & 8.14$\pm$0.01 & 101 & 1520.8$\pm$18.2 & 8.02$\pm$0.01 & 197 & 7841.9$\pm$666.3 \\ \cline{2-11} & ALT-Impute & 8.18$\pm$0.01 & 68 & 34.0$\pm$1.1 & 8.14$\pm$0.01 & 101 & 821.7$\pm$34.5 & 8.01$\pm$0.01 & 197 & 3393.2$\pm$220.3 \\ \hline \multirow{2}{*}{(II)} & AltGrad & 8.02$\pm$0.03 & 6 & 4.0$\pm$1.1 & 7.97$\pm$0.04 & 9 & 94.5$\pm$30.8 & 7.94$\pm$0.04 & 10 & 298.3$\pm$54.1 \\ \cline{2-11} & R1MP & 8.53$\pm$0.02 & 13 & \textbf{1.3$\pm$0.2} & 8.52$\pm$0.04 & 23 & \textbf{58.8$\pm$11.0} & 8.54$\pm$0.02 & 26 & \textbf{139.2$\pm$23.7} \\ \hline \multirow{4}{*}{(III)} & FaNCL & \textbf{7.88$\pm$0.01} & 5 & 12.5$\pm$0.9 & \textbf{7.79$\pm$0.01} & 8 & 703.5$\pm$18.3 & \textbf{7.84$\pm$0.03} & 9 & 2296.9$\pm$176.4 \\ \cline{2-11} & nmAPG & \textbf{7.87$\pm$0.01} & 5 & 12.5$\pm$0.9 & \textbf{7.80$\pm$0.01} & 8 & 627.5$\pm$16.4 & \textbf{7.85$\pm$0.01} & 9 & 1577.9$\pm$103.2 \\ \cline{2-11} & niAPG(exact) & \textbf{7.87$\pm$0.01} & 5 & 11.1$\pm$0.8 & \textbf{7.79$\pm$0.01} & 8 & 403.1$\pm$19.6 & \textbf{7.84$\pm$0.01} & 9 & 1111.9$\pm$65.3 \\ \cline{2-11} & niAPG & \textbf{7.87$\pm$0.01} & 5 & 2.7$\pm$0.3 & \textbf{7.79$\pm$0.01} & 8 & 90.2$\pm$2.6 & \textbf{7.85$\pm$0.01} & 9 & 257.6$\pm$33.4 \\ \hline \end{tabular} \label{tab:mvlens} \end{table*} \begin{table*}[ht] \centering \small \vspace{-15px} \caption{Number of proximal steps on the synthetic and recommender system data sets.} \label{tab:callpxrec} \begin{tabular}{c| c |c | c| c| c| c |c | c} \hline & \multicolumn{3}{c|}{\textit{Synthetic}} & \multicolumn{3}{c|}{\textit{MovieLens}} & \multirow{2}{*}{\textit{Netflix}} & \multirow{2}{*}{\textit{Yahoo}} \\ & $m=$500 & $m=$1000 & $m=$2000 & \textit{1M} & \textit{10M} & \textit{20M} & & \\ \hline nmAPG & 77 & 104 & 145 & 95 & 221 & 236 & 183 & 579 \\ \hline niAPG(exact) & 64 ($\downarrow \!\! 17\%$) & 85 ($\downarrow \!\! 18\%$) & 115 ($\downarrow \!\! 21\%$) & 82 ($\downarrow \!\! 13\%$) & 143 ($\downarrow \!\! 35\%$) & 165 ($\downarrow \!\! 31\%$) & 133 ($\downarrow \!\! 27\%$) & 425 ($\downarrow \!\! 26\%$) \\ \hline niAPG & 64 ($\downarrow \!\! 17\%$) & 85 ($\downarrow \!\! 18\%$) & 115 ($\downarrow \!\! 21\%$) & 81 ($\downarrow \!\! 15\%$) & 140 ($\downarrow \!\! 36\%$) & 160 ($\downarrow \!\! 32\%$) & 132 ($\downarrow \!\! 28\%$) & 413 ($\downarrow \!\! 29\%$) \\ \hline \end{tabular} \vspace{-8px} \end{table*} \section{Experiments} \label{sec:expt} In this section, we perform experiments when $g$ is convex (Section~\ref{sec:tvmdl}) and nonconvex (Section~\ref{sec:colfilter}). \subsection{Image Inpainting} \label{sec:tvmdl} The total variation (TV) model \cite{beck2009tv} has been popularly used in image processing. Let $y \in \R^d$ be the vectorized input image and $x \in \R^d$ be the recovered one. We consider the TV model with nonconvex log-sum-penalty regularizer \cite{candes2008enhancing}. 
\begin{equation} \min_x \frac{1}{2}\NM{M \odot (x - y)}{2}^2 + \lambda \sum_{i = 1}^d \left( \kappa([ D_h x ]_i) + \kappa([ D_v x ]_i) \right), \label{eq:imgtv} \end{equation}
where $M \in \{0, 1\}^d$ is a mask such that $M_{i}=1$ indicates that the corresponding pixel is observed, $D_h$ and $D_v$ are the horizontal and vertical partial derivative operators, $\odot$ denotes elementwise multiplication, and $\kappa(\alpha) = \log(1 + |\alpha|)$. As suggested in \cite{qyao2016icml}, \eqref{eq:imgtv} can be transformed into the minimization of $f(x) + \lambda \TV{x}$, where $ f(x) = \frac{1}{2}\NM{M \odot (x - y)}{2}^2 - \lambda [ \TV{x} - \sum_{i = 1}^d ( \kappa([ D_h x ]_i) + \kappa([ D_v x ]_i) ) ]$ is nonconvex but smooth, and $\TV{x} = \NM{D_h x}{1} + \NM{D_v x}{1}$ is the standard (convex) TV regularizer. Thus, we only need to handle the proximal step of the TV regularizer, which is computed numerically by solving its dual using L-BFGS \cite{beck2009tv}. The following solvers on the transformed problem are compared: (i) GDPAN \cite{zhong2014gradient}, which performs gradient descent with the proximal average; (ii) nmAPG; (iii) the proposed niAPG, in which the inexactness of the proximal step is controlled by decaying the duality gap $\varepsilon_k$ at a rate of $O(1/k^{1.5})$; and (iv) niAPG(exact), the exact variant of niAPG, which simulates an exact proximal step with a small duality gap (${10}^{-4}$). We do not compare with the GD algorithm \cite{attouch2013convergence}, as its inexactness condition is difficult to check and it does not use acceleration. As a further baseline, we compare with the convex TV model $\min_{x} \frac{1}{2}\NM{M \odot (x - y)}{2}^2 + \lambda (\NM{D_v x}{1} + \NM{D_h x}{1})$, which is solved using ADMM \cite{boyd2011distributed}. We do not compare with CCCP \cite{yuille2002concave}, which is slow in practice \cite{qyao2016icml}. Experiments are performed on the ``Lena'' image\footnote{\url{http://www.cs.tut.fi/~foi/GCF-BM3D/images/image_Lena512rgb.png}}. We normalize the pixel values to $[0,1]$, and then add Gaussian noise from $\mathcal{N}(0, 0.05)$. $50\%$ of the pixels are randomly sampled as observed. For performance evaluation, we report the CPU time and the root-mean-squared error (RMSE) on the whole image. The experiment is repeated five times. Results are shown in Table~\ref{tab:tvimg}.\footnote{In all the tables, boldface indicates the best and comparable results (according to the pairwise t-test with 95\% confidence).} Figure~\ref{fig:imgtv} plots the convergence of the objective.\footnote{ Because of the lack of space, the plot for $\lambda = 0.01$ is not shown.} As can be seen, the nonconvex TV model has better RMSE than the convex one. Among the nonconvex models, niAPG is much faster, as it only requires a small number of cheap inexact proximal steps (Table~\ref{tab:callprox1}).
\begin{figure}[ht] \centering \subfigure[$\lambda = 0.02$.] {\includegraphics[height = 0.18\textwidth]{figures/TV/002}}\quad \subfigure[$\lambda = 0.04$.] {\includegraphics[height = 0.18\textwidth]{figures/TV/004}} \vspace{-10px} \caption{Objective value vs CPU time on the image data.} \label{fig:imgtv} \vspace{-10px} \end{figure}
\begin{table}[ht] \centering \small \vspace{-10px} \caption{Number of proximal steps on the image data. The number in brackets is the percentage reduction w.r.t. nmAPG.} \label{tab:callprox1} \begin{tabular}{c| c| c| c} \hline & $\lambda = 0.01$ & $\lambda = 0.02$ & $\lambda = 0.04$ \\ \hline nmAPG & 87 & 46 & 43 \\ \hline niAPG(exact) & 47 ($\downarrow \!\!
46\%$) & 35 ($\downarrow \!\! 24\%$) & 29 ($\downarrow \!\! 33\%$) \\ \hline niAPG & 57 ($\downarrow \!\! 35\%$) & 41 ($\downarrow \!\! 11\%$) & 28 ($\downarrow \!\! 35\%$) \\ \hline \end{tabular} \vspace{-10px} \end{table}
\begin{figure*}[ht] \centering \subfigure[\textit{MovieLens-20M}.] {\includegraphics[height = 0.18\textwidth]{figures/recsys/20M-RMSE} \label{fig:netflix:20M}} \qquad\qquad \subfigure[\textit{Netflix}.] {\includegraphics[height = 0.18\textwidth]{figures/recsys/netflix} \label{fig:netflix:netflix}} \qquad\qquad \subfigure[\textit{Yahoo}.] {\includegraphics[height = 0.18\textwidth]{figures/recsys/yahoo} \label{fig:netflix:yahoo}} \vspace{-10px} \caption{Testing RMSE vs CPU time on the recommendation system data sets.} \label{fig:netflix} \vspace{-10px} \end{figure*}
\subsection{Matrix Completion} \label{sec:colfilter} In this section, we consider matrix completion with a nonconvex low-rank regularizer. As shown in \cite{canyi2014,yao2015fast}, it gives better performance than nuclear-norm-based and factorization approaches. The optimization problem can be formulated as
\begin{align} \min_{\rank(X) \le r} \frac{1}{2} \NM{\SO{X_{ij}- O_{ij}}}{F}^2 + \lambda \sum_{i = 1}^r \kappa\left( \sigma_i(X) \right), \label{eq:promc} \end{align}
where the $O_{ij}$'s are the observations, $\Omega_{ij} = 1$ if $O_{ij}$ is observed and 0 otherwise, $\sigma_i(X)$ is the $i$th leading singular value of $X$, and $r$ is the desired rank. The associated proximal step can be solved with rank-$r$ SVD \cite{canyi2014}. \noindent \textbf{Synthetic Data.} The observed $m\times m$ matrix is generated as $O = U V + G$, where the entries of $U \in \R^{m \times k}, V \in \R^{k \times m}$ (with $k = 5$) are sampled i.i.d. from the normal distribution $\mathcal{N}(0, 1)$, and the entries of $G$ are sampled from $\mathcal{N}(0, 0.1)$. A total of $\|\Omega\|_1 = 2 m k \log(m)$ random entries in $O$ are observed. Half of them are used for training, and the rest as the validation set. In the proposed niAPG algorithm, the proximal step is approximated using the power method \cite{halko2011finding}, and the inexactness of the proximal step is monitored by condition~\eqref{eq:sheother}. Its variant niAPG(exact) uses exact proximal steps computed by the Lanczos algorithm \cite{larsen1998lanczos}. They are compared with the following solvers on the nonconvex model \eqref{eq:promc}: (i) the iterative reweighted nuclear norm (IRNN) algorithm \cite{canyi2014}; (ii) the fast nonconvex low-rank learning (FaNCL) algorithm \cite{yao2015fast}, which uses the power method to approximate the proximal step; and (iii) nmAPG, in which the proximal step is computed exactly by the Lanczos algorithm. We also compare with other matrix completion algorithms, including the well-known (convex) nuclear-norm-regularized algorithms: (i) active subspace selection \cite{hsieh2014nuclear} and (ii) ALT-Impute \cite{hastie2015matrix}. We also compare with state-of-the-art factorization models (where the rank is tuned on the validation set): (i) R1MP \cite{wang2015rankone}; and (ii) the state-of-the-art gradient-descent-based AltGrad \cite{zhao2015nonconvex}. We do not compare with the Frank-Wolfe algorithm \cite{zhang2012accelerated}, which has been shown to be slower \cite{hsieh2014nuclear}. Testing is performed on the non-observed entries (denoted $\bar{\Omega}$).
Three measures are used for performance evaluation: (i) the normalized mean squared error $\text{NMSE} = \NM{P_{\bar{\Omega}} (X - UV)}{F} / \NM{P_{\bar{\Omega}} (UV) }{F}$; (ii) the rank of $X$; and (iii) the training time. Each experiment is repeated five times. Table~\ref{tab:sythmatcomp} shows the performance. Convergence of the algorithms solving \eqref{eq:promc} is shown\footnote{Because of the lack of space, the plot for $m = 500$ is not shown.} in Figure~\ref{fig:sythmatcomp}. As has also been observed in \cite{canyi2014,yao2015fast}, nonconvex regularization yields lower NMSE than the nuclear-norm-regularized and factorization models. Again, niAPG is the fastest. Its speed is comparable with that of AltGrad, but it is more accurate. Table~\ref{tab:callpxrec} compares the numbers of proximal steps. As can be seen, both niAPG(exact) and niAPG require significantly fewer proximal steps than nmAPG. \noindent \textbf{Recommender Systems.} We first consider the \textit{MovieLens} data sets (Table~\ref{tab:recsys:dataset}), which contain users' ratings of movies. We follow the setup in \cite{wang2015rankone,yao2015fast}, and use $50\%$ of the observed ratings for training, $25\%$ for validation and the rest for testing. For performance evaluation, we use the root mean squared error on the test set $\bar{\Omega}$: $\text{RMSE} = \sqrt{\NM{P_{\bar{\Omega}} (O - X)}{F}^2}/\sqrt{\|\bar{\Omega}\|_1}$, the rank of the recovered matrix $X$, and the CPU time. The experiment is repeated five times.
\begin{table}[ht] \centering \small \vspace{-15px} \caption{Recommender system data sets used.} \begin{tabular}{c c |c|c|c} \hline & & \#users & \#items & \#ratings \\ \hline \multirow{3}{*}{\textit{MovieLens}} & \textit{1M} & 6,040 & 3,449 & 999,714 \\ \cline{2-5} & \textit{10M} & 69,878 & 10,677 & 10,000,054 \\ \cline{2-5} & \textit{20M} & 138,493 & 26,744 & 20,000,263 \\ \hline \multicolumn{2}{c|}{\textit{Netflix}} & 480,189 & 17,770 & 100,480,507 \\ \hline \multicolumn{2}{c|}{\textit{Yahoo}} & 1,000,990 & 624,961 & 262,810,175 \\ \hline \end{tabular} \vspace{-5px} \label{tab:recsys:dataset} \end{table}
Table~\ref{tab:mvlens} shows the recovery performance. IRNN is not compared as it is too slow. Again, the nonconvex model consistently outperforms the nuclear-norm-regularized and factorization models. R1MP is the fastest, but its recovery performance is poor. Figure~\ref{fig:netflix:20M} shows the convergence, and Table~\ref{tab:callpxrec} compares the numbers of proximal steps. niAPG(exact) is faster than nmAPG due to the use of fewer proximal steps; niAPG is even faster with the use of inexact proximal steps.
\begin{figure}[ht] \centering \subfigure[$m = 1000$.] {\includegraphics[height = 0.18\textwidth]{figures/matcomp-syn/1000}}\quad \subfigure[$m = 2000$.] {\includegraphics[height = 0.18\textwidth]{figures/matcomp-syn/2000}} \vspace{-10px} \caption{Objective value vs CPU time on the synthetic matrix completion data set.} \label{fig:sythmatcomp} \vspace{-5px} \end{figure}
Finally, we perform experiments on the large \textit{Netflix} and \textit{Yahoo} data sets (Table~\ref{tab:recsys:dataset}). We randomly use $50\%$ of the observed ratings for training, $25\%$ for validation and the rest for testing. Each experiment is repeated five times. We do not compare with the nuclear-norm-regularized methods as they yield higher rank and RMSE than the others. Table~\ref{tab:netflix} shows the recovery performance, and Figures~\ref{fig:netflix:netflix} and \ref{fig:netflix:yahoo} show the convergence.
Again, niAPG is the fastest and most accurate.
\begin{table}[ht] \centering \small \vspace{-10px} \caption{Results on the \textit{Netflix} and \textit{Yahoo} data sets. Here, RMSE is scaled by $\times 10^{-1}$.} \begin{tabular}{c c|ccc} \hline & & RMSE & rank & CPU time (min) \\ \hline \multirow{5}{*}{\textit{Netflix}} & AltGrad & 8.16$\pm$0.02 & 15 & 221.7$\pm$5.6 \\ \cline{2-5} & FaNCL & 7.94$\pm$0.01 & 13 & 240.8$\pm$22.7 \\ \cline{2-5} & nmAPG & \textbf{7.92$\pm$0.01} & 13 & 132.8$\pm$2.1 \\ \cline{2-5} & niAPG(exact) & \textbf{7.92$\pm$0.01} & 13 & 97.7$\pm$1.8 \\ \cline{2-5} & niAPG & \textbf{7.92$\pm$0.01} & 13 & \textbf{25.2$\pm$0.6} \\ \hline \multirow{5}{*}{\textit{Yahoo}} & AltGrad & 6.69$\pm$0.01 & 14 & 112.9$\pm$4.2 \\ \cline{2-5} & FaNCL & \textbf{6.54$\pm$0.01} & 9 & 487.6$\pm$32.0 \\ \cline{2-5} & nmAPG & \textbf{6.53$\pm$0.01} & 9 & 184.3$\pm$6.3 \\ \cline{2-5} & niAPG(exact) & \textbf{6.53$\pm$0.01} & 9 & 140.7$\pm$5.8 \\ \cline{2-5} & niAPG & \textbf{6.53$\pm$0.01} & 9 & \textbf{38.7$\pm$2.3} \\ \hline \end{tabular} \label{tab:netflix} \vspace{-10px} \end{table}
\section{Conclusion} In this paper, we proposed an efficient accelerated proximal gradient algorithm for nonconvex problems. Compared with the state-of-the-art \cite{li2015accelerated}, the proximal step can be inexact and the number of proximal steps required is significantly reduced, while convergence to a critical point is still ensured. Experiments on image inpainting and matrix completion problems show that the proposed algorithm has prediction performance comparable with (or even better than) the state-of-the-art, but is much faster.
\section*{Acknowledgments} This research project is partially funded by Microsoft Research Asia and the Research Grants Council of the Hong Kong Special Administrative Region (Grant 614513). The first author would like to thank Lu Hou and Yue Wang for helpful discussions and suggestions. { \small \bibliographystyle{named}
\section{Introduction} Object detection is one of the fundamental and important problems in the field of computer vision. Remote sensing image object detection is the process of locating objects of interest in a remote sensing image and identifying their categories. Because of the difficulty of feature extraction, position regression, and object classification, it is very challenging. Deep learning is a powerful technology that can automatically learn feature representations from data, and it has developed very rapidly in recent years. Deep learning methods have demonstrated extraordinary effectiveness in tackling the difficulties of object detection, giving rise to many efficient approaches that outperform traditional methods. A deep learning method parses the input data by constructing hierarchical nonlinear learning units to learn an end-to-end mapping between an image and its semantic labels. It is this learning paradigm that has brought great breakthroughs to the field of remote sensing image object detection. Although deep learning is very effective, traditional object detection methods are limited by their passive nature. Firstly, during in-orbit imaging, the image acquisition process uses human visual perception as the reference for checking image quality, and it does not consider the specific requirements of tasks such as object detection. Secondly, the images are directly used for training or testing without proper image quality evaluation, or the images are simply evaluated and manually pre-processed by visual inspection. In short, evaluating the acquired image in terms of human visual perception is not necessarily optimal for the object detection task. In fact, there is a gap between the imaging configuration requirements of visual inspection and those of object detection, and this difference impacts the performance of the detection model. In other words, adaptive image attribute learning is very important for the in-orbit imaging procedure and the subsequent object detection step; however, it is rarely considered in the literature. Image attributes include spatial resolution, color, scale (the ratio of a distance on the image to the corresponding distance on the ground), hue, saturation, brightness, and so on. This paper takes only brightness and scale learning as examples. In order to overcome the above limitations, this paper proposes an active object detection method based on deep reinforcement learning. The role of reinforcement learning is to optimize imaging conditions and improve object detection performance. It is worth noting that the application of deep reinforcement learning to image processing is a new topic, and the proposed method is different from the traditional detection models described in the next section. The novelty of this paper lies in the combination of deep reinforcement learning with current mainstream object detection algorithms. By adjusting image attributes, image quality is actively improved to suit a well-trained detector, and the detection performance thus improves, as shown in Fig. \ref{fig:fig1}. In short, the method is useful for both offline detection and online imaging. For convenience, the framework in this paper is named active object detection with reinforcement learning (\emph{RL-AOD}).
The most important difference between \emph{RL-AOD} and mainstream methods is that a mainstream detector locates the object in one step through a regression algorithm, whereas \emph{RL-AOD} can adaptively select an appropriate brightness and scale through sequential decision-making in the process of locating the object. This method can adaptively learn the image attributes that yield the best object detection performance, which is of great significance for remote sensing image object detection. Our contributions in this paper are summarized as follows: (1) An active object detection framework, \emph{RL-AOD}, is proposed by combining deep reinforcement learning with a mainstream deep-learning object detection method. It is used to solve the problem that the imaging configuration and the detection task do not match. (2) Strategies for adaptively adjusting the brightness and scale of images are proposed and combined to improve the detection performance on low-quality images.
\begin{figure}[tb] \centering \includegraphics[width=8.8cm]{f1} \caption{Motivation. Due to the limitations of the imaging configuration and to environmental changes, detection performance on low-quality images is poor. Therefore, it is necessary to adaptively learn image attributes to improve detection performance.} \label{fig:fig1} \end{figure}
\section{Related Work} Active object detection consists of two parts, reinforcement learning and object detection. For convenience, the related technologies are reviewed below. \textbf{Object Detection.} Before detectors based on deep learning were proposed, DPM \cite{pedro2010object} was a very successful object detection algorithm. The DPM method first calculates histograms of gradient directions, then trains an SVM (support vector machine) to obtain a gradient model of the object, and finally detects specific objects by matching this model. Object detection based on deep learning involves position regression and object classification. In the past few years, various new algorithms have constantly been proposed, and there are strong connections between them. In general, deep learning based detection models can be divided into the following two categories: two-stage methods (e.g., Faster RCNN \cite{ren2015faster}, FPN \cite{lin2017feature}, R-FCN \cite{dai2016r}, Cascade RCNN \cite{cai2018cascade}) and one-stage methods (e.g., YOLO \cite{redmon2017yolo9000}, SSD \cite{liu2016ssd}, DSSD \cite{fu2017dssd}, RetinaNet \cite{lin2017focal}, CornerNet \cite{law2018cornernet}). The main difference between the two-stage framework and the one-stage one is the pre-processing step for generating region proposals. The former pays more attention to precision, while the latter pays more attention to speed. Compared to two-stage detectors, one-stage detectors simplify the detection process and increase the speed, since regression and classification are performed only once, but the accuracy is impacted as a result. Future trends will focus more on the combination of the two (e.g., RefineDet \cite{zhang2018single}, RFBNet \cite{liu2018receptive}). This type of method performs regression and classification two or more times; not only is its accuracy no worse than that of two-stage methods, but its speed can also be close to that of one-stage methods. Despite the great success of deep learning, there is still a huge gap between the performance of the current best methods and the requirements of practical applications.
Traditional object detection methods only passively detect objects, and cannot actively learn the attributes (brightness, scale, etc.) of images. Traditional active object recognition methods (e.g., \cite{denzler2002information,wilkes1992active}) perform viewpoint control by controlling the camera. The method in this paper does not directly control the camera, but instead learns image attributes. \textbf{Deep Reinforcement Learning.} Reinforcement learning (RL) is a powerful and effective tool for an agent to learn how to make sequential decisions based on the external environment. The overall decision made by the agent will be optimal since RL aims to maximize the accumulated reward. In recent years, traditional RL algorithms have been incorporated into the deep learning framework, producing a series of deep reinforcement learning (DRL) models (e.g., DQN \cite{mnih2015human}, DDPG \cite{lillicrap2015continuous}, TRPO \cite{schulman2015trust}) that outperform traditional RL methods. In the early days, RL methods were mainly used for robot control \cite{kormushev2010robot,hester2010generalized}. In recent years, DRL methods have been successfully applied in many fields such as game agents \cite{silver2016mastering,silver2017mastering} and neural network architecture design \cite{baker2017designing,zoph2017neural}. DRL has also attracted attention in the field of computer vision (CV). For example, some scholars use DRL to progressively narrow the detection window towards the final object through sequential decisions \cite{caicedo2015active,bellver2016hierarchical,jie2016tree}. In detail, these works use the DRL method alone to locate objects; the categories of objects are not considered, and image attributes (such as brightness and scale) remain unchanged. The improvements achieved by most methods of this kind are limited, but they are meaningful attempts. In contrast, this paper combines DRL with current mainstream object detection methods to adaptively learn the best image attributes. In addition, there are other meaningful methods that use DRL for basic image processing (e.g., enhancement \cite{park2018distort-and-recover}, recovery \cite{yu2018crafting}). Although these methods can adaptively learn the attributes of images step by step, they are not related to the detection task, and only aim to satisfy visual inspection. Moreover, there are many other works that combine DRL and CV \cite{liang2017deep,yoo2017action,ba2015multiple,he2018merge,huang2017learning}, but all of them are different from the method in this paper. Since human thinking is often a sequential decision-making process, algorithms based on deep reinforcement learning are closer to human behavior than traditional methods. In short, image processing based on deep reinforcement learning is a topic worth studying. In this paper, an adaptive image attribute adjustment strategy is learned in the framework of Double DQN \cite{van2016deep} combined with Faster RCNN. Both image quality and detection performance can be improved by applying this strategy. To the best of our knowledge, the problem considered in this paper is new, and it has rarely been studied in the literature. \section{Methodology} Imaging configuration is an important factor affecting image quality and object detection performance. In particular, brightness and scale are the two most important factors.
In addition, the indicators used to evaluate the imaging configuration differ across tasks, such as visual inspection and object detection. To this end, this paper takes brightness learning and scale learning as examples to study active imaging configuration learning in the context of the object detection task. Below, active object detection (\emph{RL-AOD}) is formulated, and the proposed approach is elaborated step by step.
\begin{figure}[tb] \centering \includegraphics[width=8.5cm, height=9cm]{f4} \caption{Overview of RL-AOD. Firstly, $D$ is used to extract features and detect objects from ${img}(t)$. Then ${Ag}^b$ and ${Ag}^s$ are utilized to select the optimal actions $a^b(t)$ and $a^s(t)$ according to the states $s^b(t)$ and $s^s(t)$, respectively. Finally, the selected actions are performed on ${img}(t)$ in order to obtain ${img}(t+1)$. } \label{fig:fig4} \end{figure}
\subsection{Problem Formulation} Deep reinforcement learning consists of five key elements, namely environment, agent, state, action and reward. Below, we explain them in the context of image attribute learning. \textbf{Environment.} The role of the environment is to receive a series of actions performed by the agent, to evaluate the quality of these actions, and to feed rewards back to the agent. The environment in this paper refers to the object detector, abbreviated as $D$. Since the input image size of one-stage detection methods is fixed, it is difficult for them to adjust the scale. Therefore, this paper uses the Faster RCNN method to construct the \emph{RL-AOD} framework. The detector $D$ is trained in advance on a high quality dataset. \textbf{Agent.} The agent is the core of the entire reinforcement learning system, and its task is to learn a series of state-to-action mappings based on the rewards provided by the environment. The agent in this framework is expected to select appropriate brightness adjustment actions and scale adjustment actions to transform the image according to the current image features, so that the image finally adapts to the detector $D$ and the overall performance improves. For brightness adjustment and scale adjustment, two independent agents, ${Ag}^b$ and ${Ag}^s$, are trained respectively.
\begin{algorithm}[tb] \caption{RL-AOD} \label{alg:alg1} \textbf{Input}: Low quality images\\ \textbf{Networks}: Detector $D$, Agent ${Ag}^b$ and ${Ag}^s$\\ \textbf{Parameters}: Feature $f^c$, $f^b$, $f^s$, State $s^b$, $s^s$, Action $a^b$, $a^s$, Action Set $A^b$, $A^s$, Reward $r^b$, $r^s$ \\ \textbf{Output}: High quality images \begin{algorithmic}[1] \STATE Pretrain Faster RCNN $D$ on high quality image sets. \STATE Use $r^b$ calculated by $D$ as a guide to train DQN Agent ${Ag}^b$ on both low and high quality image sets. \STATE Use $r^s$ calculated by $D$ as a guide to train DQN Agent ${Ag}^s$ on both low and high quality image sets. \WHILE{ there are still unprocessed images } \STATE Let step $t=0$. \STATE Get a low quality image ${img}(0)$. \WHILE{current step $t < T$} \STATE Extract $f^c(t)$, $f^b(t)$, $f^s(t)$ from ${img}(t)$ using $D$. \STATE Combine $f^c(t)$, $f^b(t)$ to get $s^b(t)$. \STATE Combine $f^c(t)$, $f^s(t)$ to get $s^s(t)$. \STATE Select $a^b(t)$ from $A^b$ based on $s^b(t)$ using ${Ag}^b$. \STATE Select $a^s(t)$ from $A^s$ based on $s^s(t)$ using ${Ag}^s$. \STATE Apply $a^b(t)$ and $a^s(t)$ to ${img}(t)$ to get ${img}(t+1)$. \STATE Step $t$++.
\ENDWHILE \ENDWHILE \STATE \textbf{return} High quality image set $\{{img}(T)\}$ \end{algorithmic} \end{algorithm}
\textbf{State.} The state refers to the current status of the agent and contains all the information used to make the action selection. In this paper, the state consists of two parts. One part is the contextual feature of the image, denoted $f^c$, which describes the overall background of the image. The other part is used to judge the level of a certain attribute (brightness, scale) of the image, written as $f^b$ and $f^s$, respectively. Therefore, the states corresponding to ${Ag}^b$ and ${Ag}^s$ are $s^b=\{f^c,f^b\}$ and $s^s=\{f^c,f^s\}$, respectively. \textbf{Action.} An action is the best move that the agent picks from the action set $A$. For the brightness-adjusting agent ${Ag}^b$, the action set includes two actions, brightening and darkening: $A^b=\{a^{b}_1,a^{b}_2\}$. Similarly, for the scale-adjusting agent ${Ag}^s$, the action set includes two actions, zooming in and zooming out: $A^s=\{a^{s}_1,a^{s}_2\}$. The two agents select the best actions from their respective action sets based on the states. \textbf{Reward.} The reward is used to evaluate the performance of the agent at a certain time step, and it is provided by the environment. In this paper, the reward $r$ is based on the detection performance:
\begin{equation}\label{eq:eq1} \begin{array}{l} r(t)=sign(p(t+1)-p(t))\\ \end{array} \end{equation}
where $p$ stands for the detection performance, and $p=\frac{1}{2}(F+mIoU)$. Here $mIoU$ is the average $IoU$ of all correct detection boxes, and $F$ is the F-measure of the detection boxes with $IoU>0.5$. Experiments show that using either indicator alone does not work well: $mIoU$ alone is less robust to multiple objects, and the $F$ indicator alone often leads to small rewards. In general, the process of the \emph{RL-AOD} algorithm is summarized in Alg. \ref{alg:alg1} and Fig. \ref{fig:fig4}. After the detector and the two agents are trained separately, $D$ is used to extract features and detect objects in ${img}(t)$, yielding the features $f^c(t)$, $f^b(t)$ and $f^s(t)$. Next, ${Ag}^b$ and ${Ag}^s$ are utilized to select the optimal actions $a^b(t)$ and $a^s(t)$ according to the states $s^b(t)$ and $s^s(t)$ ($s^b=\{f^c,f^b\}$, $s^s=\{f^c,f^s\}$), respectively. Finally, the selected actions are performed on ${img}(t)$ in turn to obtain ${img}(t+1)$. The detailed processes of feature extraction, state transition, and action design are described below.
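As a concrete illustration of the reward in Eq. \eqref{eq:eq1}, the following minimal Python sketch computes $p$ and $r(t)$ under our own reading of $F$ and $mIoU$ (an assumption for illustration: \texttt{ious[i]} holds the best IoU of predicted box $i$ with any ground-truth box, and $F$ is computed from precision and recall over boxes with $IoU>0.5$):
\begin{verbatim}
import numpy as np

def perf(ious, n_gt):
    """p = (F + mIoU)/2: F is the F-measure of boxes with
    IoU > 0.5, mIoU the average IoU of those correct boxes."""
    correct = ious[ious > 0.5]
    if len(correct) == 0:
        return 0.0
    prec, rec = len(correct) / len(ious), len(correct) / n_gt
    f = 2 * prec * rec / (prec + rec)
    return 0.5 * (f + correct.mean())

def reward(ious_next, ious_now, n_gt):
    """Eq. (1): r(t) = sign(p(t+1) - p(t))."""
    return np.sign(perf(ious_next, n_gt) - perf(ious_now, n_gt))

print(reward(np.array([0.8, 0.6]), np.array([0.55, 0.3]), n_gt=2))  # 1.0
\end{verbatim}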
In general, the process of the \emph{RL-AOD} algorithm is summarised in Alg. \ref{alg:alg1} and Fig. \ref{fig:fig4}. After the detector and the two agents have been trained separately, $D$ is used to extract features and detect objects in ${img}(t)$, which yields the features $f^c(t)$, $f^b(t)$ and $f^s(t)$. Next, ${Ag}^b$ and ${Ag}^s$ are utilized to select the optimal actions $a^b(t)$ and $a^s(t)$ according to the states $s^b(t)$ and $s^s(t)$ ($s^b=\{f^c,f^b\}$, $s^s=\{f^c,f^s\}$), respectively. Finally, the actions are applied to ${img}(t)$ in turn, in order to obtain ${img}(t+1)$. The detailed processes of feature extraction, state transition and action design are described below. \begin{figure}[tb] \centering \includegraphics[width=8.5cm]{f3} \caption{Brightness transformation. This figure illustrates how the grayscale value of $V$ is adjusted under different brightness levels $L^b$ when the grayscale value of $V_{base}$ is $0$, $255$ and $C$, respectively. At the same time, the histogram of $V$ under different $L^b$ is also given.} \label{fig:fig3} \end{figure} \subsection{Automatic Brightness Adjustment} The agent takes the extracted features as the state and takes the optimal action to transform the state. The feature extraction part mainly introduces the acquisition of $f^c$ and $f^b$. The state transition part introduces an indicator $L^b$ for roughly estimating the brightness level of an image. The action design part proposes a set of brightness adjustment actions. \textbf{Feature extraction.} $f^c$ is the contextual feature extracted by the detector $D$, which describes the overall background of the image. It is taken from the intermediate output of Faster RCNN (the feature maps before the RoI pooling layer). If the backbone of the model is based on VGG16, the intermediate feature maps have 512 channels, and averaging each feature map over its spatial positions yields a 512-dimensional vector. If the backbone is ResNet50 or ResNet101, this vector has 1024 dimensions. Experiments show that this dimension is too high and harms the final performance; therefore a max-pooling operation with a stride of 2 is applied to the feature, which yields a 512-dimensional feature. $f^b$ is a histogram feature used to judge the brightness level of the image. To compute the histogram, the RGB image is converted into HSV color space, and a histogram with bin width 4 is computed on the component $V$. Since there are 256 quantization levels, a 64-dimensional histogram is obtained. Finally, the two parts are concatenated to obtain $s^b$, a feature vector of 576 dimensions. The extraction of $f^b$ is sketched below.
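The following sketch (using OpenCV and NumPy; the function name and the final normalisation are our own assumptions) illustrates the computation of the 64-bin brightness histogram $f^b$.
\begin{verbatim}
import cv2
import numpy as np

def brightness_feature(img_bgr):
    # 64-bin histogram of the V channel: bin width 4 over 256 levels.
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    hist, _ = np.histogram(v, bins=64, range=(0, 256))
    return hist / hist.sum()  # normalised, hence size-independent
\end{verbatim}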
\begin{figure}[tb] \begin{minipage}[b]{.49\linewidth} \centering \centerline{\includegraphics[width=4.3cm]{f2a}} \centerline{(a) } \end{minipage} \hfill \begin{minipage}[b]{.49\linewidth} \centering \centerline{\includegraphics[width=4.3cm]{f2b}} \centerline{(b) } \end{minipage} \caption{Geometric meaning of $d$. (a): $-1 \leq L^b < 0$ (Linear mapping between the $V$ component of a dark image and $V_{base}$). (b): $0 \leq L^b \leq 1$ (Linear mapping between the $V$ component of a bright image and $V_{base}$). } \label{fig:fig2} \end{figure} \textbf{State transition.} To describe the brightness level, an indicator $L^b$ is computed on the brightness component $V$ of the image in HSV color space. $L^b$ lies within the range $[-1, 1]$, and changing the brightness level $L^b$ means changing the image brightness. A negative value means that the image is dark, and a positive value means that the image is bright; the larger the absolute value, the darker or brighter the image. To calculate $L^b$, the method in this paper separates $L^b$ from the brightness component $V$ of the current image. In other words, the brightness component $V$ is decomposed into two parts: the brightness level $L^b$ (a scalar) and a component called $V_{base}$ (a matrix). Eq. \eqref{eq:eq2} describes the decomposition. \begin{equation}\label{eq:eq2} V(t)=\left\{ \begin{array}{ll} (1+L^b(t))V_{base} & {-1 \leq L^b < 0}\\ (1-L^b(t))V_{base}+255L^b(t) & {0 \leq L^b \leq 1}\\ \end{array} \right. \end{equation} In brightness adjustment, only $L^b$ needs to be computed; $V_{base}$ is a constant basis for each image. On this basis, the brightness component $V$ at any level can be obtained. The relationship between the grayscale value of $V_{base}$ and the grayscale value of $V$ under different $L^b$ is shown in Fig. \ref{fig:fig3}, so estimating $V_{base}$ is crucial. As shown in Eq. \eqref{eq:eq3}, $L^b(0)$ can be calculated; $V_{base}$ is then obtained by letting $t=0$ and substituting $L^b(0)$ into Eq. \eqref{eq:eq2}. \begin{equation}\label{eq:eq3} L^b(0)=\frac{d}{255}-1,\quad d\approx\frac{\sum_{i}p_i}{6} \end{equation} Here $\{p_{0},p_{0.1},p_{0.2},\dots,p_{1}\}$ are the $11$ quantiles of the $V$ component, indicated by red dots in Fig. \ref{fig:fig2}(a) and (b), and $d$ in Eq. \eqref{eq:eq3} is indicated by a solid red line in Fig. \ref{fig:fig2}(a) and (b). The figure also illustrates the principle of using the quantiles to estimate $d$: after pairing the quantiles (such as $\{(p_{0},p_{1}),(p_{0.1},p_{0.9}),\dots\}$), the sum of each pair is an estimate of $d$, and all estimates are averaged to obtain a more accurate estimate of $d$, i.e., Eq. \eqref{eq:eq3}. \textbf{Action design.} The essence of action design is to change the brightness level of the image. The brightness level $L^b(t+1)$ is updated from $L^b(t)$ by Eq. \eqref{eq:eq4}, which also prevents $L^b$ from leaving the range $[-1,1]$. For dark images, the change of $L^b$ is greater under the brightening action, and for bright images the change of $L^b$ is greater under the darkening action. In consequence, the agent can still choose a good action in the next step even if it takes a bad action in this step. \begin{equation}\label{eq:eq4} L^b(t+1)=\left\{ \begin{array}{lr} 0.9L^b(t)+0.1\times 1 & {a^b(t)=a^b_1}\\ 0.9L^b(t)+0.1\times (-1) & {a^b(t)=a^b_2}\\ \end{array} \right. \end{equation} The reason why $V(t+1)$ is not obtained by multiplying $V(t)$ by a coefficient is that grayscale values truncated at 255 would be difficult to recover in subsequent brightness adjustments. In addition, such a scheme easily leads to a situation in which the two actions are selected alternately, since taking the two actions in turn would return the state to its origin. The estimation and update of $L^b$ are sketched below.
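The following sketch mirrors Eqs. \eqref{eq:eq3} and \eqref{eq:eq4}; it is illustrative only, and all names are our own.
\begin{verbatim}
import numpy as np

def estimate_brightness_level(v):
    # L^b(0) via Eq. (3): take the 11 quantiles of V and use the
    # averaged pairwise sums as an estimate of d.
    p = np.quantile(v, np.linspace(0, 1, 11))
    d = p.sum() / 6
    return d / 255 - 1

def update_brightness_level(level, brighten):
    # Eq. (4): move L^b towards +1 (brighten) or -1 (darken).
    return 0.9 * level + 0.1 * (1 if brighten else -1)
\end{verbatim}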
\subsection{Automatic Scale Adjustment} The agent takes the extracted features as the state and takes the optimal action to transform the state. The feature extraction part mainly introduces the acquisition of $f^s$. The state transition part introduces an indicator $L^s$ for roughly estimating the scale level of an image. The action design part proposes a set of scale adjustment actions. \textbf{Feature extraction.} $f^s$ is a statistical histogram of object areas. Firstly, the detector $D$ is applied to the image, and a series of bounding boxes is obtained. A histogram is then computed on the areas of these bounding boxes. Because areas rather than side lengths are counted, the bin widths are designed to widen, growing quadratically. The specific bin edges are [$0$, $9^2$, $10^2$ $\cdots$ $24^2$, $27^2$ $\cdots$ $75^2$, $80^2$ $\cdots$ $175^2$, $182^2$ $\cdots$ $245^2$, +$\infty$]. With this manual design, the feature $f^s$ is 64-dimensional. Since not every image contains many objects, and some images contain none at all, the extracted histogram is a very sparse vector, which is unfavourable for subsequent training. The sparse feature is therefore convolved with a Gaussian kernel to make it less sparse. As before, $f^c$ and $f^s$ are concatenated to obtain $s^s$, a 576-dimensional vector. \textbf{State transition.} Similarly, a scale level $L^s$ is defined for adjusting the size of images. $L^s$ lies within the range $[-1, 1]$, and changing the scale level $L^s$ means changing the size of the image. It is worth noting that the resolution of the image is not absolutely related to the scale level $L^s$: $L^s$ only describes the average area of the objects in an image. A negative value means that the average object area is small, and a positive value means that it is large; the larger the absolute value, the smaller or larger the average area. In step $t$, bilinear interpolation is used to resize the image $img^s(t)$ to $\theta^{L^s(t)}$ times its size, giving a new image $img^s(t+1)$, as expressed by Eq. \eqref{eq:eq5}. After $T$ steps, the scale of the new image is $\theta^{\sum_{t}^{T}L^s(t)}$ times that of the original image. \begin{equation}\label{eq:eq5} img^s(t+1)=Resize(img^s(t), \theta^{L^s(t)}) \end{equation} After defining the scale level $L^s$, the $L^s$ corresponding to the original image must be determined, that is, $L^s(0)$ must be estimated. Eq. \eqref{eq:eq6} shows the estimation. \begin{equation}\label{eq:eq6} L^s(0)=\frac{1}{2}{log}_{\theta}(\frac{\alpha}{\alpha_0}) \end{equation} Here $\alpha$ refers to the average area of the objects, and $\alpha_0$ and $\theta$ are auxiliary parameters. $\alpha_0$ represents a prior average area with a value of $96^2$; this is the threshold between the areas of medium-size and large-size objects in the COCO dataset evaluation criteria. $\theta$ is set to $8$. With this setting, images with an average object area between $16^2$ and $768^2$ correspond to $L^s$ in $[-1, 1]$, and average areas outside this range give $L^s$ the value $1$ or $-1$. \textbf{Action design.} $L^s$ is adjusted in a similar way to Eq. \eqref{eq:eq4}; Eq. \eqref{eq:eq7} is the adjustment method of $L^s$. \begin{equation}\label{eq:eq7} L^s(t+1)=\left\{ \begin{array}{lr} 0.95L^s(t)+0.05\times 1 & {a^s(t)=a^s_1}\\ 0.95L^s(t)+0.05\times (-1) & {a^s(t)=a^s_2}\\ \end{array} \right. \end{equation} In this way, $L^s$ is prevented from leaving the range $[-1,1]$. For images with a large average object area, the change of $L^s$ is greater under the zoom-out action, and for images with a small average object area, the change of $L^s$ is greater under the zoom-in action. In consequence, even if the agent takes a bad action in this step, it can still choose a good action in the next step. A sketch of the scale-level estimation and update is given below.
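The following sketch mirrors Eqs. \eqref{eq:eq5}--\eqref{eq:eq7}. It is illustrative only: all names are our own, and reading $\theta^{L^s}$ as a factor on the side lengths is our assumption.
\begin{verbatim}
import numpy as np
from PIL import Image

THETA, ALPHA_0 = 8, 96 ** 2

def estimate_scale_level(areas):
    # L^s(0) via Eq. (6), clipped to [-1, 1].
    level = 0.5 * np.log(np.mean(areas) / ALPHA_0) / np.log(THETA)
    return float(np.clip(level, -1.0, 1.0))

def update_scale_level(level, zoom_in):
    # Eq. (7): move L^s towards +1 (zoom in) or -1 (zoom out).
    return 0.95 * level + 0.05 * (1 if zoom_in else -1)

def resize(img, level):
    # Eq. (5): bilinear resize by a factor of theta**level.
    factor = THETA ** level
    w, h = img.size
    return img.resize((int(w * factor), int(h * factor)),
                      Image.BILINEAR)
\end{verbatim}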
\section{Experiments} \subsection{Datasets and Setting} In order to verify the effectiveness of the method, this paper carries out experiments on a remote sensing image dataset. The details of the dataset and the parameter settings are introduced next. \textbf{Dataset.} Our interest focuses mainly on aircraft; we collected many very high resolution remote sensing images and built a dataset for aircraft detection. The dataset consists of $13,078$ training images and $5,606$ test images. These are high quality images obtained under normal conditions. In fact, due to the complexity of the environment, it is impossible to always obtain high quality images from imaging devices in the real world; for example, if the environment is too dark, the quality of the acquired image will be low. In this paper, the degradation of the test set with respect to brightness and scale is performed by simulation. As shown in Fig. \ref{fig:fig1}, each test image is subjected to four degradation operations, so that five images are obtained from it. In this way, $28,030$ test images are obtained in total. In the following, all models are trained on the degraded dataset. \textbf{Settings.} In this paper, the detector is the Faster RCNN model, trained on the training set of $13,078$ images mentioned above. Whether the model backbone is VGG16, ResNet50 or ResNet101, the anchor scales are set to $(4, 8, 16, 32)$ and the number of iterations is set to $110000$. The remaining settings follow the model defaults. The agents for brightness and scale adjustment have exactly the same structure, both using the Double DQN model. Double DQN is a variant of the DQN algorithm that eliminates the overestimation of Q-values and is more stable. The agent network is a six-layer fully connected neural network with $512, 512, 512, 512, 512, 2$ neurons in the respective layers. When training the agent networks, the $13,078$ training images are randomly degraded (changing brightness and scale); the degraded images account for 80\% of the total training image set. The Adam optimizer is used for agent network learning with a base learning rate of $0.001$. The brightness-adjusting agent network is trained for $120,000$ iterations, and the scale-adjusting agent network for $40,000$ iterations. During the training process, action selection adopts the $\epsilon$-greedy method, which selects a random action with a small probability and the optimal action otherwise. \begin{table}[tb] \small \centering \caption{Performance comparison of different methods.} \setlength{\tabcolsep}{0.9mm}{ \begin{tabular}{lcccccc} \toprule Method+Backbone & $AP$ & $AP^{50}$ & $AP^{75}$ & $AP^S$ & $AP^M$ & $AP^L$ \\ \midrule DPMv5 (benchmark) & - & 0.338 & - & - & - & - \\ \midrule RetinaNet+VGG16 & 0.376 & 0.585 & 0.431 & 0.249 & 0.492 & 0.526 \\ RetinaNet+ResNet50 & 0.446 & 0.674 & 0.515 & 0.297 & 0.591 & 0.588 \\ RetinaNet+ResNet101 & 0.503 & 0.732 & 0.596 & 0.338 & 0.654 & 0.681 \\ SSD321+ResNet101 & 0.417 & 0.661 & 0.475 & 0.200 & 0.594 & 0.703 \\ DSSD321+ResNet101 & 0.426 & 0.666 & 0.485 & 0.196 & 0.610 & 0.739 \\ YOLOv2+DarkNet19 & 0.407 & 0.632 & 0.472 & 0.202 & 0.573 & 0.701 \\ YOLOv3+DarkNet53 & 0.491 & 0.808 & 0.553 & \textbf{0.441} & 0.574 & 0.401 \\ \midrule R-FCN+ResNet50 & 0.422 & 0.705 & 0.461 & 0.223 & 0.576 & 0.690 \\ R-FCN+ResNet101 & 0.427 & 0.713 & 0.464 & 0.225 & 0.582 & 0.694 \\ Faster RCNN+VGG16 & 0.441 & 0.750 & 0.469 & 0.273 & 0.587 & 0.652 \\ Faster RCNN+Res50 & 0.455 & 0.768 & 0.482 & 0.273 & 0.601 & 0.700 \\ Faster RCNN+Res101 & 0.479 & 0.784 & 0.525 & 0.300 & 0.626 & 0.703 \\ \midrule RL-AOD+VGG16 & 0.530 & \textbf{0.822} & 0.608 & 0.355 & \textbf{0.674} & \textbf{0.751} \\ RL-AOD+ResNet50 & 0.519 & 0.815 & 0.590 & 0.346 & 0.661 & 0.734 \\ RL-AOD+ResNet101 & \textbf{0.531} & \textbf{0.822} & \textbf{0.612} & 0.361 & 0.664 & 0.750 \\ \bottomrule \end{tabular}} \label{tab:tab1} \end{table} \begin{table}[tb] \normalsize \centering \caption{Performance comparison of different parameter settings of \emph{RL-AOD}. FR refers to the original Faster RCNN method. $B$ refers to the brightness adjustment. $S$ refers to the scale adjustment. $2$ and $4$ represent the maximum step $T$ ($T$ is defined in Alg. \ref{alg:alg1}).
$\ast$ represents the result of testing on the undamaged normal dataset.} \setlength{\tabcolsep}{0.9mm}{ \begin{tabular}{lcccccc} \toprule Method+Backbone & $AP$ & $AP^{50}$ & $AP^{75}$ & $AP^S$ & $AP^M$ & $AP^L$ \\ \midrule FR+VGG16 & 0.441 & 0.750 & 0.469 & 0.273 & 0.587 & 0.652 \\ B2+VGG16 & 0.498 & 0.821 & 0.542 & 0.312 & 0.650 & 0.741 \\ BS2+VGG16 & 0.515 & 0.811 & 0.585 & 0.339 & 0.662 & 0.736 \\ B4+VGG16 & 0.503 & 0.823 & 0.554 & 0.314 & 0.655 & 0.754 \\ BS4+VGG16 & 0.530 & 0.822 & 0.608 & 0.355 & 0.674 & 0.751 \\ \midrule FR+Res50 & 0.455 & 0.768 & 0.482 & 0.273 & 0.601 & 0.700 \\ B2+Res50 & 0.499 & 0.825 & 0.538 & 0.308 & 0.646 & 0.744 \\ BS2+Res50 & 0.509 & 0.805 & 0.569 & 0.333 & 0.650 & 0.730 \\ B4+Res50 & 0.508 & 0.831 & 0.552 & 0.318 & 0.656 & 0.753 \\ BS4+Res50 & 0.519 & 0.815 & 0.590 & 0.346 & 0.661 & 0.734 \\ \midrule FR+Res101 & 0.479 & 0.784 & 0.525 & 0.300 & 0.626 & 0.703 \\ B2+Res101 & 0.510 & 0.824 & 0.558 & 0.322 & 0.657 & 0.755 \\ BS2+Res101 & 0.524 & 0.813 & 0.602 & 0.352 & 0.661 & 0.745 \\ B4+Res101 & 0.514 & 0.833 & 0.565 & 0.324 & 0.660 & 0.763 \\ BS4+Res101 & 0.531 & 0.822 & 0.612 & 0.361 & 0.664 & 0.750 \\ \midrule FR$\ast$+Res101 & 0.574 & 0.875 & 0.649 & 0.401 & 0.711 & 0.785 \\ BS4$\ast$+Res101 & 0.575 & 0.873 & 0.651 & 0.402 & 0.711 & 0.786 \\ \bottomrule \end{tabular}} \label{tab:tab2} \end{table} \begin{figure*}[thb] \begin{minipage}[t]{0.16\linewidth} \centering \includegraphics[width=2.6cm]{p1o} \centerline{(a)} \end{minipage} \begin{minipage}[t]{0.16\linewidth} \centering \includegraphics[width=2.6cm]{p2o} \centerline{(b)} \end{minipage} \begin{minipage}[t]{0.16\linewidth} \centering \includegraphics[width=2.6cm]{p3o} \centerline{(c)} \end{minipage} \begin{minipage}[t]{0.16\linewidth} \centering \includegraphics[width=2.6cm]{p4o} \centerline{(d)} \end{minipage} \begin{minipage}[t]{0.16\linewidth} \centering \includegraphics[width=2.6cm]{p5o} \centerline{(e)} \end{minipage} \begin{minipage}[t]{0.16\linewidth} \centering \includegraphics[width=2.6cm]{p6o} \centerline{(f)} \end{minipage} \\ \protect\\ \\ \begin{minipage}[t]{0.132\linewidth} \centering \includegraphics[width=2.1cm]{p1a} \centerline{(g)} \end{minipage} \begin{minipage}[t]{0.132\linewidth} \centering \includegraphics[width=2.1cm]{p2a} \centerline{(h)} \end{minipage} \begin{minipage}[t]{0.206\linewidth} \centering \includegraphics[width=3.4cm]{p3a} \centerline{(i)} \end{minipage} \begin{minipage}[t]{0.206\linewidth} \centering \includegraphics[width=3.4cm]{p4a} \centerline{(j)} \end{minipage} \begin{minipage}[t]{0.102\linewidth} \centering \includegraphics[width=1.6cm]{p5a} \centerline{(k)} \end{minipage} \begin{minipage}[t]{0.183\linewidth} \centering \includegraphics[width=3cm]{p6a} \centerline{(l)} \end{minipage} \caption{Results comparison. The images (a, b, c, d, e, f) in the first row are the detection results on images degraded with respect to brightness and scale. The images (g, h, i, j, k, l) in the second row are the results after adaptive learning by \emph{RL-AOD}. (a, b, c) simulate over-exposure, and (d, e, f) simulate under-exposure. \emph{RL-AOD} finds the best image attributes (brightness and scale) through sequential decisions, so that the detection performance is improved.} \label{fig:fig5} \end{figure*} \subsection{Results and Discussions} In this section, \emph{RL-AOD} will be compared to different state-of-the-art methods, and \emph{RL-AOD} with different parameter settings will also be compared.
The following analysis refers to Fig. \ref{fig:fig5}, Tab. \ref{tab:tab1} and Tab. \ref{tab:tab2}. \textbf{Comparison of different methods.} The performances of the different methods are listed in Tab. \ref{tab:tab1}. DPM, a classic non-deep-learning object detection method based on the feature-detector-descriptor pipeline, is selected as a benchmark. It can be observed that the \emph{RL-AOD}+ResNet101 method achieves the largest $AP$ of $0.531$. \emph{RL-AOD} based on VGG16 and ResNet50 is also superior to other classical methods, such as Faster RCNN, YOLO and SSD. To ensure the fairness of the comparison, all methods, including the mainstream methods and the method in this paper, are trained and tested on the damaged dataset. \emph{RL-AOD} therefore achieves better performance on a dataset whose imaging configuration does not match the detector. Since the detector used by \emph{RL-AOD} is Faster RCNN, the two methods can be compared further. With the same backbone, \emph{RL-AOD} outperforms Faster RCNN by $25.7\%$, $10.3\%$ and $8.9\%$ with respect to the indicators $AP^S$, $AP^M$ and $AP^L$ averaged over the backbones. This indicates that \emph{RL-AOD} is particularly promising for detecting small objects. Faster RCNN performs multiple pooling operations during feature extraction, and rounding in the RoI pooling layer leads to a loss of precision; in consequence, Faster RCNN is limited in detecting small objects. \emph{RL-AOD} compensates for this defect to some extent through adaptive scale adjustment. Although YOLOv3+DarkNet53 is optimal in detecting small objects, it has the lowest detection accuracy of $0.401$ for large objects. Among all methods, \emph{RL-AOD}+ResNet101 is sub-optimal in detecting small objects. Some results of \emph{RL-AOD} are shown in Fig. \ref{fig:fig5}. After adaptive attribute learning, many previously missed objects are detected. In Fig. \ref{fig:fig5}, images (a, b, c) simulate over-exposure, and images (d, e, f) simulate under-exposure. The results show that both over-exposure and under-exposure cause missed detections, thereby reducing performance. Images (g, h, i, j, k, l) show the results after adaptive learning by \emph{RL-AOD}: the number of missed detections is significantly reduced, and the performance improves. In summary, starting from a traditional detection method, \emph{RL-AOD} gradually adjusts the image attributes (brightness and scale) with the help of deep reinforcement learning, so that damaged images with poor detection results become better suited for detection. This is very meaningful work for remote sensing images. \textbf{Comparison of different parameter settings.} To understand how \emph{RL-AOD} works, the test images are evaluated under different parameter settings. The performances are listed in Tab. \ref{tab:tab2}. Firstly, with the same backbone, the performance improvement from brightness adjustment is greater than that from scale adjustment. For example, the difference between the AP of B4+Res101 and FR+Res101 ($0.035$) is greater than the difference between the AP of BS4+Res101 and B4+Res101 ($0.017$). At the maximum step $T$ of $4$, the average AP improvements from brightness and scale adjustment are $0.050$ and $0.018$, respectively. Secondly, with a maximum step of $4$, all indicators are better than with a maximum step $T$ of $2$.
This shows that \emph{RL-AOD} is effective in adjusting the attributes of images step by step, making them more adaptable to the detector. Thirdly, the sequential decision operations do not reduce the detection accuracy on normal, undamaged images: Tab. \ref{tab:tab2} shows that the AP values of FR$\ast$+Res101 and BS4$\ast$+Res101 are very close, $0.574$ and $0.575$ respectively. Fourthly, comparing FR+Res101 with FR$\ast$+Res101, when images are damaged and unsuited for detection, the performance of Faster RCNN drops greatly: the AP falls by $9.5$ points. This confirms that if, during in-orbit imaging, the image acquisition process does not take into account the specific requirements of object detection and related tasks, and no evaluation is carried out, the detection results may be far from optimal. Finally, it is worth noting that although scale adjustment improves the detection of small and medium-size objects, it reduces the detection accuracy for some large-size objects. The reason is that small and medium-size objects form the majority in the remote sensing dataset, and it is generally easier to improve performance by zooming in than by zooming out. This causes an imbalance between large and small objects: when training the agent, the agent becomes more inclined to enlarge the image. On the whole, however, the advantages of \emph{RL-AOD} outweigh the disadvantages, as the AP is still improved. The introduction of sequential decision-making methods, such as deep reinforcement learning, makes it possible to perform detection while adjusting image attributes. \section{Conclusion} This paper proposes an active object detection method, \emph{RL-AOD}, which uses deep reinforcement learning to help the object detection module actively adjust image attributes (such as brightness and scale). Traditional object detection methods are limited by their passive nature, whereas the active method in this paper can adapt to various situations (such as insufficient brightness). Experiments demonstrate the necessity of adaptive brightness and scale adjustment, and the effectiveness of \emph{RL-AOD}. Future work will focus on DDPG, which produces continuous actions, and will also pay more attention to real-time performance and model speed. \section*{Acknowledgments} This research was supported by the Major Project for New Generation of AI under Grant No. $2018AAA0100400$, and the National Natural Science Foundation of China under Grants $62071466$ and $91646207$. \newpage \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:introduction} Algebraic approaches play a fundamental role in mathematics and computing. Algebraic axioms for groups, rings, modules or lattices, for instance, capture certain features of concrete models in an abstract uniform fashion. Fundamental constructions, such as products, quotients or adjunctions, can be presented and investigated in algebra in simple generic ways. This article investigates the notion of \emph{convolution} or \emph{Cauchy product} from formal language theory~\cite{Handbook,BerstelReutenauer} as such a fundamental notion, supporting the generic construction of various models and calculi that are of interest to computing. This provides a unified structural view on various computational models known from the computer science literature. Questions of summability and divergence aside, the operational content of convolution is simple: an entity is separated in all possible ways into two parts, two functions are simultaneously applied to these parts, their outputs are combined, and the sum over all possible combinations is taken. Suppose $f$ and $g$ are two functions from an algebra $S$ (with suitable multiplication $\circ$) into an algebra $Q$ (with suitable multiplication $\odot$ and suitable summation ${\rm \Sigma}$). Using the nomenclature of formal language theory, the convolution of $f$ and $g$ for an element $x\in S$ is defined as \begin{equation*} (f\otimes g)\, x \ = \sum_{x=y\circ z} f\, y \odot g\, z. \end{equation*} Hence $x$ is first separated in all possible ways into parts $y$ and $z$. The function $f$ is then applied to $y$ and $g$ to $z$. After that, the results of these applications are combined in $Q$. The convolution is indeed the sum over all possible splittings of $x$. In formal language theory, functions $f:S\to Q$ are also known as power series---more precisely as formal or rational power series. This notion is slightly different from that commonly used in algebra, as are the notions of convolution or Cauchy product. In formal language theory, moreover, power series usually map elements of the free monoid $S=X^\ast$ over the finite alphabet $X$---the set of words or strings over $X$---into a semiring $(Q,+,\odot,0)$. Since every word can only be split into finitely many prefix/suffix pairs, the summation occurring in convolution is finite and therefore well defined. A simple example of $Q$ is the boolean semiring with $+$ as disjunction and $\odot$ as conjunction. Power series then become characteristic functions representing languages, telling us whether or not some word is in some language, and convolution becomes language product. In more general settings, $Q$ can model probabilities or weights associated with words; a Handbook has been devoted to the subject~\cite{Handbook}. This example alone underpins the power of power series and convolution. Complementing this body of work, we generalise the type of power series, rebalancing the assumptions on source algebras $S$ and target algebras $Q$ and thus shifting the focus to other applications. Among those, we show that, for suitable algebras $S$ and $Q$, convolution becomes \emph{separating conjunction} of separation logic (cf.~\cite{COY07}), or alternatively the \emph{chop} operator of interval temporal logics~\cite{Mos00}. Both can in fact be combined, for instance within interval logics, to provide new notions of concurrency for this setting.
In addition, we use power series to capture, in a generic manner, the algebraic properties of convolution for wide classes of instances and show how Hoare-style compositional inference systems can be derived uniformly for all of them. More concretely, the main contributions of this article are as follows. \begin{itemize} \item Considering power series that map arbitrary partial semigroups into quantales, we prove a generic lifting result showing that spaces of power series form quantales as well. \item This lifting result is generalised by making the target quantale partial, by considering bi-semigroups and bi-quantales with two multiplication operations, by mapping two separate semigroups into a bi-quantale, and by setting up source semigroups suitable for distinguishing between finite and infinite system behaviours. \item We show that algebras of state and predicate transformers arise as instances of the generic lifting theorem. \item Propositional Hoare calculi (without assignment axioms) are derived within the power series quantale in a generic fashion, and we discuss some ramifications of deriving concurrency rules in this setting. \item We provide a series of instances of the lifting result, showing how quantales of languages, binary relations, matrices and automata, sets of paths and traces as well as interval functions and predicates arise from a non-commutative notion of convolution. \item In the commutative case, we present the assertion quantales of separation logic with separation based on general resource monoids as well as multisets, sets with disjoint union and heaplets. We also present a separation operation on finite vectors, which leads to a notion of convolution-based parallelism for linear transformations. \item Both kinds of instances are combined into a new algebraic approach to stream interval functions and predicates, which allow the logical analysis of trajectories of dynamic and real-time systems. This provides a convolution-based spatial concurrency operation in addition to the conventional temporal chop operator. \item We illustrate how convolution as separating conjunction allows us to derive the frame rule of separation logic by simple equational reasoning. \end{itemize} Our lifting results are generic in the following sense: after setting up a suitable partial semigroup---words under concatenation, closed intervals under chop, multisets under addition or resource monoids under resource aggregation---the space of all functions into a quantale automatically forms a quantale with convolution as multiplication. When the target quantale is formed by the booleans, power series can be identified with predicates or, via their extensions, with characteristic functions of sets. Multiplication in the booleans becomes conjunction and convolution then reduces to \begin{equation*} (f\otimes g)\, x = \sum_{x=y\circ z} f\, y \sqcap g\, z. \end{equation*} If $S$ is a set of resources and $\circ$ a (commutative) notion of resource aggregation, then convolution is separating conjunction. If $S$ is a set of closed intervals and $\circ$ splits an interval into two disjoint parts, then convolution is chop. In that sense, separating conjunction can be seen as a language product over resources and chop as a language product over intervals. Here and in all similar cases, our lifting result implies that the predicates of type $S \to \mathbb{B}$ form an assertion quantale; in the first case that of separation logic; in the second one that of interval logics.
But our results cover models beyond the booleans, for instance probabilistic or weighted predicates or other kinds of functions. In general, the convolution has a strongly spatial and concurrent flavour whenever the operations $\circ$ and $\odot$ are commutative. Similarly, for all instances of this lifting, the construction of Hoare logics is generic because it works for arbitrary quantales~\cite{HMSW11}. Finally, due to the emphasis on functions instead of sets, the approach is constructive so long as the underlying source and target algebras are. The remainder of this article is organised as follows. Section~\ref{sec:algebr-prel} recalls the basic algebraic structures needed. Section~\ref{sec:fpsquantale} introduces our approach to power series with partial semigroups as source algebras and quantales as target algebras; it also proves our basic lifting result. Section~\ref{sec:booleancase} discusses the case of power series into the boolean quantale, when convolution becomes a possibly non-commutative notion of separating conjunction. Sections~\ref{sec:fpsquantaleexamples} and~\ref{sec:fpscomquantaleexamples} present non-commutative and commutative instances of our lifting lemma; Section~\ref{sec:fpsquantaleexamples} discusses, among others, the chop operation over intervals, while Section~\ref{sec:fpscomquantaleexamples} focuses on variants of separating conjunction. Section~\ref{sec:transformers} shows how state and predicate transformers arise in the power series setting. Section~\ref{sec:partial-formal-power} presents a lifting result for power series into partial quantales with an example. Section~\ref{sec:formal-power-series} generalises the lifting result to bi-semigroups and bi-quantales and presents two examples. Section~\ref{sec:fpsbiquantale} generalises the result to power series from two semigroups into a bi-quantale; Section~\ref{sec:biquantale-examples} presents in particular the quantale of stream interval functions, which is based on this generalisation. Section~\ref{sec:futuristic} further generalises the approach to applications with finite and infinite behaviours. Section~\ref{sec:interchange} shows that the interchange laws of concurrent Kleene algebras fail in general power series quantales. Based on this, Section~\ref{sec:hoare} discusses how generic Hoare logics can be developed over power series quantales. Section~\ref{sec:frame} shows how the approach can be used for deriving the frame rule of separation logic, using convolution as the algebraic notion of separating conjunction. Section~\ref{sec:conclusion} contains a conclusion. \section{Algebraic Preliminaries} \label{sec:algebr-prel} In this section, we briefly recall the most important mathematical structures used in this article: partial semigroups and monoids, their commutative variants, semirings and dioids as well as quantales. We also consider such structures with two operations of composition or multiplication, that is, bi-semigroups, bi-monoids, bi-semirings and bi-quantales. \paragraph{Semigroups.} A \emph{partial semigroup} is a structure $(S,\cdot,\bot)$ such that $(S,\cdot)$ is a semigroup and $x\cdot \bot =\bot = \bot \cdot x$ holds for all $x \in S$. It follows that $\bot \notin S$, which is significant for various definitions in this article. A \emph{partial monoid} is a partial semigroup with multiplicative unit $1$. We often write $(S,\cdot)$ for partial semigroups and $(S,\cdot, 1)$ for partial monoids, leaving $\bot$ implicit.
A (partial) semigroup $S$ is \emph{commutative} if $x\cdot y=y\cdot x$ for all $x,y\in S$. Henceforth, we use $\cdot$ for a general multiplication and $\ast$ for a commutative one. An important property of semigroups is \emph{opposition duality}. For every semigroup $(S,\cdot)$, the structure $(S,\odot)$ with $x\odot y=y\cdot x$ for all $x,y\in S$ forms a semigroup; the \emph{opposite} of $S$. Similarly, the opposite of a monoid is a monoid. The definitions of semigroups and monoids generalise to $n$ operations, but we are mainly interested in the case $n=2$. A \emph{partial bi-semigroup} is a structure $(S,\circ,\bullet)$ such that $(S,\circ)$ and $(S,\bullet)$ are partial semigroups. \emph{Partial bi-monoids} $(S,\circ,\bullet,1,1')$ can be obtained from them as standard. \paragraph{Semirings.} A \emph{semiring} is a structure $(S,+,\cdot,0)$ such that $(S,+,0)$ is a commutative monoid, $(S,\cdot)$ a semigroup, and the distributivity laws $x\cdot (y+z)=x\cdot y+x\cdot z$ and $(x+y)\cdot z= x\cdot z+y\cdot z$ as well as the annihilation laws $0\cdot x= 0$ and $x\cdot 0=0$ hold. A semiring is \emph{unital} if the multiplicative reduct is a monoid (with unit $1$). A \emph{dioid} is an additively idempotent semiring $S$, that is, $x+x=x$ holds for all $x\in S$. The additive reduct of a dioid thus forms a semilattice with order defined by $x\le y\Leftrightarrow x+y=y$. Obviously, the classes of semirings and dioids are closed under opposition duality. A \emph{bi-semiring} is a structure $(S,+,\circ,\bullet,0)$ such that $(S,+,\circ,0)$ and $(S,+,\bullet,0)$ are semirings; a \emph{trioid} is an additively idempotent bi-semiring. A bi-semiring or trioid is \emph{unital} if the underlying bi-semigroup is a bi-monoid. \paragraph{Quantales.} A \emph{quantale} is a structure $(Q,\le,\cdot)$ such that $(Q,\le)$ is a complete lattice, $(Q,\cdot)$ is a semigroup and the distributivity axioms \begin{equation*} x\cdot (\sum_{i\in I}y_i) = \sum_{i\in I} (x\cdot y_i),\qquad (\sum_{i\in I} x_i)\cdot y = \sum_{i\in I}(x_i\cdot y) \end{equation*} hold, where $\sum X$ denotes the supremum of a set $X\subseteq Q$. Similarly, we write $\prod X$ for the infimum of $X$. The distributivity laws imply, in particular, the isotonicity laws \begin{equation*} x \le y \Rightarrow z\cdot x \le z\cdot y, \qquad x\le y \Rightarrow x\cdot z \le y\cdot z. \end{equation*} A quantale is \emph{commutative} and \emph{partial} if the underlying semigroup is as well; \emph{unital} if the underlying semigroup is a monoid; and \emph{distributive} if the infinite distributivity laws \begin{equation*} x\sqcap (\sum_{i\in I} y_i) = \sum_{i\in I} (x\sqcap y_i),\qquad x+ (\prod_{i\in I} y_i) = \prod_{i\in I} (x+ y_i) \end{equation*} hold. A \emph{boolean quantale} is a distributive quantale in which every element has a complement. The boolean unital quantale $\mathbb{B}$, where multiplication $\cdot$ coincides with meet, plays an important role in this article. A \emph{bi-quantale} is a structure $(Q,\le,\circ,\bullet)$ such that $(Q,\le,\circ)$ and $(Q,\le,\bullet)$ are quantales. It is unital if the two underlying semigroups are monoids. It is easy to see that every (unital) quantale is a (unital) dioid and every (unital) bi-quantale a (unital) trioid. In particular, $0=\sum\emptyset =\sum_{i\in\emptyset} x_i$ and annihilation laws as in dioids follow from this as special cases of distributivity. 
\section{Power Series Quantales}\label{sec:fpsquantale} Formal (or rational) power series~\cite{BerstelReutenauer} have been studied in formal language theory for decades. For brevity, we call them \emph{power series} in this article. In formal language theory, a power series is simply a function from the free monoid $X^\ast$ over a finite alphabet $X$ into a suitable algebra $Q$, usually a semiring or dioid $(Q,+,\cdot,0,1)$. Operations on $f,g:X^\ast\to Q$ are defined as follows. Addition is lifted pointwise, that is, $(f+g)\, x = f\, x + g\, x$. Multiplication is given by the \emph{convolution} or \emph{Cauchy product} \begin{equation*} (f\cdot g)\, x =\sum_{x=yz} f\, y\cdot g\, z, \end{equation*} where $yz$ denotes word concatenation and the sum in the convolution is finite since finite words can only be split in finitely many ways into prefix/suffix pairs. Furthermore, the \emph{empty power series} $\mathbb{O}$ maps every word to $0$, whereas the \emph{unit power series} $\mathbb{1}$ maps the empty word to $1$ and all other words to $0$. We write $Q^{X^\ast}$ for the set of power series from $X^\ast$ to $Q$ and, more generally, $Q^S$ for the class of functions of type $S\to Q$. The following lifting result is well known. \begin{proposition}\label{prop:fpslifting} If $(Q,+,\cdot,0,1)$ is a semiring (dioid), then so is $(Q^{X^\ast},+,\cdot,\mathbb{O},\mathbb{1})$. \end{proposition} This construction generalises from free monoids over finite alphabets to arbitrary partial semigroups or monoids. The sums in convolutions then become infinite due to infinitely many possible decompositions of elements. Here, due to potential divergence, these sums may not exist. However, we usually consider target algebras in which addition is idempotent and sums correspond to suprema. The existence of arbitrary suprema can then be covered by completeness assumptions. We fix suitable algebraic structures $S$ and $Q$. First, we merely assume that $S$ is a set, but for more powerful lifting results it is required to be a partial semigroup or partial monoid. For a family of functions $f_i:S\to Q$ and $i\in I$ we define \begin{equation*} (\sum_{i\in I} f_i)\, x = \sum_{i\in I} f_i\, x, \end{equation*} whenever the supremum on the right-hand side exists in $Q$. This comprises \begin{equation*} (f+g)\, x = f\, x + g\, x \end{equation*} as a special case. Since $x$ ranges over $S$, the constant $\bot$ is excluded as a value. Another special case is \begin{equation*} (\sum_{i\in\emptyset} f_i)\, x = (\sum \emptyset)\, x = \sum_{i\in\emptyset} f_i\, x = 0. \end{equation*} Hence, in particular, $\sum_{i\in\emptyset} f_i=\lambda x.\ 0$ and we write $\mathbb{O}$ for this function. We define the convolution \begin{equation*} (f\cdot g)\, x = \sum_{x=y\cdot z} f\, y\cdot g\, z, \end{equation*} where the multiplication symbol is overloaded to be used on $S$, $Q$ and $Q^S$. Again, this requires that the supremum on the right-hand side exists in $Q$. In the expression $x=y\cdot z$, the constant $\bot$ is again excluded as a value. Undefined splittings of $x$ are thus excluded from contributing to convolutions. Finally, whenever $S$ and $Q$ are endowed with suitable units, we define $\mathbb{1} : S\to Q$ as \begin{equation*} \mathbb{1}\, x = \begin{cases} 1, & \text{if } x = 1,\\ 0, & \text{otherwise}, \end{cases} \end{equation*} as for formal languages.
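To illustrate these definitions before stating the main lifting result, consider the following small Python sketch (our own illustration, not part of the formal development). For the boolean quantale, where finite suprema reduce to disjunction and multiplication to conjunction, the convolution of two power series over finite words can be computed directly, and it is precisely the language product.
\begin{verbatim}
def convolution(f, g):
    # (f . g) x = sum over all splittings x = y z of (f y) * (g z);
    # in the booleans: any(...) over f(prefix) and g(suffix).
    def h(x):
        return any(f(x[:i]) and g(x[i:]) for i in range(len(x) + 1))
    return h

# f recognises the language {"ab"}, g recognises {"c"};
# their convolution recognises the language product {"abc"}.
f = lambda w: w == "ab"
g = lambda w: w == "c"
h = convolution(f, g)
assert h("abc") and not h("ab")
\end{verbatim}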
Theorem~\ref{thm:quantale-lifting}, the main result in this section, shows that quantale laws lift from the algebra $Q$ to the function space $Q^S$ of power series under these definitions. On the way to this result we recall that semilattice and lattice structures lift to function spaces, a fundamental result of domain theory~\cite{AbramskyJung}. \begin{lemma}\label{lem:semilattice-lifting} Let $S$ be a set. If $(L,+,0)$ is a semilattice with least element $0$ then so is $(L^S,+,\mathbb{O})$. If $L$ is a complete lattice, then so is $L^S$. \end{lemma} \begin{proof}~ The semilattice lifting is covered by Proposition~\ref{prop:fpslifting}. As usual, $L^S$ is ordered by $f\le g\Leftrightarrow f+g=g$, and $\mathbb{O} \le f$ for all $f\in L^S$. If arbitrary suprema exist in $L$, then completeness lifts to $L^S$ by definition of $\sum_{i\in I}f_i$. Finally, every complete join-semilattice is a complete lattice. \end{proof} Infima, if they exist, are defined like suprema by pointwise lifting as \begin{equation*} (\prod_{i\in I}f_i)\, x = \prod_{i\in I}f_i\, x, \end{equation*} thus $(f\sqcap g)\, x = f\, x\sqcap g\, x$. \reflem{lem:semilattice-lifting} can then be strengthened. \begin{lemma}\label{lem:lattice-lifting} Let $S$ be a set. If $(D,+,\sqcap,0)$ is a (distributive) lattice with least element $0$, then so is $(D^S,+,\sqcap,\mathbb{O})$. Completeness and infinite distributivity laws between infima and suprema lift from $D$ to $D^S$. \end{lemma} \begin{proof} The join- and meet-semilattice laws for $+$ and $\sqcap$ follow from Lemma~\ref{lem:semilattice-lifting}. We need to verify absorption and distributivity. Let $f,g,h:S\to D$ and $x \in S$. \begin{itemize} \item $(f\sqcap (f+g))\, x = f\, x \sqcap (f\, x + g\, x) = f\, x$ by absorption on $D$. The proof of $f+(f\sqcap g)=f$ is lattice dual. \item The finite distributivity laws are special cases of the infinite ones below. \end{itemize} Completeness is covered by \reflem{lem:semilattice-lifting}. For infinite distributivity, \begin{equation*} (f\sqcap \sum_{i\in I} g_i)\, x = f\, x \sqcap \sum_{i\in I} g_i\, x = \sum_{i\in I} f\, x \sqcap g_i\, x = \sum_{i\in I} (f\sqcap g_i)\, x = (\sum_{i\in I} f\sqcap g_i)\, x. \end{equation*} The other distributivity law then follows from lattice duality. \end{proof} The final lifting result in this section deals with multiplicative structure as well. This requires $S$ to be a partial semigroup instead of a set. \begin{theorem}\label{thm:quantale-lifting} Let $(S,\cdot)$ be a partial semigroup. If $(Q,\le,\cdot)$ is a (distributive) quantale, then so is $(Q^S,\le,\cdot)$. In addition, commutativity in $Q$ lifts to $Q^S$ if $S$ is commutative; unitality in $Q$ lifts to $Q^S$ if $S$ is a partial monoid. \end{theorem} \begin{proof} Since $Q$ is a quantale, all infinite suprema and infima exist; in particular those needed for convolutions. The lifting to complete (distributive) lattices is covered by Lemma~\ref{lem:lattice-lifting}. It therefore remains to check the multiplicative monoid laws, distributivity of multiplication and annihilation. For left distributivity, for instance, \begin{align*} (f\cdot \sum_{i\in I} g_i)\, x = \sum_{x=y\cdot z} f\, y\cdot \sum_{i\in I} g_i\, z = \sum_{\substack{x=y\cdot z,\\ i\in I}} f\, y\cdot g_i\, z = \sum_{i\in I} (f\cdot g_i)\, x. \end{align*} The proof of right distributivity is opposition dual. Left distributivity ensures associativity, the proof of which lifts as with rational power series (Proposition~\ref{prop:fpslifting}).
The restriction to partial semigroups is insignificant as, in $x= y\cdot z$, the constraint $x\in S$ only rules out contributions of $y\cdot z=\bot$. The same holds for unitality proofs. Commutativity lifts from $S$ and $Q$ as follows: \begin{equation*} (f\cdot g)\, x = \sum_{x=y\cdot z} f\, y \cdot g\, z = \sum_{x=z\cdot y} g\, z \cdot f\, y =(g\cdot f)\, x. \end{equation*} \end{proof} Once more the distributivity laws on $Q^S$ imply the annihilation laws $\mathbb{O} \cdot f= \mathbb{O}$ and $f\cdot \mathbb{O} = \mathbb{O}$ for all $f:S\to Q$. When only finite sums are needed, $Q$ can be assumed to be a semiring or dioid instead of a quantale. The following corollary to \refthm{thm:quantale-lifting} provides an example. \begin{corollary}\label{cor:quantale-lifting-finite} Let $(S,\cdot)$ be a finite partial semigroup. If $(Q,+,\cdot,0)$ is a semiring, then so is $(Q^S,+,\cdot,\mathbb{O})$. In addition, idempotency in $Q$ lifts to $Q^S$; commutativity in $Q$ lifts to $Q^S$ if $S$ is commutative; unitality in $Q$ lifts to $Q^S$ if $S$ is a partial monoid. \end{corollary} As another specialisation, \refprop{prop:fpslifting} is recovered easily when $S$ is the free monoid over a given alphabet and $Q$ a semiring or dioid. \section{Power Series into the Boolean Quantale}\label{sec:booleancase} In many applications, the target quantale $Q$ is formed by the booleans $\mathbb{B}$. Power series are then of type $S\to\mathbb{B}$ and can be interpreted as characteristic functions or predicates. In fact, $\mathbb{B}^S$ is isomorphic to the power set of $S$, which, in turn, is in one-to-one correspondence with the set of all predicates over $S$, identifying predicates with their extensions. In this context, \refthm{thm:quantale-lifting} specialises to the powerset lifting of a partial semigroup or monoid $S$. For each $x\in S$, the boolean value $f\, x$ expresses whether or not $x$ is in the set corresponding to $f$. Powerset liftings have been studied widely in mathematics~\cite{Goldblatt,Brink}. They have various applications in program semantics, for instance as power domains (cf.~\cite{AbramskyJung}). \begin{corollary}\label{cor:powerset-lifting} Let $S$ be a partial (commutative) semigroup. Then $\mathbb{B}^S$ forms a (commutative) distributive quantale where $\mathbb{B}^S\cong 2^S$, $\le$ corresponds to $\subseteq$ and convolution $\cdot$ to the complex product \begin{equation*} X\cdot Y =\{x\cdot y \mid x\in X\wedge y\in Y\} \end{equation*} for all $X,Y\subseteq S$. If $S$ has unit $1$, then $\mathbb{B}^S$ has unit $\{1\}$. \end{corollary} Various instances of Corollary~\ref{cor:powerset-lifting} are discussed in Sections~\ref{sec:fpsquantaleexamples} and~\ref{sec:fpscomquantaleexamples}. The quantale $\mathbb{B}^S$ carries a natural logical structure with elements of $\mathbb{B}^S$ corresponding to predicates, suprema to existential quantification, infima to universal quantification and the lattice order to implication. In particular, $+$ corresponds to disjunction and $\sqcap$ to conjunction. More interesting is the logical interpretation of convolution \begin{equation*} (f\cdot g)\, x =\sum_{x=y\cdot z} f\, y\cdot g\, z \end{equation*} in the boolean quantale $\mathbb{B}^S$. The expression $x= y\cdot z$ denotes the decomposition or separation of the semigroup element $x$ into parts $y$ and $z$. The composition $f\, y\cdot g\, z=f\, y\sqcap g\, z$ in $\mathbb{B}$ models the conjunction of predicate $f$ applied to $y$ with predicate $g$ applied to $z$.
Finally, the supremum $\sum$ models the existential quantification over these conjunctions with respect to all possible decompositions of $x$. The commutative case of Corollary~\ref{cor:powerset-lifting} is immediately relevant to separation logic. In this context, the partial commutative semigroup $(S,\ast)$ is known as the \emph{resource semigroup}~\cite{COY07}; it provides an algebraic abstraction of the heap. Its powerset lifting $\mathbb{B}^S$ captures the algebra of resource predicates that form the assertions of an extended Hoare logic---the assertion quantale of separation logic. In this assertion quantale, separating conjunction is precisely convolution: the product $x=y\ast z$ on the resource semigroup $S$ decomposes or separates the resource or heap $x$ into heaplets $y$ and $z$, and the product $f\, y\ast g\, z=f\, y\sqcap g\, z$ in $\mathbb{B}$ once more conjoins $f\, y$ and $g\, z$; hence $x=y\ast z$ separates whereas $f\, y\ast g\, z=f\, y\sqcap g\, z$ conjoins. The concrete case of the heap is considered in more detail in Example~\ref{ex:separating-conjunction-heaplets}. The power series approach thus yields a simple algebraic view on a lifting to function spaces in which the algebraic operation of convolution into the booleans allows various interpretations, including that of a complex product, that of separating conjunction---commutative or non-commutative---and that of separating conjunction as a complex product. In the commutative setting it gives a simple account of the category-theoretical approach to O'Hearn and Pym's logic of bunched implication~\cite{OHearnP99}, in which convolution corresponds to coends and the quantale lifting is embodied by Day's construction~\cite{Day}. \section{Non-Commutative Examples}\label{sec:fpsquantaleexamples} After the conceptual development of the previous sections we now discuss a series of examples which underpin the universality and relevance of the notion of convolution in computing. All of them can be obtained as instances of \refthm{thm:quantale-lifting} after setting up partial semigroups or monoids appropriately. For all these structures, the lifting to the function space is then generic and automatic. The booleans often form a particularly interesting target quantale. This section considers only examples with a non-commutative notion of convolution; for commutative examples see Section~\ref{sec:fpscomquantaleexamples}. \begin{example}[Formal Languages]\label{ex:formal-languages} Let $(X^\ast,\cdot,\varepsilon)$ be the free monoid generated by the finite alphabet $X$ with $\varepsilon$ denoting the empty word. Let $Q$ form a distributive unital quantale. Then $Q^{X^\ast}$ forms a distributive unital quantale as well by Theorem~\ref{thm:quantale-lifting}. More precisely, since suprema in convolutions are always finite, one obtains the unital dioid $(Q^{X^\ast},+,\cdot,\mathbb{O},\mathbb{1})$ by lifting from a dioid $(Q,+,\cdot,0,1)$. This is the well known rational power series dioid of formal language theory. For $Q=\mathbb{B}$ one obtains, by Corollary~\ref{cor:powerset-lifting}, the quantale $\mathbb{B}^{X^\ast}$ of formal languages over $X$. \qed \end{example} \begin{example}[Binary Relations]\label{ex:binary-relations} For a set $A$ consider the partial semigroup $(A\times A,\cdot)$ with $\cdot$ defined, for all $a,b,c,d\in A$, by \begin{equation*} (a,b)\cdot (c,d)= \begin{cases} (a,d), & \text{ if } b=c,\\ \bot, & \text{ otherwise}.
\end{cases} \end{equation*} For $Q=\mathbb{B}$, Theorem~\ref{thm:quantale-lifting} (or its Corollary~\ref{cor:powerset-lifting}) ensures that $(\mathbb{B}^{A\times A},\le,\cdot)$, which is isomorphic to $(2^{A\times A},\subseteq,\cdot)$, is the quantale of binary relations under union, intersection, relational composition and the empty relation. More specifically, with every power series $f$ we associate a binary relation $R_f$ defined by $(a,b)\in R_f \Leftrightarrow f\, (a,b)=1$. The empty relation $\emptyset$ obviously corresponds to the power series defined by $\mathbb{O}\, (a,b) = 0$ for all $a,b\in A$. Relational composition is given by convolution \begin{equation*} (f\cdot g)\, (a,b)=\sum_{c\in A} f\, (a,c)\cdot g\, (c,b). \end{equation*} It can then be checked that $R_{f\cdot g}=R_f\cdot R_g=\{(a,b) \mid \exists c.\ (a,c) \in R_f\wedge (c,b)\in R_g\}$. The unit relation cannot be lifted from a unit in $A\times A$ because $A\times A$ has no unit. Instead it can be defined on $\mathbb{B}^{A\times A}$ directly as \begin{equation*} \mathbb{1}\, (a,b) = \begin{cases} 1, &\text{ if } a=b,\\ 0, &\text{ otherwise}. \end{cases} \end{equation*} \qed \end{example} The constructions for relations generalise, for instance, to probabilistic or fuzzy relations where $Q\neq\mathbb{B}$, but this is not explored any further. Instead we consider the case of matrices. \begin{example}[Matrices]\label{ex:matrices} Matrices are functions $f:A_1\times A_2\to Q$, where $A_1$ and $A_2$ are index sets and $Q$ is a suitable coefficient algebra. For the sake of simplicity we restrict our attention to square matrices with $A_1=A_2= A$. General non-square matrices require more complex partiality conditions. The development is similar to binary relations, but uses coefficient algebras beyond $\mathbb{B}$. It is easy to check that matrix addition is modelled by \begin{equation*} (f+g)\, (i,j)= f\, (i,j)+g\, (i,j), \end{equation*} whereas matrix multiplication is given by convolution \begin{equation*} (f\cdot g)\, (i,j)=\sum_{k\in A}f\, (i,k)\cdot g\, (k,j), \end{equation*} under suitable restrictions to guarantee the existence of sums, such as finiteness of $A$ or idempotency of addition in $Q$. The zero and unit matrices are defined as in the relational case. \begin{equation*} \mathbb{1}\, (i,j) = \begin{cases} 1,& \text{ if } i=j,\\ 0, & \text{ otherwise}, \end{cases} \qquad\qquad \mathbb{O}\, (i,j) = 0. \end{equation*} \refthm{thm:quantale-lifting} then shows that quantales are closed under matrix formation. It can easily be adapted to showing that square matrices of finite dimension over a semiring form a semiring or that matrices over a dioid form a dioid.\qed \end{example}
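As a small executable illustration (ours, not part of the formal development), convolution over the boolean coefficient algebra is exactly boolean matrix multiplication, computed below via a matrix product over $0/1$ integers:
\begin{verbatim}
import numpy as np

def conv(f, g):
    # (f . g)(i, j) = OR_k f(i, k) AND g(k, j): relational
    # composition as a matrix product over the boolean semiring.
    return (f.astype(int) @ g.astype(int)) > 0

# R relates 0 -> 1, S relates 1 -> 2; the composition relates 0 -> 2.
R = np.zeros((3, 3), dtype=bool); R[0, 1] = True
S = np.zeros((3, 3), dtype=bool); S[1, 2] = True
T = conv(R, S)
assert T[0, 2] and T.sum() == 1
\end{verbatim}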
This example not only links matrices with power series, it also yields a simple explanation of the well known relationship between binary relations and boolean matrices. If a relation $R\subseteq A\times A$ is modelled as $f_R:A\times A\to \mathbb{B}$ defined by $f_R\, (a,b)=1 \Leftrightarrow (a,b)\in R$ as indicated above, then it \emph{is} a boolean matrix. \begin{example}[Finite Automata]\label{ex:finite-automata} Suppose $V$ is a set of state symbols, $X$ an alphabet, $i \in V$ the initial state and $F \subseteq V$ a set of final states. Conway~\cite{Conway71} has shown that transition relations $\delta$ of finite automata $(V,X,\delta,i,F)$ can be modelled in terms of finite matrices of type $V\times V\to \mathsf{Rex}(X)$ into the algebra $\mathsf{Rex}(X)$ of regular expressions over $X$, for instance a Kleene algebra with constants from $X$. Consider the following automaton and transition matrix as an example. \begin{equation*} \def\normalsize{\normalsize} \xymatrix{ {}\ar[r] &*++[o][F-]{1} \ar^{a,b}@(ul,ur)\ar^b[r] & *++[o][F-]{2} \ar^a[r] & *++[o][F=]{3} } \qquad\qquad \begin{pmatrix} a+b&b&0\\ 0&0&a\\ 0&0&0 \end{pmatrix} \end{equation*} More generally, the full automaton, including its initial and final state information, is captured by the following triple. \begin{equation*} \left[ \begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix}, \begin{pmatrix} a+b&b&0\\ 0&0&a\\ 0&0&0 \end{pmatrix}, \begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} \right] \end{equation*} It is well known that the algebra of regular expressions forms a dioid, hence Theorem~\ref{thm:quantale-lifting} applies, showing that transition matrices over the dioid of regular expressions form a dioid, as in Example~\ref{ex:matrices}. Other kinds of automata, such as probabilistic or weighted ones, can be modelled along this line.\qed \end{example} In fact, it has been shown that Kleene algebras are closed under matrix formation~\cite{Kozen91}, but the necessary treatment of the Kleene star is beyond the scope of this article. In addition, it is well known that regular languages need not be closed under general unions, hence do not form quantales. \begin{example}[Trace Functions]\label{ex:traces} Let $V$ be a finite set of state symbols and $X$ a finite set of transition symbols, as in a finite automaton. A \emph{trace}~\cite{Eilenberg} is a finite word over $(V\cup X)^\ast$ in which state and transition symbols alternate, starting and ending with state symbols. We write $T(V,X)$ for the set of traces over $V$ and $X$. It is endowed with a partial semigroup structure by defining, for $p_1\alpha_1q_1,p_2\alpha_2q_2\in T(V,X)$, the \emph{fusion product} \begin{equation*} p_1\alpha_1q_1\cdot p_2\alpha_2q_2 = \begin{cases} p_1\alpha_1q_1\alpha_2q_2, & \text{ if } q_1=p_2,\\ \bot, & \text{ otherwise}. \end{cases} \end{equation*} Then convolution becomes \begin{equation*} (f\cdot g)\, \tau = \sum_{\tau=p\alpha_1 r\cdot r\alpha_2 q} f\, p\alpha_1 r\cdot g\, r\alpha_2 q \end{equation*} and Theorem~\ref{thm:quantale-lifting} implies that the set $Q^{T(V,X)}$ of trace functions into the distributive quantale $Q$ forms a distributive quantale. If $Q$ is unital, then $Q^{T(V,X)}$ becomes unital by defining \begin{equation*} \mathbb{1}\, x = \begin{cases} 1, & \text{ if } x\in V,\\ 0, & \text{ otherwise}. \end{cases} \end{equation*} For $Q=\mathbb{B}$ we obtain the well known quantale of sets of traces. Trace functions $\mathbb{B}^{T(V,X)}$ have a natural interpretation as trace predicates. Convolution $(f\cdot g)\, \tau$ indicates the various ways in which property $f$ holds on a prefix of trace $\tau$ whereas property $g$ holds conjunctively on the consecutive suffix, as for instance in temporal logics over computation traces or paths.\qed \end{example} Sets of traces generalise both languages and binary relations, which are obtained by forgetting structure in the underlying partial semigroup. Another special case is given by sets of paths in a graph, which is obtained by forgetting state labels.
\begin{example}[Interval Functions]\label{ex:interval-functions} Let $(P,\le)$ be a linear order and $I_P$ the set of all closed intervals over $P$---the empty interval being open by definition. For an interval $x$, let $x_{\min}$ and $x_{\max}$ represent respectively the minimum and maximum value in $x$. We impose a partial semigroup structure on $I_P$ by defining the \emph{fusion product} on $I_P$, similar to the case of binary relations, traces and matrices, as \begin{equation*} x \cdot y = \begin{cases} x \cup y, & \text{if } x_{\max} = y_{\min}, \\ \bot, & \text{otherwise}. \end{cases} \end{equation*} An \emph{interval function} is a function $f:I_P\to Q$ into a suitable algebra. Whenever $Q$ is a (distributive) quantale, Theorem~\ref{thm:quantale-lifting} applies and $Q^{I_P}$ forms a (distributive) quantale, too. Convolution of interval functions is given by \begin{equation*} (f\cdot g)\, x = \sum_{x = y \cdot z} f\, y \cdot g\, z. \end{equation*} As in the case of relations, the unit interval function is not lifted from $I_P$, but defined directly as \begin{equation*} \mathbb{1}\, [a,b] = \begin{cases} 1, & \text{ if } a=b,\\ 0, & \text{ otherwise}. \end{cases} \end{equation*} The quantale of interval functions then becomes unital. \emph{Interval predicates} are functions of type $I_P\to \mathbb{B}$. Convolution of interval predicates is known as the \emph{chop} operation~\cite{Mos00}, where $(f \cdot g)\, [a, c]$ holds if it is possible to split interval $[a, c]$ into $[a, b]$ and $[b, c]$ such that $f\, [a, b]$ and $g\, [b, c]$ hold in conjunction. \begin{center} \scalebox{1}{\input{chop.pspdftex}} \end{center} The meaning of an interval predicate $f\, x$ can be defined in various ways. For instance $f$ can hold somewhere (at some point) in $x$ or (almost) everywhere (see \cite{Mos00,ZH04}), and it is even possible to define and use non-deterministic evaluators \cite{HBDJ13} that enable calculations of apparent states (see \cite{DHD14}). \qed \end{example}
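For a hands-on illustration of chop as convolution, the following Python sketch (ours; it assumes integer endpoints so that all candidate fusion points can be enumerated) evaluates $(f\cdot g)\,[a,c]$ by quantifying over the fusion points $b$.
\begin{verbatim}
# Interval predicates over closed integer intervals [a, b], a <= b,
# represented as pairs. Chop is convolution: (f . g)[a, c] holds iff
# the interval splits at some fusion point b with f [a, b] and g [b, c].

def chop(f, g, iv):
    a, c = iv
    return any(f((a, b)) and g((b, c)) for b in range(a, c + 1))

if __name__ == "__main__":
    nonpos = lambda iv: iv[1] <= 0    # all points of the interval are <= 0
    nonneg = lambda iv: iv[0] >= 0    # all points of the interval are >= 0
    assert chop(nonpos, nonneg, (-2, 3))        # split at b = 0
    assert not chop(nonneg, nonpos, (-2, 3))    # no admissible fusion point
\end{verbatim}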
Naive use of interval predicates may have undesired effects: If $f\, x$ means that $f$ holds at each point in interval $x$, then $(f\cdot\neg f)$ is always false, since both $f$ and $\neg f$ would have to hold in at least one fusion point, which is impossible. An alternative definition of interval composition without fusion therefore seems desirable. The duration calculus presents a solution in terms of an `almost everywhere' operator, such that a property holds almost everywhere in an interval if it is false in the interval for a set of points of measure zero \cite{ZH04}. Others have defined `jump conditions' leaving the possibility of both $f$ and $\neg f$ holding at the fusion point open \cite{HM09}. Here we model a third approach \cite{DHD14}, with chop formalised over non-overlapping intervals, in the power series setting. \begin{example}[Intervals without Fusion]\label{ex:intervals-no-fusion} We define a composition of contiguous intervals that avoids fusion. To this end we consider the set $I_P$ of intervals of the form $(a,b)$, $(a,b]$, $[a,b)$ and $[a,b]$, for $a,b\in P$. We include the empty interval $\emptyset$, which is by definition equal to $(a,a)$, $(a,a]$ and $[a,a)$ for all $a\in P$. The interval $x$ precedes the interval $y$, written $x \prec y$, if $\forall a \in x, b \in y.\ a < b$. The composition of intervals is defined as \begin{equation*} x \cdot y = \begin{cases} x \cup y, & \text{if } x \cup y \in I_P \text{ and } x \prec y, \\ \bot, & \text{otherwise}. \end{cases} \end{equation*} Convolution $f\cdot g$ is then defined as usual. Theorem~\ref{thm:quantale-lifting} ensures once more that $Q^{I_P}$ forms a distributive quantale whenever $Q$ does. The unit $\mathbb{1} :I_P\to Q$, however, requires modification. Defining \begin{equation*} \mathbb{1}\, x = \begin{cases} 1, &\text{ if } x=\emptyset,\\ 0, &\text{ otherwise}, \end{cases} \end{equation*} it is easy to check that $(\mathbb{1} \cdot f)\ x = f\, x = (f\cdot \mathbb{1})\, x$ for any interval $x$ under the new definition of interval composition. This makes the quantale $Q^{I_P}$ unital. \qed \end{example} The examples in this section show that the generic lifting construction in Theorem~\ref{thm:quantale-lifting} allows a uniform treatment of a variety of mathematical objects, including relations, formal languages, matrices and sets of intervals. In each case, a (partial) composition on the underlying objects needs to be defined, e.g., on words, ordered pairs, index pairs of matrices, traces, paths or intervals. Lifting to the function space is then generic. Such a generic lifting has been discussed previously for languages, relations, paths and traces in the context of an Isabelle/HOL library with models of Kleene algebras~\cite{ArmstrongSW-JLAMP,ArmstrongSW13}. Theorem~\ref{thm:quantale-lifting} has, in fact, already been implemented in Isabelle. Based on this, the existing implementation of models of Kleene algebras can be unified and simplified considerably. \section{Commutative Examples}\label{sec:fpscomquantaleexamples} This section provides instances of Theorem~\ref{thm:quantale-lifting} and Corollary~\ref{cor:powerset-lifting} for the commutative case. As discussed in Section~\ref{sec:booleancase}, this situation typically arises when the composition of the underlying semigroup $(S,\ast)$ is used to split resources, heaps, states, etc., in a spatial fashion. This is in contrast to the previous section, where $f \cdot g$ expressed a dependency between $f$ and $g$ that often carries a temporal meaning. One can often think of convolution instantiated to such a spatial separation in terms of parallelism or concurrency. In particular we instantiate Theorem~\ref{thm:quantale-lifting} to four kinds of resource monoids based on multisets under multiset union, sets under disjoint union, partial functions under union and vectors. Notions of separating conjunction as convolution arise in all these examples in a natural way. In the disjoint union and vector examples, the relationship between convolution, separation and concurrency becomes most apparent. This view of separating conjunction as a spatial notion of concurrency has previously been one of the motivations for concurrent separation logic~\cite{COY07} and concurrent Kleene algebra~\cite{HMSW11}. As a preparation we show how multisets with multiset union and sets with disjoint union arise in the power series setting. \begin{example}[Multisets]\label{ex:multisets} Let $S$ be a set and let $f:S\to\mathbb{N}$ assign a multiplicity to elements of $S$. Consider the max/min-plus algebra over $\mathbb{N}$~\cite{GondranMinoux}, which forms a commutative distributive quantale.
Define, rather artificially, a partial semigroup on $S$ by stipulating \begin{equation*} x\ast y= \begin{cases} x, &\text{ if } x=y,\\ \bot, &\text{ otherwise}. \end{cases} \end{equation*} Then $\mathbb{N}^S$ is the set of multisets over the set $S$, which, by Theorem~\ref{thm:quantale-lifting}, forms a commutative distributive quantale under the operations \begin{gather*} (f\uplus g)\, x = (f\ast g)\, x= \sum_{x=y\ast z} f\, y + g\, z = f\, x + g\, x,\\ (\sum_{i\in I}f_i)\, x = \max_{i\in I}(f_i\, x),\qquad (\prod_{i\in I}f_i)\, x = \min_{i\in I}(f_i\, x). \end{gather*} The ``convolution'' $\uplus$ is the usual multiset addition. For example, \begin{align*} a^2b^5c\uplus ab^3d^2 &=a^3b^8cd^2,\\ a^2b^5c+ ab^3d^2 &=a^2b^5cd^2,\\ a^2b^5c\sqcap ab^3d^2 &=ab^3. \end{align*} \qed \end{example} \begin{example}[Powersets]\label{ex:powersets} Under the same conditions as in Example~\ref{ex:multisets}, suppose that $f:S\to\mathbb{B}$ is a characteristic function, which determines a subset of $S$. Then $\mathbb{B}^S\cong 2^S$ reduces to the complete distributive lattice of subsets of $S$, the ring of sets over $S$. In particular, $f\uplus g= \max(f,g)$. This lifting implements the powerset functor. \qed \end{example} Theorem~\ref{thm:quantale-lifting} shows that the function space $Q^S$ from a partial commutative semigroup $S$ into a commutative quantale $Q$ forms a commutative quantale. In addition, we have seen in Section~\ref{sec:booleancase} that, in that case, $\mathbb{B}^S$ may yield the quantale of resource predicates in which convolution is separating conjunction. We now discuss four special cases of separating conjunction. \begin{example}[Separating Conjunction on Multisets]\label{ex:separating-conjunction-ms} The free commutative monoid $(X^\ast,\ast,0)$ generated by the alphabet $X$ is isomorphic to the set of all multisets over $X$ with $\ast$ being multiset addition $\uplus$. By Theorem~\ref{thm:quantale-lifting}, $Q^{X^\ast}$ forms a commutative quantale if $Q$ does; distributivity and unitality lift as usual. Convolution $ (f\ast g)\, x =\sum_{x=y\ast z}f\, y\ast g\, z$ separates the multiset or resource $x$ in all possible ways and then applies the functions $f$ and $g$ to the parts, depending on the interpretation of multiplication in $Q$. For $Q=\mathbb{B}$, $\mathbb{B}^{X^\ast}$ forms the resource predicate quantale over multisets. Convolution $f\ast g$ is separating conjunction as a complex product on sets of multisets based on multiset addition as a separator: \begin{equation*} (f\ast g)\, x =\sum_{x=y\uplus z} f\, y\sqcap g\, z. \end{equation*} \qed \end{example} In many contexts, multisets form a paradigmatic data type for resources.
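Separating conjunction over multisets can likewise be prototyped. The Python sketch below (illustrative only; \texttt{Counter} serves as the multiset datatype) enumerates all splittings $x=y\uplus z$ of a finite multiset and implements $(f\ast g)\,x$ over the Boolean quantale.
\begin{verbatim}
# Separating conjunction over multisets: (f * g) x = 1 iff x splits as
# x = y (+) z with f y and g z. Multisets are collections.Counter.
from collections import Counter
from itertools import product

def splittings(x):
    items = sorted(x.items())
    for ns in product(*[range(n + 1) for _, n in items]):
        y = Counter({k: c for (k, _), c in zip(items, ns) if c > 0})
        yield y, x - y          # Counter difference undoes multiset addition

def sep_conj(f, g, x):
    return any(f(y) and g(z) for y, z in splittings(x))

if __name__ == "__main__":
    has_a = lambda m: m["a"] >= 1
    has_b = lambda m: m["b"] >= 1
    assert sep_conj(has_a, has_b, Counter("ab"))
    assert not sep_conj(has_a, has_a, Counter("ab"))   # only one 'a' to split
\end{verbatim}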
\begin{example}[Separating Conjunction on Sets]\label{ex:separating-conjunction-sets} The free commutative idempotent monoid $(X^\ast,\ast,0)$ generated by the alphabet $X$ is isomorphic to $2^X$ with $\ast$ being union. More interesting in our context is the consideration of disjoint union, which is defined as \begin{equation*} x\oplus y = \begin{cases} x\cup y, & \text{ if } x\cap y=0,\\ \bot, & \text{ otherwise}. \end{cases} \end{equation*} Then $(X^\ast,\oplus,0,\bot)$ forms a partial commutative monoid and, by Theorem~\ref{thm:quantale-lifting}, $Q^{X^\ast}$ forms a commutative quantale. Convolution $(f\ast g)\, x$ now separates the set $x$ into disjoint subsets and then applies the functions $f$ and $g$ to these subsets, depending on the interpretation of $\ast$ in the target quantale. For target quantale $\mathbb{B}$ we obtain the resource predicate quantale $\mathbb{B}^{X^\ast}$ on power sets based on disjoint union as a separator: \begin{equation*} (f\ast g)\, x =\sum_{x=y\oplus z} f\, y\sqcap g\, z. \end{equation*} \qed \end{example} This kind of separating conjunction is particularly appropriate for (indexed) families. \begin{example}[Separating Conjunction on Heaplets]\label{ex:separating-conjunction-heaplets} Let $(S,\ast,0)$ be the partial commutative monoid of partial functions $\eta:A\to B$ with empty function $0:A\to B$ and composition defined by \begin{equation*} \eta_1\ast \eta_2 = \begin{cases} \eta_1\cup \eta_2,& \text{ if } \mathit{dom}(\eta_1)\cap\mathit{dom}(\eta_2)=\emptyset,\\ \bot, & \text{ otherwise}. \end{cases} \end{equation*} The functions $\eta$ are sometimes called \emph{heaplets} and used to model a memory heap. As usual, by Theorem~\ref{thm:quantale-lifting}, $Q^S$ forms a commutative distributive unital quantale whenever $Q$ does. In particular, $\mathbb{B}^S$ forms an algebra of heap assertions with convolution as separating conjunction over the heap.\qed \end{example} \begin{example}[Separating Conjunction on Vectors]\label{ex:separating-conjunction-vectors} Consider a set $S$ of vectors $x$ of fixed dimension $|x|= n$. We turn this into a partial commutative semigroup by defining composition as \begin{equation*} (x\ast y)_i= \begin{cases} x_i,&\text{ if } y_i=0,\\ y_i,&\text{ if } x_i=0,\\ \bot,& \text{ otherwise}. \end{cases} \end{equation*} Also let $x=\bot$ if $x_i=\bot$ for some $1\le i\le n$. It is obvious from this definition that the zero vector $0$ is a unit with respect to $\ast$. For example, \begin{equation*} \begin{pmatrix} 5 \\ 0 \\ 7 \end{pmatrix} \ast \begin{pmatrix} 0 \\ 4 \\ 0 \end{pmatrix} = \begin{pmatrix} 5 \\ 4 \\ 7 \end{pmatrix} \qquad\qquad \begin{pmatrix} 5 \\ 0 \\ 7 \end{pmatrix} \ast \begin{pmatrix} 0 \\ 4 \\ 4 \end{pmatrix} = \bot \end{equation*} Then Theorem~\ref{thm:quantale-lifting} implies that $Q^S$ forms a commutative distributive unital quantale whenever $Q$ does, and $\mathbb{B}^S$ forms an assertion algebra with a vector-based notion of separating conjunction.\qed \end{example} The notion of separation on vectors, which splits vectors into disjoint blocks, lends itself to transforming such vectors in parallel fashion. This is further elaborated in Example~\ref{ex:lin-trafos}. In separation logic, a magic wand operation is often used. It is the upper adjoint of separating conjunction. In the quantale setting, this adjoint exists because separating conjunction distributes over arbitrary suprema by definition. Additional notions of resource monoids and liftings to assertion algebras have been studied within the Views framework~\cite{D-YBGPY13}. Whether their generic soundness results for Hoare logics can be reconstructed in the power series setting is left for future work. \section{Transformers and Bi-Quantales}\label{sec:transformers} The powerset lifting discussed in Section~\ref{sec:fpsquantale} suggests that state and predicate transformers could be modelled as power series as well. This section sketches how this can be achieved. A detailed analysis and the consideration of particular classes of predicate transformers is left for future work. A \emph{state transformer} $f_R:A\to 2^B$ is often associated with a relation $R\subseteq A\times B$ by defining \begin{equation*} f_R\, a=\{b \mid (a,b)\in R\}.
\end{equation*} State transformers are turned into \emph{predicate transformers} $\hat{f}_R:2^B\to 2^A$ by the Kleisli lifting \begin{equation*} \hat{f}_R\, Y=\{x \mid f_R\, x\subseteq Y\}. \end{equation*} The following results are well known~\cite{BvW99-book}. \begin{proposition} The state transformers in $(2^B)^A$ and the predicate transformers in $(2^A)^{2^B}$ form complete distributive lattices. \end{proposition} \begin{proof} $2^B\cong\mathbb{B}^B$ forms a complete distributive lattice by Lemma~\ref{lem:lattice-lifting} because $\mathbb{B}$ forms a complete distributive lattice. The same argument applies to $2^A$. It therefore follows that $(2^B)^A$ and $(2^A)^{2^B}$ are again complete distributive lattices by Lemma~\ref{lem:lattice-lifting}. \end{proof} Predicate transformers of type $2^A\to 2^A$ form a monoid with respect to function composition. It is also well known that the subalgebra of \emph{completely additive} predicate transformers, which satisfy $f\, (\sum_{i\in I}X_i)= \sum_{i\in I} (f\, X_i)$, forms a distributive unital quantale in which the identity function is the multiplicative unit. However, the operation of infimum in this algebra is not the one that is lifted pointwise; instead it is induced by the operation of supremum~\cite{BvW99-book}. A dual result holds for \emph{completely multiplicative} predicate transformers, which satisfy $f\, (\prod_{i\in I}X_i)= \prod_{i\in I} (f\, X_i)$. In this case, the monoidal part of the quantale lifting is not obtained with the power series lifting technique either. The cases of resource monoids, where assertion algebras contain a notion of separating conjunction, are more interesting. Let $S$ be a partial monoid. A \emph{monoid transformer} is a function of type $S\to 2^S$. A \emph{monoid predicate transformer} is a function of type $2^S\to 2^S$. Examples are \emph{resource transformers} and \emph{resource predicate transformers}, in which case $S$ is a resource monoid. Such transformers have been studied in the context of abstract separation logic \cite{COY07}. The following results follow immediately in our setting. \begin{proposition}\label{prop:ptquantale} Let $S$ be a partial monoid. Then the monoid transformers in $(2^S)^S$ and the monoid predicate transformers in $(2^S)^{2^S}$ form distributive unital quantales. In both cases, commutativity lifts from $S$. \end{proposition} \begin{proof} $2^S$ forms a distributive unital quantale according to Corollary~\ref{cor:powerset-lifting}. It is commutative whenever $S$ is. Hence $(2^S)^S$ forms a distributive unital quantale by Theorem~\ref{thm:quantale-lifting}. Commutativity lifts again from $S$. Similarly, $(2^S)^{2^S}$ is a distributive unital quantale by Theorem~\ref{thm:quantale-lifting} because $2^S$ is and the multiplicative reduct of $2^S$ is a monoid. Commutativity lifts again from $S$. \end{proof} Proposition~\ref{prop:ptquantale} can be combined with the previous observation about predicate transformer quantales. \begin{theorem}\label{thm:ptbiquantale} Let $S$ be a partial (commutative) monoid. Then $((2^S)^{2^S},\subseteq,\cdot,\circ,\mathit{id},\mathbb{1})$ forms a weak unital bi-quantale with (commutative) convolution $\cdot$ and function composition $\circ$ as well as the unit function $\mathit{id}$ and unit power series $\mathbb{1}$. \end{theorem} In this context, \emph{weak} means that the left distributivity law $f\circ \sum_{i\in I} g_i = \sum_{i\in I} f\circ g_i$ need not hold in the space of predicate transformers. It holds, however, when predicate transformers are completely additive.
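The Kleisli lifting from state transformers to predicate transformers is easily prototyped over finite sets. The following Python sketch (ours, illustrative) implements $\hat{f}_R\, Y=\{x\mid f_R\, x\subseteq Y\}$ and checks it on a small example.
\begin{verbatim}
# Kleisli lifting of a state transformer f : A -> 2^B to the predicate
# transformer fhat : 2^B -> 2^A with fhat Y = { x | f x <= Y }.

def kleisli(f, A):
    return lambda Y: {x for x in A if f(x) <= Y}   # <= is subset on sets

if __name__ == "__main__":
    A = {0, 1, 2}
    f = lambda x: {x, x + 1} & A     # state transformer induced by a relation
    fhat = kleisli(f, A)
    assert fhat({1, 2}) == {1, 2}    # f 1 = {1,2} and f 2 = {2} fit into Y
    assert fhat(set()) == set()      # strictness at the empty predicate
\end{verbatim}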
\section{Partial Power Series Quantales} \label{sec:partial-formal-power} This section generalises Theorem~\ref{thm:quantale-lifting} to situations in which the target algebras $Q$ are assumed to be partial quantales in the sense that their semigroup reducts are partial. In this case, partiality of composition shows up not only in the splitting $x=y\cdot z$, but also in the product $f\, y\cdot g\, z$ in convolutions. It turns out that the quantale structure of the target algebra is preserved at the level of the function space, but the loss of totality in $f\, y\cdot g\, z$ causes the function space to be partial as well. Previous proofs must therefore be reconsidered. As an example, we consider linear transformations of vectors implemented by matrices: vectors that are separated as in Example~\ref{ex:separating-conjunction-vectors} can be transformed in concurrent fashion by matrices that can be separated into non-zero blocks along the diagonal. This is a particular manifestation of the correspondence between separation and concurrency in the context of convolution. \begin{proposition}\label{prop:partial-quantale-lifting} Let $(S,\cdot)$ be a partial semigroup. If $(Q,\le,\cdot)$ is a (distributive) partial quantale, then so is $(Q^S,\le,\cdot)$. In addition, commutativity lifts from $S$ and $Q$ to $Q^S$, and unitality lifts if $S$ is a partial monoid and $Q$ is unital. \end{proposition} \begin{proof} In view of Theorem~\ref{thm:quantale-lifting}, only the (commutative) monoidal and distributivity laws need to be checked. Suppose $(f\cdot (g \cdot h))\, x$ is defined. Then \begin{equation*} (f\cdot (g \cdot h))\, x = \sum_{x=x_1\cdot (x_2\cdot x_3)} f\, x_1 \cdot (g\, x_2 \cdot h\, x_3). \end{equation*} Thus $x_1\cdot (x_2\cdot x_3)$ is defined and equal to $(x_1\cdot x_2)\cdot x_3$ and $f\, x_1 \cdot (g\, x_2 \cdot h\, x_3)$ is defined and equal to $(f\, x_1 \cdot g\, x_2) \cdot h\, x_3$. Hence \begin{equation*} \sum_{x=x_1\cdot (x_2\cdot x_3)} f\, x_1 \cdot (g\, x_2 \cdot h\, x_3)= \sum_{x=(x_1\cdot x_2)\cdot x_3} (f\, x_1 \cdot g\, x_2) \cdot h\, x_3 = ((f\cdot g)\cdot h)\, x. \end{equation*} The situation where $((f\cdot g)\cdot h)\, x$ is defined is opposition dual. Hence $Q^S$ forms a partial semigroup. Suppose that $ (f\cdot \sum_{i\in I} g_i)\, x$ is defined. Then \begin{equation*} (f\cdot \sum_{i\in I} g_i)\, x = \sum_{x=y\cdot z} f\, y \cdot (\sum_{i\in I} g_i)\, z = \sum_{x=y\cdot z} \sum_{i\in I} (f\, y \cdot g_i\, z) = \sum_{i\in I} (f\cdot g_i)\, x. \end{equation*} The proof can be reversed if the $(f\cdot g_i)\ x$ are defined. The proof of right distributivity is opposition dual. This shows that $Q^S$ forms a partial distributive quantale. Suppose $(f \cdot g)\, x$ is defined and $S$ and $Q$ are both commutative. Then \begin{equation*} (f\cdot g)\, x = \sum_{x=y\cdot z}f\, y \cdot g\, z = \sum_{x=z\cdot y}g\, z \cdot f\, y = (g\cdot f)\, x. \end{equation*} This lifts commutativity. Finally, assume that $S$ is a partial monoid and $Q$ is unital and define the power series $\mathbb{1}$ as usual. Suppose that $(\mathbb{1} \cdot f)\, x$ is defined. Then \begin{equation*} (\mathbb{1}\cdot f)\, x = \sum_{x=y\cdot z} \mathbb{1}\, y \cdot f\, z = 1\cdot f\, x = f\, x. \end{equation*} Moreover, $f\cdot \mathbb{1} =f$ follows from opposition duality. This lifts unitality.
\end{proof} \begin{example}[Linear Transformations of Vectors]\label{ex:lin-trafos} Consider again the partial semigroup $(S,\ast)$ on $n$-dimensional vectors from Example~\ref{ex:separating-conjunction-vectors}. It is easy to check that $S$ actually forms a partial commutative dioid with respect to $\ast$ as multiplication and standard vector addition. Distributivity $x\ast(y+z)=(x\ast y)+(x\ast z)$ follows immediately from the definition: the case of $x_i=0$ holds trivially, the case of $(y+z)_i=0$ requires that $y_i=z_i=0$. Proposition~\ref{prop:partial-quantale-lifting} then implies as a special case that the functions of type $S\to S$ form a commutative dioid; they form a trioid with the other multiplication being function composition. The sum in the convolution is obviously finite since there are only finitely many ways of splitting a vector of finite dimension. In addition, the functions $f$ and $g$ in a convolution are not only applied to separate parts $y$ and $z$ of vector $x$, but they must map to separate parts $f\, y$ and $g\, z$ of the resulting vector as well. Unitality cannot be lifted as in Proposition~\ref{prop:partial-quantale-lifting} because the units of $+$ and $\ast$ coincide. It is easy to check that the unit with respect to $\ast$ on $S^S$ is defined as \begin{equation*} e\, x = \begin{cases} 0, & \text{ if } x=0,\\ \bot, & \text{ otherwise}. \end{cases} \end{equation*} For further illustration consider the linear transformations on $n$-dimensional vectors given by multiplying $n$-dimensional vectors with an $n\times n$ matrix and adding an $n$-dimensional vector. As a simple example of a term contributing to a convolution consider \begin{equation*} \begin{pmatrix} a_1 & b_1\\ c_1 & d_1 \end{pmatrix} \begin{pmatrix} x\\ 0 \end{pmatrix} \ast \begin{pmatrix} a_2 & b_2\\ c_2 & d_2 \end{pmatrix} \begin{pmatrix} 0\\ y \end{pmatrix} = \begin{pmatrix} a_1x\\ c_1x \end{pmatrix} \ast \begin{pmatrix} b_2y\\ d_2y \end{pmatrix} = \bot, \end{equation*} whereas \begin{equation*} \begin{pmatrix} a_1 & b_1\\ 0 & d_1 \end{pmatrix} \begin{pmatrix} x\\ 0 \end{pmatrix} \ast \begin{pmatrix} a_2 & 0\\ c_2 & d_2 \end{pmatrix} \begin{pmatrix} 0\\ y \end{pmatrix} = \begin{pmatrix} a_1x\\ 0 \end{pmatrix} \ast \begin{pmatrix} 0\\ d_2y \end{pmatrix} = \begin{pmatrix} a_1x\\ d_2y \end{pmatrix}. \end{equation*} This shows that matrices contributing to convolutions must essentially consist of two non-trivial blocks along the diagonal modulo (synchronised) permutations of rows and columns. That is, they are of the form \begin{equation*} \begin{pmatrix} M_1 & \mathbb{O}\\ \mathbb{O} & M_2 \end{pmatrix}, \end{equation*} where $\mathbb{O}$ represents zero matrices of appropriate dimension. Each pair of vectors resulting from a decomposition can be rearranged such that the first vector consists of an upper block of non-zero coefficients and a lower block of zeros, whereas the second vector consists of an upper zero and a lower non-zero block, and such that the two non-zero blocks do not overlap. One must be able to decompose matrices and vectors of the linear transformation into the same blocks to make convolutions non-trivial. The transformations implemented by the above block matrix on rearranged vectors, and more generally all linear transformations, can clearly be executed independently or in parallel by the matrices $M_1$ and $M_2$ on the corresponding parts of a vector if the convolution is non-trivial. In this sense the convolution $\ast$ on linear transformations is a notion of concurrent composition.\qed \end{example}
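The partial composition $\ast$ on vectors is straightforward to implement. The Python sketch below (ours; \texttt{None} plays the role of $\bot$) reproduces the two vector compositions displayed in Example~\ref{ex:separating-conjunction-vectors}.
\begin{verbatim}
# The partial composition * on vectors of fixed dimension: components
# are merged where at most one side is non-zero; None plays the role of
# the undefined value.

def vstar(x, y):
    out = []
    for a, b in zip(x, y):
        if a != 0 and b != 0:
            return None          # supports overlap: composition undefined
        out.append(a if a != 0 else b)
    return out

if __name__ == "__main__":
    assert vstar([5, 0, 7], [0, 4, 0]) == [5, 4, 7]
    assert vstar([5, 0, 7], [0, 4, 4]) is None
\end{verbatim}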
\section{Power Series over Bi-Semigroups} \label{sec:formal-power-series} Our main lifting result (Theorem~\ref{thm:quantale-lifting}) shows that the quantale structure $Q$ is preserved at the level of the function space $Q^S$ provided that $S$ is a partial semigroup. This can easily be adapted from partial semigroups $S$ to partial $n$-semigroups and $n$-quantales with $n$ operations of composition which may or may not be commutative. Here we restrict our attention to bi-semigroups and bi-quantales and we discuss several examples. \begin{proposition}\label{prop:biquantale-lifting} Let $(S,\circ,\bullet)$ be a partial bi-semigroup. If $(Q,\le,\circ,\bullet)$ is a (distributive unital) bi-quantale, then so is $(Q^S,\le,\circ,\bullet)$. \end{proposition} It is obvious that properties such as commutativity and unitality lift as before. \begin{example}[Functions over Two-Dimensional Intervals] Closed two-dimensional intervals over a linear order can be defined in a straightforward way. For intervals $x$ and $y$, we write $\Rectangle{x}{y}$ for the box consisting of points with $x$-coordinates in $x$ and $y$-coordinates in $y$. \begin{eqnarray*} \Rectangle{x}{y} & = & \{ (a,b)\ |\ a \in x \land b \in y \} \\ \Rectangle{x}{\bot} & = & \bot \\ \Rectangle{\bot}{y} & = & \bot \end{eqnarray*} We define the horizontal composition of two-dimensional intervals as \begin{equation*} (\Rectangle{x_1}{y_1}) \circ (\Rectangle{x_2}{y_2}) = \begin{cases} \Rectangle{(x_1 \cdot x_2)}{y_1}, & \text{if } y_1 = y_2, \\ \bot, & \text{otherwise}, \end{cases} \end{equation*} and their vertical composition as \begin{equation*} (\Rectangle{x_1}{y_1}) \bullet (\Rectangle{x_2}{y_2}) = \begin{cases} \Rectangle{x_1}{(y_1 \cdot y_2)}, & \text{if } x_1 = x_2, \\ \bot, & \text{otherwise}. \end{cases} \end{equation*} Whenever the target algebra forms a bi-quantale, Proposition~\ref{prop:biquantale-lifting} applies and the function space forms a bi-quantale as well. In particular, horizontal and vertical convolution are given by \begin{align*} (f \circ g)\, (\Rectangle{x}{y}) & = \sum_{x = x_1 \cdot x_2} f\, (\Rectangle{x_1}{y}) \circ g\, (\Rectangle{x_2}{y}), \\ (f \bullet g)\, (\Rectangle{x}{y}) & = \sum_{y = y_1 \cdot y_2} f\, (\Rectangle{x}{y_1}) \bullet g\, (\Rectangle{x}{y_2}). \end{align*} The situation easily generalises to $n$-dimensional intervals with $n$ convolutions which may or may not be commutative.\qed \end{example} \begin{example}[Series-Parallel Pomset Languages]\label{ex:pomsets} Let $(S,\cdot,\ast,1)$ be a bi-monoid with non-commutative composition $\cdot$, commutative composition $\ast$ and shared unit $1$. Furthermore, let $(Q,\le,\cdot,\ast,1)$ be a bi-quantale with non-commutative composition $\cdot$, commutative composition $\ast$ and shared unit $1$. Then $Q^S$ forms a bi-quantale according to Proposition~\ref{prop:biquantale-lifting} with a non-commutative convolution given by $\cdot$ and a commutative convolution given by $\ast$. For $\mathbb{B}^S$ and $S$ being freely generated from a finite alphabet $X$, we obtain the \emph{series-parallel pomset languages} or \emph{partial word languages} over $X$, which have been studied by Grabowski, Gischer and others \cite{Grabowski,Gischer}.
They form a standard model of true concurrency.\qed \end{example} \begin{example}[Square Matrices with Parallel Composition]\label{ex:matrix-par} We define a partial commutative composition $\ast$ on square matrices as a generalisation of the vector case, splitting matrices into blocks along the diagonal. \begin{equation*} (f\ast g)\, (i,j)= \begin{cases} f\ (i,j),& \text{ if } \forall k.\ g\ (i,k) = 0\wedge g\ (k,j)= 0,\\ g\ (i,j),& \text{ if } \forall k.\ f\ (i,k)=0\wedge f\ (k,j)=0,\\ \bot, & \text{ otherwise}. \end{cases} \end{equation*} Associativity and commutativity of this operation are easy to check; (infinite) distributivity holds as well. It follows that square matrices into suitable coefficient algebras form partial bi-quantales.\qed \end{example} Examples~\ref{ex:pomsets} and~\ref{ex:matrix-par} thus show other situations where a commutative convolution gives rise to a notion of parallel or concurrent composition. \section{Two-Dimensional Power Series Bi-Quantales}\label{sec:fpsbiquantale} We now extend the power series approach to two dimensions; an extension to $n$ dimensions can be obtained along the same lines. We consider two separate partial semigroups or monoids $(S_1,\circ)$ and $(S_2,\bullet)$. In many cases, $S_2$ is assumed to be commutative. This differs from \refsec{sec:formal-power-series} in that two different semigroups are lifted to a bi-quantale, whereas in \refsec{sec:formal-power-series} a bi-semigroup is lifted to a bi-quantale. We consider functions $F:S_1\to S_2\to Q$ from the partial semigroups $S_1$ and $S_2$ into an algebra $Q$, usually a bi-quantale. Note that $A\to B\to C$ stands for $A\to (B\to C)$, and we write $(C^B)^A$ for the class of functions of that type. The main construction is as follows. Theorem~\ref{thm:quantale-lifting} can be applied to semigroup $S_1$ and target algebra $Q^{S_2}$ to lift to $(Q^{S_2})^{S_1}$. Alternatively, $S_2$ and $Q^{S_1}$ can be lifted to $(Q^{S_1})^{S_2}$. The algebras $Q^{S_1}$ and $Q^{S_2}$ can be obtained by lifting as well; they can be considered as partial evaluations of a power series $F:S_1\to S_2\to Q$ to power series $F^y:S_1\to Q$ and $F^x:S_2\to Q$ where \begin{align*} F^y = \lambda x.\ F\, x\, y, \qquad\qquad F^x = \lambda y.\ F\, x\, y \end{align*} This construction can be iterated $n$ times for power series $F:S_1\to\dots\to S_n\to Q$. It is well known that the function spaces obtained are isomorphic: in general $(C^A)^B\cong (C^B)^A\cong C^{A\times B}\cong C^{B\times A}$ under currying. A categorical framework is provided by the setting of symmetric monoidal closed categories \cite{Kelly}, which we do not explore further in this article. Instead we move freely between isomorphic function spaces. By analogy to the one-dimensional case of power series we define operations on the function space $Q^{S_1\times S_2}$ which lift the corresponding operations on $Q$. Ultimately our aim is to show that bi-quantale axioms lift from $Q$ to $Q^{S_1\times S_2}$. We define \begin{align*} (\sum_{i\in I} F_i)\, x\, y &= \sum_{i\in I} (F_i\, x\, y),\\ (\prod_{i\in I} F_i)\, x\, y &= \prod_{i\in I} (F_i\, x\, y),\\ (F\circ G)\, x\, y &= \sum_{x=x_1\circ x_2} F\, x_1\, y\circ G\, x_2\, y,\\ (F \bullet G)\, x\, y &= \sum_{y=y_1\bullet y_2} F\, x\, y_1\bullet G\, x\, y_2. \end{align*} As in the one-dimensional case, $\mathbb{O} = \sum_{i\in\emptyset} F_i$. The convolution $F\circ G$ acts on the first parameter whereas $F\bullet G$ acts on the second one; $\sum_{i\in I}F_i$ and $\prod_{i\in I}F_i$ are defined by pointwise lifting on both arguments.
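The two convolutions may be prototyped as follows. In this Python sketch (ours, purely illustrative), $S_1$ consists of words under concatenation, $S_2$ of finite sets under disjoint union, and $Q=\mathbb{B}$; \texttt{hconv} splits the first argument, \texttt{vconv} the second.
\begin{verbatim}
# Two-dimensional power series F : S1 -> S2 -> B with S1 words under
# concatenation (split by the first convolution) and S2 finite sets
# under disjoint union (split by the second convolution).
from itertools import chain, combinations

def hconv(F, G):                   # (F o G) x y splits the word x
    return lambda x, y: any(F(x[:i], y) and G(x[i:], y)
                            for i in range(len(x) + 1))

def vconv(F, G):                   # (F * G) x y splits the resource y
    def FG(x, y):
        subsets = chain.from_iterable(combinations(sorted(y), r)
                                      for r in range(len(y) + 1))
        return any(F(x, frozenset(s)) and G(x, y - frozenset(s))
                   for s in subsets)
    return FG

if __name__ == "__main__":
    F = lambda x, y: x == "a" and "r" in y
    G = lambda x, y: x == "b" and "r" in y
    assert hconv(F, G)("ab", frozenset({"r"}))      # split "ab" at i = 1
    assert not vconv(F, G)("ab", frozenset({"r"}))  # F never sees x = "a"
\end{verbatim}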
We now show how two-dimensional lifting results can be obtained in a modular fashion from one-dimensional ones with Theorem~\ref{thm:quantale-lifting}. By currying consider the functions $F^y:S_1\to Q$ and $F^x:S_2\to Q$. For these we can reuse the definitions of suprema, infima and convolution from the one-dimensional case in Section~\ref{sec:fpsquantale}. Suprema, for instance, are given by \begin{equation*} (\sum_{i\in I} F_{i}^y)\, x =\sum_{i\in I}(F_{i}^y\, x),\qquad (\sum_{i\in I} F_{i}^x)\, y =\sum_{i\in I}(F_{i}^x\, y). \end{equation*} The equations for infima are lattice dual. Convolutions are given by \begin{align*} (F^y\circ G^y) \, x =\sum_{x=x_1\circ x_2}F^y\, x_1\circ G^y\, x_2, \qquad (F^x\bullet G^x)\, y =\sum_{y=y_1\bullet y_2}F^x\, y_1\bullet G^x\, y_2. \end{align*} The relationship between operations of different dimensions is captured by the following lemma. \begin{lemma}\label{P:spacered1}~ The maps $\varphi_1: Q^{S_1\times S_2}\to Q^{S_1}$ and $\varphi_2:Q^{S_1\times S_2}\to Q^{S_2}$ defined by \begin{equation*} \varphi_1=\lambda X.\ (X)^y,\qquad \varphi_2=\lambda X.\ (X)^x \end{equation*} are homomorphisms. \begin{enumerate} \item $(\sum_{i\in I} F_i)^y= (\sum_{i\in I} F_{i}^y)$ and $(\sum_{i\in I} F_i)^x= (\sum_{i\in I} F_{i}^x)$, \item $(\prod_{i\in I} F_i)^y= (\prod_{i\in I} F_{i}^y)$ and $(\prod_{i\in I} F_i)^x= (\prod_{i\in I} F_{i}^x)$, \item $(F\circ G)^y= (F^y\circ G^y)$ and $(F\bullet G)^x= (F^x\bullet G^x)$. \end{enumerate} \end{lemma} \begin{proof} We only provide proofs for the first conjunct of $(a)$ and for $(c)$. The remaining proofs are similar. For suprema we calculate \begin{equation*} (\sum_{i\in I} F_i)^y\, x = (\sum_{i\in I} F_i)\, x\, y = \sum_{i\in I}(F_i\, x \, y) = \sum_{i\in I} (F_{i}^y\, x) = (\sum_{i\in I}F_{i}^y)\, x. \end{equation*} For composition $\circ$, \begin{align*} (F\circ G)^y\, x &=(F\circ G)\, x\, y\\ &=\sum_{x=x_1\circ x_2} (F\, x_1\, y)\circ (G\, x_2\, y)\\ &=\sum_{x=x_1\circ x_2} (F^y\, x_1)\circ (G^y\, x_2)\\ &= (F^y\circ G^y)\, x. \end{align*} \end{proof} If $(S_1,\circ,1_\circ)$ and $(S_2,\bullet,1_\bullet)$ are partial monoids and the bi-quantale $Q$ has units $1_\circ$ and $1_\bullet$ with respect to $\circ$ and $\bullet$ (overloading notation), we define units on $Q^{S_1\times S_2}$ as \begin{equation*} \mathbb{1}_\circ = \lambda x,y. \begin{cases} 1_\circ, & \text{if } x = 1_\circ,\\ 0, & \text{otherwise}, \end{cases} \qquad \mathbb{1}_\bullet = \lambda x,y. \begin{cases} 1_\bullet, & \text{if } y = 1_\bullet,\\ 0, & \text{otherwise}. \end{cases} \end{equation*} The following result links these binary units with the unary units $(\mathbb{1}^y)_\circ:S_1\to Q$ and $(\mathbb{1}^x)_\bullet:S_2\to Q$, as defined in Section~\ref{sec:fpsquantale}. \begin{lemma}\label{P:spacered2}~ \begin{enumerate} \item $(\mathbb{1}_\circ)^y = (\mathbb{1}^y)_\circ$, \item $(\mathbb{1}_\bullet)^x= (\mathbb{1}^x)_\bullet$. \end{enumerate} \end{lemma} \begin{proof} For (a), \begin{equation*} (\mathbb{1}_\circ)^y\, x = \mathbb{1}_\circ\, x\, y = \begin{cases} 1_\circ, & \text{ if } x=1_\circ,\\ 0, & \text{otherwise} \end{cases} =(\mathbb{1}^y)_\circ\, x. \end{equation*} The proof of (b) is similar.
\end{proof} By Lemmas~\ref{P:spacered1} and~\ref{P:spacered2}, a lifting from $Q$ can be decomposed into a lifting to $Q^{S_2}$ and, if the lifted property is preserved, a function application in $(Q^{S_2})^{S_1}$. Alternatively one can lift to $Q^{S_1}$ and then use function application in $(Q^{S_1})^{S_2}$. In the above constructions, there are two kinds of liftings: pointwise liftings from $Q$ to $Q^{S_1}$ or $Q^{S_2}$ and lifting by convolution for $Q^{S_1}$ and $Q^{S_2}$. \begin{proposition}\label{prop:seq-lifting} Let $(S_1,\circ)$ be a partial semigroup and $S_2$ a set. If $(Q,\le,\circ)$ is a (distributive) quantale, then so is $(Q^{S_1\times S_2},\le,\circ)$. Unitality and commutativity lift from $S_1$ and $Q$ to $Q^{S_1\times S_2}$. \end{proposition} \begin{proof}~ If $S_1$ is a partial semigroup and $Q$ a (distributive) quantale, then $Q^{S_1}$ is a (distributive) quantale by \refthm{thm:quantale-lifting}; the laws then transfer to $Q^{S_1\times S_2}$ by $\lambda$-abstraction, since $F\, x\, y = F^y\, x$, and by the homomorphic properties of $(.)^y$ in Lemma~\ref{P:spacered1}. For example, \begin{align*} ((F\circ G)\circ H)\, x\, y &= ((F\circ G)\circ H)^y\, x\\ & = ((F^y\circ G^y)\circ H^y)\, x\\ & = (F^y \circ (G^y\circ H^y))\, x\\ & = (F\circ (G\circ H))^y\, x\\ &= (F\circ (G\circ H))\ x\ y. \end{align*} If the quantale $Q$ is unital, then so is $Q^{S_1}$, again by \refthm{thm:quantale-lifting}, and unitality transfers to $Q^{S_1\times S_2}$, as previously, by $\lambda$-abstraction and the homomorphic properties of $(.)^y$ by Lemmas~\ref{P:spacered1} and \ref{P:spacered2}. For instance, \begin{equation*} (\mathbb{1}_\circ\circ F)\, x\, y = (\mathbb{1}_\circ\circ F)^y\, x = ((\mathbb{1}_\circ)^y \circ F^y)\, x = F^y\, x =F\, x\, y. \end{equation*} If $S_1$ and $Q$ are both commutative, then \begin{equation*} (F\circ G)\, x\, y=(F\circ G)^y\, x= (F^y \circ G^y)\, x = (G^y\circ F^y)\, x= (G\circ F)^y\ x = (G\circ F)\, x\, y \end{equation*} with the homomorphism properties of $(.)^y$ and commutativity on $Q^{S_1}$ due to \refthm{thm:quantale-lifting}. \end{proof} The next statement is immediate since $Q^{S_1\times S_2}$ and $Q^{S_2\times S_1}$ are isomorphic. \begin{corollary}\label{cor:conc-lifting} Let $S_1$ be a set and $(S_2,\bullet)$ a partial semigroup. If $(Q,\le,\bullet)$ is a (distributive) quantale, then so is $(Q^{S_1\times S_2},\le,\bullet)$. Unitality and commutativity lift from $S_2$ and $Q$ to $Q^{S_1\times S_2}$. \end{corollary} Proposition~\ref{prop:seq-lifting} and Corollary~\ref{cor:conc-lifting} can therefore be combined into the following lifting theorem for two-dimensional power series. \begin{theorem}\label{thm:biquantale} Let $(S_1,\circ)$ and $(S_2,\bullet)$ be partial semigroups. If $(Q,\le,\circ,\bullet)$ is a (distributive) bi-quantale, then so is $(Q^{S_1\times S_2},\le,\circ,\bullet)$. It is unital whenever $Q$ is unital and $S_1$ and $S_2$ are partial monoids. A convolution on $Q^{S_1\times S_2}$ is commutative if the corresponding compositions on $S_i$ and $Q$ are commutative. \end{theorem} Remember that a unital bi-quantale may have different units for its two compositions. As already mentioned, the construction of the bi-quantale of two-dimensional power series generalises immediately to $n$ underlying partial semigroups $(S_i,\circ_i)$, $n$-dimensional power series $F:S_1\to \dots \to S_n\to Q$ and convolutions \begin{equation*} (F\circ_i G)\ \dots \ x_i \ \ldots =\sum_{x_i=y\circ_i z} (F\ \dots \ y \ \dots)\circ_i (G\ \dots\ z\ \dots).
\end{equation*} We do not pursue this generalisation in this article; the lifting arguments apply without modification. \section{Examples}\label{sec:biquantale-examples} As examples of two-dimensional bi-quantales we present two interval-based models that distinguish between time and space dimensions. The monoidal operators may be used to separate these two dimensions independently; time is separated using chop, space using separating conjunction as a notion of concurrent composition. The consideration of such algebras with both kinds of separation was the starting point of this article. In the second example of vector stream interval functions, spatial or concurrent splitting is of course commutative, whereas temporal splitting is not. \begin{example}[Stream Interval Functions]\label{ex:stream-interval-functions} Let $(S_1,\cdot)$ be the partial semigroup $(I_P,\cdot)$ of closed intervals under the fusion product as in Example~\ref{ex:interval-functions} and let $S_2$ be the set of all functions of type $P\to A$ for an arbitrary set $A$. It follows from Proposition~\ref{prop:seq-lifting} that $Q^{I_P\times A^P}$ forms a distributive quantale, whenever $Q$ is a distributive quantale. A unit can be adjoined to $Q^{I_P\times A^P}$ along the lines of Example~\ref{ex:interval-functions}, but with a second parameter. As a typical interpretation, consider $P=\mathbb{R}$ with the standard order on reals as a model of time and let functions $f:\mathbb{R}\to A$ model the temporal behaviour or trajectories of some system. For instance, $f$ could be the solution of a differential equation. In that case, $F\ x\ f$ evaluates the behaviour of system $f$ in the interval $x$. Such kinds of functions have been called \emph{stream interval functions}~\cite{DHD14}. The convolution \begin{equation*} (F\cdot G)\, x\, f = \sum_{x=y\cdot z} (F\, y\, f) \cdot (G\, z\, f) \end{equation*} splits the interval $x$ into all possible prefix/suffix pairs $y$ and $z$, applies $F$ to the behaviour of $f$ on interval $y$ and $G$ to the behaviour of $f$ on interval $z$ and then combines these results. There are different ways in which the application of stream interval functions can be realised. Moreover, the situation generalises to arbitrary finitely bounded intervals without fusion. As in the case of interval functions, our prime example of stream interval functions are \emph{stream interval predicates}, where $Q=\mathbb{B}$. Then convolution becomes a generalised version of chop or non-commutative separating conjunction: \begin{equation*} (F\cdot G)\, x\, f = \sum_{x=y\cdot z} (F\, y\, f) \sqcap (G\, z\, f). \end{equation*} A predicate $F$ could, for instance, test the values of a function $f$ over an interval $x$---at all points of $x$, at some points of $x$, at almost all points of $x$, at no points of $x$ and so on. It could, for instance, test whether the trajectory of system $f$ evolves within given boundaries, that is, whether a flight path stays within a given corridor or a train moves according to a given time schedule. More concretely, let $P=A=\mathbb{R}$ and $f\ t=t^3$ as shown below. Note that the diagram is not drawn to scale. \begin{center} \scalebox{1.25}{\input{tcube.pspdftex}} \end{center} Let \begin{equation*} F\ x\ f= \forall t\in x.\ f\ t\ge 0,\qquad\qquad G\ x\ f =\forall t\in x.\ f\ t<0. \end{equation*} Then $F\ [0,10]\ f=1$ and $G\ [-7,-1]\ f=1$, but $F\ [-2,-1]\ f=0$ and $G\ [-7,0]\ f=0$. \qed \end{example}
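The concrete evaluation just described can be replayed in a few lines of Python (an illustration under the simplifying assumption that intervals are sampled at integer points):
\begin{verbatim}
# The stream interval predicates above, checked on f t = t^3 over
# integer-sampled closed intervals; chop splits at every sample point.

f = lambda t: t ** 3

def always(p, iv, f):
    a, b = iv
    return all(p(f(t)) for t in range(a, b + 1))

F = lambda iv, f: always(lambda v: v >= 0, iv, f)
G = lambda iv, f: always(lambda v: v < 0, iv, f)

def chop(P, Q, iv, f):
    a, c = iv
    return any(P((a, b), f) and Q((b, c), f) for b in range(a, c + 1))

if __name__ == "__main__":
    assert F((0, 10), f) and G((-7, -1), f)
    assert not F((-2, -1), f) and not G((-7, 0), f)
    # the fusion point would have to satisfy both t^3 < 0 and t^3 >= 0:
    assert not chop(G, F, (-7, 10), f)
\end{verbatim}
The final assertion illustrates the fusion-point effect discussed after Example~\ref{ex:interval-functions}.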
Stream interval predicates have been used to reason about real-time systems \cite{DHD14}, but their interpretation in terms of power series is new. It is worth noting that $P$ may be instantiated to other linear orders (e.g., $\mathbb{Z}$), allowing one to model both discrete and continuous systems. Using Theorem~\ref{thm:biquantale}, one may further develop this approach with rules for system-level reasoning by decomposing systems along a time and space dimension. To the best of our knowledge, our treatment is the first to offer both decompositions and to add a natural notion of concurrency to interval logics. Exploration of these rules in concrete models as well as their application towards verification of example systems is left as future work. Here we present a single example based on vectors of functions. \begin{example}[Vector Stream Interval Functions]\label{ex:vector-stream-interval-functions} Let $f$ from the previous example now be a vector or product of functions $f_i$ such that $f:P\to A^n$, or more concretely $f:\mathbb{R}\to A^n$. One can then split the values $f\, t$ as in Example~\ref{ex:separating-conjunction-vectors} with respect to the commutative operation $\ast$ on $A^n$. For functions $f,g:P\to A^n$ we define \begin{equation*} (f\ast g)\, p = f\, p \ast g\, p \end{equation*} by pointwise lifting. This turns $(S_2,\ast)=((A^n)^P,\ast)$ into a partial commutative semigroup, whereas $(S_1,\cdot)$ is again the partial semigroup $(I_P,\cdot)$. According to Theorem~\ref{thm:biquantale}, $Q^{S_1\times S_2}$ forms a distributive bi-quantale with commutative convolution $\ast$ whenever $Q$ does. The stream interval predicates in the case of $Q=\mathbb{B}$ yield once more an interesting special case. Now a vector of functions, for instance the solution to a system of differential equations, is applied to arguments ranging over an interval and the stream interval predicates evaluate the behaviour modelled by this vector of functions on the interval. The convolution \begin{equation*} (F\cdot G)\, x\, f = \sum_{x=y\cdot z}(F\, y\, f)\sqcap (G\, z\, f) \end{equation*} can be seen as a horizontal composition. It evaluates the full vector of functions on splittings of the interval $x$, using $F$ for the prefix part of the splitting and $G$ for its suffix part. In the context of interval logics this corresponds to a chop operation, which has a temporal flavour. The convolution or separating conjunction \begin{equation*} (F\ast G)\, x\, f= \sum_{f=g\ast h} (F\, x\, g)\sqcap (G\, x\, h) \end{equation*} can be seen as a vertical composition. It evaluates the conjunction of $F$ and $G$, which is obtained by separating the vector $f$ into all possible parts $g$ and $h$, over the full interval $x$. Applied to vectors this adds an algebraic notion of concurrent composition to interval calculi; it clearly has a spatial flavour. The two types of convolution may be distinguished using diagrams such as the ones below, where time occupies the $x$-axis and space the $y$-axis. \begin{center} \scalebox{0.75}{\input{mult-2dim.pspdftex}} \end{center} The left diagram depicts $(F \ast G) \cdot (H \ast K)$, where the convolution first splits the stream interval function along the $x$-axis (time dimension) to give us formulae $F \ast G$ and $H \ast K$. Each of these is then split along the $y$-axis (space dimension).
On the other hand, the right diagram depicts $(F \cdot H) \ast (G \cdot K)$, where the space dimension is split first to give $F \cdot H$ and $G \cdot K$, followed by a split along the time dimension. \qed \end{example} The examples in Section~\ref{sec:fpscomquantaleexamples} suggest that other notions of spatial separation, for instance those based on disjoint unions for families of functions, or more specific notions such as separating conjunction on heaps, can be used instead of vector separation. Theorem~\ref{thm:biquantale} is modular in this regard. We therefore do not present these examples in detail. \section{Power Series over Futuristic Monoids}\label{sec:futuristic} This section adapts the power series approach to a case which is appropriate, for instance, for languages with finite and infinite words and for intervals which may be semi-infinite in the sense that they have no upper bounds. Such approaches are, for instance, appropriate for total correctness reasoning, where termination cannot be assumed, or for reactive (concurrent) systems. We model these cases abstractly with monoids which, for lack of better nomenclature, we call futuristic. Formally, a partial semigroup $(S,\cdot)$ is \emph{futuristic} if $S=S^u\cup S^b$, $S^u\cap S^b=\emptyset$ and $x\cdot y$ is undefined whenever $x\in S^u$. Thus, $S^u$ and $S^b$ correspond to the unbounded and bounded elements of $S$, respectively. For $S^b$, we require that if $x \cdot y \in S^b$, then $x \in S^b$. In that case, for $f,g:S\to Q$, we define \begin{equation*} (f\cdot g)\, x = \sum_{x=y\cdot z} (f\, y)\cdot (g\, z) + \begin{cases} f\, x, &\text{ if } x\in S^u,\\ 0, & \text{ if } x\in S^b. \end{cases} \end{equation*} \begin{lemma} \label{lem:futuristic-quantale} Let $(S,\cdot)$ be a futuristic partial semigroup. If $Q$ is a (distributive) quantale, then $Q^S$ is a (distributive) quantale with $\mathbb{O}:S\to Q$ not necessarily a right annihilator and left distributivity holding only for non-empty suprema. \end{lemma} \begin{proof} We need to verify the laws involving `$\cdot$' with our new multiplication. It suffices to consider the cases where $x\in S^u$; the others are covered by Theorem~\ref{thm:quantale-lifting}. For left distributivity we calculate, for $I\neq\emptyset$, \begin{align*} (f\cdot \sum_{i\in I}g_i)\, x & = f\, x + \sum_{x=y\cdot z} f\, y\cdot \sum_{i\in I}(g_i\, z)\\ &= (\sum_{i\in I} f\, x) + \sum_{i\in I}\sum_{x=y\cdot z} (f\, y\cdot g_i\, z)\\ &= \sum_{i\in I} (f\, x + \sum_{x=y\cdot z} (f\, y\cdot g_i\, z))\\ &= (\sum_{i\in I} (f\cdot g_i))\, x. \end{align*} For $I=\emptyset$, however, $(f\cdot \mathbb{O})\, x = f\, x$ if $x\in S^u$, hence in this case left distributivity fails. For right distributivity, which is no longer opposition dual, we calculate \begin{align*} ((\sum_{i\in I}f_i)\cdot g)\, x &= (\sum_{i\in I} f_i\, x) + \sum_{x=y\cdot z} (\sum_{i\in I} f_i\, y)\cdot g\, z\\ &= (\sum_{i\in I} f_i\, x) + \sum_{i\in I}\sum_{x=y\cdot z} (f_i\, y\cdot g\, z)\\ &= \sum_{i\in I} (f_i\, x + \sum_{x=y\cdot z} (f_i\, y\cdot g\, z))\\ &=(\sum_{i\in I} (f_i\cdot g))\, x. \end{align*} Left annihilation is as usual a special case of right distributivity.
We calculate explicitly \begin{equation*} (\mathbb{O}\cdot f)\, x = \mathbb{O}\, x + \sum_{x=y\cdot z} \mathbb{O}\, y\cdot f\, z= 0+0=0. \end{equation*} Finally, for associativity, we calculate \begin{align*} (f\cdot (g\cdot h))\,x &= f\,x + \sum_{x=y\cdot z} f\,y\cdot (g\,z + \sum_{z=u\cdot v} g\,u\cdot h\,v)\\ &= f\,x + (\sum_{x=y\cdot z} f\,y\cdot g\,z) + \sum_{x = y \cdot z} f\, y \cdot (\sum_{z=u \cdot v} g\,u\cdot h\,v)\\ &= (f\cdot g)\,x + \sum_{x=y \cdot u \cdot v} f\,y\cdot g\,u \cdot h\,v\\ &= (f\cdot g)\,x + \sum_{x=w \cdot v} (\sum_{w = y \cdot u} f\,y \cdot g\,u) \cdot h\,v\\ &= (f\cdot g)\,x + \sum_{x=w\cdot v}(f\cdot g)\,w \cdot h\,v\\ &= ((f\cdot g)\cdot h)\,x. \end{align*} The penultimate step uses the fact that $w\in S^b$. \end{proof} \begin{proposition}\label{prop:futuristic-biquantale} Let $(S_1,\circ)$ be a futuristic partial semigroup and $S_2$ a set. If $(Q,\le, \circ)$ is a (distributive) quantale, then $(Q^{S_1\times S_2},\le,\circ)$ is a (distributive) quantale with $\mathbb{O}$ not necessarily a right annihilator and left distributivity holding only for non-empty suprema. Unitality lifts from $Q$ to $Q^{S_1\times S_2}$ with unit $\mathbb{1}_\circ$ if $S_1$ is a partial monoid. \end{proposition} The proof adapts that of Proposition~\ref{prop:seq-lifting} to Lemma~\ref{lem:futuristic-quantale}. A treatment of historistic intervals is dual, that is, left annihilation fails. Proposition~\ref{prop:futuristic-biquantale} can be extended further into an analogue of Theorem~\ref{thm:biquantale}. We do not explicitly display this statement. \begin{example}[Formal Languages with Infinite Words] Let $X$ be a finite alphabet. Let $X^\ast$, as previously, denote the set of finite words over $X$ and $X^\omega$ the set of all infinite words, which are sequences of type $\mathbb{N}\to X$. Let $X^\infty=X^\ast\cup X^\omega$. Then $X^\ast\cap X^\omega=\emptyset$ by definition. Every language $L\subseteq X^\infty$ may contain finite as well as infinite words and we write $\mathsf{fin}(L)$ and $\mathsf{inf}(L)$ for the sets of all finite and infinite words in $L$. In this context it is natural to disallow the concatenation of an infinite word with another word, hence $X^\infty$ is endowed with a futuristic partial monoid structure. In addition, the product of $L_1,L_2\subseteq X^\infty$ is commonly defined as \begin{equation*} L_1\cdot L_2=\mathsf{inf}(L_1)\cup \{vw \mid v\in\mathsf{fin}(L_1)\wedge w\in L_2\}. \end{equation*} This is captured by the futuristic product with $Q=\mathbb{B}$. It then follows from Lemma~\ref{lem:futuristic-quantale} that the languages in $2^{X^\infty}$ form a distributive quantale in which $L\cdot\emptyset=\emptyset$ need not hold and left distributivity holds only for non-empty suprema. In fact, the absence of right annihilation can be verified with the singleton stream $L=\{aaa\dots\}$.\qed \end{example} Models with finite/infinite paths and traces can be built in a similar fashion.
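The failure of right annihilation is easy to observe computationally. In the following Python sketch (ours; infinite words are modelled as ultimately periodic pairs of a finite prefix and a period, which suffices for the singleton stream above), the futuristic product leaves infinite words on the left undivided.
\begin{verbatim}
# Languages over finite and infinite words; an infinite word is an
# ultimately periodic pair (prefix, period), e.g. ("", "a") is aaa...
# The futuristic product leaves infinite words on the left undivided.

def is_inf(w):
    return isinstance(w, tuple)

def fprod(L1, L2):
    inf1 = {w for w in L1 if is_inf(w)}
    fin1 = {w for w in L1 if not is_inf(w)}
    glue = {(v + w[0], w[1]) if is_inf(w) else v + w
            for v in fin1 for w in L2}
    return inf1 | glue               # L1 . L2 = inf(L1) + fin(L1) . L2

if __name__ == "__main__":
    L = {("", "a")}                  # the singleton stream aaa...
    assert fprod(L, set()) == L      # right annihilation fails
    assert fprod(set(), L) == set()  # left annihilation holds
\end{verbatim}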
\begin{example}[Functions and Predicates over Futuristic Intervals] Let $(P,\le)$ be a linear order without right endpoint. Let $I_P^f$ stand for the set of all non-empty closed intervals over $P$ and let $I_P^i$ denote the set of all \emph{futuristic intervals} $[a,\infty]= \{ b\ |\ b\ge a\}$. This does not mean that we add an explicit element $\infty$ to $P$; $\infty$ is merely part of our naming conventions. Then $I_P=I_P^f\cup I_P^i$ and $I_P^f\cap I_P^i=\emptyset$. The fusion product of intervals can now be redefined as \begin{equation*} x\cdot y = \begin{cases} x, &\text{ if } x\in I_P^i,\\ [x_{\min},y_{\max}], &\text{if } x\in I_P^f \text{ and } x_{\max}=y_{\min},\\ \bot, &\text{otherwise}, \end{cases} \end{equation*} where $y_{\max}=\infty$ is included as an option. It then follows from Lemma~\ref{lem:futuristic-quantale} that $Q^{I_P}$ forms a distributive quantale in which $\mathbb{O}$ is not necessarily a right annihilator. In fact, $f\cdot \mathbb{O} = \mathbb{O}$ can be falsified with any interval $x=[a,\infty]$ and the interval predicate $f=\lambda y.\ a\in y$.\qed \end{example} An example of closed and open intervals without fusion can be obtained along the same lines. Examples of bi-quantales based on stream functions over futuristic intervals with a notion of separating conjunction can be obtained in a straightforward way. \section{Interchange Laws}\label{sec:interchange} Algebras in which a spatial or concurrent separation operation interacts with a temporal or sequential one have already been studied, for instance, in the context of concurrent Kleene algebra~\cite{HMSW11}. In addition to the trioid or bi-quantale laws, these algebras provide interesting interaction laws between the two compositions, which in this context are interpreted as concurrent and sequential composition. Such laws are, obviously, of general interest. More concretely, the following \emph{interchange laws} hold in concurrent Kleene algebras: \begin{align*} (x\ast y)\cdot z & \le x\ast (y\cdot z),\\ x\cdot (y\ast z) & \le (x\cdot y)\ast z ,\\ (w\cdot x)\ast (y\cdot z) & \le (w\cdot y)\ast (x\cdot z). \end{align*} We call the first two laws \emph{small interchange laws} and the last one \emph{weak interchange law}. These laws hold in models of concurrency including shuffle languages and certain classes of partially ordered multisets~\cite{Gischer}. It has been shown that one of the small interchange laws is equivalent to a separation logic style frame rule in a certain encoding of Hoare logics~\cite{Locality}. The weak interchange law, in turn, is equivalent to one of the standard concurrency rules for Hoare logic, which is similar to those considered in Owicki and Gries' logic~\cite{OG76} or in concurrent separation logic~\cite{COY07}. This relationship is considered further in Section~\ref{sec:hoare}. The close relationship between power series and separation logic and the similarity between two-dimensional power series and concurrent Kleene algebras make it worth considering the interchange laws in this setting. However, we obtain mainly negative results. To start with a positive result, we establish interchange laws between other kinds of operations. \begin{lemma}\label{P:quantaleprops} In every quantale, the following interchange laws hold, where $\ast$ denotes a possible second composition as in a bi-quantale: \begin{equation*} (w\sqcap x)\cdot (y\sqcap z) \le (w\cdot y)\sqcap (x\cdot z),\qquad (w\sqcap x)\ast (y\sqcap z) \le (w\ast y)\sqcap (x\ast z). \end{equation*} \end{lemma} It turns out, however, that the small and weak interchange laws between sequential and concurrent composition do not hold in general. This is established by the counterexamples which support the following proposition. \begin{proposition}\label{prop:interchangeref} There are $F,G,H,K:S_1\to S_2\to \mathbb{B}$ such that the following hold. \begin{enumerate} \item $F\cdot G\not\le F\ast G$, \item $(F\ast G)\cdot H\not\le F\ast (G\cdot H)$, \item $F\cdot (G\ast H)\not\le (F\cdot G)\ast H$, \item $(F\ast G)\cdot (H\ast K)\not\le (F\cdot H)\ast (G\cdot K)$.
\end{enumerate} \end{proposition} \begin{proof} First, note that $\le$ can be interpreted as $\Rightarrow$ for stream interval predicates, and recall that parallel composition of predicates is separating conjunction when $f$ is a vector of functions. \begin{enumerate} \item To refute $F\cdot G\le F\ast G$, let $x=[-10,10]$, $f=(f_1,f_2)$ with \begin{equation*} f_1\, t = \begin{cases} 1, &t\le 0,\\ 0, &t> 0, \end{cases} \qquad\qquad f_2\, t = \begin{cases} 1, &t\ge 0,\\ 0, &t<0, \end{cases} \end{equation*} and \begin{equation*} F\, x \, f = \forall t \in x.\ f_1\, t = 1,\qquad G\, x \, f = \forall t\in x.\ f_2\, t = 1. \end{equation*} Then $(F\cdot G)\, x\, f =1$, splitting interval $x$ at $t=0$, whereas $(F \ast G)\, x\, f =0$ since neither $F$ nor $G$ holds on the entire interval $x$. This may be visualised using the diagrams below, where dashed lines represent that the corresponding function has value $0$, and solid lines represent the value $1$. For the right diagram, there is no possible way for the functions $f_1$ and $f_2$ to go through $F$ and $G$. \begin{center} \scalebox{0.75}{\input{no-weak-inter-3.pspdftex}} \end{center} \item To refute $(F\ast G)\cdot H\le F\ast (G\cdot H)$, let $x=[-10,10]$, $f_1$ as in (a) and $f_2=\lambda t.\ 0$, where \begin{align*} F\, x\, f& = \forall t\in x.\ f_1\, t = 1, \\ G\, x\, f &= \forall t\in x.\ f_2\, t=0, \\ H\, x\, f &= \forall t\in x.\ f_1\, t=0\vee f_2\, t =0. \end{align*} This makes the left hand side $1$ and the right hand side $0$. This is visualised by the diagram below---neither $f_1$ nor $f_2$ may go through $F$. \begin{center} \scalebox{0.75}{\input{no-weak-inter-4.pspdftex}} \end{center} \item $H\cdot (G\ast F)\le (H\cdot G)\ast F$ can be refuted by the function \begin{equation*} f_1'\, t = \begin{cases} 0,& t\le 0,\\ 1,& t >0, \end{cases} \end{equation*} and $f_2$ as in (b), exploiting opposition duality between the two interchange laws and realising that $f_1'$ is the ``time reverse'' of $f_1$. \item To refute $(F\ast G)\cdot (H\ast K)\le (F\cdot H)\ast (G\cdot K)$, consider $f = (f_1, f_2, f_3)$ where \begin{equation*} f_1\, t = 0,\qquad f_2\, t = \begin{cases} 0,& t\le 0,\\ 1,& t >0, \end{cases} \qquad f_3\, t = 1 \end{equation*} and \begin{align*} F\, x\, f &= \forall t \in x.\ f_1\, t = 0, \\ G\, x\, f &= \forall t \in x.\ f_2\, t < f_3\, t, \\ H\, x\, f &= \forall t \in x.\ f_1\, t < f_2\, t, \\ K\, x\, f &= \forall t \in x.\ f_3\, t = 1. \end{align*} For $x = [-10, 10]$, the diagram on the left below shows that the left hand side $(F\ast G)\cdot (H\ast K)$ holds. However, in the diagram on the right, which represents $(F\cdot H) \ast (G\cdot K)$, there is no possible combination of horizontal and vertical splits that accommodates $f$. In particular, $f_1$ must go through $F$, and similarly $f_3$ must go through $K$. We have a choice of placing $f_2$ above the horizontal line (through $F$ and $H$), or below (through $G$ and $K$), however, neither choice is appropriate. \begin{center} \scalebox{0.75}{\input{no-weak-inter-2.pspdftex}} \end{center} \end{enumerate} \end{proof} Imposing additional algebraic restrictions, which would allow the derivation of interchange laws, is left for future work. A promising candidate is the consideration of locality assumptions, as in separation logic~\cite{COY07}, which are briefly explained in the following section, or the inclusion of dependency relations~\cite{HMSW11} in the definition of the semigroup operations.
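The positive law of Lemma~\ref{P:quantaleprops} can at least be checked by randomised testing, for instance in the language quantale. The following Python sketch (ours, purely illustrative) tests the first interchange law on random languages over a small, bounded set of words.
\begin{verbatim}
# Randomised check of (w & x) . (y & z) <= (w . y) & (x . z) in the
# quantale of languages: sets of words, complex concatenation as
# multiplication and intersection as meet.
import random

def conc(L, M):
    return {u + v for u in L for v in M}

def rand_lang(words):
    return {w for w in words if random.random() < 0.5}

if __name__ == "__main__":
    random.seed(0)
    words = ["", "a", "b", "ab", "ba"]
    for _ in range(1000):
        w, x, y, z = (rand_lang(words) for _ in range(4))
        assert conc(w & x, y & z) <= (conc(w, y) & conc(x, z))
\end{verbatim}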
\section{Hoare Logics from Power Series Quantales}\label{sec:hoare} One benefit of algebras is that they support the development of verification systems. It is well known, for instance, that quantales can be endowed with Hoare logics~\cite{HMSW11}, more precisely \emph{propositional} Hoare logics, in which data flow rules such as assignment rules are missing. This section illustrates how this leads to propositional Hoare logics over power series. But before that we briefly recall how notions of iteration arise in the quantale setting, since these are needed for while rules in Hoare logic. Since quantales are complete lattices, least and greatest fixpoints of isotone functions exist. Moreover, due to their infinite distributivity laws, functions such as $\lambda \alpha.\ x+ \alpha$, $\lambda \alpha.\ x\cdot \alpha$ and $\lambda \alpha.\ \alpha\cdot x$ are continuous and the first one is even co-continuous in distributive quantales. This means that in particular the least fixpoints built by using combinations of these functions can be obtained by iteration from $0$ to the first limit ordinal. More specifically, the function $\varphi=\lambda\alpha.\ 1+x\cdot \alpha$ is continuous, hence has the least fixpoint $\mu\varphi=x^\ast=\sum_{i\in\mathbb{N}}\varphi^i(0)=\sum_{i\in\mathbb{N}}x^i$. This notion of finite iteration is needed for deriving a while-rule for a finite loop in a partial correctness setting. More generally, the unfold and induction rules \begin{alignat*}{4} 1+x\cdot x^\ast &= x^\ast,&\qquad z+x\cdot y \le y&\Rightarrow x^\ast\cdot z\le y,\\ 1+x^\ast\cdot x &= x^\ast,&\qquad z+y\cdot x \le y&\Rightarrow z\cdot x^\ast\le y \end{alignat*} can be used for reasoning about the star. In a total correctness setting, a notion of possibly infinite iteration is preferable, which corresponds to the greatest fixpoint of $\varphi$. Infinite iteration is also useful for futuristic monoids \refsec{sec:futuristic}, for example, when reasoning about reactive systems, and Hoare rules for these can be developed. However, because these follow a similar pattern to finite iteration, we leave their full treatment as future work. Equipped with the star in the power series quantale we can now follow~\cite{HMSW11} in setting up a propositional Hoare logic. The development is slightly non-standard, in that there is no distinction between assertions and programs at the level of algebra. It follows the lines of a previous approach by Tarlecki~\cite{tarlecki}. For a quantale $Q$ and elements $x,y,z\in Q$, we define validity of a Hoare triple Tarlecki-style as \begin{equation*} \vdash\{x\}y\{z\} \Leftrightarrow x\cdot y\le z. \end{equation*} In Tarlecki's original article, this encoding has been used for a relational semantics where not only the program, but also its pre- and postconditions are modelled as relations. It is equally suitable for trace or language based extensions of Hoare logic to concurrency, such as the rely-guarantee method~\cite{Jon83}. The proof of the following proposition is then straightforward and generic for quantales. \begin{proposition}[\cite{HMSW11}]\label{prop:phl} Let $Q$ be a unital quantale with unit $1$. The following rules of propositional Hoare logic are derivable, for all $w, w_1, w_2,x,x_1,x_2,y,y_1,y_2,z,z_1,z_2\in Q$. 
\begin{gather*} \vdash\{x\}1\{x\} \qquad \frac{x_1\le x_2\quad\vdash \{x_2\}y\{z_2\}\quad z_2\le z_1}{\vdash\{x_1\}y\{z_1\}} \\ \frac{\vdash\{x\}y_1\{z\}\quad\vdash\{x\}y_2\{z\}}{\vdash\{x\}y_1+y_2\{z\}} \qquad \frac{\vdash\{w\}x_1\{z\}\quad\vdash\{z\}x_2\{y\}}{\vdash\{w\}x_1\cdot x_2\{y\}} \\ \frac{\vdash\{x\}y\{x\}}{\vdash\{x\}y^\ast\{x\}} \end{gather*} \end{proposition} We can strengthen the choice and star rules as follows. \begin{equation*} \frac{\vdash\{x\cdot w_1\} y_1\{z\}\quad\vdash\{x\cdot w_2\}y_2\{z\}}{\vdash\{x\}w_1\cdot y_1+w_2\cdot y_2\{z\}} \qquad \frac{\vdash\{x\cdot w_1\}y\{x\}}{\vdash\{x\}(w_1\cdot y)^\ast\cdot w_2\{x\cdot w_2\}} \end{equation*} The proof of the first one is essentially that of the choice rule. For the second one suppose $x\cdot w_1\cdot y\le x$. Then $x\cdot (w_1\cdot y)^\ast \le x$ by star induction and $x\cdot (w_1\cdot y)^\ast\cdot w_2\le x\cdot w_2$ by isotonicity. If $w_1$ and $w_2$ are, in some sense, complemented, then this yields the standard conditional rule and while rule of Hoare logic. Instantiating Proposition~\ref{prop:phl} to power series quantales automatically yields Hoare calculi for virtually all the examples discussed in this article. The instantiation to the binary relations quantale reproduces Tarlecki's original soundness result. Other instances yield, in a generic way, Hoare logics over computationally meaningful semantics based on finite words (traces in the sense of concurrency theory), paths in graphs (sequences of events in concurrency theory), paths in the sense of automata theory, or pomsets. We also obtain generic propositional Hoare logics for reasoning about interval and stream interval predicates in algebraic variants of interval logics. In addition, Proposition~\ref{prop:phl} covers commutative quantales, where the Tarlecki-style encoding of the validity of Hoare triples might make less sense. The rules covered by Proposition~\ref{prop:phl}, however, are entirely sequential. For applications involving concurrency, such as the vector stream interval functions in Example~\ref{ex:vector-stream-interval-functions}, additional rules are desirable. In concurrent Kleene algebra, Owicki-Gries-style concurrency rules and frame rules in the style of separation logic can be derived. The same derivation, however, is ruled out in the quantale context, because the concurrency rule obtained is equivalent to the weak interchange law and the frame rule to one of the small interchange laws, both of which have been refuted in Proposition~\ref{prop:interchangeref}. Instead we can use the interchange laws provided by Lemma~\ref{P:quantaleprops}. \begin{lemma}\label{lem:quantale-concrule} In a quantale $Q$ the following concurrency rule is derivable, for all $x_1,x_2,$ $y_1,y_2,$ $z_1,z_2\in Q$. \begin{equation*} \frac{\vdash \{x_1\}y_1\{z_1\}\quad \vdash \{x_2\}y_2\{z_2\}}{\vdash \{x_1\sqcap x_2\}y_1\sqcap y_2\{z_1\sqcap z_2\}} \end{equation*} \end{lemma} \begin{proof} Suppose $x_1\cdot y_1\le z_1$ and $x_2\cdot y_2\le z_2$. Then \begin{equation*} (x_1\sqcap x_2)\cdot (y_1\sqcap y_2) \le (x_1\cdot y_1)\sqcap (x_2 \cdot y_2) \le z_1\sqcap z_2 \end{equation*} by the interchange law of Lemma~\ref{P:quantaleprops} and the assumptions. \end{proof} \noindent Once more this rule is available automatically in all examples discussed in this article.
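As a sanity check, the Tarlecki-style encoding, the star rule and the conjunction-based concurrency rule can all be exercised in the quantale of binary relations over a small state space. The following minimal Python sketch is ours; the sample relations are arbitrary.
\begin{verbatim}
# Hoare triples |- {x} y {z} iff x . y <= z in the quantale of binary
# relations on a small set; star computed by least-fixpoint iteration.
from itertools import product

U = range(3)
ONE = frozenset((s, s) for s in U)              # multiplicative unit
TOP = frozenset(product(U, U))

def comp(r, s):                                 # relational composition
    return frozenset((a, c) for (a, b) in r for (b2, c) in s if b == b2)

def star(r):                                    # least fixpoint of 1 + r.x
    x = ONE
    while True:
        nxt = x | comp(r, x)
        if nxt == x:
            return x
        x = nxt

def valid(x, y, z):                             # Tarlecki-style validity
    return comp(x, y) <= z

x = frozenset({(0, 0), (0, 1), (1, 0), (1, 1)}) # a "precondition"
y = frozenset({(0, 1), (1, 0)})                 # a "program"
assert valid(x, y, x)                           # |- {x} y {x} ...
assert valid(x, star(y), x)                     # ... hence |- {x} y* {x}

# concurrency rule with meet = set intersection (Lemma above)
x1, y1, x2, y2 = x, y, TOP, star(y)
assert valid(x1, y1, x) and valid(x2, y2, TOP)
assert valid(x1 & x2, y1 & y2, x & TOP)
\end{verbatim}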
As an alternative to conjunction-based notions of concurrency, it might still be possible to derive concurrency and frame rules under additional syntactic restrictions, for instance, those capturing the synchronisation between sequential and concurrent compositions, or in particular models. An investigation is left for future work. \section{The Frame Rule in a Power Series Context}\label{sec:frame} Section~\ref{sec:fpscomquantaleexamples} shows that the assertion quantales which underlie separation logic---implementing the boolean operations together with a notion of separating conjunction on predicates over a resource monoid---can be modelled in the power series setting. Predicate transformers, which yield another way of deriving Hoare logics over assertion algebras, can be modelled in that setting as well (Section~\ref{sec:transformers}). In this section we sketch how a combination of these results allows us to derive the frame rule of separation logic by equational reasoning. Convolution plays a central part in the proof. Previously, algebraic proofs of the frame rule have been given in a state transformer context~\cite{COY07} as well as in the context of concurrent Kleene algebra~\cite{HMSW11}. It is well known that in the predicate transformer setting, validity of Hoare triples can be encoded as \begin{equation*} \vdash \{p\}R\{q\}\Leftrightarrow p\le \hat{f}_R\, q, \end{equation*} which is essentially an adjunction, using the notation of Section~\ref{sec:transformers}, but writing $p,q,\dots$ for predicates, which are elements of the assertion quantale of separation logic. It is also well known that the rules of Hoare logic can be derived in this setting, assuming that predicate transformers are isotone. A result of separation logic states that the frame rule can be derived whenever the predicate transformer $f$ under consideration is \emph{local}, that is, it satisfies \begin{equation*} f\ast\mathit{id}\le f. \end{equation*} Intuitively, locality means that the effect of a transformer can always be localised on part of the state. For a detailed discussion see~\cite{COY07}. Before deriving the frame rule we use properties of power series and convolution to prove a point-wise analogue of locality which simplifies the proof. \begin{lemma}\label{lem:local-prop} $f$ is local if and only if $(f\, p) \ast q \le f\, (p\ast q)$. \end{lemma} \begin{proof} Let $(f\, p)\ast q\le f\, (p\ast q)$. Then $(f\, p)\ast (\mathit{id}\, q) = (f\, p) \ast q \le f\, (p\ast q)$ and therefore \begin{equation*} (f\ast \mathit{id})\, r = \sum_{r=p\ast q} (f\, p)\ast (\mathit{id}\, q) \le \sum_{r=p\ast q} f\, (p\ast q) = f\, r. \end{equation*} Let $f$ be local. Then \begin{equation*} (f\ast \mathit{id})\, r = \sum_{r=p\ast q} (f\, p) \ast q \le f\, r = f\, (p\ast q), \end{equation*} whence $(f\, p)\ast q\le f\, (p\ast q)$. \end{proof} \begin{lemma} Let $\hat{f}_R$ be a local predicate transformer associated to program $R$. Then the following frame rule holds. \begin{equation*} \frac{\vdash\{p\}R\{q\}}{\vdash\{p\ast r\}R\{q\ast r\}} \end{equation*} \end{lemma} \begin{proof} Let $\vdash\{p\}R\{q\}$, that is, $p\le \hat{f}_R\, q$. Then $p \ast r \le (\hat{f}_R\, q) \ast r \le \hat{f}_R\, (q \ast r)$ by Lemma~\ref{lem:local-prop} and therefore $\vdash\{p\ast r\}R\{q\ast r\}$. \end{proof} A deeper investigation of Hoare logics, inference rules for separation logic, and extensions to concurrency in this setting is left for future work.
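To see Lemma~\ref{lem:local-prop} and the frame rule at work, the following minimal Python sketch (ours; the heap model and the transformer are illustrative choices, not part of the formal development) checks pointwise locality and a frame rule instance for the weakest-precondition transformer of a single heap update, enumerating all heaps with two addresses and two values.
\begin{verbatim}
# Locality and the frame rule in a tiny heaplet model. Heaps are
# frozensets of (address, value) pairs (partial functions); assertions
# are sets of heaps; p * q glues heaps with disjoint domains.
from itertools import combinations, product

ADDRS, VALS = (0, 1), (0, 1)

def all_heaps():
    cells = product([None] + list(VALS), repeat=len(ADDRS))
    return [frozenset((a, v) for a, v in zip(ADDRS, c) if v is not None)
            for c in cells]

HEAPS = all_heaps()
def dom(h): return {a for a, _ in h}

def sep(p, q):                      # separating conjunction on assertions
    return {h1 | h2 for h1 in p for h2 in q if not (dom(h1) & dom(h2))}

def write(h, addr, val):
    return frozenset((a, val if a == addr else v) for a, v in h)

def f(q):                           # wp of "[0] := 1": cell 0 allocated
    return {h for h in HEAPS       # and the updated heap satisfies q
            if 0 in dom(h) and write(h, 0, 1) in q}

# pointwise locality: (f p) * q <= f (p * q), over sample assertions
preds = [set(c) for c in combinations(HEAPS, 2)]
for p, q in product(preds[:8], repeat=2):
    assert sep(f(p), q) <= f(sep(p, q))

# frame rule instance: from p <= f(q) conclude p * r <= f(q * r)
p0 = {frozenset({(0, 0)})}                    # cell 0 holds 0
q0 = {frozenset({(0, 1)})}                    # cell 0 holds 1
r0 = {h for h in HEAPS if dom(h) == {1}}      # exactly cell 1
assert p0 <= f(q0)
assert sep(p0, r0) <= f(sep(q0, r0))
\end{verbatim}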
\section{Conclusion}\label{sec:conclusion} The aim of this article is to demonstrate that convolution is a versatile and interesting construction in mathematics and computer science. Used in the context of power series and integrated into lifting results, it yields a powerful tool for setting up various mathematical structures and computational models and calculi endowed with generic algebraic properties. Beyond the language models known from formal language theory, these include assertion quantales of separation logic (which can be lifted from an underlying resource monoid), assertion quantales of interval logics (which can be lifted from an underlying semigroup of intervals) and stream interval functions (which have applications in the analysis of dynamic and real-time systems). For all these examples, power series provide a simple new approach. For the latter two, new kinds of concurrency operations are provided. In addition, the modelling framework based on power series has been combined with a verification approach by deriving, in generic fashion, propositional Hoare logics for virtually all the examples considered. In particular, state, predicate or resource transformers, which can be used for constructing these logics, arise as instances of power series. This article focused mainly on a proof of concept for the relevance of convolution. Many of the modelling examples and verification approaches featured require further investigation. This includes in particular the derivation of more comprehensive sets of Hoare-style inference rules for concurrency verification, separation logic and interval temporal logics, and more detailed case studies with separation, interval and stream interval algebras, and with concurrent systems with infinite behaviours. For all these case studies, the formalisation of the power series approach and the implementation of modelling tools play an important role. In fact, the basic lifting lemma and a detailed predicate transformer approach based on power series have already been formalised within the Isabelle/HOL proof assistant~\cite{NPW02}. The development of a power series based verification tool for separation logic, and even concurrent separation logic, will be the next step in the tool chain. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} In-hand manipulation, understood as the capability to adapt a grasp on an object, facilitates the complex process involved in picking and using an object. Robots, especially those with simple grippers, lack the necessary dexterity to do so, which strains their manipulation capabilities. \begin{figure} \centering \includegraphics[]{figures/prepush_seq_real_v2.pdf} \caption{An example of prehensile pushing -- an aluminum object is reconfigured in a grasp by pushing it against the environment from different sides.} \label{fig:prepush_seq} \vspace{-0.2in} \end{figure} In this paper, we propose a planner to manipulate grasped objects through a sequence of external pushes, such as those in \figref{fig:prepush_seq}, a.k.a. prehensile pushing~\citep{ChavanDafle2015a}. Given a pair of start and goal object grasps, the planner outputs a sequence of pushes, possibly from different sides of the object, to reconfigure the object in the grasp. Planning these push sequences presents two main challenges: \begin{itemize} \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] \textbf{Continuous contact dynamics} of the frictional interaction between gripper, object and their environment. \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] \textbf{Discrete contact switch-overs} between continuous pushes. \end{itemize} To address them both, we combine a low-level optimization-based approach to solve the inverse dynamics of prehensile pushing with a high-level sampling-based planning approach to build long sequences of pushes. \myparagraph{Low-level optimization-based inverse dynamics} For prehensile pushing, the unit-step control that propagates a planning tree is obtained by solving the inverse dynamics problem, i.e., finding the external pusher motion that yields an object motion as close to the desired one as possible. We develop an optimization-based dynamics formulation capturing the contact dynamics between gripper, object, and external pusher, which in practice takes the form of a mixed nonlinear complementarity problem (MNCP). \myparagraph{High-level sampling-based planning} The higher level planning architecture follows a transition-based RRT\mbox{*} (T-RRT\mbox{*}) formulation which takes advantage of the optimality convergence properties of the typical RRT\mbox{*} technique and of the efficient exploration of the configuration space using transition tests~\citep{trrt_star,trrt,rrt_star}. We use the optimal connections feature of RRT\mbox{*} to minimize the number of pusher contact switch-overs along a pushing strategy. The transition tests allow us to loosely confine the stochastic exploration towards the goal grasp, while retaining the flexibility to explore in other directions if that is necessary to eventually bring the object to the goal. The planning architecture and the dynamics solver work together to build a tree of grasp poses. A path in this tree provides a pushing strategy to change one grasp pose into another. We evaluate the performance of the planner for the case of a parallel-jaw gripper manipulating different objects. We validate the pushing sequences with real experiments in a robotic manipulation platform which is equipped to track the motion of the robot and the pose of the object.
To summarize, the main contributions of this paper are: \begin{itemize} \item an optimization-based inverse dynamics formulation for full three dimensional in-hand manipulations using external pushes, \item a planning framework to combine low-level contact dynamics with high-level reasoning for long pushing strategies with discrete contact changes, \item application and experimental validation of the proposed planner to prehensile pushing. \end{itemize} \section{Related Work} \label{sec:related} Early work on dexterous manipulation focused on providing a gripper with enough degrees of freedom to give full controllability over a grasped object and further allowing finger contacts to either roll or slide~\citep{salisbury1982ahf,Trinkle1990,Kao1992,Bicchi95a,Rus1999a,CherifGupta99}. It assumed the intrinsic capability of the gripper to control these interactions. Diverging from this assumption, in a recent work, we demonstrated the use of gravity, dynamic motions, and contacts with the environment to regrasp objects using a library of hand-scripted motions~\citep{ChavanDafle2014}. In~\citep{ChavanDafle2015a}, we studied in-hand manipulations with external contacts. We referred to it as \textit{prehensile pushing} and presented a quasi-dynamic formulation to predict the instantaneous motion of a grasped object for a given pusher motion -- the forward dynamics problem~\citep{ChavanDafle2015a}. The inverse dynamics solver we use in this paper shares a similar dynamics formulation underneath, but solves for the pusher motion required for a desired object motion. Planning for prehensile pushing requires an understanding of how forces and motions evolve at contact interactions. There is a large array of work on trajectory optimization techniques for planning and control through contact. In most cases, these make assumptions of point contact interactions modelled with polyhedral friction cones or patch contacts modelled as soft point contacts to alleviate the computational complexity of contact modeling \citep{Erez2012,todorov2012,Posa2014}. \citet{lynch15}, \citet{kragic_pivoting2} and \citet{Hou16} demonstrate the application of such an approach to in-hand manipulation, particularly for in-hand sliding and pivoting. For computational efficiency, \citet{Erez2012,todorov2012} relax the complementarity constraints required to impose the non-penetration condition at contacts and to model sticking/sliding transitions. This leads to fast algorithms, but with limited success in modeling situations of interest to this work, i.e., those benefiting from hard line and patch contacts~\citep{kolbert16}. The contact modelling approach in this paper resembles that presented in~\citep{ChavanDafle2015a}, but with a quadratic Coulomb friction cone instead of a polyhedral approximation. The polyhedral approximation introduces artificial anisotropy in friction and ``preferred'' sliding directions. Sampling-based techniques for planning are key to the presented approach. The rapidly exploring random tree (RRT) derives its strength from fast and random exploration of the configuration space~\cite{Lavalle98,Kuffner}. RRT\mbox{*} introduces the concept of optimality for connecting the nodes in a tree and provides conditions under which it can lead to asymptotically optimal solutions~\cite{rrt_star}. One of the variants of RRT\mbox{*} that we find particularly useful in this work is T-RRT\mbox{*}, which was developed for path planning on configuration-space cost maps~\cite{trrt_star,trrt}.
By employing a transition test to accept/reject nodes, it guides the exploration to follow low-cost valleys of the cost map with a provision to traverse high-cost regions whenever required. This provides a more controlled and efficient exploration of the configuration space. While sampling based methods have not been thoroughly explored for contact-rich applications and may not seem an immediate choice for problems with complex and computationally expensive dynamics, in the coming sections we discuss in detail the fit of a T-RRT\mbox{*} based approach for such problems and demonstrate its effectiveness at practical in-hand manipulations. \section{Problem Formulation} \label{sec:formulation} This paper focuses on planning in-hand manipulations using external pushes. In our implementation, the external pushes are executed by a robot forcing a grasped object against a rigid environment. More generally, such external pushes could also abstract the interactions with a second robot arm or extra fingers of a multi-finger gripper. Equivalently, in this paper, we assume the gripper is fixed in the world and grasps an object, while a virtual pusher with full 6 DOF mobility executes the external pushes. In this case, planning for external pushes is equivalent to planning the motion of the virtual pusher. For the problem setup, we assume the following information about the manipulation system: \begin{itemize} \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] Object geometry and mass. \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] Initial and goal grasp on the object, specified by the location and geometry of each finger contact. \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] Gripping force. \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] Discrete set of pusher contacts, specified by initial locations and geometries. \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] Coefficient of friction at all contacts. \end{itemize} As described in \secref{sec:intro}, the proposed planner works at two levels -- a high level planning architecture (\secref{sec:planning}) that explores the configuration space of reachable grasps and builds a tree of optimally connected configurations, and a low level inverse dynamics solver (\secref{sec:dynamics}) that controls the unit-step propagation in the tree. In short, the decision flow of the planner is as follows: \begin{itemize} \item[i.] Sample a random object configuration in a grasp. \item[ii.] Check if moving toward the sampled configuration satisfies a ``benefit'' criterion. If not, return to step i. \item[iii.] Solve inverse dynamics for a valid pusher location and pusher motion to move the object in the direction of the sampled pose. If not possible, return to step i. \item[iv.] Check for other ways to reach the newly added configuration with lower cost from existing nodes in the tree. \item[v.] Iterate until reaching the goal grasp within a given resolution and cost threshold. \end{itemize} \section{Low-level: Inverse Dynamics Solver} \label{sec:dynamics} Sampling-based planners are built on top of a unit-step algorithm that, when possible, steers the system along a sampled direction. In this paper, we refer to that unit step as the inverse dynamics problem: given the pose of the object in a grasp, the position of a pusher on the object and a desired instantaneous object motion in the grasp, find an instantaneous motion of the pusher that forces the object in a direction as close to the desired one as possible.
The following sections discuss our approach to modelling the contact interactions and the kinetic-kinematic constraints governing the object motion in the grasp. \subsection{Contact Modelling} \label{sec:contact} \begin{wrapfigure}[5]{r}{2in} \vspace{-15mm} \includegraphics[]{figures/allPatches_v3.pdf} \caption{Different contact geometries: point, line and circular patch, modeled as sets of rigidly connected point contacts} \label{fig:contact_shape} \end{wrapfigure} Our contact modelling approach is similar to that proposed in~\citep{ChavanDafle2015a}. We model a patch contact as a rigid array of point contacts as shown in \figref{fig:contact_shape}. Each of these constituent point contacts is modeled as a hard point contact with a quadratic Coulomb friction cone. We represent a point contact between two bodies by a local coordinate frame with $\hat{\boldsymbol{n}}$ normal to the contact plane and $\hat{\boldsymbol{t}}$ and $\hat{\boldsymbol{o}}$ spanning the contact plane. Let $\boldsymbol{f}=[f_{n}, f_{t},f_{o}]^\top$ and $\boldsymbol{v}=[v_n,v_t,v_o]^\top$ be the net force and the relative velocity at a contact in the local contact frame. For a given coefficient of friction ($\mu$), Coulomb's friction cone at the contact is defined as the following set: \begin{equation} \label{eq:fricCone} FC=\{f_{n}\hat{\boldsymbol{n}} + f_{t}\hat{\boldsymbol{t}} + f_{o}\hat{\boldsymbol{o}}\ | \ f_{n} \geq 0,\ f_{t}^2+f_{o}^2 \leq \mu^2 f_{n}^2\} \end{equation} By Coulomb's law, when a contact slides, the contact force is on the boundary of the friction cone and the direction of the friction force is opposite to that of the sliding velocity at the contact. We can formalize this constraint using the standard complementarity and nonlinear equations: \begin{equation} \label{eq:compfricbound} [(\mu f_n)^2 - f_t^2 -f_o^2] \norm{[v_t,v_o]} = 0, \ \ \ \ (\mu f_n)^2 - f_t^2 -f_o^2\geq 0 \end{equation} \vspace{-3mm} \begin{equation} \label{eq:quad_dissipation} \mu f_{n}v_{i}+f_{i}\norm{[v_t,v_o]}=0 \hspace{1cm} i=t, o \end{equation} For a contact with finite area, modelled as an array of points, we impose \eref{eq:compfricbound} and \eref{eq:quad_dissipation} at each constituent point, along with constraints on the relative velocities at them to make sure that the array moves as a rigid body. See~\cite{ChavanDafle2015a} for more details. \subsection{Dynamics of Prehensile Pushing} \label{sec:constraints} The frictional forces involved in prehensile pushing are much more dominant than the object inertia, so we limit ourselves to a quasi-dynamic model of pushing. We define this model in the space of local contact impulses, relative velocities at the contacts, velocity of the object, and velocity of the pusher. The solution space is constrained by the following kinematic and kinetic constraints. \myparagraph{Newton Euler Equation}: \label{sec:newton} Let $\mathbf{G_i}$ be the matrix that maps local contact forces at contact $i$ to the corresponding wrench in the object frame. $\mathbf{G}$ is defined as the diagonal concatenation of the $\mathbf{G_i}$'s for all the contacts on the object.
As we are interested in a quasi-dynamic formulation, for a single time step with zero initial velocity of the object, we can write the time-integrated Newton's law for an object with mass $m$ and generalized inertia matrix $\mathbf{M}$ as: \begin{equation} \label{eq:newton_vel} \mathbf{G}\cdot \boldsymbol{P} + \vec{P}_{mg} = \mathbf{M}\cdot {\vec{v}}_{\textnormal{obj}} \end{equation} \noindent where $\boldsymbol{P}$ is an array collecting impulses equivalent to all the contact forces ($\boldsymbol{f_1}, . ., \boldsymbol{f_n}$), $\vec{P}_{mg}$ is the gravitational impulse and ${\vec{v}}_{\textnormal{obj}}$ is the resultant object velocity in the object frame. \myparagraph{Rigid Body Motion Constraints}: \label{sec:rigid_body} Let $\mathbf{J}$ be the Jacobian matrix that maps the velocities of the pusher and gripper actuators ($\dot{\theta}$) to the input velocities at all the contacts in the local contact frames. We can write $\boldsymbol{V} = [\boldsymbol{v_1}, \boldsymbol{v_2}, . . ., \boldsymbol{v_n}]^\top$, the array collecting the relative velocities at all contacts, as the difference between the reflection of the object velocity at those contact points and the input velocities: \begin{equation} \label{eq:contact_vel} \boldsymbol{V} = \mathbf{G}^\top\cdot\vec{v}_{\textnormal{obj}} - \mathbf{J}\cdot \dot{\theta} \end{equation} \myparagraph{Unilateral Contact Constraints}: \label{sec:unilateral} There cannot be interpenetration at contacts between two rigid bodies. Contacts can only push, not pull, and only when there is no separation at them. We write this as a complementarity constraint at each point contact: \begin{equation} \label{eq:unilateral2} v_{n}\cdot p_{n}=0,\ v_{n}\geq 0,\ p_{n}\geq 0 \end{equation} \myparagraph{Contact Modelling Constraints}: We model the force-motion interactions at every contact as explained in \secref{sec:contact}. Let $\boldsymbol{p}=[p_{n},p_{t},p_{o}]^\top$ be the impulse at a contact; rewriting equations \ref{eq:compfricbound} and \ref{eq:quad_dissipation} in the space of impulses and velocities: \begin{equation} \label{eq:compfricbound2} [(\mu p_n)^2 - p_t^2 -p_o^2] \norm{[v_t,v_o]} = 0, \ \ \ \ (\mu p_n)^2 - p_t^2 -p_o^2\geq 0 \end{equation} \begin{equation} \label{eq:quad_dissipation2} \mu p_{n}v_{i}+p_{i}\norm{[v_t,v_o]}=0 \hspace{1cm} i=t, o \end{equation} Further, for contacts with finite area modelled with arrays of point contacts, we impose constraints on the relative velocities at them to make sure that each array moves as a rigid body. \subsection{Numerical Solver for the Dynamics Problem} \label{sec:dyn_solving} In our problem, solving the inverse dynamics means finding a pusher velocity that produces the desired object velocity while satisfying all the constraints listed above. It has the form of a mixed nonlinear complementarity problem (MNCP), which we solve as a nonlinear constrained optimization problem using the interior point method in MATLAB. We define the objective function as a weighted sum of the complementarity constraints and the difference between the desired object velocity and that actually achieved. We minimize the objective function subject to the constraints detailed in Section \ref{sec:constraints}. A feasible solution exists when the objective gets close to zero while meeting the constraints. In practice, it helps to give a relatively larger weight to the complementarity constraints, which yields more accurate satisfaction of the contact dynamics while compromising on the desired object velocity if necessary.
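To illustrate how such a weighted objective can be handled by an off-the-shelf solver, the following is a minimal Python sketch (ours, not the MATLAB implementation used in this work) for a hypothetical one-dimensional toy: an object held by two frictional fingers and pushed upward against gravity by a single pusher from below. All numerical values are illustrative, and a robust implementation would smooth the absolute values.
\begin{verbatim}
# Toy 1D inverse dynamics step as a constrained optimization (sketch).
# Decision variables: object velocity, pusher velocity, pusher normal
# impulse and total finger friction impulse over one time step.
import numpy as np
from scipy.optimize import minimize

mu, N, m, g, dt = 0.5, 10.0, 0.5, 9.81, 0.01  # friction, grip force, mass, gravity, step
Fmax = 2.0 * mu * N * dt                       # finger friction impulse bound
v_des = 0.2                                    # desired upward object velocity [m/s]
w = 1.0e4                                      # weight on complementarity residuals

def objective(z):
    v_obj, v_push, p_n, f_t = z
    c1 = (v_obj - v_push) * p_n                # pusher: separation XOR normal impulse
    c2 = (Fmax - abs(f_t)) * abs(v_obj)        # sliding fingers: friction on the bound
    c3 = Fmax * abs(v_obj) + f_t * v_obj       # friction opposes the finger slip
    return w * (c1**2 + c2**2 + c3**2) + (v_obj - v_des)**2

cons = [
    {"type": "eq",   "fun": lambda z: z[2] + z[3] - m*g*dt - m*z[0]},  # Newton, time-integrated
    {"type": "ineq", "fun": lambda z: z[2]},                           # p_n >= 0
    {"type": "ineq", "fun": lambda z: z[0] - z[1]},                    # no interpenetration
    {"type": "ineq", "fun": lambda z: Fmax - abs(z[3])},               # friction cone bound
]

sol = minimize(objective, x0=np.array([0.0, 0.0, m*g*dt, 0.0]),
               constraints=cons, method="SLSQP")
v_obj, v_push, p_n, f_t = sol.x
print(f"push at {v_push:.3f} m/s -> object at {v_obj:.3f} m/s")
\end{verbatim}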
The ratio of weights we used is $10^4$. \section{High-Level: Long Horizon Planning with Contact Switch-overs} \label{sec:planning} An effective regrasp skill requires exploiting contact switch-overs. A continuous and greedy approach based on pushing iteratively towards the goal grasp has limited success in a problem as constrained and underactuated as in-hand manipulation. The problem benefits from a long-horizon planning technique that allows the regrasp strategy to deviate from the goal momentarily if necessary and to sequence different discrete pushes. Trajectory optimization has been studied to capture the effects of a long-horizon cost, but has difficulty with the hybrid nature of discrete contact switch-overs. On the other hand, sampling based methods are naturally suited to search over continuous plans intertwined with discrete changes along the plan. Being able to change the pusher contact from one side of the object to another can be pivotal. In practice, minimizing the number of contact switch-overs yields benefits in the form of time savings and reduced uncertainty from engaging and disengaging contacts. The higher level architecture of our planner is based on a T-RRT\mbox{*} formulation. We exploit the optimality convergence properties of the underlying RRT\mbox{*} method to reduce the number of pusher contact switch-overs and the efficiency of transition tests to direct the exploration of the configuration space towards the goal. \begin{algorithm} \caption{: In-Hand Manipulation Planner}\label{alg:full_planner} $ \textbf{input}: q_{init}, q_{goal}$ \par $ \textbf{output}:$ {tree} $\ \mathcal{T}$ \begin{algorithmic}[1] \State $\mathcal{T}\gets \textrm{initialize tree}(q_{init})$ \While{$q_{goal} \notin \mathcal{T}$} \State $q_{rand}\gets \textrm{sample random configuration}(\mathcal{C})$ \State $q_{parent}\gets \textrm{find nearest neighbor}(\mathcal{T},q_{rand})$ \State $q_{ideal}\gets \textrm{take unit step}(q_{parent},q_{rand})$ \If{\textrm{transition test}$(q_{parent},q_{ideal},\mathcal{T})$ \textbf{and} \textrm{grasp maintained}$(q_{ideal})$} \State $q_{new}, \dot{\theta}_{pusher} \gets \textrm{InvDynamics}(q_{parent},q_{rand})$ \If{$q_{new}\neq \textrm{null}$ \textbf{and} \textrm{transition test}$(q_{parent},q_{new},\mathcal{T})$ \textbf{and} \textrm{grasp maintained}$(q_{new})$} \State $(q\mbox{*}_{parent})\gets \textrm{optimal connection}(\mathcal{T},q_{new},q_{parent})$ \State $\textrm{add new node}(\mathcal{T},q_{new})$ \State $\textrm{add new edge}(q\mbox{*}_{parent},q_{new})$ \State $\textrm{rewire tree}(\mathcal{T},q_{new},q\mbox{*}_{parent})$ \EndIf \EndIf \EndWhile \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:full_planner} presents our in-hand manipulation planner, starting from the assumptions listed in \secref{sec:formulation}. Let $q$ denote a configuration of an object, i.e., the pose of the object with respect to a gripper frame which is assumed to be fixed in the world. Though the configuration space $\mathcal{C}$ is six dimensional, different types of grasps confine it to lower dimensions. We use a scaled Euclidean distance, where $1$~mm is treated as equivalent to $3$~degrees, as the metric between two object configurations. Let $q_{init}$ and $q_{goal}$ be the initial and desired configurations of the object, respectively. The planner initiates a tree $\mathcal{T}$ with $q_{init}$. While the desired object pose is not reached, it samples a random configuration ($q_{rand}$) and finds the nearest configuration to $q_{rand}$ in the tree $\mathcal{T}$.
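The following minimal Python sketch (ours; configurations are hypothetical 6-vectors with position in mm and orientation in degrees) illustrates this scaled metric and the nearest-neighbour query it drives.
\begin{verbatim}
# Scaled configuration distance: 1 mm of translation is treated as
# equivalent to 3 degrees of rotation (sketch; names are illustrative).
import numpy as np

def config_distance(qa, qb):
    dp = np.linalg.norm(qa[:3] - qb[:3])    # translational part [mm]
    dr = np.linalg.norm(qa[3:] - qb[3:])    # rotational part [deg]
    return dp + dr / 3.0

def nearest_neighbor(tree_nodes, q_rand):
    return min(tree_nodes, key=lambda q: config_distance(q, q_rand))
\end{verbatim}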
\myparagraph{Controlled exploration}: A transition test decides if the propagation of the tree towards the newly sampled configuration is acceptable or not. Let $C_{q}$ be a cost defined on the object configuration $q$ as the distance between $q$ and $q_{goal}$. If moving the object from the nearest neighbor towards the newly sampled pose can reduce the configuration cost, the sample is accepted. If such an object motion will increase the configuration cost, but still keep it lower than some maximum bound, the sample is accepted with a certain probability. Following \citet{trrt}, we define the transition probability for a transition from $q_a$ to $q_b$ as: \centerline{$p_{(q_a,q_b)} =\exp(\frac{-\Delta C_{(q_a,q_b)}}{KT})$} \noindent where, \begin{itemize} \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] $\Delta C_{(q_a,q_b)} = \frac{C_{q_b}-C_{q_a}}{dist(q_a,q_b)}$ is the rate of cost variation per unit distance. \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] $K$ is a normalization factor defined as the average of the costs $C_{q_b}$ and $C_{q_a}$. \item[$\raisebox{-0.25ex}{\scalebox{1.75}{$\cdot$}}$] $T$ is a temperature parameter which controls the difficulty of a transition. We adjust it as the planner progresses: it is increased if the tree is getting stuck locally, to allow transitions of high cost, and decreased otherwise, to allow only transitions of low cost. \end{itemize} In practice, the transition test with our configuration cost definition loosely confines the propagation of the tree towards the goal pose, while allowing the flexibility to steer away from it momentarily if necessary. If the transition test succeeds, we query the inverse dynamics solver to predict the motion of a pusher required to move the object from $q_{parent}$ by a unit step as far as possible towards $q_{rand}$. The dynamics solver limits its choice of pushers to a fixed set of pusher locations on the object and the evolved pusher location corresponding to $q_{parent}$. Here, by evolution we mean the new location of the pusher contact if it slides on the object. This makes sure that we account for the pusher slip when sequencing multiple instantaneous pushes to generate a smooth continuous push using the same pusher.
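A minimal Python sketch (ours) of the transition test above follows; the adaptive tuning of the temperature is left to the caller.
\begin{verbatim}
# Transition test: always accept downhill moves; accept uphill moves
# with probability exp(-dC / (K T)).
import numpy as np

def transition_test(C_a, C_b, dist_ab, T, C_max,
                    rng=np.random.default_rng()):
    if C_b > C_max:                      # above the maximum cost bound
        return False
    if C_b <= C_a:                       # cost decreases: accept
        return True
    dC = (C_b - C_a) / dist_ab           # rate of cost variation
    K = 0.5 * (C_a + C_b)                # normalization factor
    return rng.random() < np.exp(-dC / (K * T))
\end{verbatim}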
\vspace{-5mm} \noindent \begin{minipage}[t]{5.85cm} \null \begin{algorithm}[H] \caption{: optimal connection}\label{alg:optimConnect} $ \textbf{input}: \mathcal{T}, \ q_{new}, \ q_{parent} $ \par $ \textbf{output}: q\mbox{*}_{parent}$ \begin{algorithmic}[1] \State $J_{q_{new}}\gets \textrm{findNodeCost}(q_{new},q_{parent})$ \State $J\mbox{*}_{q_{new}}\gets J_{q_{new}}$; \ \ $q\mbox{*}_{parent}\gets q_{parent}$ \State $Q_{near}\gets \textrm{nodesInBall}(\mathcal{T},q_{new},R_{ball})$ \While {$Q_{near} \neq \emptyset$} \par \State $q_{parent}\gets q \in Q_{near}$ \If{$\textrm{InvDynamics}(q_{parent},q_{new})\neq\textrm{null}$} \State $J_{q_{new}}\gets \textrm{findNodeCost}(q_{new},q_{parent})$ \If{$J_{q_{new}}<J\mbox{*}_{q_{new}}$} \par \State $J\mbox{*}_{q_{new}}\gets J_{q_{new}}$ \State $q\mbox{*}_{parent}\gets q_{parent}$ \EndIf \EndIf \State $Q_{near}\gets Q_{near} \setminus q_{parent}$ \EndWhile \end{algorithmic} \end{algorithm} \end{minipage}% \begin{minipage}[t]{5.75cm} \null \begin{algorithm}[H] \caption{: rewire tree}\label{alg:rewireTree} $ \textbf{input}: \mathcal{T}, \ q\mbox{*}_{parent}, \ q_{new}$ \par $ \textbf{output}: \textrm{tree} \ \mathcal{T}$ \begin{algorithmic}[1] \State $q_{parent}\gets q_{new}$ \State $Q_{near}\gets \textrm{nodesInBall}(\mathcal{T},q_{new},R_{ball})$ \While {$Q_{near} \neq \emptyset$} \par \State $q_{r}\gets q \in Q_{near}$ \State $q_{parent}\gets q_r.{parent}$ \State $J_{qr} \gets \textrm{findNodeCost}(q_{r},q_{parent})$ \If{$\textrm{InvDynamics}(q_{new},q_{r})\neq\textrm{null}$} \State $J_{qr_{new}}\gets \textrm{findNodeCost}(q_{r},q_{new})$ \If{$J_{qr_{new}}<J_{qr}$} \par \State $q_r.{parent}\gets q_{new}$ \EndIf \EndIf \State $Q_{near}\gets Q_{near} \setminus q_{r}$ \EndWhile \end{algorithmic} \end{algorithm} \end{minipage} \vspace{4mm} \myparagraph{Optimal connections}: As we wish to minimize the number of pusher contact switch-overs, we define the cost of a node in the tree ($J_q$) to reflect the contact switch-overs performed to get to that node from the start node. Formally, \centerline{$J_q = J_{q_{parent}} + \ dist(q,q_{goal}) \ + \ $cost of the instantaneous push.} \noindent where the cost of the instantaneous push that would move the object from $q_{parent}$ to $q$ is set to 0.1 (low) if the pusher used to get to $q_{parent}$ is used in continuation for this instantaneous push, or 1 (high) if the pusher location is changed. For reference, the distance from the goal is generally on the order of $10^{-3}$ to $10^{-1}$. Using this node cost definition, the \textit{optimal connection} routine explores the space around $q_{new}$ to find transitions that lead to a lower cost for $q_{new}$, and iteratively updates the parent node of $q_{new}$ and the cost of $q_{new}$ accordingly. Similarly, the \textit{rewire tree} routine checks if any of the nodes around $q_{new}$ can be connected through $q_{new}$ with the purpose of reducing their cost. Both these routines are characteristic of the RRT\mbox{*} architecture originally proposed in~\cite{rrt_star}. To summarize, the high level planner generates a tree of grasp poses connected with continuous pushes or with discrete pusher switch-overs. A path in this tree is a long pushing sequence that changes the grasp on the object from one pose to another with a small number of pusher contact switch-overs if necessary. \section{Example Cases} \label{sec:examples} In this section, we consider different examples of in-hand manipulation while highlighting notable features of the planner.
For all the experiments, we used a computer with an Intel Core i7 2.8 GHz processor and MATLAB R2016a. We evaluate the validity of the solutions with a manipulation platform instrumented with an industrial robot arm, a parallel-jaw gripper with force control at the fingers, features in the environment that act as pushers, and a Vicon system for object tracking. \vspace{-1mm} \subsection{Respect and Exploit Dynamics of Frictional Contact} \label{sec:linpush} \vspace{-7mm} \begin{figure} \centering \includegraphics[width=4.5in]{figures/linpush_sim_exp_small.pdf} \caption{Simulated motion of the object and snapshots from the experiment for a pushing sequence generated by the planner. Object motion is shown from a side view; the finger contact is a circular patch (shown in green) and the pusher contact is a line/edge contact (shown in magenta).} \label{fig:linpush_simexp} \end{figure} \vspace{-10mm} \begin{SCfigure} {\includegraphics[]{figures/plastic_linpush.pdf}} {\caption{Simulated motion of the object for a pushing sequence for a light-weight plastic object. Note that only the side pusher is used throughout and the downward sliding of the object is minimal.} \label{fig:plastic_push}} \end{SCfigure} Having a detailed underlying dynamics solver is one of the key strengths of our planner. This example shows different strategies generated by the planner to execute the same manipulation for two similar objects, but of different weights. \begin{figure} \centering \includegraphics[width=4.5in]{figures/linpush_plot_v4.pdf} \caption{Object motion in the grasp as predicted by the planner and as observed in the experiment for the example shown in \figref{fig:linpush_simexp}. Mean values for 10 experimental runs are shown, with error-bars indicating the variation observed during these runs.} \label{fig:linpush_plot} \vspace{-4mm} \end{figure} First consider a $100$~mm long aluminum bar of $1$~inch square cross-section grasped at its center with a parallel-jaw gripper. The goal in this seemingly simple manipulation is to move the object to a pose $20$~mm offset in the horizontal direction from the center. The combination of the coefficient of friction at the fingers and the pusher and the gripping force makes it so that the downward sliding of the object under gravity is not negligible. Pushing the object horizontally from the side is not a valid solution. We initiate the planner with pusher contacts on the left, right, and bottom faces of the object. Note that in all the examples we consider in this paper, the robot is constrained to use features in the environment as virtual pushers, so the gravity direction remains constant in the pusher frame and is different in different contact frames based on their orientation in the environment. \figref{fig:linpush_simexp} shows a pushing sequence generated by the planner and the consequent motion of the object in the grasp. The object is pushed up first using the bottom pusher. This helps to account for the downward sliding of the object due to gravity in the later pushes from the side. Note that the planner decided to do this upward push first, even though it means going away from the goal pose; this strategy leads to only one pusher switch-over in the process of getting the object to the goal pose. The median time taken to converge to a plan with only one pusher switch-over for 10 trials was 9.88 minutes. Now, consider the same problem but for a plastic object which weighs half of the aluminum object and has similar frictional properties.
For this case, the planner decides to push only from the side, as shown in \figref{fig:plastic_push}. The downward sliding of the object during these pushes is minimal and the final object pose in the grasp is within the desired resolution from the goal pose. Experimentally, the plastic object indeed slides down by a negligible amount and we get the horizontal displacement of the object in the grasp as desired. \figref{fig:linpush_plot} shows the comparison between the object motion in the grasp simulated by the planner and that observed during experimental trials for the aluminum object. We get about $0.56$mm error in X and $0.45$mm error in Z in the final position of the object in the grasp from what is expected by the planner. The errors in the orientation are less than $0.25$ degree. Due to the high precision of the robot, the experiments are very repeatable and the error-bars in \figref{fig:linpush_plot} showing the variation in 10 experiments are almost invisible in the position plot. \subsection{Minimize the Number of Contact Switch-overs} \label{sec:pivoting} \vspace{-6mm} \begin{figure} \centering \includegraphics[width=4.5in]{figures/pivot_sim_long_v2.pdf} \caption{A pushing sequence for pivoting the aluminum object in a parallel-jaw grasp. The pushing sequence involves discrete pusher switch-overs to push the object from different facets to eventually get to the desired pose. } \label{fig:pivoting_okseq} \end{figure} \vspace{-10mm} \begin{figure} \centering \includegraphics[width=4.5in]{figures/pivot_sim_exp_small.pdf} \caption{Pivoting strategy generated using a single pusher contact on the right face of the object. } \label{fig:pivoting} \vspace{-2mm} \end{figure} In this example, the goal is to pivot the same aluminum object about the fingertips by 90 degrees. We initiate the planner with pushers on the left, right and bottom faces of the object. \figref{fig:pivoting_okseq} shows a series of pushes and the consequent object motion generated by the planner. Note that the planner uses all three contacts to eventually pivot the object by 90 degrees and to correct the unwanted object displacements that happen during those pushes. \begin{figure} \centering \includegraphics[width=4.5in]{figures/pivot_plot_v4.pdf} \caption{Object motion in the grasp as predicted by the planner and as observed in the experiment for the example shown in \figref{fig:pivoting}. Mean values for 10 experimental runs are shown, with error-bars indicating the variation observed during these runs.} \label{fig:pivot_plot} \vspace{-4mm} \end{figure} In another attempt, we introduce a bias in the definition of the distance metric used to find the nearest node for connection. We let the difference in position influence the distance metric more than the difference in orientation. This promotes connections between object poses that are close in position but may have different orientations. The planner converges to a pivoting strategy in which a single pusher rotates about the fingertips to pivot the object with almost no object displacement in the grasp. \figref{fig:pivoting} shows instances of the pushing strategy generated by the planner and corresponding snapshots of the experimental run. Note that the gravity direction is constant in the pusher frame as shown in \figref{fig:pivoting}. The median time taken to generate this plan over 10 different attempts was $2.14$ minutes.
This example shows that with our T-RRT\mbox{*}-based formulation and node cost definition a pushing strategy converges to one with fewer pusher changes, and that providing a heuristic can further speed up the process. \figref{fig:pivot_plot} shows a comparison between the simulated object motion and that observed experimentally for the plan shown in \figref{fig:pivoting}. Error-bars show the variations during 10 experimental runs. For the first two pushes, the Vicon markers on the object are occluded by the robot, so the experimental values are shown in the plots only from the third push onward. We find a close match between the orientation of the object as predicted by the planner and that seen during the experiments; the object position, however, shows some deviations. The final position of the object in the grasp is shifted along Z by $2.5$mm and along Y by $0.5$mm, which the planner does not predict. The errors observed in this as well as the previous example can be attributed to a few possible sources such as errors in locating the pusher contacts in the environment, unmodeled compliance of the fingers and gripper mechanism, and possible manufacturing defects in the finger and pusher contacts. \subsection{Exploit Complex Contact Interactions} \label{sec:ballrolling} \begin{figure} \centering \includegraphics[width=4.5in]{figures/rolling_sim.pdf} \caption{Evolution of the rolling contact and orientation of the ball in the grasp for the trajectory planned to rotate the ball about the Z axis. Finger contacts are shown in green, while the contact between the ball and the ground is shown in magenta.} \label{fig:rolling_sim} \end{figure} \begin{figure} \centering \vspace{-1mm} \includegraphics[width=4.5in]{figures/rolling_exp_v3.pdf} \caption{Object pose in the grasp at the beginning, middle and end of the rolling trajectory. Black, silver and golden paint marks on the ball show that the object effectively rotates by 90 degrees about the vertical (Z) axis while the net rotation about the other two axes (X and Y) goes close to zero, as before. The supplemental video shows the actions involved better.} \label{fig:rolling_exp} \vspace{-3mm} \end{figure} This example is similar to the classical ball-plate problem \citep{ball_plate}. Imagine a steel ball in a parallel-jaw grasp and resting on the ground as shown in \figref{fig:rolling_exp}. We wish to rotate the ball in the grasp about the vertical (Z) axis by 90 degrees using the ground as a virtual pusher. As the contact between the ball and the flat ground is of very small area, theoretically a point contact, it cannot rotate the ball about Z using friction. When provided with this challenge, the planner generates a series of in-plane pushes that cause the ball to rotate purely about the X and Y axes in the grasp and eventually reach an orientation with a rotation close to 90 degrees about Z and almost zero net rotation about X and Y. The time taken to generate this plan was 318.17 minutes. \figref{fig:rolling_sim} shows the rolling contact trajectory of the ball and the orientation of the ball along it. Note that the ball is free to rotate about the axis connecting the fingers (the Y axis) as the finger contacts are point contacts; however, rotation about X needs to overcome friction and locally slide at the fingers along the vertical direction (Z). All the contacts are free to stick or slip. For the planned trajectory, the contact between the ground and the object is instantaneously sticking, i.e.
rolling contact, while there is sliding at the finger contacts only in the vertical direction (Z) to allow the ball to rotate about the X axis with no change in the position of the ball in the grasp. Realizing such rolling in the grasp is easier when either the gripping force is very low or the coefficient of friction at the pusher contact is much higher than that at the fingers. We use a high friction silicone platform as the ground pusher. Since we did not have a way to track the pose of the ball accurately, we provide only qualitative results for this example. \figref{fig:rolling_exp} shows the snapshots of the actual implementation of the ball rolling example on our system. It shows the rotation of the ball by 90 degrees about the vertical axis. The rotation about the other axes is close to zero and the object position in the grasp remains intact. \section{Discussion} \label{sec:discussion} This paper presents a sampling-based planning framework for in-hand manipulations using external pushes. We model the frictional interactions between the grasped object, fingers, and the environment with a quadratic Coulomb friction cone and complementarity constraints capturing the hybrid nature of sticking/sliding. The resulting inverse dynamics problem for estimating the pusher velocity to produce a desired instantaneous object velocity in the grasp naturally takes the form of an MNCP and is solved as a nonlinear constrained optimization problem. The high-level planning architecture is based on T-RRT\mbox{*} and relies on the inverse dynamics model of prehensile pushing as the underlying unit-step controller to propagate states. We exploit the strengths of T-RRT\mbox{*} for two specific purposes: 1) to bias the exploration towards the goal pose with a provision to deviate from the goal whenever necessary, and 2) to build low-cost connections in the tree that yield effective pushing strategies for regrasps while avoiding unnecessary pusher contact switch-overs. We evaluate the planner with a parallel-jaw gripper manipulating different objects. Simulation results show that our planning framework is able to exploit the dynamics of pushing and reason about strategies with continuous pushes linked with discrete pusher contact switch-overs. The experimental observations validate the accuracy of the generated plans; the planned strategies move the object very close to the desired pose in the grasp. The main limitations of the current approach are: \begin{itemize} \item \textbf{Speed} The inverse dynamics formulation we developed is computationally expensive, which consequently affects the planning time. This is in line with existing algorithms that use complementarity formulations to explicitly model the hybrid dynamics of rigid contact~\cite{Posa2014,tassa2010stochastic}. This work focuses on demonstrating the effective blend of detailed dynamics modelling and a sampling-based method for planning in-hand manipulations. It is entirely developed in MATLAB for flexibility and is currently not optimized for speed. One promising direction for faster planning is to limit the planner to a subset of pushing motions whose dynamics are less expensive to compute~\cite{ChavanDafle2018a}. Another practical way is to extend this work to a multi-query framework to exploit the already built tree/graph. This can work better for applications such as assembly automation where robots often deal with a small set of known objects, initial grasps and goal grasps.
\item \textbf{Smoothness} The solutions tend to be jerky, as is typical of randomized sampling-based planners. It would be interesting to explore the role that trajectory optimization approaches can play in bolstering sampling-based methods. \end{itemize} An approach to in-hand manipulation that is not limited to intrinsic dexterity, but relies on external contacts to produce the desired reconfigurations, can make robots more flexible and reliable at autonomous manipulation, even those robots with simple grippers currently involved in today's factories and field applications.
\section{Introduction} \label{s1} \numberwithin{equation}{section} To construct an example of an interacting Quantum Field Theory (QFT) on Minkowski space satisfying the Wightman axioms remains a major challenge of fundamental physics. While a lot of progress has been made in the constructive programme in lower dimensions (see e.g. \cite{GJ87,Fr78,Riv00,Sim74}), still no mathematically well-defined interacting QFT in $D+1=4$ dimensions has been found to date. The importance of this problem can be measured by the fact that the Clay Mathematical Institute\footnote{http://www.claymath.org/millennium-problems/yang-mills-and-mass-gap} devoted one of its millennium prizes to this research field. Due to Haag's theorem, which roughly says that the interacting theory cannot be defined on the same Hilbert space as the free theory, the only chance to solve this problem is to use a non-perturbative approach. One of these is the well-established Lattice Quantum Chromodynamics (LQCD) approach. Here one uses a spacetime lattice with spacing $\epsilon$ as a UV regulator, which labels a whole family of theories that are supposed to describe the theory at resolution $\epsilon$. The naive discretisations of actions or Hamiltonians that are motivated by the classical theory do not define a consistent family of theories; consistency requires that measurements of observables at scale $\epsilon$ give identical results no matter which theory at scale $\epsilon'<\epsilon$ is used. To construct such a consistent family of theories, which in the constructive setting is defined by a family of measures $\epsilon\mapsto \mu_\epsilon$, one uses the idea of the renormalisation group (RG) \cite{WK73,BW74,Wil75,Kad77,Cre83}. Renormalisation now consists in constructing a sequence $\mu^{(n)}_{\epsilon}$ of measure families where one obtains $\mu^{(n+1)}_{\epsilon}$ from $\mu^{(n)}_{\epsilon/2}$ by integrating out the degrees of freedom at scale $\epsilon/2$ that do not contribute to scale $\epsilon$ and where $\mu^{(0)}_\epsilon$ corresponds to the initial, naive discretisation. If the sequence converges or at least has a fixed point (accumulation point) family $\mu^*_{\epsilon}$ then by construction that family is consistent, and chances are that it qualifies as the set of cylindrical projections of a corresponding physical continuum measure $\mu^*$. A lot of properties of these fixed point theories or {\it perfect lattice measure families} $\mu^*_{\epsilon}$ have been investigated \cite{Has98,Has08,HN93}, among them how a family of theories labelled by discrete cubic lattices can still encode properties of the continuum such as {\it Euclidean invariance}. In order to attack these problems from a new angle, in our companion paper \cite{LLT1} a Hamiltonian Renormalisation formalism has been introduced. The motivation comes from the observation that it is much easier to compute the matrix elements of a Hamiltonian operator $H$ than of its contraction semigroup (Gibbs exponential $e^{-\beta H}$ at inverse temperature $\beta$) which is defined by a measure $\mu$ by Osterwalder-Schrader (OS) reconstruction \cite{OS72}, provided that $\mu$ is reflection positive, among other properties.
The original idea was therefore to monitor the Wilsonian renormalisation flow of measures $n\mapsto \mu^{(n)}_\epsilon$ sketched above in terms of its corresponding Osterwalder-Schrader data (OS data), i.e. triples $(\mathcal{H}_{\epsilon}^{(n)},H^{(n)}_{\epsilon},\Omega^{(n)}_{\epsilon})$ consisting of a Hilbert space, a self-adjoint and positive semi-definite Hamiltonian operator thereon and a vacuum vector which is a zero eigenvector of the Hamiltonian. Given such a renormalisation flow of OS data, the plan was to extract a direct renormalisation flow that relates the OS data at scale $\epsilon$ to those at scale $\epsilon/2$ without recourse to the measure. It turns out that this idea fails in the sense that this measure derived flow of OS data makes it necessary to go back and forth between the OS data and the OS measure, so that one does need the matrix elements of the contraction semigroup which we wanted to avoid. However, the measure derived flow suggests a different, direct Hamiltonian flow which does avoid the recourse to the measure. In our companion paper \cite{LLT2} we checked that the proposed direct Hamiltonian flow, while very different from the measure derived one, still defines the same continuum OS data as the continuum OS measure at the respective fixed points, at least for the two-dimensional, massive Klein-Gordon model. In other words, the measure flow has a continuum fixed point measure $\mu^\ast$ whose OS data $({\cal H}^\ast, H^\ast, \Omega^\ast)$ coincide with the continuum fixed point of the direct Hamiltonian flow, which we take as an encouraging sign. In fact, the cylindrical projections $\mu^\ast_\epsilon$ of $\mu^\ast$ have OS data which have nothing to do with the family member $({\cal H}^{\ast}_\epsilon, H^\ast_\epsilon, \Omega^\ast_\epsilon)$ of the fixed point family of the direct Hamiltonian flow. The latter are the physically relevant quantities since there are natural isometric injections $J^\ast_\epsilon: {\cal H}^\ast_\epsilon\to {\cal H}^{\ast}$ which are the result of an inductive limit construction of Hilbert spaces such that $H^\ast_\epsilon=(J^\ast_\epsilon)^\dagger H^\ast J^\ast_\epsilon$ and $J^\ast_\epsilon \Omega^\ast_\epsilon=\Omega^\ast$. Further properties of this direct Hamiltonian flow touching subjects such as stability, criticality and universality are examined in our companion paper \cite{LLT3}. In this paper we are going to extend \cite{LLT2,LLT3} by removing the restriction to two dimensions and considering the massive Klein-Gordon model in all dimensions. This enables us to ask and answer the question how the finite resolution OS data, while defined on a cubic lattice labelled by $\epsilon$, still reveal properties that the continuum theory does have, such as the spatial and rotational invariance of both the Hamiltonian and the vacuum. \\ \\ The architecture of the article is as follows:\\ \\ In section \ref{Recap} we briefly review elements of the formalism developed in \cite{LLT1,LLT2, LLT3}. In section \ref{Hamiltonian renormalisation} we perform the direct Hamiltonian renormalisation for a massive free scalar field in $D+1=3$ dimensions, by breaking it down into several independent renormalisation steps for each direction. It will transpire that this splitting is doable also for more dimensions and that each dimension can be treated independently due to a certain factorisation property.
We can explicitly compute the family of fixed point covariances of the Hilbert space measure family and find that it perfectly matches, as in the 1+1 dimensional case, the perfect Hilbert space measure family that one obtains from the cylindrical projections of the known continuum Hilbert space measure. The Hilbert space measure contains the same information as the Hilbert space together with its vacuum vector. By the argument already exploited in the 1+1 dimensional case, the agreement of the fixed point continuum Hamiltonian with the known continuum Hamiltonian then immediately follows. In section \ref{Renormalisation with changed RG-map} we investigate the consequences of modifying the {\it coarse graining map} $I_{M \rightarrow 2M}$ that we used so far in this series of papers and which relates lattices with $M^D$ vertices to those with $(2M)^D$ vertices. It was already shown in \cite{LLT2} for the 1+1 dimensional case that not every choice of coarse graining map employed in the literature passes our criterion of being an allowed cylindrically consistent coarse graining map. We find that at least when modifying it to $I_{M\rightarrow M'}$ with $M'=2M,3M,5M,...$ we still obtain cylindrically consistent coarse graining maps and, moreover, they all lead to the same fixed point Hilbert space measure and Hamiltonian. Moreover, it is possible to mix and concatenate different blocking kernels for different directions, leading to more general coarse graining maps such as those on hypercuboids rather than hypercubes and beyond. The fixed point structure is robust under such modifications, thus adding to the degree of universality of the model. In section \ref{Rotational Invariance} we present how the continuum concept of rotational invariance can be expressed as a condition on the finite resolution Hilbert space measures $\nu^*_{\epsilon}$ of the fixed point family, and we will numerically demonstrate that in the case of the massive free scalar field the perfect lattice action seems to satisfy this condition. In section \ref{Conclusion} we summarise and give an outlook on further research directions. \section{Review of direct Hamiltonian Renormalisation} \label{Recap} This section serves to recall the notation and elements of Hamiltonian renormalisation from \cite{LLT1, LLT2}, to which the reader is referred for all the details.\\ \\ We consider infinite dimensional conservative Hamiltonian systems on globally hyperbolic spacetimes of the form $\mathbb{R}\times \sigma$. If $\sigma$ is not compact one introduces an infrared (IR) cut-off $R$ for the spatial manifold $\sigma$ by restricting to test-functions which are defined on a compact submanifold, e.g. a torus $\sigma_R:=[0,R]^D$ if $\sigma=\mathbb{R}^D$. We will assume this cut-off $R$ to be implicit in all formulae below but do not display it to keep them simple, see \cite{LLT1,LLT2} for the explicit appearance of $R$. Moreover, an ultraviolet (UV) cut-off $\epsilon_M:=R/M$ is introduced by restricting the smearing functions $f_M$ to a finite spatial resolution. In other words, $f_M\in L_{M}$ is defined by its values on the vertices of a graph, which we choose here to be a cubic lattice, i.e. there are $M^D$ vertices, labelled by $m\in\mathbb{Z}^D_M$, $\mathbb{Z}_M=\{0,1,...,M-1\}$. In this paper we consider a background dependent (scalar) QFT and thus we have access to a natural inner product defined by it. For more general theories this structure is not available, but the formalism does not rely on it.
The scalar products for $f_M,g_M\in L_{M}$ and respectively for $f,g\in L=C^{\infty}([0,R]^D)$ are defined by \begin{align} \langle f_M, g_M \rangle_M = \epsilon^D_M\sum_{m\in\mathbb{Z}^D_M} \bar{f}_M(m) g_M(m), \hspace{20pt} \langle f,g \rangle =\int_{\sigma_R} d^Dx \bar{f}(x)g(x) \end{align} where $\bar{f}$ denotes the complex conjugate of $f$. In other words, $L$ is the space of all compactly supported smooth functions such that for $f,g\in L$ the inner product $\langle f,g\rangle$ stays finite. The analogous statement for finite sequences with the inner product $\langle .,.\rangle_M$ defines the Hilbert space $L_M$.\\ Given an $f_M:\mathbb{Z}^D_M\rightarrow \mathbb{R}$ we can embed it into the continuum by an {\it injection map} $I_M$ \begin{equation} \label{injection map} \begin{split} I_M : \hspace{10pt} L_{M} &\rightarrow \hspace{10pt} L \\ f_M &\mapsto (I_M f_M ) (x):=\sum_{m\in \mathbb{Z}^D_M}f_M(m)\chi_{m\epsilon_M}(x) \end{split} \end{equation} with $\chi_{m\epsilon_M}(x):=\prod_{a=1}^D\chi_{[m^a\epsilon_M,(m^a+1)\epsilon_M)}(x^a)$ being the characteristic function over the displayed intervals. Note that indeed the coefficient $f_M(m)$ is the value of $I_M f_M$ at $x=m\epsilon_M$. $L$ is much larger than the range of $I_M$, which allows us to define a corresponding left inverse: {\it the evaluation map} $E_M$ is found to be \begin{equation} \label{evaluation map} \begin{split} E_M : \hspace{10pt} L &\rightarrow \hspace{10pt} L_{M} \\ f &\mapsto (E_M f ) (m):=f(m\epsilon_M) \end{split} \end{equation} and by definition obeys \begin{align}\label{left-inverse} E_M\circ I_M= \text{id}_{L_M} \end{align} where $\text{id}_{L_M}$ denotes the identity map on the space $L_M$. Given those maps we are now able to relate test functions, and thus also observables, from the continuum with their discrete counterparts, e.g. for a smeared scalar field one defines: \begin{align} \phi_M[f_M]:=\langle f_M, \phi_M\rangle_M,\hspace{20pt}\phi_M(m):=(I^\dagger_M\phi)(m)=\int_{[0,R)^D}d^Dx\; \chi_{m\epsilon_M}(x)\phi(x) \end{align} Indeed, since the kernel of any map $C: L\rightarrow L$ in the continuum is given as \begin{align} \langle f, C g\rangle =: \int_{[0,R]^D}d^Dx\int_{[0,R]^D}d^Dy \;C(x,y)\bar{f}(x)g(y) \end{align} it follows that \begin{align} \langle I_M f_M, C I_M g_M\rangle = \langle f_M, [I^{\dagger}_M C I_M]g_M\rangle_M=: \langle f_M, C_M g_M\rangle_M \end{align} which shows that \begin{align} C_M(m,m')=\epsilon^{-2D}_M\langle \chi_{m\epsilon_M},C \chi_{m'\epsilon_M}\rangle \end{align}\\ The concatenation of evaluation and injection for different discretisations shall be called the {\it coarse graining map} $I_{M\rightarrow M'}$ if $M<M'$: \begin{align} I_{M\rightarrow M'} = E_{M'} \circ I_M : L_{M} \rightarrow L_{M'} \end{align} as it corresponds to viewing a function defined on the coarse lattice as a function on a finer lattice, of which the former is not necessarily a sublattice. In practice we will choose the set of $M$ such that it defines a partially ordered and directed index set. The coarse graining map is a free choice of the renormalisation process whose flow it drives, and its viability can be tested only a posteriori: a fixed point theory, if found, should agree with the measurements of the continuum theory. Hence proposals for such a map should be checked at least in simple toy models before one can trust their predictions.
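To make these maps concrete, the following minimal Python sketch (our own illustration; the names \texttt{inject}, \texttt{evaluate} and the grid parameters are ours and not part of the formalism) represents continuum functions by supersampling each coarse cell in $D=1$ and verifies the left inverse property (\ref{left-inverse}) as well as the action of the coarse graining map:
\begin{verbatim}
import numpy as np

def inject(f_M, fine):
    # I_M: piecewise constant extension, sampled on `fine` points per cell,
    # i.e. (I_M f_M)(x) = sum_m f_M(m) chi_{m eps_M}(x)
    return np.repeat(f_M, fine)

def evaluate(f, M):
    # E_M: evaluate the (sampled) continuum function at x = m * eps_M
    return f[::len(f) // M]

M, fine = 8, 16
f_M = np.random.default_rng(0).normal(size=M)

# left inverse property E_M o I_M = id_{L_M}
assert np.allclose(evaluate(inject(f_M, fine), M), f_M)

# coarse graining I_{M -> 2M} = E_{2M} o I_M yields f_M(floor(m/2)),
# and injecting the result reproduces I_M f_M exactly
f_2M = evaluate(inject(f_M, fine), 2 * M)
assert np.allclose(f_2M, f_M[np.arange(2 * M) // 2])
assert np.allclose(inject(f_2M, fine // 2), inject(f_M, fine))
\end{verbatim}
The last assertion is the map version of the cylindrical consistency condition discussed in the next paragraphs.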
The coarse graining maps are now used to call a family of measures $M\mapsto \nu_M$ on a suitable space of fields $\phi_M$ {\it cylindrically consistent} iff \begin{align}\label{cylindricalconsistency} \nu_{M}(w_M[f_M])=\nu_{M'}(w_{M'}[I_{M\rightarrow M'}\circ f_M]) \end{align} where $w_M$ is a Weyl element restricted to the configuration degrees of freedom, i.e. for a scalar field theory as in the present paper $w_M[f_M]=\exp(i\phi_M[f_M])$. The measure $\nu_M$ can be considered as the positive linear GNS functional on the Weyl $^\ast-$algebra generated by the $w_M[f_M]$ with GNS data $({\cal H}_M,\Omega_M)$, that is \begin{align} \nu_M(w_M[f_M])=\langle \Omega_M , w_M[f_M]\Omega_M\rangle_{{\cal H}_M} \end{align} In particular, the span of the $w_M[f_M]\Omega_M$ lies dense in $\mathcal{H}_M$ and we simplify the notation by refraining from displaying a possible GNS null space. Under suitable conditions \cite{Yam75} a cylindrically consistent family has a continuum measure $\nu$ as a projective limit which is related to its family members $\nu_M$ by \begin{align} \label{projectivelimit} \nu_M(w_M[f_M])=\nu(w[I_M f_M]) \end{align} It is easy to see that (\ref{projectivelimit}) and (\ref{cylindricalconsistency}) are compatible iff we constrain the maps $I_M, E_M$ by the following condition for all $M<M'$ \begin{align}\label{cylconrel} I_{M'}\circ I_{M\rightarrow M'} = I_M \end{align} This constraint, which we also call {\it cylindrical consistency}, means that injection into the continuum can be done independently of the lattice on which we consider the function to be defined, which is a physically plausible assumption. In the language of the GNS data $(\mathcal{H}_M,\Omega_M)$ cylindrical consistency means that the maps \begin{align} J_{M\rightarrow M'} :\hspace{20pt} \mathcal{H}_M\hspace{20pt} & \rightarrow \hspace{20pt}\mathcal{H}_{M'}\\ w_M[f_M]\Omega_M & \mapsto w_{M'}[I_{M\rightarrow M'} f_M]\Omega_{M'} \end{align} define a family of {\it isometric} injections of Hilbert spaces. The continuum GNS data are then given by the corresponding inductive limit, i.e. the embedding of Hilbert spaces defined densely by $J_M w_M[f_M]\Omega_M=w[I_M f_M] \Omega$, which is isometric. Note that $J_{M'} J_{M\rightarrow M'}=J_M$. The GNS data are completed by a family of positive self-adjoint Hamiltonians $M\mapsto H_M$, defined densely on the $w_M[f_M]\Omega_M$, to the OS data $({\cal H}_M, \; \Omega_M,\; H_M)$. We define a family of such Hamiltonians to be cylindrically consistent provided that \begin{align} J_{M\rightarrow M'}^\dagger H_{M'} J_{M\rightarrow M'}=H_M \end{align} It is important to note that this does {\it not} define an inductive system of operators, which would be too strong a requirement, and thus does not by itself grant the existence of a continuum Hamiltonian. However, it grants the existence of a continuum positive quadratic form densely defined by \begin{align} J_M^\dagger H J_M =H_M \end{align} If the quadratic form can be shown to be closable, one can extend it to a positive self-adjoint operator. In practice, one starts with an {\it initial} family of OS data $({\cal H}^{(0)}_M,\; \Omega^{(0)}_M,\; H^{(0)}_M)$, usually obtained by some {\it naive} discretisation of the classical Hamiltonian system and its corresponding quantisation. The corresponding GNS data will generically not define a cylindrically consistent family of measures, i.e. the maps $J_{M\rightarrow M'}$ will fail to be isometric. Likewise, the family of Hamiltonians will generically fail to be cylindrically consistent.
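As a toy illustration of cylindrical consistency, the following Python sketch (ours; the kernel $e^{-|x-y|}$ in $D=1$ is an arbitrary stand-in for a continuum covariance and carries no physical meaning here) checks numerically that the cylindrical projections of one fixed continuum kernel automatically satisfy the compatibility encoded in (\ref{cylindricalconsistency}); the block average in the last step implements $I^\dagger_{M\to 2M}(\cdot)I_{M\to 2M}$ (cf. the $u=2$ case derived in section \ref{Renormalisation with changed RG-map}):
\begin{verbatim}
import numpy as np

F, M = 256, 8                                  # fine sampling grid, coarse lattice
x = (np.arange(F) + 0.5) / F
C = np.exp(-np.abs(x[:, None] - x[None, :]))   # stand-in continuum kernel c(x,y)

def project(C, M):
    # cylindrical projection c_M(m,m') ~ eps_M^{-2} <chi_m, C chi_m'>,
    # realised as a block average of the sampled kernel
    F = C.shape[0]
    return C.reshape(M, F // M, M, F // M).mean(axis=(1, 3))

c_M, c_2M = project(C, M), project(C, 2 * M)
# blocking the finer projection reproduces the coarser one exactly
blocked = c_2M.reshape(M, 2, M, 2).mean(axis=(1, 3))
assert np.allclose(blocked, c_M)
\end{verbatim}
Conversely, a family obtained by discretising a kernel independently at each $M$ would generically fail this test, which is the situation the renormalisation flow below is designed to cure.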
Hamiltonian renormalisation now consists in defining a sequence of {\it improved} OS data families $n\mapsto ({\cal H}^{(n)}_M,\; \Omega^{(n)}_M,\;H^{(n)}_M)$ defined inductively by \begin{align} \label{improving} J^{(n)}_{M\rightarrow M'} w_M[f_M] \Omega^{(n+1)}_M:=w_{M'}[I_{M\rightarrow M'} f_M] \Omega^{(n)}_{M'},\;\;H^{(n+1)}_M:=[J^{(n)}_{M\rightarrow M'}]^\dagger H^{(n)}_{M'} J^{(n)}_{M\rightarrow M'} \end{align} Note that $H^{(n)}_M \Omega^{(n)}_M=0$ for all $M,n$. If the corresponding flow (sequence) has a fixed point family $({\cal H}^\ast_M,\; \Omega^\ast_M,\; H^\ast_M)$ then its internal cylindrical consistency is restored by construction. \section{Hamiltonian renormalisation of the massive free quantum scalar field}\label{Hamiltonian renormalisation} In \cite{LLT2} the Hamiltonian renormalisation prescription introduced above has been tested on a model whose continuum theory is known and which hence presents a way to validate the method: the massive free quantum scalar field described by the action \begin{align} S:=\frac{1}{2\kappa}\int_{\mathbb{R}^{D+1}} dt d^Dx [\frac{1}{c}\dot{\phi}^2-c\phi\omega\phi] \end{align} with ($n=1,2,..$) \begin{align}\label{contcovariance} \omega^2=\omega^2(p,\Delta)=\frac{1}{p^{2(n-1)}}(-\Delta+p^2)^n \end{align} where $p=\frac{mc}{\hbar}$ is the inverse Compton length. Following \cite{LLT2} we will study here the Poincar\'e invariant case $n=1$; other models with $n\not=1$ can be studied with the methods developed for $n=1$ by contour integral techniques. After performing the Legendre transform, introducing the IR cut-off and discretising the theory for various scales $M$ one considers the lattice Hamiltonian family ($\hbar=1$) \begin{equation}\label{DiscretizedHamiltonian} H_M:=\frac{c}{2}\sum_{m\in \mathbb{Z}^D_M} \left(\kappa\epsilon^D_M \pi^2_M(m) +\frac{1}{\epsilon^D_M\kappa}\phi_M(m)(\omega^2_M\cdot\phi_M)(m)\right) \end{equation} with ($\pi:=\dot{\phi}/\kappa$) \begin{align} \phi_M (m):=\int_{[0,R]^D} d^Dx\; \chi_{m\epsilon_M}(x)\phi(x),\hspace{20pt}\pi_M(m):=(E_M \pi)(m)=\pi(m\epsilon_M) \end{align} and $\omega^2_M=\omega^2(p,\Delta_M)$ is to be understood in terms of the naively discretised Laplacian $\Delta_M=\Delta^{(0)}_M$, which reads e.g. in two dimensions: \begin{equation} (\Delta^{(0)}_M f_M)(m):=\frac{1}{\epsilon_M^2}\left(f_M(m+e_1)+f_M(m+e_2)+f_M(m-e_1)+f_M(m-e_2)-4f_M(m)\right) \end{equation} with $e_i$ being the unit vector in direction $i$. One can write down the explicit action of the coarse graining map which projects a lattice onto a finer version with twice as many lattice points: \begin{equation} (I_{M\rightarrow 2M} f_M)(m) = \sum_{m'\in \mathbb{Z}^D_M}\chi_{m'\epsilon_{M}}(m\epsilon_{2M})f_M(m') = f_M(\lfloor\frac{m}{2}\rfloor) \end{equation} where $\lfloor x \rfloor$ denotes the component-wise Gauss bracket. The cylindrical consistency condition (\ref{cylindricalconsistency}) demands that the measures at both discretisations, $M$ and $2M$, agree.
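As a quick numerical cross-check (ours; the grid size $M=8$ and the mode $l=(3,5)$ are arbitrary choices), one can confirm that lattice plane waves diagonalise $\Delta^{(0)}_M$, with precisely the eigenvalues that enter the Fourier representation of the covariance in the next section:
\begin{verbatim}
import numpy as np

M, R = 8, 1.0
eps = R / M

def laplacian(f):
    # naive discretised Laplacian Delta^(0)_M on the periodic M x M lattice
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / eps**2

kM = 2 * np.pi / M
l1, l2 = 3, 5                                   # arbitrary lattice momentum
m1, m2 = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
f = np.exp(1j * kM * (l1 * m1 + l2 * m2))       # lattice plane wave

lam = (2 * np.cos(kM * l1) + 2 * np.cos(kM * l2) - 4) / eps**2
assert np.allclose(laplacian(f), lam * f)       # eigenvalue entering omega_M^2
\end{verbatim}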
Being a free field theory, one can show that the measure can be written as a Gaussian measure described at the fixed point by a covariance $c^\ast_M$; thus (\ref{cylindricalconsistency}) reads explicitly \begin{align} e^{-\frac{1}{2}\langle I_{M\rightarrow 2M}f_M,c^*_{2M}I_{M\rightarrow2M}f_M\rangle_{2M}}= e^{-\frac{1}{2}\langle f_M,c^*_M f_M\rangle_M} \end{align} Thus, by studying the flow defined by \begin{align}\label{Covarianceflow} c^{(n+1)}_M := I^{\dagger}_{M\rightarrow2M}c^{(n)}_{2M}I_{M\rightarrow2M} \end{align} we know that the existence of a fixed point $c^*_M$ describes a Gaussian measure family, which is equivalent to corresponding Hilbert spaces $\mathcal{H}_M^*$ with vacua $\Omega^*_M$ which are all annihilated by the correspondingly defined Hamiltonians $H^\ast_M$. \subsection{Determination of the fixed point covariance} The flow defined by (\ref{Covarianceflow}) may lead to various fixed points (or none at all) depending on the initial family $c^{(0)}_M$. Thus, the naive discretisation should be of such a form that it captures important features of the continuum theory. For example, we will demand the covariance to be translation invariant, which is a property of the discretised Laplacian and remains true under each renormalisation step.\\ We begin by rewriting (\ref{DiscretizedHamiltonian}) in terms of discrete annihilation and creation operators \begin{align} a^{(0)}_M(m):=\frac{1}{\sqrt{2\hbar\kappa}}\left[\left(\sqrt{{\omega^{(0)}_M}/{\epsilon^D_M}}\,\phi_M\right)(m) -i\kappa\left(\sqrt{{\epsilon^D_M}/{\omega^{(0)}_M}}\,\pi_M\right)(m)\right] \end{align} where \begin{align} [\omega^{(0)}_M]^2:=p^2-\Delta^{(0)}_M \end{align} which after some standard algebra displays the Hilbert space measure as: \begin{align} \nu^{(0)}_{M}(w_M[f_M])=\nu^{(0)}_{M}\left(e^{i\langle f_M, \phi_{M}\rangle_{M}}\right)=\exp\left(-\frac{1}{2}\langle f_M,\frac{\hbar\kappa}{2}[\omega^{(0)}_{M}]^{-1}f_M\rangle_M\right) \end{align} Hence our starting covariance is given as: \begin{align} c^{(0)}_M=\frac{\hbar\kappa}{2}[\omega^{(0)}_M]^{-1} \end{align} Using the discrete Fourier transform ($k_M=\frac{2\pi}{M}$) \begin{align} f_M(m)=\sum_{l\in\mathbb{Z}^D_M}\hat{f}_M(l)e^{ik_Ml\cdot m},\hspace{20pt}\hat{f}_M(l):=M^{-D}\sum_{m\in\mathbb{Z}^D_M}f_M(m)e^{-ik_Mm\cdot l} \end{align} we diagonalise the discretised Laplacian appearing in $\omega^{(0)}_M$. Thus, the initial covariance family becomes in $D=2$ (dropping the factor $\frac{\hbar\kappa}{2}$ in what follows) \begin{align} \hat{c}^{(0)}_M(l)&=\frac{1}{\sqrt{-\frac{1}{\epsilon^2_M}(2\cos(k_Ml_1)+2\cos(k_Ml_2)-4)+p^2}}=\nonumber\\ &=\int_{\mathbb{R}}\; \frac{dk_0}{2\pi} \frac{\epsilon_M^2}{[k_0^2+p^2]\epsilon_M^2+(4 -2\cos(k_Ml_1)-2\cos(k_Ml_2))}\label{integrand} \end{align} with $l\in\mathbb{Z}^2_M$, where we used the residue theorem. We rewrite the integrand of (\ref{integrand}) as ($t_i=k_Ml_i,\; q^2:=(k_0^2+p^2)\epsilon_M^2$) \begin{equation} \label{startingpoint2D} \hat{c}^{(0)}_M(k_0,l)=\frac{1}{2}\; \frac{\epsilon_M^2}{[q^2/4+(1-\cos(t_1))]-[-q^2/4-(1-\cos(t_2))]} \end{equation} Since $1+q^2/4> \cos(t)$ for all $p>0,t\in \mathbb{R}$, one deduces that the first of the square brackets in (\ref{startingpoint2D}) is always positive, the other one always negative. Consequently, they lie in different half-planes of $\mathbb{C}$.
This can be used to artificially write this expression as an integral in the complex plane, by inverting the residue theorem: Given $z_1,z_2\in\mathbb{C}$ with $Re(z_1)>0, Re(z_2)<0$ and a curve $\gamma$ going along $i\mathbb{R}$ from $+i\infty$ to $-i\infty$ and closing in the right plane on a half circle with radius $R\rightarrow \infty$, we can write: \begin{equation}\label{InvertResThm} \oint_{\gamma} dz \frac{1}{(z-z_1)(z-z_2)}=2\pi i \frac{1}{z_1-z_2} \end{equation} since the integrand decays as $z^{-2}$ on the infinite half circle. We have chosen the orientation of $\gamma$ counter-clockwise. Note that this seemingly breaks the symmetry between $t_1$ and $t_2$. However, this is only an intermediate artefact of the free choice of $\gamma$ which will disappear at the end of the computation. Substituting $z\rightarrow z/2$, the initial covariance can thus be written \begin{equation} \label{startingdecoupledcovariance} \hat{c}^{(0)}_M(k_0,l)=-\oint_{\gamma} dz \frac{1}{8\pi i}\; \frac{\epsilon^2_M}{\epsilon^2_M(\frac{p^2+k_0^2}{2}-z)/2+1-\cos(t_1)}\;\;\frac{\epsilon^2_M}{\epsilon^2_M(\frac{p^2+k_0^2}{2}+z)/2+1-\cos(t_2)} \end{equation} In order to shorten our notation, we will introduce: $q^2_{1,2}(z):= \epsilon^2_M([k_0^2+p^2]/2\mp z)$. The starting point of our RG flow is now factorised into two factors which very closely resemble the 1+1 dimensional case. This is the promised factorising property.\\ On the other hand, one can also show by explicit calculation (see appendix \ref{sa}) that both directions decouple in the renormalisation transformation (\ref{Covarianceflow}). Since the initial covariance factorises under the contour integral over $\gamma$, this factorisation is preserved under the flow, which implies that the flow of the covariance in each direction can be performed separately. At the end we then compute the resulting integral over $z$ along $\gamma$. In addition, the decoupling of the flow (\ref{Covarianceflow}) and the factorisation of the initial family of covariances (\ref{startingdecoupledcovariance}) for the naive discretisation of the Laplacian are features that occur independently of the dimension $D$. For the decoupling this follows immediately from the corresponding generalisation of (\ref{flowdefinition}), as the sum over $\delta',\delta^{\prime\prime}$ is carried out on the exponential function which contains both linearly in the exponent. For the factorisation we note the following iterated integral identity for complex numbers $k_j,\; j=1,..,D$ with strictly positive real part \begin{align} \frac{1}{k_1+..+k_D}=\frac{1}{(2\pi i)^{D-1}} \oint_\gamma\; \frac{dz_1}{z_1-k_1}\; \oint_\gamma\; \frac{dz_2}{z_2-k_2}\;.. \oint_\gamma\; \frac{dz_{D-1}}{z_{D-1}-k_{D-1}}\;\frac{1}{z_1+..+z_{D-1}+k_D} \end{align} in which $\gamma$ is always the same closed contour with counter-clockwise orientation, consisting of the half circle in the positive half-plane followed by the integral over the imaginary axis. Because of that, the real part of each of the integration variables $z_j$ is non-negative on the contour, so that the last fraction has a denominator with strictly positive real part. Accordingly, the only pole of the integrand for the $z_j$ integral in the domain bounded by $\gamma$ is $k_j$, and the claim follows from the residue theorem. It transpires that the strategy illustrated for the case $D=2$ also solves the case of general $D$, and it therefore suffices to carry out the details for $D=2$. The flow now acts on the integrand of the contour integral and we can perform it for each $z$ separately.
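Both contour identities are easy to verify numerically. The following sketch (our own sanity check; the values of $z_1,z_2$ are arbitrary) evaluates (\ref{InvertResThm}) by integrating along the imaginary axis only, which suffices since the arc contribution vanishes:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

z1, z2 = 1.3 + 0.7j, -0.4 + 2.0j   # Re(z1) > 0 > Re(z2)

def f(y):
    # gamma along the imaginary axis: z = i y, dz = i dy, y from +inf to -inf;
    # the half circle does not contribute since the integrand decays as z^-2.
    # The minus sign encodes the orientation +i*inf -> -i*inf.
    return -1j / ((1j * y - z1) * (1j * y - z2))

lhs = (quad(lambda y: f(y).real, -np.inf, np.inf)[0]
       + 1j * quad(lambda y: f(y).imag, -np.inf, np.inf)[0])
rhs = 2j * np.pi / (z1 - z2)
assert abs(lhs - rhs) < 1e-6
\end{verbatim}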
The flow in each direction is thus described by exactly the same map as in the one-dimensional case in \cite{LLT2}. We can therefore immediately copy the fixed point covariance from there; we just have to keep track of the $z$ dependence. In direction $i=1,2$ the covariance can be parametrised by three functions of $q_i(z)$ ($t_i=k_Ml_i$, $l_i\in\mathbb{Z}_M$) \begin{align} \hat{c}_{M}^{(n)}(k_0,l_i,z)=\frac{\epsilon_M^2}{q_{i}^3(z)}\frac{b_n(q_{i}(z))+c_n(q_{i}(z))\cos(t_i)}{a_n(q_{i}(z))-\cos(t_i)} \end{align} The initial functions are \[a_0(q_{1,2}(z))=1+\frac{q^2_{1,2}}{2},\hspace{15pt} b_0(q_{1,2}(z))=\frac{q^3_{1,2}}{2},\hspace{15pt} c_0(q_{1,2}(z))=0\] Before plugging in the fixed points, however, one has to check whether the flow drives the starting values to a finite fixed point, i.e. all the numerical prefactors that are picked up in front of the covariance should cancel each other. Indeed, one RG step corresponds to \begin{align} (2\pi i) \hat{c}^{(n+1)}_M(k_0,l)&=-\frac{1}{4}\oint_{\gamma} dz \left(\sum_{\delta_1=0,1}(1+\cos(k_{2M}(l_1+\delta_1M)))\hat{c}^{(n)}_{2M}(k_0,l_1+\delta_1M,z)\right) \times\\ &\hspace{120pt} \times \left(\sum_{\delta_2=0,1}(1+\cos(k_{2M}(l_2+\delta_2M)))\hat{c}^{(n)}_{2M}(k_0,l_2+\delta_2M,-z)\right)\nonumber \end{align} Note that $\epsilon_M\rightarrow \epsilon_{2M}=\epsilon_M/2$ whence \begin{equation} \label{transformqpm} q^2_{1,2}:=\epsilon^2_M(\frac{p^2+k_0^2}{2}\mp z) \rightarrow \frac{1}{4}\epsilon^2_M(\frac{p^2+k_0^2}{2}\mp z) = q^2_{1,2}/4 \end{equation} Collecting all powers of $2$, we get: 1.\ minus two from the $\epsilon^2_M$ in the numerator of the factor for each of the two directions, that is altogether minus four; 2.\ an additional minus two from the $1/4$ prefactor of the RG map; and 3.\ plus six from the factor $q_1^{-3}q_2^{-3}$ due to (\ref{transformqpm}). Hence the overall power of two is zero. Accordingly, we find the same fixed points as in \cite{LLT2}: \begin{align} a^*(q_{1,2}) &=\text{ch}(q_{1,2})\\ b^*(q_{1,2}) &=q_{1,2} \text{ch}(q_{1,2})-\text{sh}(q_{1,2})\\ c^*(q_{1,2}) &=\text{sh}(q_{1,2})-q_{1,2} \end{align} where we write shorthand for the hyperbolic functions: $\text{ch}(q):=\cosh(q)$ and $\text{sh}(q):=\sinh(q)$. Thus we find with $t_j=k_M l_j$ \begin{align}\label{finalIntegralof2DRen} \hat{c}^*_M(k_0,l)&=-\left(\frac{\epsilon_M^4}{2\pi i}\right)\; \oint_\gamma\; dz\; \prod_{j=1,2} \frac{1}{q_j^3} \frac{q_j \text{ch}(q_j) - \text{sh}(q_j) +(\text{sh}(q_j)-q_j)\cos(t_j)}{\text{ch}(q_j)-\cos(t_j)} \end{align} Note that it is not necessary to pick a square root of the complex parameter $q^2_{1,2}(z)=\epsilon_M^2(\frac{k_0^2+p^2}{2}\mp z)$ since the integrand only depends on the square, despite its appearance (in other words, one may pick the branch arbitrarily, the integrand does not depend on it).
It follows that the integrand is a single-valued function of $z$ which is holomorphic everywhere except for simple poles which we now determine, and which allow us to compute the contour integral over $\gamma$ using the residue theorem (see appendix \ref{sa} for further details); we end up with \begin{align}\label{CovarianceResult2D} \hat{c}_M^*(k_0,l)=& \epsilon^2_M\frac{[q_N {\rm ch}(q_N)-{\rm sh}(q_N)]+[{\rm sh}(q_N)-q_N]\cos(t_2)}{q_N^3[{\rm ch}(q_N)-\cos(t_2)]}+\\ &\hspace{50pt}-2\epsilon^2_M\underset{N\in\mathbb{Z}}{\sum}\frac{\cos(t_1)-1}{(2\pi N +t_1)^2} \frac{1}{q_N^3} \;\frac{q_N \text{ch}(q_N)-\text{sh}(q_N)+(\text{sh}(q_N)-q_N)\cos(t_2)} {\text{ch}(q_N)-\cos(t_2)}\nonumber \end{align} with $q_N^2:=q^2+(t_1+2\pi N)^2$ arising from the poles associated with direction $1$. The result has no manifest symmetry in $t_1\leftrightarrow t_2$, but from the derivation it is clear that it must be symmetric. Note that each term in the sum remains finite for $\epsilon_M\rightarrow 0$ although the individual parts contribute inverse powers of $\epsilon_M$: $(\cos(t)-1)$ scales as $\mathcal{O}(\epsilon^2_M)$, since $t=k_R \epsilon_M l$ depends linearly on $\epsilon_M$, as does $q$ via $q^2=\epsilon^2_M (p^2+k_0^2)$. Thus $(q^2+(t+2\pi N)^2)=\mathcal{O}(\epsilon^2_M)$ if $N=0$ and approaches a constant otherwise. \subsection{Consistency check with the continuum covariance} The mere existence of a fixed point measure family, described by the covariance (\ref{CovarianceResult2D}), of the flow induced by (\ref{Covarianceflow}) does not necessarily mean that it has any relation with the known continuum theory. We will thus invoke the consistency check also presented in \cite{LLT2}, which consists of looking at the cylindrical projection at resolution $M$ of the continuum covariance $c:=\frac{1}{2}\omega^{-1}$ in $D=2$. Using that the latter is given by (\ref{contcovariance}) we find its projection to be \begin{align} c_M(m,m')&=\epsilon_M^{-4} (I^\dagger_M\; c\; I_M)(m,m') =\\ & =\epsilon^{-4}_M \int^{(m_1+1)\epsilon_M}_{m_1\epsilon_M}dx_1 \int^{(m_2+1)\epsilon_M}_{m_2\epsilon_M}dx_2\int^{(m'_1+1)\epsilon_M}_{m'_1\epsilon_M}dy_1\int^{(m'_2+1)\epsilon_M}_{m'_2\epsilon_M}dy_2 \hspace{5pt}\;c(x,y)\nonumber \end{align} see \cite{LLT2} for more details. Using that the $e_{n}(x):=\frac{1}{R} e^{ik_R n\cdot x}, \; k_R=2\pi/R$, define an orthonormal basis of $L_R=L_2([0,R]^2,d^2x)$, one finds the resolution of identity \begin{equation} \frac{1}{R^2}\sum_{n\in \mathbb{Z}^2}e^{ik_R (x-y)\cdot n}=\delta_R(x,y):=\delta_R(x_1,y_1)\delta_R(x_2,y_2) \end{equation} We use this to write the covariance as \begin{align} c(x,y)&= \frac{1}{2}\left(-\Delta_{Rx}+p^2\right)^{-1/2} \delta_R(x,y)= \int \; \frac{dk_0}{2\pi} \left(-\Delta_{Rx}+k_0^2+p^2\right)^{-1} \delta_R(x,y)\\ &=\int \frac{dk_0}{2\pi} \sum_{n\in\mathbb{Z}^2} \overline{e_{n}(y)}\left(-\Delta_{Rx}+p^2+k_0^2\right)^{-1}e_{n}(x)= \frac{1}{R^2}\sum_{n\in\mathbb{Z}^2}e^{ik_R n\cdot (x-y)}\frac{1}{n^2 k_R^2+k_0^2+p^2}\nonumber \end{align} Now we can perform the integrals, e.g. \begin{equation} \int^{(m_1+1)\epsilon_M}_{m_1\epsilon_M}dx_1 e^{ik_R n_1 x_1}=\frac{1}{ik_R n_1}\left(e^{ik_R n_1(m_1+1)\epsilon_M}-e^{i k_R n_1m_1\epsilon_M}\right) \end{equation} where the case $n_1=0$ is obtained using de l'Hospital.
We find with $k_M=2\pi/M$ \begin{align} c_M(m&,m')=\epsilon^{-4}_M \frac{1}{R^2}\int \frac{dk_0}{2\pi}\; \sum_{n\in\mathbb{Z}^2}\frac{1}{n^2 k_R^2+p^2+k_0^2}\left(\int^{(m_1+1)\epsilon_M}_{m_1\epsilon_M}dx_1 e^{ik_Rn_1x_1}\right)\times\\ &\hspace{20pt}\times\left(\int^{(m_2+1)\epsilon_M}_{m_2\epsilon_M}dx_2 e^{ik_Rn_2x_2}\right)\left(\int^{(m_1'+1)\epsilon_M}_{m_1'\epsilon_M}dy_1 e^{ik_Rn_1y_1}\right)\left(\int^{(m_2'+1)\epsilon_M}_{m_2'\epsilon_M}dy_2 e^{ik_Rn_2y_2}\right)\nonumber\\ &=R^{-2} \int\frac{dk_0}{2\pi} \sum_{n\in\mathbb{Z}^2}\frac{1}{n^2 k_R^2+p^2+k_0^2}\;e^{ik_Mn\cdot (m-m')}\; \frac{4}{k_M^4 n_1^2 n_2^2}[1-\cos(k_M n_1)]\;[1-\cos(k_M n_2)]\nonumber \end{align} We now wish to proceed exactly as in \cite{LLT2} and thus split the sum over $n_j=l_j+M N_j$ with $l_j\in \mathbb{Z}_M$ and $N\in \mathbb{Z}^2$ \begin{align} c_M(m,m')&=R^{-2} \epsilon_M^4 \int\frac{dk_0}{2\pi} \sum_{l\in\mathbb{Z}_M^2} e^{ik_Ml \cdot (m-m')}\sum_{N\in \mathbb{Z}^2}\times\;\\ & \;\;\;\;\times\frac{[1-\cos(k_M (l_1+M N_1))] \;[1-\cos(k_M (l_2+M N_2))]}{(l+M N)^2 k_M^2+\epsilon_M^2(p^2+k_0^2)}\; \; \frac{4}{k_M^4 (l_1+M N_1)^2 (l_2+M N_2)^2}\nonumber \end{align} from which we read off the Fourier transform of $c_M(m)=R^{-2}\sum_{l\in \mathbb{Z}_M^2} e^{ik_M l\cdot m} \hat{c}_M(l)$ \begin{align} \hat{c}_M(k_0,l)&=\epsilon_M^4 \sum_{N\in \mathbb{Z}^2}\;\times \\ &\;\;\;\;\times\frac{[1-\cos(k_M (l_1+M N_1))] \;[1-\cos(k_M (l_2+M N_2))]}{(l+M N)^2 k_M^2+q^2}\; \; \frac{4}{k_M^4 (l_1+M N_1)^2 (l_2+M N_2)^2}\nonumber \end{align} Using the contour integral idea as in the previous section we obtain \begin{align} \hat{c}_M(k_0,l)=&-\frac{1}{2\pi i} \oint_\gamma\; dz\; \prod_{j=1,2}\; \\ &\hspace{30pt}\big[\sum_{N_j \in \mathbb{Z}} \frac{\epsilon_M^2}{(l_j+M N_j)^2 k_M^2+q_j(z)^2}\; \; \frac{2}{k_M^2 (l_j+M N_j)^2}[1-\cos(k_M (l_j+M N_j))]\big]\nonumber \end{align} where $q_j(z)$ is the same as in the previous subsection. Now the two sums in the formula above are exactly those that occurred in \cite{LLT2}, with $q^2$ replaced by $q_j(z)^2$ and $t$ replaced by $t_j =l_j k_M$. Thus, we can copy the result from there and find \begin{align} \label{continuumcovariance} \hat{c}_M(k_0,l)=- \frac{\epsilon_M^4}{2\pi i} \oint_\gamma\; dz\; \prod_{j=1,2}\; \frac{1}{q_j^3}\frac{q_j(z) \text{ch}(q_j)-\text{sh}(q_j) +[\text{sh}(q_j)-q_j]\cos(t_j)}{\text{ch}(q_j)-\cos(t_j)} \end{align} with $q_j\equiv q_j(z)$. Comparing (\ref{continuumcovariance}) and (\ref{finalIntegralof2DRen}) we see that both agree; thus the fixed point covariance family indeed coincides with the continuum covariance family. \section{Fixed points of the free scalar field for changed RG-flows}\label{Renormalisation with changed RG-map} The aim of this section is to change the block-spin transformation we have used so far and to check whether the fixed point changes as well. As has already been discussed in \cite{LLT3}, not every coarse graining map fulfils the cylindrical consistency relation, which induces a corresponding relation on the family of coarse grained measures. Note that coincidence of continuum measures with their cylindrical (finite resolution) projections can only be deduced if one uses the same blocking kernel (which defines those projections). Thus, it is a natural question to ask whether other maps of the kind $I_{M\rightarrow M'}$ apart from $M'=2M$ will also lead to physically relevant theories.
Due to the cylindrical consistency property of $I_{M\rightarrow M'}$ it is apparent that this is the case for all $M'=2^nM$ with $n\in\mathbb{N}$. A natural extension would be to consider powers of any prime number. In this section we show that at least for the choices $M'=3M$ and $M'=5M$ this indeed gives the same fixed point covariance, and we argue that the same should hold true for every choice of prime number. This would be useful because the set $\mathbb{N}$ is partially ordered and directed by $<$, but given $m_1,m_2\in \mathbb{N}$ we do not always find $m_3>m_1, m_2$ with $m_3=m_1 2^{n_1}=m_2 2^{n_2}$. If one considers $I_{M\rightarrow u M}$ with $u\in\mathbb{P}$ a prime number, then the coarse graining map is given by \begin{equation} [I_{M\rightarrow uM}f_M](m)=f_M(\lfloor \frac{m}{u}\rfloor) \end{equation} where $\lfloor .\rfloor$ is the component-wise Gauss bracket. This map is easily checked to be cylindrically consistent, $I_{u^k M\to u^{k+l} M}\circ I_{M\to u^k M}=I_{M\to u^{k+l} M}$. To see this, we note that $\lfloor m/u^k\rfloor =m'$ iff $m=m' u^k+r,\; r=0,\ldots, u^k-1$, so that $\lfloor\lfloor m/u^l\rfloor /u^k\rfloor =m'$ iff $\lfloor m/u^l\rfloor =m' u^k+r,\; r=0,\ldots,u^k-1$, that is, iff $m=(m' u^k+r) u^l+s,\; s=0,\ldots,u^l-1$, i.e. $m= m' u^{k+l}+t,\; t=0,\ldots,u^{k+l}-1$, i.e. $m'=\lfloor m/u^{k+l}\rfloor$. We now use these maps on our Gaussian example. For their covariances this implies \begin{align} &\langle f_M, C^{(n+1)}_M f_M\rangle_M = \epsilon^{2D}_M\sum_{m'_1,m'_2\in\mathbb{Z}^D_M} f_M(m'_1)f_M(m'_2)C^{(n+1)}_M(m'_1,m'_2)\\ &=\langle I_{M\rightarrow uM} f_M, C^{(n)}_{uM} I_{M\rightarrow uM}f_M\rangle_{uM} =\frac{\epsilon^{2D}_{M}}{u^{2D}}\sum_{m'_1,m'_2\in\mathbb{Z}^D_M}f_M(m'_1)f_M(m'_2)\sum_{\scriptsize \begin{array}{c} \lfloor m_1/u\rfloor =m'_1,\\ \lfloor m_2/u\rfloor =m'_2 \end{array} }C^{(n)}_{uM}(m_1,m_2)\nonumber \end{align} This allows us to deduce by direct comparison: \begin{align} C^{(n+1)}_M(m'_1,m'_2)=u^{-2D}\sum_{\delta',\delta''\in\{0,1,..,u-1\}^D} C^{(n)}_{uM}(um'_1+\delta', um'_2+\delta'') \end{align} Again we employ translational invariance, i.e. $C^{(n)}_M(m_1,m_2)=C^{(n)}_M(m_1-m_2)$, and find for the Fourier transform ($k_M=\frac{2\pi}{M}=uk_{uM}$) \begin{align} C^{(n+1)}_M(m'_1,m'_2) = u^{-2D} \sum_{l'\in\mathbb{Z}^D_M}e^{ik_Ml'(m'_1-m'_2)}\sum_{\delta',\delta'',\delta\in\{0,1,...,u-1\}^D}\hat{C}^{(n)}_{uM}(l'+\delta M)e^{ik_{uM}(l'+\delta M)\cdot(\delta'-\delta'')} \end{align} whence \begin{equation}\label{generaldecoupling} \hat{C}^{(n+1)}_M(l')=u^{-2D}\sum_{\delta\in\{0,1,...,u-1\}^D}\hat{C}_{uM}^{(n)}(l'+\delta M)\prod_{i=1}^D\frac{\sin(\frac{u}{2}k_{uM}(l'_i+\delta_i M))^2}{\sin(\frac{1}{2}k_{uM}(l'_i+\delta_i M))^2} \end{equation} where we have used that the exponentials decouple, and that the geometric series can be performed explicitly \begin{align}\label{geometricseries} \sum_{\delta,\delta'\in\{0,...,u-1\}}e^{ia(\delta-\delta')} = \frac{1-e^{iau}}{1-e^{ia}}\frac{1-e^{-iau}}{1-e^{-ia}}=\frac{\sin(\frac{u}{2}a)^2}{\sin(\frac{1}{2}a)^2} \end{align} Since (\ref{generaldecoupling}) states that the flow decouples in general, and since we can write the initial covariance also in a decoupled form, this allows us to limit our further analysis to the $D=1$ case without loss of generality.\\ In appendix \ref{sb} the determination of the fixed point for the prime $u=3$ will be explicitly performed, as it illustrates what needs to be done in the general case.
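Before turning to the analytic treatment, the universality claim can be probed numerically with (\ref{generaldecoupling}) itself. The following sketch (ours; $D=1$ in the dimensionless form $F_n:=\hat c^{(n)}_M/\epsilon^2_M$, using $\sin^2(\frac{u}{2}k_{uM}(l+\delta M))=\sin^2(t/2)$ with $t=k_Ml$, and arbitrary sample values of $q,t$) iterates the flow for $u=2$ and $u=3$ and compares with the closed form built from the fixed point functions $a^*,b^*,c^*$ of the previous section:
\begin{verbatim}
import numpy as np

def F0(q, t):
    # initial data, cf. the naive covariance: F_0 = 1/(q^2 + 2(1-cos t))
    return 1.0 / (q * q + 2.0 * (1.0 - np.cos(t)))

def F(n, q, t, u):
    # n steps of the D=1 flow (generaldecoupling), dimensionless;
    # prefactor u^{-4} = u^{-2D} * (eps_{uM}/eps_M)^2 for D=1; t != 0 mod 2pi
    if n == 0:
        return F0(q, t)
    tot = 0.0
    for d in range(u):
        a = (t + 2 * np.pi * d) / u          # fine lattice angle
        w = (np.sin(t / 2) / np.sin(a / 2)) ** 2
        tot += w * F(n - 1, q / u, a, u)
    return tot / u ** 4

def F_fix(q, t):
    # fixed point from a*(q)=ch(q), b*(q)=q ch(q)-sh(q), c*(q)=sh(q)-q
    ch, sh = np.cosh(q), np.sinh(q)
    return (q * ch - sh + (sh - q) * np.cos(t)) / (q ** 3 * (ch - np.cos(t)))

q, t = 1.0, 2 * np.pi * 3 / 10
print(F(10, q, t, 2), F(7, q, t, 3), F_fix(q, t))   # agree to several digits
\end{verbatim}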
The initial data of the RG flow is given for $D=1$ with $t=k_Ml,\; q^2=\epsilon_M^2(k_0^2+p^2)$ by \begin{equation}\label{3start} \hat{c}^{(0)}_M(k_0,l)=\frac{\epsilon^2_M}{2(1-\cos(t))+q^2} \end{equation} With the help of trigonometric identities, one manages to write the $\hat{c}^{(n)}$ in the form \begin{equation} \label{3start2} \hat{c}^{(n)}_M(k_0,l)=\frac{\epsilon^2_M}{q^3}\frac{b_n(q)+c_n(q)\cos(t)}{a_n(q)-\cos(t)} \end{equation} with suitably chosen functions $a_n,b_n,c_n$ of $q$, as we already know to be true for (\ref{3start}). As will transpire in appendix \ref{sb}, one finds exactly the same fixed point under the $M\rightarrow3M$ coarse graining map as we found for the $M\to 2M$ coarse graining map! We did the same calculations also for the prime $u=5$, which is considerably more work, but all steps are literally the same and the fixed point is again the same. For reasons of space, we do not display these calculations here and leave them to the interested reader as an exercise. For a general prime we do not have a proof available yet but hope to be able to supply it in a future publication. However, we do not expect any changes. In any case, for whichever primes the fixed point stays the same (this holds at least for $u=2,3,5$), the statement is also true for all dimensions due to the factorisation property. This factorisation property also makes it possible to study in higher dimensions more complicated hypercuboid-like coarse graining block transformations rather than hypercube-like ones. In order to illustrate this, we give some details for the case of $D=2$ dimensions with a rectangular blocking, with $u_1=2$ for the first direction and $u_2=3$ for the second. The map is consequently $I_{(M_1,M_2)\rightarrow (2M_1,3M_2)} =I_{M_1\to 2M_1}\times I_{M_2\to 3 M_2}$. The naively discretised Laplacian on a lattice with different spacings $\epsilon_{M_1},\epsilon_{M_2}$ is given as (here: $2\epsilon_{M_1}=3\epsilon_{M_2}$) \begin{align} &\left(\Delta_M f_M\right) (m):=\\ &=\frac{1}{\epsilon^2_{M_1}}\left(f_M(m+e_1)+f_M(m-e_1)-2f_M(m) \right)+\frac{1}{\epsilon^2_{M_2}}\left(f_M(m+e_2)+f_M(m-e_2)-2f_M(m)\right)\nonumber \end{align} Hence the same strategy as in (\ref{InvertResThm}) works again and gives us: \begin{align} &\hat{C}^{(0)}_M(k_0,l)=\left(-\frac{1}{\epsilon^2_{M_1}}(2\cos(k_{M_1}l_1)-2)-\frac{1}{\epsilon^2_{M_2}}(2\cos(k_{M_2}l_2)-2)+p^2+k_0^2\right)^{-1}\\ &=-\frac{1}{2^3\pi i}\oint_\gamma dz \frac{\epsilon^2_{M_1}}{\epsilon^2_{M_1}(\frac{k_0^2+p^2}{2}-z)/2+1-\cos(k_{M_1}l_1)}\;\frac{\epsilon^2_{M_2}}{\epsilon^2_{M_2}(\frac{k_0^2+p^2}{2}+z)/2+1-\cos(k_{M_2}l_2)}\nonumber \end{align} So both directions decouple and yield, as already shown, the same fixed point! It remains to compute the integral over $z$, which is exactly the same as in (\ref{finalIntegralof2DRen}). A further immediate consequence is that at this fixed point one could also consider the flow of arbitrary concatenations of different coarse graining maps, independently for each direction, e.g. $\ldots I_{6M\rightarrow 12M}\circ I_{2M\rightarrow 6M}\circ I_{M\rightarrow2M}$, and we see that all of them have the same fixed point. We conclude that the fixed point is quite robust under rather drastic changes of the coarse graining map. \section{Rotational Invariance of the lattice fixed point theory}\label{Rotational Invariance} We now turn our attention towards the much discussed question of {\it rotational invariance} \cite{HN93,LR82,DS12}.
By this we mean that most Hamiltonians of continuum theories on Minkowski space have $SO(D)$ as a symmetry group besides spatial translation invariance. On the one hand, a fixed lattice certainly breaks rotational invariance and in the case of a hypercubic lattice reduces the invariance to rotations by multiples of $\pm\pi/2$ around the coordinate axes. On the other hand, it is clear that the cylindrical projections of a rotationally invariant measure in the continuum, with respect to smearing functions adapted to the family of hypercubic lattices in question, must carry an imprint of that continuum rotation invariance. In other words, there must exist a criterion at finite lattice resolution for whether the corresponding lattice measure qualifies as the cylindrical projection of a continuum rotationally invariant measure. In this section we identify such a notion of {\it rotational invariance} at finite resolution, at least for the case of scalar field theories. We will consider the generalisation to other field contents in future publications. We then successfully test this criterion for the fixed point covariance $\hat{c}^*_M(l)$ from section \ref{Hamiltonian renormalisation} for the free Klein-Gordon field. Due to the factorisation property and due to the possibility of presenting any rotation as a composition of rotations about the coordinate axes, we can reduce our attention to two spatial dimensions. This presents an example of how the Hamiltonian renormalisation scheme is able to detect the restoration of continuum properties of the classical theory which upon naive regularisation were lost in the quantisation process. \subsection{The lattice rotational invariance condition} Given the IR-restricted compact submanifold of $\sigma$, i.e. the $D$-dimensional torus $\sigma_R$ with periodic boundary conditions and length $R$, one must be precise about what one means by rotations. Due to the periodicity, the definition of what is understood as a rotation may vary for points which have a distance to the centre of rotation greater than $R/2$. For the convenience of the reader, we hence present the following description of what will be understood as a rotation in this paper:\\ In order to rotate the system around $x_0\in\sigma_R$ one uses the Euclidean metric on the torus to identify the set $S_r$ of all points which have distance $r>0$ to the central point $x_0$. We then use the $S_r$ in order to construct a representation of $\text{SO}(D)$, e.g. in $D=2$ one has $\Pi : \text{SO}(2) \rightarrow \text{GL}(\sigma_R)$ with $\Pi(2\pi)= \text{id}$ and $\Pi(\alpha)\Pi(\beta)=\Pi(\alpha+\beta)$, where we label the elements of the one-dimensional $SO(2)$ by $\alpha,\beta\in [0,2\pi)$. Without loss of generality we will consider $x_0=0$ in the following. Indeed, upon considering a chart in Cartesian coordinates that includes some complete $S_r$ with $r<R/2$, we can write the action of a rotation on one of those $S_r$ as a matrix ($x\in S_r$) \begin{align}\label{rotation_linear} \Pi(\alpha) \cdot x= \left(\begin{array}{cc} \cos(\alpha) & \sin(\alpha)\\ -\sin(\alpha) & \cos(\alpha) \end{array}\right) \cdot x \end{align} Note that the rotations for $S_{r\ge R/2}$ are not described by a linear transformation due to the non-trivial boundary conditions.
However, as one is ultimately interested in a thermodynamical limit in which the infrared regulator is removed via $R\to \infty$, all rotations at finite distance will have a corresponding $R$ from which on they can be described by (\ref{rotation_linear}). Hence, we choose any $r<R/2$ in the following. In the remainder of this section we limit the analysis to $D=2$ since, once rotational invariance is established for all rotations in an arbitrary plane, any other rotation can be understood as multiple rotations in suitable planes. Further, we employ the ideas of \cite{King86_1,King86_2}: instead of considering arbitrary angles in $[0,2\pi)$, it suffices to show invariance under rotations by only one angle $\theta$, given that $\theta/(2\pi)$ is irrational. This is because the sequence \begin{equation} \mathbb{N} \to [0, 2\pi);\; n\mapsto \theta_n:=n\cdot \theta \mod 2\pi \end{equation} lies dense in $[0,2\pi)$, i.e. for all $\theta' \in [0,2\pi)$ there exists a subsequence $j\mapsto \theta_{n_j}\rightarrow \theta'$. Hence we can define the rotation by the angle $\theta'$ as \begin{equation}\label{ApproximationAngle} \Pi(\theta'):= \lim_{j\rightarrow\infty} \Pi(\theta)^{n_j} \end{equation} It follows, assuming suitable continuity properties, that invariance under all these angles would be established once it is shown for $\theta$. In this paper we specialise to the angle $\theta$ defined by $\cos(\theta)=3/5, \sin(\theta)=4/5$, for which $\theta/(2\pi)$ is indeed irrational. A proof of this and further properties can be found in \cite{King86_2}. By the above considerations we can give meaning to the term {\it rotational invariance} as a condition on the continuum Hilbert space measure $\nu$. It is called rotationally invariant provided that for any measurable function $g$ we have $\nu(g)=\nu(r(\theta)^\ast\cdot g)$ where $(r(\theta)^\ast\cdot g)[\phi]=g[r(\theta)\cdot \phi]$ and $[r(\theta)\cdot \phi](x)= \phi(\Pi(-\theta)\cdot x)$. Since $\nu$ is defined by its generating functional, we may restrict to the functions $g=w[f]$, for which in the case of a scalar theory $r(\theta)^\ast w[f]= w[r(-\theta)\cdot f]$. We now translate this into a condition on the cylindrical projections $\nu_M$ of $\nu$ defined by $\nu_M(w_M[f_M]):=\nu(w[I_M f_M])$ where \begin{align} (I_M f_M)(x):=\sum_{m\in\mathbb{Z}^2_M}f_M(m)\chi_{m\epsilon_M}(x) ,\; \chi_{m\epsilon_M}(x)=\prod_{a=1,2} \chi_{[m^a\epsilon_M,(m^a+1)\epsilon_M)}(x^a) \end{align} It follows that $r(-\theta)\cdot I_M f_M$ cannot be written as a linear combination of functions of the form $I_M f'_M$ because $r(-\theta)\cdot \chi_{m\epsilon_M}$ is the characteristic function of the rotated block. Hence the rotational invariance of $\nu$ does not have a direct translation into a condition on the $\nu_M$. While we can define a new embedding map by \begin{align} I_{\theta M}: L_{M} &\rightarrow L\\ f_M &\mapsto [I_{\theta M} f_M](x):= \sum_{m\in\mathbb{Z}^2_M} f_M(m)\chi_{m\epsilon_M}(\Pi(\theta)\cdot x) \end{align} the renormalisation flow defined by it may result in a fixed point covariance family $c^*_{\theta M}$ different from $c^*_M$. It is therefore a non-trivial question what one actually means by {\it rotational invariance of a discrete lattice theory}, or more precisely of a family of corresponding measures. The idea is to consider both families (i.e.
the non-rotated theory described by the covariances $c^*_M$ and the rotated one described by the covariances $c^*_{\theta M}$) as coarse grained versions of {\it common} finer lattices with spacing $\epsilon_{5M}$, which is why we chose the above particular angle $\theta$. The rotation of the coarse non-rotated lattice is a sublattice of the fine non-rotated lattice; the corresponding map, called the {\it discrete rotation}, is defined by \begin{align} D_{\theta}:\mathbb{Z}_{M}^2\rightarrow\mathbb{Z}_{5M}^2;\; m\mapsto \Pi(\theta)\cdot m \end{align} This map can be extended to \begin{align} D_\theta: \mathbb{Z}_{5M}^2 \rightarrow \mathbb{Z}_{5M}^2;\; m\mapsto \lfloor \Pi(\theta) \cdot m \rfloor \end{align} which maps the whole rotated finer lattice into the non-rotated finer lattice. \begin{figure}[h] \begin{center} \includegraphics[scale=1]{pic115_0}\label{figure1} \caption{\footnotesize Fixed point covariances $C^*_M,\; C^\ast_{\theta M}$ on lattices rotated relative to each other by the irrational angle $\theta$ (such that $\cos(\theta)=3/5$) can be related by a common refined unrotated lattice and a map $D_{\theta}$, called the discrete rotation.} \end{center} \end{figure} The condition that we are about to derive holds for general measures, but we also note in tandem the corresponding specialisation to Gaussian ones for a later test on our model free theory. Suppose then that $\nu^\ast$ is a rotationally invariant (Gaussian) measure, that is, for its generating functional (covariance) we have \begin{align} \nu^{\ast}(w[f])=\nu^\ast(w[\Pi(\theta) \cdot f])\;\;(c^\ast= \Pi(\theta)^\dagger c^\ast \Pi(\theta)) \end{align} This means that for the cylindrical projections we have the identity \begin{align} \nu^\ast_M(w_M[f_M])=\nu^\ast(w[I_M f_M])=\nu^\ast(w[\Pi(\theta) I_M f_M])\;\; (c^\ast_M=[\Pi(\theta) I_M]^\dagger c^\ast [\Pi(\theta) I_M]) \end{align} Now \begin{align} (\Pi(\theta) I_M f_M)(x)=\sum_{m\in \mathbb{Z}_M^2} f_M(m) \chi_{m,\epsilon_M}(\Pi(\theta)^{-1}\cdot x) \end{align} Let $B_{m,M}$ be the square (block) of which $\chi_{m,\epsilon_M}$ is the characteristic function. Then \begin{align} (\Pi(\theta) \cdot \chi_{m, \epsilon_M})(x)= \chi_{m, \epsilon_M}(\Pi(\theta)^{-1}\cdot x)=\chi_{\Pi(\theta)\cdot B_{m,M}}(x) \end{align} is the characteristic function of the rotated block of the coarse lattice with base (lower left corner) now at $\Pi(\theta)\cdot m\in \mathbb{Z}_{5M}^2$. Since we have the disjoint decomposition \begin{align} B_{m,M}=\cup_{m'\in \mathbb{Z}_{5M}^2\cap B_{m,M}}\; B_{m',5M} \end{align} we have \begin{align} \label{rotationapproximation} \Pi(\theta) B_{m,M}=\bigcup_{m'\in \mathbb{Z}_{5M}^2\cap B_{m,M}}\; \Pi(\theta) B_{m',5M} \approx \bigcup_{m'\in \mathbb{Z}_{5M}^2\cap B_{m,M}}\; B_{D_\theta \cdot m',5M} \end{align} where in the last step we have replaced the rotated blocks of the fine lattice, which before rotation compose the unrotated block of the coarse lattice, by those unrotated blocks of the fine lattice with bases at the points defined by $D_\theta$. This is an approximation only, but it is better than one might think because the difference between the two functions only affects those blocks $B_{D_\theta \cdot m',5M}$ which intersect the boundary of $\Pi(\theta) B_{m, M}$. We will come back to the quality of this approximation below.
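The quality of this block approximation is easy to probe numerically. The following Monte Carlo sketch (ours; we work in units $\epsilon_M=1$ with the coarse block $B_{0,M}$, ignore the periodic identification for this local check, and the sample size is arbitrary) builds the discrete rotation $D_\theta$ and estimates which fraction of the rotated block is covered by the union of fine blocks in (\ref{rotationapproximation}):
\begin{verbatim}
import numpy as np

c, s = 3 / 5, 4 / 5                       # cos(theta), sin(theta)
Pi = np.array([[c, s], [-s, c]])

# the 25 fine blocks composing B_{0,M}: lower left corners in fine units
corners = np.array([(i, j) for i in range(5) for j in range(5)])
D = np.floor(corners @ Pi.T).astype(int)  # discrete rotation of the corners

# Monte Carlo: sample Pi(theta).B_{0,M} uniformly and test membership in
# the union of fine blocks with lower left corners D_theta(m')
rng = np.random.default_rng(1)
x = rng.random((100000, 2)) @ Pi.T        # uniform points in the rotated block
idx = np.floor(5 * x).astype(int)         # fine block index of each point
hit = (idx[:, None, :] == D[None, :, :]).all(-1).any(-1)
print("covered fraction:", hit.mean())    # close to 1; mismatch is a boundary layer
\end{verbatim}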
In any case, the last line in (\ref{rotationapproximation}) defines an embedding $I^\theta_{M\to 5M}:\; L_M \to L_{5M}$ by \begin{align}\label{4.377} (I^\theta_{M\to 5M} f_M)(m')=\sum_{m''\in \mathbb{Z}_{5M}^2} \delta_{m',D_\theta \cdot m''} \sum_{m\in \mathbb{Z}_M^2}\delta_{ m''\in B_{m ,M}} \;f_M(m) \end{align} such that $I_{5M} \circ I^\theta_{M\to 5M}$ approximates $\Pi(\theta) \cdot I_M$ in the sense specified below. Thus \begin{align} \nu^\ast_M(w_M[f_M])\approx \nu^\ast( w[I_{5M} I^\theta_{M\to 5M} f_M])= \nu^\ast_{5M}(w_{5M}[I^\theta_{M\to 5M}f_M]) \end{align} or for the Gaussian case \begin{align} c^\ast_M\approx [I_{5M} \circ I^\theta_{M\to 5M}]^\dagger c^\ast [I_{5M}\circ I^\theta_{M\to 5M}] =[I^\theta_{M\to 5M}]^\dagger c^\ast_{5M} I^\theta_{M\to 5M} \end{align} To write this in terms of a single measure (covariance), we use cylindrical consistency\\ $\nu^\ast_M(w_M[f_M])=\nu^\ast_{5M}(w_{5M}[I_{M\to 5M} f_M])$ or $c^\ast_M=I_{M\to 5M}^\dagger c^\ast_{5M} I_{M\to 5M}$ to find \begin{align} \nu^\ast_{5M}(w_{5M}[I^\theta_{M\to 5M} f_M]) \approx \nu^\ast_{5M}(w_{5M}[I_{M\to 5M} f_M]) \end{align} or \begin{align} I_{M\to 5M}^\dagger c^\ast_{5M} I_{M\to 5M}\approx (I^\theta_{M\to 5M})^\dagger c^\ast_{5M} I^\theta_{M\to 5M} \end{align} as a lattice version of rotational invariance for (Gaussian) measures for scalar field theories. The quality of the approximation depends on the details and properties of the corresponding measure family. The following result is targeted at the class of Gaussian measures. \begin{Lemma} Suppose that $c^\ast$ is the covariance of a rotationally invariant Gaussian measure whose kernel is differentiable in the sense of distributions. Then \begin{align} \label{ConditionForRotationalInvariantTheories} \{c^\ast_M-[I^\theta_{M\to 5M}]^\dagger c^\ast_{5M} I^\theta_{M\to 5M}\}(m_1,m_2)=O(\epsilon_M^5) \end{align} for all $m_1,m_2\in \mathbb{Z}_M^2$. The coefficient of $\epsilon_M^5$ is independent of $M$. Note that $c^\ast_M(m_1,m_2)=O(\epsilon_M^4)$ in $D=2$. \end{Lemma} \begin{proof} Let $B^\theta_{m,M}=\cup_{m'\in \mathbb{Z}^2_{5M}\cap B_{m,M}} B_{D_\theta m', 5M}$ and $S^\theta_{m,M}:=\Pi(\theta) B_{m,M}\cap B^\theta_{m,M}$. Denote $\Delta^{\theta +}_{m,M}=\Pi(\theta) B_{m,M}-S^\theta_{m,M}$ and $\Delta^{\theta -}_{m,M}=B^\theta_{m,M}-S^\theta_{m,M}$. The sets $\Delta^{\theta\pm}_{m,M}$ are homeomorphic since $B^\theta_{m,M}$ consists of the squares of $\mathbb{Z}_{5M}^2$ whose lower left corner lies in $\Pi(\theta) B_{m,M}$. Thus $B^\theta_{m,M}$ lacks parts of $\Pi(\theta) B_{m,M}$ at the two left boundaries of $\Pi(\theta) B_{m,M}$, while $B^\theta_{m,M}$ exceeds $\Pi(\theta) B_{m,M}$ at its two right boundaries. Hence $\Delta^{\theta\pm}_{m,M}$ are complementary disjoint sets whose joint measure is equal to the measure of an integer number of squares of the lattice $\mathbb{Z}^2_{5M}$. They also have the same Lebesgue measure because $\Pi(\theta) B_{m,M}$ has measure $\epsilon_M^2$ due to rotational invariance of the Lebesgue measure, and $B^\theta_{m,M}$ has measure $5^2 \epsilon_{5M}^2=\epsilon_M^2$ because $D_\theta$ is injective, as is easy to check, so that $B^\theta_{m,M}$ consists of 25 squares of the lattice $\mathbb{Z}^2_{5M}$. Let $h:\Delta^{\theta +}_{m,M}\rightarrow \Delta^{\theta -}_{m,M}$ be the corresponding homeomorphism, which can be written in the form $h(x)=x+g(x) \epsilon_M$ with $||g(x)||\le \sqrt{2}$, as the maximal distance between points in the two sets is $\sqrt{2}\epsilon_M$.
Then by rotational invariance we obtain the third line in \begin{eqnarray} &&\{c^\ast_M-[I^\theta_{M\to 5M}]^\dagger c^\ast_{5M} I^\theta_{M\to 5M}\}(m_1,m_2)=\nonumber\\ &=&\{\int_{ B_{m_1,M}}\;d^2x\int_{B_{m_2,M}}\; d^2y -\int_{B^\theta_{m_1,M}}\;d^2x\int_{B^\theta_{m_2,M}}\; d^2y\}c(x,y) \nonumber\\ &=&\{\int_{\Pi(\theta) B_{m_1,M}}\;d^2x\int_{\Pi(\theta) B_{m_2,M}}\; d^2y -\int_{B^\theta_{m_1,M}}\;d^2x\int_{B^\theta_{m_2,M}}\; d^2y\}c(x,y) \nonumber\\ &=&\{\int_{S^\theta_{m_1,M}}\;d^2x\int_{\Delta^{\theta+}_{m_2,M}}\; d^2y +\int_{\Delta^{\theta+}_{m_1,M}}\;d^2x\int_{S^\theta_{m_2,M}}\; d^2y +\int_{\Delta^{\theta+}_{m_1,M}}\;d^2x\int_{\Delta^{\theta+}_{m_2,M}}\; d^2y \nonumber\\ && -\int_{S^\theta_{m_1,M}}\;d^2x\int_{\Delta^{\theta-}_{m_2,M}}\; d^2y -\int_{\Delta^{\theta-}_{m_1,M}}\;d^2x\int_{S^\theta_{m_2,M}}\; d^2y -\int_{\Delta^{\theta-}_{m_1,M}}\;d^2x\int_{\Delta^{\theta-}_{m_2,M}}\; d^2y \} c(x,y) \nonumber\\ &=&\int_{S^\theta_{m_1,M}}d^2x\int_{\Delta^{\theta+}_{m_2,M}} d^2y [c(x,y)-c(x,y+g(y) \epsilon_M)]+\nonumber\\ && +\int_{\Delta^{\theta+}_{m_1,M}}d^2x\int_{S^\theta_{m_2,M}} d^2y[c(x,y)-c(x+g(x)\epsilon_M,y)] \nonumber\\ && +\int_{\Delta^{\theta+}_{m_1,M}}d^2x\int_{\Delta^{\theta+}_{m_2,M}}d^2y [c(x,y)-c(x+g(x)\epsilon_M, y+g(y)\epsilon_M)] \end{eqnarray} from which the claim now follows by considering a power series expansion of $c$.\\ \end{proof} The lemma does not tell us anything about the size of the coefficient of $\epsilon_M^5$ and thus about the actual quality at given $M$; however, assuming that the coefficient is finite, for sufficiently large $M$ the approximation error is as small as we want compared to the value of the discrete kernel $c^\ast_M(m_1,m_2)$. We translate the approximant $c^\ast_{M\theta}:=[I^\theta_{M\to 5M}]^\dagger c^\ast_{5M} I^\theta_{M\to 5M}$, whose coefficients are explicitly given by (using translation invariance) \begin{align} c^*_{M\theta}(m)=\frac{1}{5^4}\sum_{\delta_1,\delta_2\in\{0,...,4\}^2}c^*_{5M}(D_{\theta}(5m+(\delta_1-\delta_2))) \end{align} into the corresponding Fourier coefficients, over which we have better analytic control: \begin{align}\label{prefinalRotInv2} c^*_{\theta M}(m)&=\sum_{l\in\mathbb{Z}^2_{M}}e^{ik_Ml\cdot m}\hat{c}^*_{\theta M}(l)\\ &=\frac{1}{5^4}\sum_{l\in\mathbb{Z}^2_M}\sum_{\delta \in\{-2,...,2\}^2}e^{ik_Ml\cdot m}e^{ik_MM\delta\cdot m}\sum_{\delta_1,\delta_2\in\{0,...,4\}^2}e^{ik_{5M}(l+M\delta)\cdot(\delta_1-\delta_2)}\hat{c}^*_{5M}(D_{\theta}(l+M\delta))\nonumber \end{align} where we used the fact that $D_{\theta}$ is a bijective map, relabelled $D_{\theta}l'\rightarrow l'$ and split $l'=l+M\delta$. We have chosen the interval $\delta\in\{-2,...,2\}^2$ because of its symmetry regarding rotations around the point $x_0=0$, which are considered here, using the periodicity of the boundary conditions. Performing the sum over $\delta_1,\delta_2$ and comparing coefficients we obtain \begin{align}\label{finalRotInv} \hat{c}^*_{\theta M}(l)=\frac{1}{5^4}\sum_{\delta\in\{-2...2\}^2}\prod_{i=1}^2\frac{\sin(\frac{5k_{5M}}{2}[l_i+M\delta_i])^2}{\sin(\frac{k_{5M}}{2}[l_i+M\delta_i])^2}\hat{c}^*_{5M}(D_\theta(l+M\delta)) \end{align} which can now readily be compared numerically to $\hat{c}^\ast_M(l)$ (after writing the latter as an integral over $k_0$). We remark that for rotational invariance under an arbitrary angle $\theta'$ we pick an approximant $n\cdot \theta$ mod $2\pi$ for sufficiently large $n\in \mathbb{N}$.
Then the whole analysis can be repeated using the $M\to 5^n M$ refinement, since $\Pi(\theta)^n$ is a matrix with rational entries with common denominator $5^n$. Since the sets $\Delta^{\theta\pm}_{m, 5^nM}$ involve of the order of $4\times 5^n$ boundary squares of respective measure $\epsilon_{5^n M}^2=\epsilon_M^2 5^{-2n}$, the relative error here would be even smaller, i.e. of order $5^{-n} \epsilon_M$. \\ We leave the detailed numerical analysis of rotational invariance for future publications.\\ \subsection{Example: Numerical investigation of rotational invariance of the free scalar field theory} In this subsection, we test our criterion numerically using the fixed point theory in $D=2$, which we know to be rotationally invariant in the continuum.\\ First, we verify that the family of covariances $c^\ast_M$ is invariant under rotations by $\pm \pi/2$. It suffices to consider the rotation $\Pi(\pi/2)$ and apply it to (\ref{finalIntegralof2DRen}), which is symmetric under the exchange $t_1\leftrightarrow t_2$ (since we could have interchanged the roles of the two directions in the contour integral). We have \begin{align} \langle r(\pi/2)& f_M, c^*_M r(\pi/2)f'_M\rangle_M=\\ &=\epsilon^4_M\sum_{m,m'\in\mathbb{Z}^2_M}f_M(m)f'_M(m')\sum_{n\in\mathbb{Z}^2_M}e^{ik_M n\cdot (m-m')}\hat{c}^*_M((\Pi(\pi/2)^{-1}\cdot n)_1,(\Pi(\pi/2)^{-1}\cdot n)_2)\nonumber \end{align} \noindent Thus, for $\pi/2$ equation (\ref{ConditionForRotationalInvariantTheories}) becomes the condition: \begin{equation}\label{RIconditionPI/2} \hat{c}^*_M(n_1,n_2)=\hat{c}^*_M(-n_2,n_1)\; ,\hspace{10pt} \forall n=(n_1,n_2)\in \mathbb{Z}_M^2 \end{equation} which is fulfilled in the case of the free scalar field (\ref{finalIntegralof2DRen}) due to its symmetry and $\cos(-t_i)=\cos(t_i)$.\\ We will now investigate numerically whether the fixed point covariance satisfies the criterion for rotational invariance (\ref{ConditionForRotationalInvariantTheories}). As a representative example, we consider the aforementioned irrational angle $\theta$ with $\cos(\theta)=3/5$. Moreover, we will set the IR cut-off to $R=1$ for simplicity and, without loss of generality, the number of spatial dimensions to $D=2$. As the mass $p$ and the parameter $k_0$ only appear in (\ref{CovarianceResult2D}) in the combination $q^2:=(p^2+k_0^2)\epsilon^2_M$, it suffices to fix the latter to account for both; here we choose $p^2+k_0^2=1$.\\ First, we present the covariance $c_M^*$ itself for $M=40$ in figure \ref{FigureIV_Cov40}, where the point $m=(0,0)$ lies in the centre. Due to the periodic boundaries the values at the corners agree with each other. One can see that the correlations drop rapidly with the separation $m\in\mathbb{Z}^2_M=\{0,1,...,M-1\}^2$. The same is true for the Fourier transform $\hat{c}^*_M$. Moreover, the covariance at finite resolution is not invariant under arbitrary rotations but, heuristically, it appears that the asymmetry could be smoothed out with increasing resolution $M$.\\ \begin{figure}[h] \begin{center} \includegraphics[width=0.4\textwidth]{FigIV_C40slim} \includegraphics[width=0.5\textwidth]{FigIV_C403Dslim} \caption[Fixed point covariance of resolution $M=40$]{\footnotesize \label{FigureIV_Cov40} The covariance $c_M^*(m)$ of the fixed point theory in $D=2$ spatial dimensions. We have chosen the IR cut-off $R=1$, mass $p=1$, and $k_0=0$. The torus $[0,1)^2$ is approximated by a lattice with $M=40$ points in each direction, where the point $m=(0,0)$ lies in the centre of the plotted grid.
As one can see, the correlations beyond the nearest neighbours are strongly suppressed.} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.44\textwidth]{Error103D_fitted} \includegraphics[width=0.44\textwidth]{Error403D_fitted} \includegraphics[width=0.44\textwidth]{Error803D_fitted} \includegraphics[width=0.44\textwidth]{Error1603D_fitted} \caption[Comparison of rotational invariance at different resolutions]{\footnotesize \label{FigureIV_Error} For lattices of size $M=10,\; 40,\; 80,\;160$ the relative deviation $\Delta\hat{c}_M^*(l)=[|\hat{c}_M^*-\hat{c}^*_{M\theta}|/\hat{c}^*_M](k_0=0,l)$ is plotted for $l\in \mathbb{Z}_{M}^2$ with mass $p=1$ and IR cut-off $R=1$. High values of $\Delta\hat{c}_M^*$ indicate non-invariance of the covariance under rotations at the given resolution. (The {\it grey} data point lies outside the plotted range of $[0,1)$, with numerical value $\approx 40$.) We find that the relative deviation is non-vanishing everywhere at finite resolution; however, it decreases with $M^{-1}$ because $\hat{c}^*_M-\hat{c}^*_{M\theta} \sim O(\epsilon_M^5) \sim \hat{c}^*_M \epsilon_M$. This is the behaviour expected of a fixed point theory which is rotationally invariant in the continuum. For $M=160$ the computed covariance already features rotational invariance to a high precision.} \end{center} \end{figure} Next, we consider the quality of the approximant to the rotated covariance as $M$ varies. This approximant, $c^*_{M\theta}$, is the Fourier transform of (\ref{finalRotInv}) and should agree with the unrotated covariance $c^*_M$ up to an error of $\mathcal{O}(\epsilon_M^5)$, provided that the fixed point covariance restores rotational invariance in the continuum. As the same must then be true for their Fourier transforms, we consider $\hat{c}^*_{M\theta}$ and $\hat{c}^*_M$ on lattices of different size $M$ and study whether their deviation decays appropriately. Both covariances are of order $\mathcal{O}(\epsilon^4_M)$, hence their {\it relative deviation} should decay with $\mathcal{O}(\epsilon_M)$: \begin{align} \Delta \hat{c}^*_M(l):=\frac{|\hat{c}^*_M(0,l)-\hat{c}^*_{M\theta}(0,l)|}{\hat{c}^*_M(0,l)}\sim \mathcal{O}(\epsilon_M) \end{align} That it indeed decays this fast is shown in figure \ref{FigureIV_Error} for lattices of size $M=10,40,80$ and $160$. Although at low resolution the covariance features a high discrepancy with the approximant $\hat{c}_{M\theta}^*$, the relative deviation $\Delta\hat{c}^*_M$ becomes smaller as the resolution of the spatial manifold increases. Only in a neighbourhood of the centre of the grid, i.e. of the point around which we rotate, does the approximation fail. However, this neighbourhood shrinks linearly with increasing resolution $M$. For $M=160$ the computed covariance already features rotational invariance to a high precision.\\ To study the decay behaviour of $\Delta\hat{c}^*_{M}(l)$ further, one can consider the characteristic function $\chi_B$ of a region $B\subset[0,1)^2$ and compare, for different resolutions $M$, the mean $\overline{\Delta\hat{c}_M}[\chi_B]$ of the relative deviation in this region, i.e. the mean of $\Delta\hat{c}^*_{M}(l)$ over all $l\in\mathbb{Z}^2_M$ such that $\text{supp}(\chi_l)\subset B$. For example, for $l_0=(0,2)\in\mathbb{Z}_5^2$ let the support of $\chi_{l_0}$ be the region of interest, i.e. on resolution $M_0=5$ we have $\overline{\Delta\hat{c}_{M_0}}[\chi_{l_0}]=\Delta\hat{c}^*_{5}(l_0)$. At any other $M\in5\mathbb{N}$, we consider the refinement in $\mathbb{Z}_M^2$, i.e.
the points $\frac{M}{5} l_0+\delta$ for $\delta\in [0,M/5-1]^2$. We find that the mean \begin{align} \overline{\Delta\hat{c}_M}[\chi_{l_0}]:=\frac{1}{(M/5)^2}\sum_{\delta\in [0,M/5-1]^2}\Delta\hat{c}^*_{M}(\frac{M}{5} l_0+\delta) \end{align} decays with $M^{-1}$; see figure \ref{FigureIV_Decay} for two examples. This confirms that (\ref{ConditionForRotationalInvariantTheories}), i.e. the condition for rotational invariance, is satisfied up to an error of order $\epsilon_M=M^{-1}$ to a very high precision for the considered examples, and thus indicates that rotational invariance will be recovered in the continuum.\\ \begin{figure}[H] \begin{center} \includegraphics[width=0.44\textwidth]{Decay_of_02} \includegraphics[width=0.44\textwidth]{Decay_of_All} \caption[Decay behaviour of the relative deviation from rotational invariance for increasing resolution]{\footnotesize \label{FigureIV_Decay} The decay behaviour of the mean $\overline{\Delta\hat{c}_M}[\chi]$ of the relative deviation over a region with characteristic function $\chi$ is presented. For two distinct regions, we compute it at different resolutions $M$. On the left, $\chi_{(0,2)}$ is the characteristic function of the block that can be associated with the point $l_0=(0,2)$ on resolution $M=5$. The values of $\overline{\Delta\hat{c}_M}[\chi]$ are shown in blue and we approximate the decay behaviour by the function $f(M)=3.5\;M^{-1}$ (in orange). Similarly, $\chi=1$ is associated with the whole torus $[0,1)^2$ and is presented on the right. Here, the decay is best approximated by $f(M)=140\;M^{-1}$ (in orange). These two regions were chosen arbitrarily; we expect the decay to be of this form for every region. This confirms that the decay behaviour is sufficiently fast to account for a rotationally invariant fixed point theory.} \end{center} \end{figure} \section{Conclusion} \label{Conclusion} While the framework of covariant renormalisation \cite{Riv91,Comb04,Bal89a,Bal89b,Dim12a,Dim12b,Dim12c} has a long history of success, its implementation and application in the Hamiltonian setting have remained largely unexplored. With this series of papers, we have taken first steps in this direction: in \cite{LLT1} we motivated a Hamiltonian renormalisation scheme, and in \cite{LLT2,LLT3} we tested this scheme successfully for the free massive Klein Gordon field in $D+1=2$ spacetime dimensions. In this paper we extended this test successfully to arbitrary dimensions. This extension also made it possible to test the robustness of the fixed point under changes of the coarse graining map which defines the renormalisation flow, and to test the rotational invariance of the fixed point theory by using the finite resolution projections of the corresponding Hilbert space measure. The latter test is useful in situations in which the computation of the continuum measure is too complicated, but in which one has at least numerical access to its approximate finite resolution cylindrical projections, obtained by iterating the flow equations sufficiently often, provided that convergence or at least the existence of a fixed point can be established. The next step in our programme will be to extend the framework to gauge theories, as the most interesting models of modern physics, e.g. QCD, are phrased in this language, and to test whether the direct Hamiltonian renormalisation remains valid there. Afterwards, one could apply the framework in the context of gravity.
This includes the ``Asymptotic Safety'' programme \cite{RS02,Per09,RS12} as well as other approaches to quantum gravity, such as Loop Quantum Gravity (LQG) \cite{Rov04,AL04,Thi07}. As the latter was originally formulated in the canonical setting, it is hoped that the strategy outlined in this series could fix the quantization ambiguities arising when defining the constraint or Hamiltonian operators, see e.g. \cite{LT16,Thi96_1,Thi96_2,ALM15}. In the context of LQG different regularisation schemes (e.g. based on different ordering prescriptions) lead to different operators \cite{DL17a,DL17b}, and the goal must be to find, ideally unique, constraint operators of general relativity which no longer display those ambiguities. A strict criterion is to obtain a theory free of anomalies for the symmetries of the theory. \section*{Acknowledgements} Part of this work was financially supported by a grant from the Friedrich-Alexander University to the Emerging Fields Project ``Quantum Geometry'' under its Emerging Fields Initiative. TL thanks the Heinrich-B\"oll Foundation for financial support. KL thanks the German National Merit Foundation for financial support.\\ \\ \begin{appendix} \section{Detailed computations of section \ref{Hamiltonian renormalisation}} \label{sa} In this appendix we fill the gaps which were left out in section \ref{Hamiltonian renormalisation} of the main text. The first paragraph demonstrates that the multi-dimensional renormalisation transformation (\ref{Covarianceflow}) decouples into its spatial directions. The second paragraph describes how the integral in (\ref{finalIntegralof2DRen}) can be performed.\\ As discussed in the main section, the initial covariance can be factorised into two factors which very closely resemble the 1+1 dimensional case \begin{equation} \label{startingdecoupledcovariance_app} \hat{c}^{(0)}_M(l)=-\oint_{\gamma} dz \frac{\epsilon_M^4}{8\pi i}\; \frac{1}{q_1^2(z)/2+1-\cos(t_1)}\;\;\frac{1}{q_2^2(z)/2+1-\cos(t_2)} \end{equation} Let us now focus on the precise action of the map (\ref{Covarianceflow}) by writing it in terms of its kernel $c^{(n)}_M(m_1',m_2')=c^{(n)}_M(m'_1-m'_2)$: \begin{equation} c^{(n+1)}_M (m'_1-m'_2)= 2^{-2D}\sum_{\delta',\delta''\in\{0,1\}^D}c^{(n)}_{2M}(2m'_1+\delta'-2m'_2-\delta'') \end{equation} and correspondingly for the Fourier transform for $D=2$ \begin{align} \label{flowdefinition} &\hat{c}^{(n+1)}_M(l)=2^{-4}\sum_{\delta,\delta',\delta''\in\{0,1\}^2}\hat{c}^{(n)}_{2M}(l+\delta M) e^{ik_{2M}(l+\delta M)\cdot(\delta'-\delta'')}=\nonumber\\ &=\frac{1}{2^4} \sum_{\delta_1,\delta_2\in\{0,1\}}\hat{c}^{(n)}_{2M}(l_1+\delta_1 M, l_2 +\delta_2 M)\left( e^{ik_{2M}(l_1+l_2+(\delta_1+\delta_2)M)}+\right.\nonumber\\ &\hspace{20pt} \left.+e^{-ik_{2M}(l_1+l_2+(\delta_1+\delta_2)M)}+e^{ik_{2M}(l_1-l_2+(\delta_1-\delta_2)M)}+e^{-ik_{2M}(l_1-l_2+(\delta_1-\delta_2)M)}\right.\nonumber\\ &\hspace{20pt} \left.+2e^{ik_{2M}(l_2+\delta_2M)}+2e^{-ik_{2M}(l_2+\delta_2M)}+2e^{ik_{2M}(l_1+\delta_1M)}+2e^{-ik_{2M}(l_1+\delta_1M)}+4\right) \end{align} where we wrote explicitly all 16 terms stemming from the different combinations of $(\delta'-\delta'')$. \begin{align} \label{decoupling} &=\frac{1}{2^4} \sum_{\delta_1,\delta_2=0,1}\hat{c}^{(n)}_{2M}(l_1+\delta_1M,l_2+\delta_2M)\left( 4+ 4\cos(k_{2M}(l_2+\delta_2M))+4\cos(k_{2M}(l_1+\delta_1M))\right.+\nonumber\\ &\hspace{20pt} \left.
+2\cos(k_{2M}(l_1+\delta_1M)+k_{2M}(l_2+\delta_2M))+2\cos(k_{2M}(l_1+\delta_1M)-k_{2M}(l_2+\delta_2M))\right)\nonumber\\ &=\frac{1}{2^2}\sum_{\delta_1,\delta_2=0,1}\hat{c}^{(n)}_{2M}(l_1+\delta_1M,l_2+\delta_2M)\times\nonumber\\ &\hspace{20pt} \left( 1+ \cos(k_{2M}(l_2+\delta_2M))+\cos(k_{2M}(l_1+\delta_1M))+\cos(k_{2M}(l_1+\delta_1M))\cos(k_{2M}(l_2+\delta_2M)) \right)\nonumber\\ &=\frac{1}{4}\sum_{\delta_1,\delta_2=0,1}\left(1+\cos(k_{2M}(l_1+\delta_1 M)\right)\left(1+\cos(k_{2M}(l_2+\delta_2 M)\right)\hat{c}^{(n)}_{2M}(l_1+\delta_1M,l_2+\delta_2M) \end{align} where in the second step we have used that $2\cos(x)\cos(y)=\cos(x+y)+\cos(x-y)$. One realises that both directions completely decouple in the renormalisation transformation. Since the initial covariance factorises under the contour integral over $\gamma$, this factorisation is preserved under the flow, which implies that the flow of the covariance in each direction can be performed separately. \\ Following the arguments in section \ref{Hamiltonian renormalisation} one can determine the fixed point covariance stemming from (\ref{startingdecoupledcovariance_app}) for each direction separately and finds, with $t_j=k_M l_j$, \begin{align}\label{finalIntegralof2DRen} \hat{c}^*_M(k_0,l)&=-\left(\frac{\epsilon_M^4}{2\pi i}\right)\; \oint_\gamma\; dz\; \prod_{j=1,2} \frac{1}{q_j^3} \frac{q_j \text{ch}(q_j) - \text{sh}(q_j) +(\text{sh}(q_j)-q_j)\cos(t_j)}{\text{ch}(q_j)-\cos(t_j)} \end{align} Note that it is not necessary to pick a square root of the complex parameter $q^2_{1,2}(z)=\epsilon_M^2(\frac{k_0^2+p^2}{2}\mp z)$ since, despite appearances, the integrand only depends on the square (in other words, one may pick the branch arbitrarily; the integrand does not depend on it). It follows that the integrand is a single valued function of $z$ which is holomorphic everywhere except for simple poles, which we now determine and which allow us to compute the contour integral over $\gamma$ using the residue theorem. There are no poles at $q_{1,2}^2=0$ since the functions $[q \text{ch}(q)-\text{sh}(q)]/q^3,\; [\text{sh}(q)-q]/q^3$ are regular at $q=0$. Hence the only poles come from the zeroes of the function $\text{ch}(q)-\cos(t)$. Using $\text{ch}(iz)=\cos(z)$ and the periodicity of the cosine function we find $iq=\pm[t+2\pi N]$ with $N\in \mathbb{Z}$ or $q^2=-(t+2\pi N)^2$. In terms of $q_{j},\;j=1,2$ this means that \begin{align} (k_0^2+p^2)/2\mp z=-\frac{(t_j+2\pi N)^2}{\epsilon_M^2}\;\;\Leftrightarrow\;\; z=z_{N}=\pm [(k_0^2+p^2)/2+\frac{(t_j+2\pi N)^2}{\epsilon_M^2}] \end{align} It follows that the second factor involving $q_2$ has no poles in the domain bounded by $\gamma$ because they all lie on the negative real axis, while those coming from the factor involving $q_1$ all lie on the positive real axis. We will denote the latter by $z_N$. The poles coming from the zeroes of $\text{ch}(q_1)-\cos(t_1)$ are simple, as one can check by expanding the hyperbolic cosine at $z_N$ in terms of $z-z_N$; in other words \begin{align} \lim_{z\to z_N}\frac{z-z_N}{\text{ch}(q_1(z))-\cos(t_1)}= \lim_{z\to z_N}\frac{1}{\text{sh}(q_1(z)) q_1'(z)}= \lim_{z\to z_N}\frac{2 q_1(z)}{\text{sh}(q_1(z)) [q_1^2(z)]'}= -\frac{2q_1(z_N)}{\epsilon_M^2{\rm sh}(q_1(z_N))} \end{align} which is again independent of the choice of square root. We have used de l'H{\^o}pital's rule in the second step. Note that $q_1(z_N)^2=-(t_1+2\pi N)^2$ and $q_2(z_N)^2=q^2+(t_1+2\pi N)^2:=q_N^2$ where $q^2:=\epsilon_M^2(k_0^2+p^2)$.
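The regularity of the two building blocks of the integrand at $q=0$ is immediate from their Taylor series; for convenience, the following short Python sketch using sympy (a check we add here, not part of the original derivation) displays the leading terms:
\begin{verbatim}
from sympy import symbols, cosh, sinh, series

q = symbols('q')
# both factors of the integrand are regular at q = 0:
print(series((q*cosh(q) - sinh(q))/q**3, q, 0, 4))  # 1/3 + q**2/30 + O(q**4)
print(series((sinh(q) - q)/q**3, q, 0, 4))          # 1/6 + q**2/120 + O(q**4)
\end{verbatim}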
Performing the integral using the residue theorem and using that $\text{ch}(q_1(z_N))=\cos(t_1)$ and $\text{sh}(q_1(z_N))=\pm i\sin(t_1)$ (the sign cancels against the corresponding choice for $q_1(z_N)$) we end up with \begin{align}\label{CovarianceResult2D} \hat{c}_M^*(k_0,l)=& \epsilon^2_M\frac{[q_N {\rm ch}(q_N)-{\rm sh}(q_N)]+[{\rm sh}(q_N)-q_N]\cos(t_2)}{q_N^3[{\rm ch}(q_N)-\cos(t_2)]}+\nonumber\\ &-2\epsilon^2_M\underset{N\in\mathbb{Z}}{\sum}\frac{\cos(t_1)-1}{(2\pi N +t_1)^2} \frac{1}{q_N^3} \;\frac{q_N \text{ch}(q_N)-\text{sh}(q_N)+(\text{sh}(q_N)-q_N)\cos(t_2)} {\text{ch}(q_N)-\cos(t_2)} \end{align} \section{Derivation of the fixed point for RG-flow of prime $u=3$} \label{sb} The following explicit calculations are performed for the prime $u=3$, as this illustrates what needs to be done also in the general case. The initial data of the RG-flow is given for $D=1$ with $t=k_Ml,\; q^2=\epsilon_M^2(k_0^2+p^2)$ by \begin{equation}\label{3start_app} \hat{c}^{(0)}_M(k_0,l)=\frac{\epsilon^2_M}{2(1-\cos(t))+q^2} \end{equation} In order to compute this flow, it is useful to recall the trigonometric addition theorems for the cosine function \begin{equation} \cos(x)+\cos(y)=2\cos\left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right),\hspace{10pt} \cos(x)\cos(y)=\frac{1}{2}\left(\cos(x-y)+\cos(x+y)\right) \end{equation} to note the following explicit values \begin{equation} \cos(\frac{1}{6}2\pi)=\frac{1}{2}, \hspace{20pt} \cos(\frac{1}{3}2\pi)=-\frac{1}{2}, \hspace{20pt}\cos(\frac{2}{3}2\pi)=-\frac{1}{2} \end{equation} and to employ the {\it Chebyshev recursion}, which states that for $N\in\mathbb{N}$: \begin{equation} \cos(Nx)=2\cos(x)\cos((N-1)x)-\cos((N-2)x) \end{equation} which follows from an easy expansion into exponentials and finds application in what follows for the case $N=3$ and $x \rightarrow x/3$, expressing $\cos(x)=2 \cos(x/3) \cos(2x/3)-\cos(x/3)$. Equipped with these tools, we start to compute the RG-flow of $I_{M\rightarrow 3M}$ by finding a common denominator of the sum in (\ref{generaldecoupling}), assuming that $\hat{c}^{(n)}$ can be written in the form \begin{equation} \label{3start2} \hat{c}^{(n)}_M(k_0,l)=\frac{\epsilon^2_M}{q^3}\frac{b_n(q)+c_n(q)\cos(t)}{a_n(q)-\cos(t)} \end{equation} with suitably chosen functions $a_n,b_n,c_n$ of $q$, as we already know to be true for (\ref{3start}). Then, the common denominator after one renormalisation step is generated by the linear combination of the three fractions in (\ref{generaldecoupling}) and is given by: \begin{align*} &\left[a_n(q)-\cos\left(\frac{t}{3}\right)\right]\left[a_n(q)-\cos\left(\frac{t}{3}+\frac{1}{3}2\pi\right)\right]\left[ a_n(q)-\cos\left(\frac{t}{3}+\frac{2}{3}2\pi\right)\right]=\\ &=a_n(q)^3-a_n(q)^2\left[\cos\left(\frac{t}{3}\right)+\cos\left(\frac{t}{3}+\frac{1}{3}2\pi\right)+\cos\left(\frac{t}{3}+\frac{2}{3}2\pi\right)\right]_A\\ &\hspace{15pt}+a_n(q)\left[\cos\left(\frac{t}{3}\right)\cos\left(\frac{t}{3}+\frac{1}{3}2\pi\right)+\cos\left(\frac{t}{3}\right)\cos\left(\frac{t}{3}+\frac{2}{3}2\pi\right)+\cos\left(\frac{t}{3}+\frac{1}{3}2\pi\right)\cos\left(\frac{t}{3}+\frac{2}{3}2\pi\right)\right]_B\\ &\hspace{15pt}-\left[\cos\left(\frac{t}{3}\right)\cos\left(\frac{t}{3}+\frac{1}{3}2\pi\right)\cos\left(\frac{t}{3}+\frac{2}{3}2\pi\right)\right]_C \end{align*} Each of the three prefactors in front of the powers of $a_n(q)$ can now be evaluated precisely with the methods stated above.
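These evaluations, as well as the fixed point identities for the numerator derived below, can also be double-checked symbolically. A minimal Python sketch using sympy (all variable names are ours; each printed expression is expected to reduce to the value indicated in the comment) reads:
\begin{verbatim}
from sympy import (symbols, cos, cosh, sinh, exp, pi,
                   expand_trig, simplify, Rational)

x, q = symbols('x q', real=True)
u = [cos(x + j*2*pi/3) for j in range(3)]
# prefactors of the common denominator (with x = t/3): 0, -3/4, 0
print(simplify(expand_trig(sum(u))))
print(simplify(expand_trig(u[0]*u[1] + u[0]*u[2] + u[1]*u[2])))
print(simplify(expand_trig(u[0]*u[1]*u[2] - cos(3*x)/4)))
# denominator fixed point: ch obeys the triple-angle law: 0
print(simplify((4*cosh(q)**3 - 3*cosh(q) - cosh(3*q)).rewrite(exp)))
# numerator fixed point with a = ch, b = q ch - sh, c = sh - q: 0, 0
a, b, c = cosh(q), q*cosh(q) - sinh(q), sinh(q) - q
const = (Rational(-3, 4) + 6*a + 9*a**2)*b + 6*a*(1 + a)*c
cosco = Rational(3, 4)*(4*(1 + a)*b + (3 + 4*a + 4*a**2)*c)
print(simplify((const - Rational(3, 4)*(3*q*cosh(3*q) - sinh(3*q))).rewrite(exp)))
print(simplify((cosco - Rational(3, 4)*(sinh(3*q) - 3*q)).rewrite(exp)))
\end{verbatim}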
We obtain: \begin{align} \left[\cos\left(\frac{t}{3}\right)\right.&\left.+\cos\left(\frac{t}{3}+\frac{1}{3}2\pi\right)+\cos\left(\frac{t}{3}+\frac{2}{3}2\pi\right)\right]_A=0\\ \left[\cos\left(\frac{t}{3}\right)\right.&\left.\cos\left(\frac{t}{3}+\frac{1}{3}2\pi\right)+\cos\left(\frac{t}{3}\right)\cos\left(\frac{t}{3}+\frac{2}{3}2\pi\right)+\cos\left(\frac{t}{3}+\frac{1}{3}2\pi\right)\cos\left(\frac{t}{3}+\frac{2}{3}2\pi\right)\right]_B=-\frac{3}{4}\\ \left[\cos\left(\frac{t}{3}\right)\right.&\left. \cos\left(\frac{t}{3}+\frac{1}{3}2\pi\right)\cos\left(\frac{t}{3}+\frac{2}{3}2\pi\right)\right]_C=\frac{1}{4}\cos(t) \end{align} So we get for the denominator \begin{equation}\label{fixpointeq3} \frac{1}{4}\left(\left[4a_n(q)^3-3a_n(q)\right]-\cos(t)\right) \end{equation} which is again of the form that (\ref{3start2}) had. Moreover, we note that the $t$-independent part of (\ref{fixpointeq3}) is exactly the right hand side of the triple-angle formula for cos and cosh: \begin{equation} a(3q)=4a(q)^3-3a(q) \end{equation} hence with the choice $a(q)=\text{ch}(q)$ we have found a fixed point for the flow induced on the $a_n(q)$.\\ For the numerator, we proceed in the same manner. After some pages of calculation, one finds it to be given by \begin{align} &\left(3+4\cos\left(\tfrac{t}{3}\right)+2\cos\left(\tfrac{2t}{3}\right)\right)\left(b_n+c_n \cos\left(\tfrac{t}{3}\right)\right)\left(a_n-\cos\left(\tfrac{t}{3}+\tfrac{1}{3}2\pi\right)\right)\left(a_n-\cos\left(\tfrac{t}{3}+\tfrac{2}{3}2\pi\right)\right)+\nonumber\\ &+\left(3+4\cos\left(\tfrac{t}{3}+\tfrac{1}{3}2\pi\right)+2\cos\left(\tfrac{2t}{3}+\tfrac{2}{3}2\pi\right)\right)\left(b_n+c_n \cos\left(\tfrac{t}{3}+\tfrac{1}{3}2\pi\right)\right)\left(a_n-\cos\left(\tfrac{t}{3}\right)\right)\left(a_n-\cos\left(\tfrac{t}{3}+\tfrac{2}{3}2\pi\right)\right)+\nonumber\\ &+\left(3+4\cos\left(\tfrac{t}{3}+\tfrac{2}{3}2\pi\right)+2\cos\left(\tfrac{2t}{3}+\tfrac{4}{3}2\pi\right)\right)\left(b_n+c_n \cos\left(\tfrac{t}{3}+\tfrac{2}{3}2\pi\right)\right)\left(a_n-\cos\left(\tfrac{t}{3}\right)\right)\left(a_n-\cos\left(\tfrac{t}{3}+\tfrac{1}{3}2\pi\right)\right)\nonumber\\ \label{3nominatorflow} &=\ldots= \left(-\frac{3}{4}+6a_n+9a_n^2\right)b_n+6a_n(1+a_n)c_n+\frac{3}{4}\left(4(1+a_n)b_n+(3+4a_n+4a_n^2)c_n\right)\cos(t) \end{align} Thus, the numerator is again cast into an expression of the form $b_{n+1}+c_{n+1}\cos(t)$. We can already make use of the fact that at the fixed point one has $a=\text{ch}(q)$. Making an educated guess and trying whether \begin{equation}\label{educatedguess} b=q \text{ch}(q)-\text{sh}(q),\hspace{30pt} c=\text{sh}(q)-q \end{equation} are solutions of the fixed point equation determined by (\ref{3nominatorflow}), one uses the triple-angle formula for the sine function \begin{equation} \sin(3x)=2\cos(x)\sin(2x)-\sin(x)=2\cos(2x)\sin(x)+\sin(x) \end{equation} and obtains indeed, by plugging (\ref{educatedguess}) into (\ref{3nominatorflow}), \begin{align} 3(1+\text{ch}(q))(q \text{ch}(q)-\text{sh}(q))+\frac{3}{4}(3+4\text{ch}(q)+4\text{ch}(q)^2)(\text{sh}(q)-q)=\frac{3}{4}\left[-3q+\text{sh}(3q)\right] \end{align} and \begin{align} \left(-\frac{3}{4}+6\text{ch}(q)+9\text{ch}(q)^2\right)(q \text{ch}(q)-\text{sh}(q))+6\text{ch}(q)(1+\text{ch}(q))(\text{sh}(q)-q)=\frac{3}{4}\left[3q \text{ch}(3q)-\text{sh}(3q)\right] \end{align} which are indeed the fixed point functions evaluated at $3q$, up to the common prefactor of $3/4$. The factor $1/4$ gets cancelled by the prefactor of $1/4$ in (\ref{fixpointeq3}). The factor $3$ cancels against a factor $3^{-1}$ which is obtained as follows: the map itself was defined with a prefactor $3^{-2}$, the factor $q^{-3}$ gives $3^3$ and the $\epsilon_M^2$ gives $3^{-2}$, which altogether gives a factor $3^{-1}$. Hence we have indeed found exactly the same fixed point under the $M\rightarrow3M$ coarse graining map as we found for the $M\to 2M$ coarse graining map! \end{appendix}
\section{Introduction} \label{sec:1} Most young stars form in stellar systems such as stellar clusters and associations \citep{LL03,PCA03}. It is expected that less than 10\% of young stellar groups will remain gravitationally bound clusters \citep{LL03}, so a significant portion of field stars may originate from the dissolution of such stellar systems \citep{MS78,BPS07}. The formation of stellar systems is interconnected in space and time, which leads to the formation of larger structures \citep{EEPZ00,G18}. Therefore, they are superb laboratories for understanding star formation taking place on various spatial scales. Stellar associations are, in general, composed of a single or multiple stellar clusters (or groups) and a distributed young stellar population \citep{B64,KAG08}. In addition, their internal structure is tightly associated with the kinematics of the constituent stars \citep{LSB18,LNGR19,LHY20,LNH21}. The morphological features and stellar kinematics thus provide hints for understanding the formation process of stellar associations. To draw such inferences, the high precision astrometry from the Gaia mission \citep{gaia16} is key, as it allows us to select genuine members spread over a wide region and to investigate their kinematics in detail, especially when combined with radial velocity (RV) data. Monoceros OB1 (Mon OB1) and R1 (Mon R1) are nearby (within 1 kpc) stellar associations \citep[etc]{vdB66,OMT96, SBL97,BCM09}. Mon OB1 hosts the active star-forming region (SFR) \object{NGC 2264} with numerous substructures \citep{SSB09,KFG14}. A number of extensive multi-wavelength imaging surveys for this SFR have been conducted due to its proximity and low interstellar extinction \citep[etc]{SBL97,PSB00,SBC04,FMS06,SBC08, SSB09,VPS18}. Mon OB1 is about 700 -- 800 pc away from the Sun \citep{W56,BF71,FvG91,SBL97,KIO14,DMM21}. Its age is about 3 Myr with a large spread of 3 -- 5 Myr \citep{SB10,VPS18}. Extensive RV surveys have been performed for young stars in \object{NGC 2264}, the core of Mon OB1 \citep{FHS06,THF15}. The velocities of the young stars follow the velocity field of the remaining molecular gas. A group of stars with RVs larger than the systemic velocity ($\sim$ 5 km s$^{-1}$) was found toward the O-type binary \object{S Monocerotis} \citep{S09}. \citet{THF15} claimed that this group might have formed on the far side of the remaining gas compressed by the strong wind from the massive star. They also reported the presence of another group of stars with systematically small RVs. Recently, the internal kinematics of this SFR has been investigated with the Gaia proper motion (PM) data \citep{KHS19,BKG20}. As a result, a pattern of expansion was detected in the stellar group at the north of NGC 2264. Mon R1 is about $2^{\circ}$ west of Mon OB1. It is composed of small SFRs surrounded by reflection nebulae such as IC 446, IC 447, NGC 2245, and NGC 2247. Infrared observations revealed several young stellar groups around Herbig Ae/Be stars in Mon R1 \citep{WL07,GMM09}. Some Herbig-Haro objects were also discovered in those groups \citep{MMD21}. The distance to Mon R1 was previously determined in the range of 660 pc to 715 pc \citep{vdB66,MDM20,DMM21,MMD21}. This association has thus been considered to be at the same distance as Mon OB1. The molecular CO line observations of Mon R1 were carried out by \citet{KDT79}. They found a semi ring-like structure, which is also evident in the Planck continuum image at 550 \micron \ \citep{BDP20}.
The overall velocity field is distinguishable from that of Mon OB1 (see also \citealt{OMT96}). In the direction of Mon R1, \citet{KDT79} identified two velocity components at $-1$ to 1 km s$^{-1}$ and 3 to 5 km s$^{-1}$. The former component is physically associated with IC 446 and IC 447, while the latter belongs to NGC 2245 and NGC 2247. Recently, \citet{BDP20} considered the former component as a filament and discussed the star formation process along the filament in the context of the end-dominated collapse model \citep{B83, PJH11}. However, the kinematic properties of young stars in Mon R1 have not yet been studied in detail. In this study, we aim to investigate not only the star formation process in Mon OB1 and Mon R1 but also the physical association between them by probing the kinematics of young stars. For this purpose, the recent Gaia Early Data Release 3 (EDR3; \citealt{gedr3}) is used together with RV data. We describe the data and target selection in Section~\ref{sec:2}. The scheme of member selection is addressed in Section~\ref{sec:3}. In Section~\ref{sec:4}, we investigate the substructures in the two associations and the kinematic properties of young stars in the substructures. The star formation history is also inferred from a color-magnitude diagram (CMD). Star and cluster formation is discussed in Section~\ref{sec:5}. Our results are summarized in Section~\ref{sec:6}. \section{Data} \label{sec:2} \subsection{Selection of member candidates}\label{ssec:21} A $6^{\circ} \times 6^{\circ}$ region centered at R.A. = 06$^{\mathrm{h}}$ 36$^{\mathrm{m}}$ $23\fs52$, decl. = $+10^{\circ}$ 04$^{\prime}$ $55\farcs7$ (J2000) was surveyed. In order to minimize the inclusion of field interlopers in the field of view, we first isolated member candidates based on the intrinsic properties of young stars, as done in \citet{LNH21}. Early-type (O- and B-type) stars are probable member candidates because of their short lifetimes, particularly those of O-type stars. We compiled lists of early-type stars taken from the data bases of MK classifications \citep{WOE00,R03, MP03,S09,MSM13} and removed duplicate entries. A total of 609 early-type stars were selected as member candidates. Low-mass young stellar objects (YSOs) with warm circumstellar disks appear bright at infrared wavelengths \citep{L87}. Hydrogen recombination lines are observed in emission due to mass accretion \citep{MHC98,MHC03,FKvB13}. X-rays are emitted from their hot coronal regions \citep[etc]{FDM03,CMP12,RN16}. Based on these properties, the membership of young stars in NGC 2264 has been thoroughly evaluated by a series of multi-wavelength studies \citep{SBL97,PSB00,SBC04,SBC08,SSB09}. We built a list of 992 member candidates from these studies. \begin{figure}[t] \epsscale{1.0} \plotone{fig01.pdf} \caption{Color-color diagrams of YSO candidates classified with the AllWISE data \citep{C14}. Red triangles, green squares, and orange diamonds represent Class I objects, Class II objects, and YSOs with transitional disks, respectively. The color criteria for the YSO classification of \citet{KL14} are shown by dashed lines. }\label{fig1} \end{figure} The Wide-field Infrared Survey Explorer (WISE) has mapped the whole sky in mid-infrared passbands \citep{WEM10}. We used the AllWISE catalogue \citep{C14} to identify YSO candidates spread over the entire survey region.
First, a number of spurious sources were rejected by adopting the criterion $ nm/m \leq 0.2$ in each passband \citep{KL14}, where $nm$ and $m$ are the number of profile-fit flux measurements for a given source with signal-to-noise ratios (SNRs) larger than 3 and the total number of profile-fit flux measurements for the same source in a given passband, respectively. There could still be many spurious sources in W3 and W4 bands. We suppressed those sources following \citet{KL14}; \begin{enumerate} \item W3 band \\ $SNR \geq 5$ \\ $0.45 < \chi^2 < 1.15$ or $\chi^2 < (SNR - 8)/8 $ \item W4 band\\ $\chi^2 < (2 \times SNR - 20) / 10$ \end{enumerate} \noindent where $\chi^2$ is the reduced chi-square of profile-fit in a given passband. Additional contaminants such as active galactic nuclei and star-forming galaxies were excluded by using the criteria of \citet{KL14}. In the end, we classified 75 Class I, 348 Class II, and three YSOs with transitional disks according to the scheme of \citet{KL14} as shown in Figure~\ref{fig1}. These YSO candidates were considered as member candidates. We cross-matched the three lists that we built to create a master catalogue of member candidates. A total of 29 out of 609 early-type stars from the data bases of MK classification were found in the member candidate list of NGC 2264 \citep{SBL97,PSB00,SBC04, SBC08,SSB09}. There are 124 candidates in common between the YSO list from the AllWISE data and that of member candidates in NGC 2264. Therefore, a total of 302 sources are additional YSO candidates throughout our survey region, of which two are early-type stars, i.e. they are in the catalogue of 609 early-type stars. The total number of member candidates is 1872. We searched for the counterparts of all member candidates in the catalogue of the Gaia EDR3 \citep{gedr3} within a radius of $3^{\prime\prime}$. All the OB star candidates were found in the Gaia data. The Gaia data further contains 969 out of 992 member candidates belonging to \object{NGC 2264} and 336 out of 426 YSO candidates identified with AllWISE data. A total of 1622 candidates brighter than 18 mag in $G_{\mathrm{RP}}$ were used for member selection and analysis. \subsection{Radial velocity measurements}\label{ssec:22} \citet{THF15} published the RV data of 695 stars located in \object{NGC 2264}. It is noted that the published RVs are actually line of sight velocities at the local standard of rest, not heliocentric RVs. We took the data and cross-matched them with the Gaia catalogue, leading to a total of 684 having Gaia counterparts. We obtained the high-resolution spectra of 14 YSO candidates in Mon R1 using the Immersion GRating Infrared Spectrometer (IGRINS, $R \sim 45,000$ -- \citealt{YJB10,PJY14}) attached to the 8.2-m Gemini South telescope on 2020 February 4, 5, 7, 9, 11, and 12. An ABBA nod technique was applied to our observations to subtract the sky background. Some A0V stars, such as \object{HIP 30387}, \object{HIP 36796}, \object{HIP 33297}, and \object{HIP 28686}, were observed as telluric standard stars. A large number of OH emission lines were observed from blank sky regions for wavelength calibration. Data reduction was performed by using the IGRINS pipeline package version 2 \citep{LGK17}\footnote{https://github.com/igrins/plp}. This pipeline sequentially executes aperture extraction, the subtraction of background emission, bad pixel correction, and wavelength calibration. The synthetic spectrum of Vega \citep{CK04} was fit to those of the observed A0V stars. 
The telluric spectra were obtained from the spectra of the A0V stars divided by the best-fit synthetic spectrum of Vega. The target spectra were corrected by the telluric spectra. Synthetic stellar spectra in the wide temperature range of 3500 to 9000 K for solar abundance were generated using {\tt SPECTRUM v2.76} \citep{GC94}\footnote{http://www.appstate.edu/~grayro/spectrum/spectrum.html} based on a grid of the ODFNEW model atmospheres \citep{CK04}. The wavelengths of the synthetic spectra in air were converted to vacuum wavelengths using the relation of \citet{C96}. We derived the cross-correlation functions between the synthetic spectra and the observed spectra of the 14 YSO candidates with the {\tt xcsao} task in the \textsc{RVSAO} package \citep{KM98}. The velocities at the strongest correlation peaks were adopted as RVs. The task {\tt xcsao} yields the RV uncertainty based on the $r$ value defined below \citep{TD79}: \begin{center} \begin{equation} r = {h \over \sqrt{2}\sigma_a} \end{equation} \end{center} \noindent where $h$ and $\sigma_a$ represent the amplitude of the cross-correlation function and the root mean square value of its antisymmetric component, respectively. The RV uncertainty is then obtained from the relation $3w/[8(1+r)]$, where $w$ is the full width at half maximum of the peak of the cross-correlation function \citep{KM98}. \begin{figure}[t] \epsscale{1.2} \plotone{fig02.pdf} \caption{Parallax (left) and PM (right) distributions of member candidates. The Gaia parallaxes were corrected for zero-point offsets using the recipe of \citet{LBB21}. We plot only stars with parallaxes larger than three times their associated errors. Dashed lines in the left panel confine the probable members to distances between 500 pc and 1000 pc. The right panel shows the PM distribution of member candidates between 500 pc and 1000 pc. Red, blue, and black dots represent the genuine members of Mon OB1, Mon R1, and the halo, respectively, while gray dots denote probable nonmembers (see the main text for details). }\label{fig2} \end{figure} Some spectral orders showed poor cross-correlation functions because of the small number of lines. The $r$ values tend to be lower than 6.0 for these orders. Therefore, we adopted the weighted mean and standard deviation of the RVs measured from spectral orders with $r$ larger than 6.0 as the final RV and RV error of a given YSO candidate, respectively. The inverse squared uncertainty was used as the weight. The RVs of the YSO candidates were then converted to velocities in the local standard of rest frame using the \textsc{IRAF}/{\tt rvcorrect} task. \subsection{Supplementary data}\label{ssec23} The Infrared Astronomical Satellite (IRAS) mission surveyed more than 95\% of the sky at 12, 25, 60, and 100 \micron \ \citep{NHvD84}. Later, the Improved Reprocessing of the IRAS Survey (IRIS) provided better quality dust images over the sky \citep{ML05}. In addition, the AKARI satellite mapped almost the entire sky in four far-infrared bands centered at 65, 90, 140, and 160\micron \ \citep{DTO15}. These infrared maps help to investigate the distribution of interstellar material around Mon OB1 and Mon R1. We took the IRIS image at 100$\micron$ and the AKARI Far-Infrared Surveyor false-color image of our survey region \citep{ML05,DTO15} processed by the {\tt Aladin} interactive sky atlas \citep{BFB00,BF14}. \section{Member selection} \label{sec:3} We may assume that young stars in an SFR have formed in the same molecular cloud.
Therefore, they are almost at the same distance and share similar kinematic properties. Based on this conventional idea, we assessed the membership of the member candidates using the Gaia parallax and PM data \citep{gedr3}. Systematic zero-point offsets that depend on magnitude, color, and position were found in the Gaia parallaxes \citep{LBB21}. We corrected the parallaxes for the zero-point offsets according to the recipe of \citet[\url{https://gitlab.com/icc-ub/public/gaiadr3_zeropoint}]{LBB21}. Stars with parallaxes smaller than three times the associated errors were excluded from member selection. In addition, stars with negative parallaxes, close companions (duplication flag = 1), or poor astrometric solutions (RUWE $> 1.4$) were used neither in the analysis nor in member selection. The left panel in Figure~\ref{fig2} displays the parallax distributions of member candidates. We considered only member candidates between 500 pc and 1000 pc, given the previously determined distances of the two associations ($\sim 700$ pc; \citealt{vdB66,SBL97,MDM20,MMD21}). We plot the PMs of candidates fulfilling this criterion in the right panel of Figure~\ref{fig2}. \begin{figure}[t] \epsscale{1.0} \plotone{fig03.pdf} \caption{Spatial distribution of member candidates. The boundaries of Mon OB1 and Mon R1 are outlined by a red ellipse and a blue circle, respectively. A region encompassing all members is shown by a black ellipse (dashed line). The size of dots is proportional to the brightness of stars. The positions of stars are relative to the reference coordinate R.A. = $06^{\mathrm{h}} \ 36^{\mathrm{m}} \ 23\fs52$, decl. = $+10^{\circ} \ 04^{\prime} \ 55\farcs7$ (J2000). The other colors of dots are the same as in the right panel of Figure~\ref{fig2}.}\label{fig3} \end{figure} There are a few groups that have different PMs, on average. These groups may be related to Mon OB1 and Mon R1. We assigned the member candidates to each association based on the spatial distribution in Figure~\ref{fig3}. There are the well-populated association Mon OB1 to the east and the loose association Mon R1 to the west. We considered the boundary of Mon OB1 as a red ellipse with a semi-major axis of 40$^{\prime}$ and an eccentricity of 0.7 centered at R.A. = $06^{\mathrm{h}} \ 40^{\mathrm{m}} \ 54\fs56$, decl. = $+09^{\circ} \ 39^{\prime} \ 43\farcs7$ (J2000). The boundary of Mon R1 was assumed to be a blue circle with a radius of 35$^{\prime}$ centered at R.A. = $06^{\mathrm{h}} \ 31^{\mathrm{m}} \ 39\fs11$, decl. = $+10^{\circ} \ 05^{\prime} \ 55\farcs7$ (J2000). A total of 653 and 48 stars were found in Mon OB1 and Mon R1, respectively, from these criteria. An iterative process of removing PM outliers was performed to select a reliable set of members. We computed the weighted mean values and standard deviations of the PMs from the stars in each association. The inverse of the squared error was adopted as the weight value. Then, stars with PMs within the standard deviations ($1\sigma$) from the mean PMs were used to determine better mean PMs and their standard deviations. These new statistical values were used as the initial values to select members. In each region, stars whose PMs are within four times the standard deviations ($4\sigma$) from the newly determined mean PMs were selected as members. This looser criterion was used to avoid eliminating possible member candidates. We redetermined the weighted mean PMs and standard deviations using the members.
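One pass of this clipping can be sketched as follows (in Python; the arrays \texttt{pm\_ra}, \texttt{pm\_dec}, their errors, and the boolean mask \texttt{members} are hypothetical names, and a plain standard deviation is used for simplicity):
\begin{verbatim}
import numpy as np

def clip_once(pm_ra, pm_dec, err_ra, err_dec, members, nsig=4.0):
    # weighted mean PMs from the current members (weights = 1/error^2)
    m_ra  = np.average(pm_ra[members],  weights=err_ra[members]**-2)
    m_dec = np.average(pm_dec[members], weights=err_dec[members]**-2)
    s_ra, s_dec = pm_ra[members].std(), pm_dec[members].std()
    # keep stars within nsig standard deviations of the mean PM
    return (np.abs(pm_ra - m_ra) < nsig * s_ra) & \
           (np.abs(pm_dec - m_dec) < nsig * s_dec)
\end{verbatim}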
This procedure was repeated until the statistical values converged to constant values. The numbers of the final members in Mon OB1 and Mon R1 are 631 and 46, respectively, and they are shown by red and blue dots in Figures~\ref{fig2} and \ref{fig3}. \begin{figure}[t] \epsscale{1.0} \plotone{fig04.pdf} \caption{Distance distributions. The panels from top to bottom display the distance distributions of members in the entire survey region, Mon OB1, Mon R1, and the halo, respectively. Distances were obtained from the inverse of the Gaia parallaxes \citep{gedr3} after correction for the zero-point offsets \citep{LBB21}. We used stars with parallaxes larger than five times their associated errors. The black curves show the best-fit Gaussian distributions. }\label{fig4} \end{figure} A total of 53 YSO candidates were also found between the two associations. All of them are sparsely distributed within an elliptical region (dashed curve in Figure~\ref{fig3}). We refer to this low-stellar density region as the halo and assume that there is no member outside the halo. The YSO members were selected in the same manner as above. As a result, we selected 51 YSO members in the halo. The halo YSO members are shown by black dots in Figures~\ref{fig2} and \ref{fig3}. There are a number of early-type stars that do not belong to the two associations. These stars, on average, have larger PMs and PM dispersion than those of genuine members within the associations (most gray dots in the right panel of Figure~\ref{fig2}). In addition, they are uniformly distributed over the surveyed region, except for the western part obscured by dark clouds. These facts imply that most of them may not be genuine members. Therefore, we did not additionally select early-type members in the halo. A total of 728 stars (631 in Mon OB1, 46 in Mon R1, and 51 in the halo) were finally selected as genuine association members. The list of members is presented in Table~\ref{tab1}. We plot the distance distributions of these stars in Figure~\ref{fig4}. The distance distribution of each group was fit by a Gaussian distribution. The distances to Mon OB1, Mon R1, and the halo were determined to be $704\pm38$ (s.d.) pc, $660\pm35$ (s.d.) pc, and $700\pm47$ (s.d.) pc, respectively, from the best-fit Gaussian distributions. These results are consistent with those of previous studies \citep{BF71,FvG91,SBL97,vdB66,MDM20, DMM21,MMD21} within errors. The members of Mon R1 are systematically closer than those of Mon OB1, although there is only a 1$\sigma$ level difference. \begin{deluxetable}{lccccccccccccccccc} \rotate \setlength\tabcolsep{1.5pt} \tabletypesize{\tiny} \tablewidth{0pt} \tablecaption{List of members \label{tab1}} \tablehead{\colhead{Sq.} & \colhead{R.A. (2000)} & \colhead{decl.
(2000)} & \colhead{$\pi$} & \colhead{$\epsilon(\pi)$} & \colhead{$\mu_{\alpha}\cos\delta$} & \colhead{$\epsilon(\mu_{\alpha}\cos\delta)$} & \colhead{$\mu_{\delta}$} & \colhead{$\epsilon(\mu_{\delta})$} & \colhead{$G$} & \colhead{$G_{BP}$} & \colhead{$G_{RP}$} & \colhead{$G_{BP}-G_{RP}$} & \colhead{RV} & \colhead{$\epsilon$(RV)} & \colhead{Member type} & \colhead{Region} & \colhead{Group} \\ & \colhead{(h:m:s)} & \colhead{($\degr:\arcmin:\arcsec$)} & \colhead{(mas)} & \colhead{(mas)} & \colhead{(mas yr$^{-1}$)} &\colhead{(mas yr$^{-1}$)} & \colhead{(mas yr$^{-1}$)} &\colhead{(mas yr$^{-1}$)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(km s$^{-1}$)} & \colhead{(km s$^{-1}$)} & & & } \startdata 1 & 06:28:42.43 & +09:32:09.3 & 1.5218 & 0.0304 & -2.623 & 0.034 & -5.604 & 0.030 & 15.4340 & 16.3833 & 14.3820 & 2.0014 & \nodata &\nodata& Y & Halo & \\ 2 & 06:30:47.07 & +09:51:54.4 & 1.6008 & 0.0665 & -2.078 & 0.067 & -5.383 & 0.057 & 16.7430 & 18.0777 & 15.5531 & 2.5247 & 1.53 & 0.86 & Y & Mon R1 & IC 447 \\ 3 & 06:30:47.06 & +10:03:46.4 & 1.6570 & 0.0436 & -2.283 & 0.044 & -5.091 & 0.035 & 9.4031 & 9.2126 & 9.2965 & -0.0840 & \nodata &\nodata& E & Mon R1 & IC 447 \\ 4 & 06:30:46.56 & +10:46:28.6 & 1.5125 & 0.1434 & -1.289 & 0.178 & -4.872 & 0.132 & 18.0089 & 19.9074 & 16.6634 & 3.2440 & \nodata &\nodata& Y & Halo & \\ 5 & 06:30:48.21 & +09:46:03.7 & 1.7047 & 0.0551 & -2.227 & 0.054 & -5.440 & 0.047 & 16.3639 & 17.4478 & 15.2790 & 2.1687 & \nodata &\nodata& Y & Mon R1 & IC 447 \\ 6 & 06:30:48.19 & +10:32:50.1 & 1.5169 & 0.0774 & -1.495 & 0.088 & -4.847 & 0.075 & 17.0090 & 18.4251 & 15.7492 & 2.6759 & \nodata &\nodata& Y & Mon R1 & IC 446 \\ 7 & 06:30:57.92 & +09:48:34.6 & 1.5884 & 0.0251 & -2.102 & 0.024 & -5.125 & 0.021 & 9.7876 & 9.8408 & 9.6407 & 0.2001 & \nodata &\nodata& E & Mon R1 & IC 447 \\ 8 & 06:31:02.58 & +09:59:20.6 & 1.5790 & 0.1324 & -2.268 & 0.129 & -4.552 & 0.106 & 17.8270 & 18.8366 & 16.5279 & 2.3086 & \nodata &\nodata& Y & Mon R1 & IC 447 \\ 9 & 06:31:03.63 & +10:01:13.6 & 1.5373 & 0.0168 & -2.363 & 0.016 & -4.934 & 0.013 & 11.9248 & 12.3280 & 11.3442 & 0.9838 & \nodata &\nodata& Y & Mon R1 & IC 447 \\ 10 & 06:31:06.12 & +10:27:34.1 & 1.5698 & 0.0160 & -2.647 & 0.019 & -4.337 & 0.016 & 10.8878 & 11.2365 & 10.3401 & 0.8963 & \nodata &\nodata& E & Mon R1 & IC 446 \\ \enddata \tablecomments{Column (1) : Sequential number. Columns (2) and (3) : The equatorial coordinates of members. Columns (4) and (5) : Absolute parallax and its standard error. Columns (6) and (7) : PM in the direction of right ascension and its standard error. Columns (8) and (9): PM in the direction of declination and its standard error. Columns (10) -- (12) : Magnitudes in $G$, $G_{BP}$, and $G_{RP}$ bands. Column (13) : $G_{BP} - G_{RP}$ color index. Columns (14) and (15) : Radial velocity at the local standard of rest and its error. Column (16) : Member type. `E' represents O- or B-type stars obtained from the data bases of MK classification \citep{WOE00,R03,MP03,S09,MSM13}. 'Y' denotes young stellar objects or pre-main sequence members. Column (17) : Region names. Column (18) : Group names. The parallax and PM were taken from the Gaia Early Data Release 3 \citep{gedr3}. We corrected for the zero-point offsets for the Gaia parallaxes according to the recipe of \citet{LBB21}. The radial velocities were obtained from \citet{THF15} and our observation. 
The full table is available electronically.} \end{deluxetable} \section{Results}\label{sec:4} \subsection{Substructures}\label{ssec:41} A number of previous studies have probed substructures in many SFRs from the spatial distributions of stars. However, in the absence of kinematic information, it is unclear whether such substructures are real physical systems. In this study, we search for substructures using the Gaia PM and RV data, as well as the spatial distribution of members. \subsubsection{Mon OB1}\label{sssec:411} A deep photometric study of \citet{SBC08} in optical passbands showed that \object{NGC 2264} is composed of two active SFRs and a halo. The northern group is located around the massive O-type binary \object{S Monocerotis}, while the southern group is in the vicinity of the \object{Cone Nebula}. A halo surrounds these two SFRs. Later, Spitzer observations revealed that the southern group is composed of two subgroups of YSOs (Spokes and Cone(C), \citealt{TLY06,SSB09}). Additional smaller-scale substructures were identified in that southern group \citep{KFG14}. We searched for substructures in Mon OB1 from the correlations between the PMs and positions of members. As a result, three spatially and kinematically distinct stellar groups were identified from the correlation between $\mu_{\alpha}\cos\delta$ (PM along R.A.) and declination in the upper left panel of Figure~\ref{fig5}. We checked the spatial distributions of members within the three ellipses in the figure and found that the members are concentrated in three specific regions, i.e. they do not show a random distribution across this association. We divided the members into three groups by the three solid lines shown in Figure~\ref{fig5}. These lines were adjusted through visual inspection of the spatial distributions. Note that this division could be somewhat arbitrary because it is difficult to define the exact boundaries of each group. \begin{figure}[t] \epsscale{1.0} \plotone{fig05.pdf} \caption{Kinematic substructures in Mon OB1. The upper left panel shows the correlation between PMs along R.A. and positions along declination. It is evident that there are three groups of stars that are spatially and kinematically distinct. Three ellipses were used to ascertain the spatial distributions of members within them. The solid lines represent the arbitrary boundaries of each group. The other panels display the spatial distributions of members belonging to each group. Red, blue, and orange dots represent the members of the S Mon group, the Cone group, and the THF15 group, respectively. The size of dots is proportional to the brightness of individual stars. }\label{fig5} \end{figure} \begin{figure}[t] \epsscale{1.0} \plotone{fig06.pdf} \caption{Tangential velocities and RVs of stars with respect to R.A. and declination in Mon OB1. The vertical lines represent the errors of velocities. The colors of symbols are the same as those in Figure~\ref{fig5}. }\label{fig6} \end{figure} The northern group (hereafter the S Mon group, red symbols in Figure~\ref{fig5}) has a median PM ($\mu_{\alpha}\cos\delta$, $\mu_{\delta}$) of ($-1.649$ mas yr$^{-1}$, $-3.655$ mas yr$^{-1}$). We could not find any additional clustering within this group. On the other hand, the southern group is separated into two groups. The group shown by blue symbols (lower-left panel) has a median PM of ($-2.455$ mas yr$^{-1}$, $-3.721$ mas yr$^{-1}$); we refer to this southern group as the Cone group according to the nomenclature of \citet{SBC08}.
The other group (orange symbols in the lower-right panel) has a median PM of ($-1.676$ mas yr$^{-1}$, $-3.685$ mas yr$^{-1}$). This group seems to correspond to the blueshifted population reported by \citet{THF15}, and therefore we refer to it as THF15 after the names of the authors. There is no smaller-scale substructure within these two groups either, given the absence of additional enhancements in the stellar surface density. The central coordinates of these three groups were obtained from the median positions of the relevant members. We summarize the basic properties of these groups in Table~\ref{tab2}. Figure~\ref{fig6} displays the tangential velocities ($V_{\mathrm{R.A.}}$ and $V_{\mathrm{decl.}}$) and RVs of members. The tangential velocities were obtained from the PMs multiplied by the distance of 704 pc. Stars in the S Mon and THF15 groups have similar $V_{\mathrm{R.A.}}$, which differs from that of the Cone group, while stars in all three groups have almost the same $V_{\mathrm{decl.}}$ (see also Table~\ref{tab2}). The kinematic substructures can also be confirmed from the correlations between the RVs and positions of members. In particular, there is a gradient of RVs with respect to declination ($\sim$ 0.4 km s$^{-1}$ pc$^{-1}$). The median RVs of the S Mon, Cone, and THF15 groups are about 5.7, 4.3, and 2.3 km s$^{-1}$, respectively. The members of THF15 have RVs systematically smaller than the others, as reported by \citet{THF15}. \begin{deluxetable*}{lcccccccc} \tabletypesize{\tiny} \tablewidth{0pt} \tablecaption{Basic properties of the identified stellar groups. \label{tab2}} \tablehead{\colhead{Group} & \colhead{R.A. (2000)} & \colhead{decl. (2000)} & \colhead{$\mu_{\alpha}\cos\delta$} & \colhead{$\mu_{\delta}$} & \colhead{$V_{\mathrm{R.A.}}$} & \colhead{$V_{\mathrm{decl.}}$} & \colhead{RV} & \colhead{$N_{\mathrm{star}}$} \\ & \colhead{(h:m:s)} & \colhead{($\degr:\arcmin:\arcsec$)} & \colhead{(mas yr$^{-1}$)} & \colhead{(mas yr$^{-1}$)} & \colhead{(km s$^{-1}$)} &\colhead{(km s$^{-1}$)} & \colhead{(km s$^{-1}$)} & } \startdata S Mon & 06:40:52.99 & +09:52:25.0 & -1.649 & -3.655 & -5.5 & -12.2 & 5.7 & 279\\ Cone & 06:41:02.78 & +09:34:09.9 & -2.455 & -3.721 & -8.2 & -12.4 & 4.3 & 240\\ THF15 & 06:41:03.02 & +09:30:23.0 & -1.676 & -3.685 &-5.6 &-12.3& 2.3 & 112 \\ IC 447 &06:31:14.03 & +09:53:41.8 & -2.143 & -5.137 &-6.7 &-16.1& 1.6 & 18\\ N2245/47 & 06:32:34.72 & +10:15:41.2 &-1.830 & -4.433 &-5.7 & -13.9& 4.5 & 20\\ IC 446 & 06:31:19.60 & +10:24:42.1 & -2.322 & -4.514 & -7.3 & -14.1 & \nodata & 8\\ \enddata \tablecomments{All the shown measurements correspond to median values.} \end{deluxetable*} \begin{figure}[t] \epsscale{1.0} \plotone{fig07.pdf} \caption{Spatial distribution of members in Mon R1. Pink, green, and cyan dots represent the members of IC 447, N2245/47, and IC 446, respectively. The size of dots is proportional to the brightness of individual stars. Solid lines are used to separate stellar groups.}\label{fig7} \end{figure} \begin{figure}[t] \epsscale{1.0} \plotone{fig08.pdf} \caption{Tangential velocities and RVs of stars with respect to R.A. and declination in Mon R1. The vertical lines represent the errors of velocities. The colors of symbols are the same as those in Figure~\ref{fig7}. }\label{fig8} \end{figure} \subsubsection{Mon R1}\label{sssec:412} Figure~\ref{fig7} displays the spatial distribution of stars in Mon R1. This stellar association is composed of three small stellar groups associated with reflection nebulae.
The young open cluster Collinder 95 occupies the southern part of Mon R1. The reflection nebula IC 447 may have been formed by the early-type members of this cluster. We refer to this group as IC 447. A partially embedded group of stars in the vicinity of the reflection nebula IC 446 is found to the northwest of this association (hereafter IC 446). The other embedded group lies between the reflection nebulae \object{NGC 2245} and \object{NGC 2247} in the eastern region (hereafter N2245/47). The boundaries of these groups were determined from the spatial distribution of members. Members below the declination of $\Delta\mathrm{decl.} = -0\farcm5$ were assigned to IC 447. The members of IC 446 were confined to the northwestern stars ($\Delta\mathrm{R.A.} < -65\farcm0$ and $\Delta\mathrm{decl.} > 10\farcm0$). The remaining stars were considered members of N2245/47. \begin{figure*}[t] \epsscale{1.0} \plotone{fig09.pdf} \caption{Relative PMs of stars in Mon OB1. The upper left panel shows the spatial distribution of its three stellar groups. Red, blue, and orange dots represent the S Mon group, the Cone group, and the THF15 group, respectively. The systemic motions of these groups relative to the median PM of Mon OB1 are shown by arrows with the same colors in the box, where the PM vectors were shifted along R.A. by 35$^{\prime}$ to avoid confusion. We plot the PM vectors (solid lines) of individual members relative to the systemic motion of a given group in the other panels. The star symbol denotes the O-type binary S Monocerotis. The size of dots is scaled by the brightness of individual stars. }\label{fig9} \end{figure*} We plot $V_{\mathrm{R.A.}}$, $V_{\mathrm{decl.}}$, and RVs of the Mon R1 members in Figure~\ref{fig8}. The three stellar groups show somewhat complicated kinematic substructures. The members of N2245/47 and IC 446 show a large scatter in tangential velocities compared to the members of IC 447. Also, the $V_{\mathrm{decl.}}$ of stars varies continuously with declination from IC 447 to IC 446 ($\sim$ 0.5 km s$^{-1}$ pc$^{-1}$). The YSOs with RV measurements are found only in the two groups IC 447 and N2245/47, and their total number is somewhat too limited to probe the global variation. Nevertheless, the RVs of the members show a tendency similar to the tangential velocities. The RVs of IC 447 and N2245/47 follow the velocity fields of the remaining molecular gas. The former and latter correspond to the gas components in the RV ranges of $-4$ to 2 km s$^{-1}$ and 2 to 10 km s$^{-1}$ \citep{BDP20}, respectively. This suggests that the complicated kinematics of stars seen in the tangential velocity distributions might have been inherited from that of their natal cloud. We determined the central coordinates of the three groups from the median positions of the associated members. Their basic properties are summarized in Table~\ref{tab2}. \begin{figure}[t] \epsscale{1.0} \plotone{fig10.pdf} \caption{Vectorial angle distributions of stars in the stellar groups of Mon OB1. The upper panels display the vectorial angles with respect to the projected radial distances from the central positions of host groups. The distributions of vectorial angles are shown by histograms in the lower panels. The histograms were obtained with a bin size of $45^{\circ}$. }\label{fig10} \end{figure} \subsection{Kinematics}\label{ssec:42} \subsubsection{Internal kinematics in Mon OB1}\label{sssec:421} We investigated the internal motions of stellar groups in Mon OB1.
The upper left panel of Figure~\ref{fig9} shows the spatial distribution of stars along with the PM vectors (arrows) of the three stellar groups relative to the systemic motion of Mon OB1, where the relative PM vectors are the median PM vectors of the individual groups minus the median PM of the entire system. The three groups do not have any significant motion along declination. The S Mon and THF15 groups are moving eastward with similar velocities, while the Cone group is moving westward at a larger velocity. The relative PM vectors of individual members within a given group were obtained after subtracting their median PM. Figure~\ref{fig9} displays the relative PM vectors of members in the S Mon (upper right), Cone (lower left), and THF15 (lower right) groups. Many members in the S Mon and Cone groups tend to show outward motions from the central position of each group, while there is no clear pattern of expansion in the THF15 group. In order to quantitatively probe the internal motions of stars, we measured the vectorial angles of members as used in our previous studies \citep{LNGR19,LHY20,LNH21}. The vectorial angle ($\Phi$) is the angle between the position vector of a given star measured from the group center and its relative PM. A zero value means that a star is radially escaping from its host group. We present the $\Phi$ distribution of individual members belonging to each group in Figure~\ref{fig10}. The $\Phi$ values of members in the S Mon group are clustered around 0$^{\circ}$, which is indicative of expansion. The Cone group also shows a pattern of expansion, as many members of this group have $\Phi$ values around 0$^{\circ}$. On the other hand, the members of THF15 do not show clear outward motions, given that there is no strong peak around $\Phi = 0^{\circ}$. Recent studies also detected expansion for the S Mon group \citep{KHS19,BKG20}, but not for the southern groups. The southern stellar groups are seen as a single cluster in optical passbands \citep{SBC08}, while two well-defined subgroups, the so-called Cone(C) and Spokes \citep{TLY06,SSB09}, are found around the embedded YSOs NGC 2264 IRS 1 and IRS 2 in infrared passbands. The fact that a large fraction of YSOs in the Cone(C) and Spokes groups were not found in the optical CMD indicates that these two groups are deeply embedded \citep{SB10}. The presence of molecular clouds toward the southern region can be directly confirmed from \citet{THF15}. Therefore, we speculate that there are four stellar groups (Cone, THF15, Cone(C), and Spokes) along the line of sight at the south of Mon OB1. If this is true, \citet{KHS19} might have probed small parts of the Cone and THF15 groups, not actually the Cone(C) and Spokes groups, which would explain why those investigators could not find any pattern of expansion. \begin{figure}[t] \epsscale{1.0} \plotone{fig11.pdf} \caption{Relative PMs of stars in Mon R1. The left panel shows the spatial distribution of three stellar groups. The systemic motions of these groups relative to the median PM of Mon R1 are shown by arrows. We plot the PM vectors (solid lines) of stars relative to the systemic motion of a given group in the right panel. The size of dots is scaled by the brightness of individual stars. The colors of the symbols are the same as those in Figure~\ref{fig7}.}\label{fig11} \end{figure} \subsubsection{Internal kinematics in Mon R1}\label{sssec:422} We investigated the kinematics of stellar groups in Mon R1 in the same way.
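For concreteness, the vectorial-angle computation shared by both analyses can be sketched in a few lines (Python; the input arrays are hypothetical names and should contain tangent-plane offsets from the group centre and PMs relative to the group median):
\begin{verbatim}
import numpy as np

def vectorial_angles(dx, dy, pmx, pmy):
    # dx, dy: offsets from the group centre (e.g. dRA*cos(dec), ddec)
    # pmx, pmy: PMs relative to the median motion of the host group
    dot = dx * pmx + dy * pmy      # radial component of the motion
    cross = dx * pmy - dy * pmx    # tangential component of the motion
    # Phi = 0 deg corresponds to purely radial outward motion
    return np.degrees(np.arctan2(cross, dot))
\end{verbatim}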
The left panel of Figure~\ref{fig11} displays the motions of the three groups relative to the systemic motion of Mon R1. IC 447 is moving south, while N2245/47 and IC 446 are moving northeast and northwest, respectively. These groups are thus receding from each other. The PM vectors of individual members relative to their host groups are shown in the right panel. Although it is difficult to define the center of each group due to the small number of stars, members tend to show outward motions. Figure~\ref{fig12} shows the $\Phi$ distributions. Stars in the three groups have $\Phi$ values around 0$^{\circ}$, indicating that the members of these groups are moving radially outward. \begin{figure}[t] \epsscale{1.0} \plotone{fig12.pdf} \caption{Vectorial angle distributions of stars in the stellar groups of Mon R1. The upper panels display the vectorial angles with respect to the projected radial distances from the centers of IC 447, N2245/47, and IC 446 (from left to right), respectively. The distributions of vectorial angles are shown by histograms in the lower panels. The size of the bins is $45^{\circ}$. }\label{fig12} \end{figure} \subsection{Rotation}\label{ssec:43} We searched for the signature of rotation of the three groups in Mon OB1 using the RVs of members, as done in previous studies \citep{LKL09,MDFY13,LRN19,LNH21}. A projected rotational axis passing through the central position of a given group was set at a position angle of $0^{\circ}$ (from north to south) in the projected sky plane. We computed the difference between the mean RVs of stars in the two areas separated by the axis. The same computation was repeated for position angles between 0$^{\circ}$ and 360$^{\circ}$ with an interval of 20$^{\circ}$ in a counterclockwise direction (north to east). If a given cluster is rotating, the mean RV differences appear as a sinusoidal curve. \begin{figure}[t] \epsscale{1.0} \plotone{fig13.pdf} \caption{Signature of rotation of the S Mon group (upper) and Cone group (lower). $\Delta$RV denotes the difference of mean RVs between the two regions separated by a projected rotational axis at a given position angle ($\Theta$). The blue solid lines represent the best-fit sinusoidal curves. Half of the amplitude corresponds to the projected rotational velocity. }\label{fig13} \end{figure} We found the signature of rotation for the members within a projected radius of $7^{\prime}$ for the S Mon group and within $5^{\prime}$ for the Cone group. Figure~\ref{fig13} exhibits the variations of the mean RV differences with respect to position angle. These observed variations were fitted with the sinusoidal curve: \begin{equation} \Delta\langle\mathrm{RV}\rangle = 2 V_{\mathrm{rot}}\sin i \sin(\Theta + \Theta_0) \end{equation} \noindent where $V_{\mathrm{rot}}$, $i$, and $\Theta_0$ represent the rotational velocity, the inclination angle of the rotational axis with respect to the line of sight, and the phase, respectively. The amplitude of the best-fit sinusoidal curve corresponds to twice the projected rotational velocity ($V_{\mathrm{rot}} \sin i$). The S Mon and Cone groups are rotating at $0.23 \pm 0.02$ km s$^{-1}$ and $0.87 \pm 0.03$ km s$^{-1}$, respectively, if we assume an inclination angle of $90^{\circ}$. The position angle of the projected rotational axis can be estimated from $270^{\circ} - \Theta_0$. The projected rotational axes of the S Mon and Cone groups lie almost north-south, but these groups are rotating in opposite directions.
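A minimal sketch of this rotation test (hypothetical array names \texttt{xe}, \texttt{yn}, \texttt{rv} for the positional offsets and radial velocities of members; not the actual analysis code) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def delta_rv(xe, yn, rv, theta_deg):
    """Mean-RV difference between the two regions separated by
    a projected axis through the group center at position angle
    theta (deg, measured from north through east)."""
    t = np.radians(theta_deg)
    side = np.sin(t) * yn - np.cos(t) * xe > 0.0
    return rv[side].mean() - rv[~side].mean()

def model(theta_deg, amp, theta0):
    # Delta<RV> = 2 V_rot sin(i) sin(theta + theta0)
    return amp * np.sin(np.radians(theta_deg + theta0))

angles = np.arange(0.0, 360.0, 20.0)
drv = np.array([delta_rv(xe, yn, rv, t) for t in angles])
(amp, theta0), _ = curve_fit(model, angles, drv, p0=[1.0, 0.0])
v_rot_sini = amp / 2.0      # projected rotational velocity
\end{verbatim}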
On the other hand, we could not find any signature of rotation for the THF15 group. The same method could not be applied to the stars in Mon R1 because of the small number of stars. \begin{figure}[t] \epsscale{1.0} \plotone{fig14.pdf} \caption{Color-magnitude diagram of stars in the survey region. The meaning of the symbols is shown in the upper-right corner of the panel. The $G_{\mathrm{RP}}$ magnitudes of individual stars were corrected by the distance moduli of the associated SFRs. The black and gray dashed curves show the isochrones reddened by $A_V$ of 0.22 and 0.62 mag, respectively, for ages of 2, 3, 5, and 10 Myr \citep{D16,CDC16}. The black arrow represents the reddening vector corresponding to a total extinction of $A_V = 1$ mag.}\label{fig14} \end{figure} \subsection{Ages of stellar groups}\label{ssec:44} Figure~\ref{fig14} displays the CMD of stars in Mon OB1, Mon R1, and the halo. The $G_{\mathrm{RP}}$ magnitudes of members were corrected by the distance moduli of their host groups (see Section~\ref{sec:3}). We superimposed four isochrones (black dashed curves) for 2, 3, 5, and 10 Myr \citep{D16,CDC16} on the CMD. A mean total extinction of $A_V = 0.22$ mag \citep{SBL97} was applied to the isochrones. The faint part of the CMD is significantly affected by factors such as large photometric errors, high internal extinction of disk-bearing stars, and variability \citep[e.g.][]{CSB14,SCB14,LSB15,SCR16}. In addition, a systematic difference between the isochrones from the adopted evolutionary models and the observed CMD is found for late-type pre-main sequence stars. For these reasons, we only considered the bright part of the CMD, as seen in Figure~\ref{fig14}. The magnitude (or luminosity) of the main sequence turn-on is sensitive to the age of a given coeval stellar system. The ages of members in Mon OB1 roughly range from 2 to 10 Myr. An age spread of 4$-$5 Myr has been inferred from the lithium abundances of pre-main sequence stars \citep{LSK16} and from the CMD analysis of \citet{VPS18}. Hence, star formation in Mon OB1 might have been sustained over several Myr. The S Mon group (2$-$3 Myr) seems to be younger than the Cone and THF15 groups ($\gtrsim$ 5 Myr) given that its main sequence turn-on appears at a higher luminosity. The members of Mon R1 have ages similar to those of the S Mon group. On the other hand, the CMD of the halo stars overlaps with that of the older populations of Mon OB1 (Cone and THF15). The photometric errors of stars around the main sequence turn-on ($G_{RP} < 12$ mag) are much smaller than 0.01 mag and are therefore not a major source of uncertainty in the age estimation. The error on the distance ($\pm 40$ pc) corresponds to $\pm$0.1 mag in distance modulus; its contribution to the age estimation is also negligible. The other factor is differential reddening across the survey region. \begin{figure*}[t] \epsscale{1.0} \plotone{fig15.pdf} \caption{Distributions of stars and gas in our survey region. The left and right panels display the IRIS image at 100$\micron$ \citep{ML05} and the AKARI false-color image (blue: 65 $\micron$, green: 90 $\micron$, and red: 140 $\micron$ -- \citealt{DTO15}), respectively. The contours in the left panel trace the arc-like structure of the remaining cloud. In the right panel, the PM vectors of stars relative to the systemic motion of Mon OB1 are shown by solid lines. The black dots represent the halo stars. The white ellipse represents the position of the SFR G202.3+2.5.
Some infrared and submillimeter sources \citep{BNH88,DJK08} in the northern knot are marked by purple dots. The colors of the other symbols are the same as in Figures~\ref{fig5} and \ref{fig7}. }\label{fig15} \end{figure*} The differential reddening over Mon OB1 is known to be small, $\langle E(B-V) \rangle = 0.07 \pm 0.03$ mag \citep{SBL97}. \citet{SBC08} found higher reddening values of $E(B-V) \sim 0.2$ for low-mass pre-main sequence stars in the region (see also \citealt{RMS02}). We plotted the isochrones reddened by this higher value (gray dashed curves) in Figure~\ref{fig14}. However, the colors and magnitudes of main sequence stars are closer to those of the isochrones reddened by the mean value of $E(B-V) = 0.07$ ($A_V = 0.22$ mag). This implies that the differential reddening is not high enough to affect the relative ages among the stellar groups in Mon OB1. Most of the members of Mon R1 seen in Figure~\ref{fig14} are the bright members of IC 447. The reddening of these stars due to the intracluster medium may be small. Indeed, they are found in the cavity of the dusty cloud (see figure 1 of \citealt{BDP20}). The B-type member with the bluest color in IC 447 has a color similar to those of members in Mon OB1. Therefore, the minimum reddening toward Mon R1 may be similar to the mean reddening of Mon OB1. There is a bright member of IC 446 in the CMD ($G_{RP} - \mathrm{DM} = 1.24$, $G_{BP} - G_{RP} = 0.90$). The age of this star is about 2 Myr, comparable to the ages of the IC 447 members. The ages of the bright stars in IC 446 and IC 447 may not be significantly altered ($\lesssim$ 1 Myr) by differential reddening, given the direction of the reddening vector. \subsection{A Large-scale Distribution of Stars and Gas}\label{ssec:45} We plotted the IRIS image at 100$\micron$ and the AKARI false-color image (blue: 65 $\micron$, green: 90 $\micron$, and red: 140 $\micron$) of interstellar material over our survey region in Figure~\ref{fig15}. Interestingly, the IRIS image reveals a large arc-like structure across the survey region. The members of Mon OB1, Mon R1, and the halo were superimposed on the AKARI image. It seems clear that star formation is actively taking place along the arc-like structure. Indeed, the bright knots host not only Mon OB1 and Mon R1, but also \object{G202.3+2.5} \citep{MJV19a,MJV19b} and some continuum sources that are bright at infrared and submillimeter wavelengths \citep{BNH88,DJK08}. Figure~\ref{fig15} also displays the PM vectors of members relative to the systemic motion of Mon OB1. A high fraction of the halo stars are found around Mon OB1 and tend to move outward from the association. Their $\Phi$ distribution has a strong peak at around 20$^{\circ}$. On the other hand, the members of Mon R1 are systematically moving toward the south relative to Mon OB1. \section{Discussion}\label{sec:5} \subsection{Implication on cluster formation}\label{ssec:51} The S Mon and Cone groups show patterns of expansion as seen in other young stellar clusters \citep[e.g.][]{CJW19,KHS19,LNGR19, LHY20,LNH21}. About 50\% of the members of the S Mon group are radially escaping from this group, but some members beyond $5^{\prime}$ ($\Phi \sim \pm 180^{\circ}$, Figure~\ref{fig10}) are still sinking toward the group center. The fraction of these members may be less than 20\%, judging from the last two bins around $\Phi \sim \pm 180^{\circ}$ in the histogram of Figure~\ref{fig10}. The Cone group shows a similar pattern, but less clearly than the S Mon group.
Such a trend was also found in the Orion Nebula Cluster \citep{PRB20}. On the other hand, the young open clusters \object{IC 1805} and \object{NGC 2244} only show patterns of expansion, without signatures of collapse \citep{LHY20,LNH21}. The different internal kinematics among these clusters may result from differences in the initial conditions of cluster formation and in evolutionary time. Many theoretical studies have tried to explain the expansion of stellar clusters as the result of their dynamical evolution after rapid gas expulsion \citep{T78,H80,LMD84,KAH01,BK13,BK15}. Our previous study \citep{LHY20} explained the expansion of the young open cluster \object{IC 1805} using an $N$-body simulation without the consideration of gas expulsion. This simulation considered a model cluster formed in an extremely subvirial state. The modelled cluster experienced collapse in the first 2 Myr and then expanded. As a result, the members of this cluster have $\Phi$ values around $\pm180^{\circ}$ within the first 2 Myr and $\Phi$ values around $0^{\circ}$ after the epoch of the major collapse. The members of the S Mon and Cone groups have $\Phi$ distributions similar to the snapshots of the modelled cluster during the transition epoch from collapse to rebound. Therefore, the monolithic cold collapse scenario can provide a possible explanation for the formation and evolution of these two groups. In addition, rapid gas expulsion and stellar feedback could have affected the structure and dynamics of these groups \citep{GBR17}. The S Mon and Cone groups also show the signature of rotation. Some other young stellar clusters have also been found to be rotating, e.g., \object{R136} \citep{HGE12}, \object{Trumpler 15} \citep{KHS19}, and \object{NGC 2244} \citep{LNH21}. These results provide important constraints on the cluster formation process, such as the monolithic collapse of rotating clouds or the hierarchical assembly of subclusters \citep{CLG17,M17}. A large fraction of molecular clouds in external galaxies were found to be rotating \citep{REPB03,T11,BHR20}. Rotating clusters can naturally form in such rotating clouds. However, the groups in Mon OB1 have different directions of rotation, which cannot be explained by the monolithic collapse of a single molecular cloud alone. Theoretically, collisions between molecular clouds can result in a larger cloud with retrograde rotation with respect to the Galactic rotation on a large spatial scale \citep{DBP11}. If such collisions occur between gaseous and stellar clumps on small spatial scales (several pc), the angular momentum vector of the natal cloud could be changed \citep{M17}. In addition, observational evidence for an ongoing merger of stellar clusters has been reported \citep{SLG12}. Therefore, our findings suggest that at least one of the groups in Mon OB1 might have formed via a hierarchical merging process. On the other hand, the stellar groups in Mon R1 do not seem to have formed as self-gravitating systems, given the weak central concentration of stars. As these groups expand and move away from each other, their members will eventually disperse into the field star population of the Galactic disk \citep{MS78,BPS07}. \begin{figure*}[t] \epsscale{1.0} \plotone{fig16.pdf} \caption{Schematic sketch of the star formation history in Mon OB1 and Mon R1 from $>$ 5 Myr ago (left) to the present (right). Stellar groups are labelled at the epoch when they were formed.
See the text for details.}\label{fig16} \end{figure*} \subsection{Star formation on different spatial scales}\label{ssec:52} The projected distance and the line-of-sight distance between Mon OB1 and Mon R1 are about 20 and 40 pc, respectively, comparable to the typical size of giant molecular clouds. Members of these two associations have velocities in almost the same range (see Figures~\ref{fig6} and \ref{fig8}). As a comparison, the Orion A cloud shows an RV variation larger than 10 km s$^{-1}$ from north to south \citep{YLC21}. In addition, the molecular gas constituting the arc-like structure seen in Figure~\ref{fig15} is found in the same velocity range ($0 < V < 15$ km s$^{-1}$; figure 2-b of \citealt{OMT96}). Hence, the two associations and the other small SFRs related to the bright knots likely formed in the same molecular cloud. However, it is unclear how their formation is related to each other. Based on the age distributions of pre-main sequence stars in Mon OB1, \citet{SB10} proposed an outside-in star formation history for Mon OB1: star formation was initiated in the halo and propagated via the S Mon and Cone groups to the Cone(C) and Spokes groups around the embedded YSOs NGC 2264 IRS 1 and IRS 2 (see also \citealt{VPS18}). According to our results, however, this scenario should be slightly modified. Star formation was initiated in the southern region (Cone and THF15) and then occurred in the northern region (S Mon). Recent star-forming activity has ignited in the molecular gas behind the southern region, which may have resulted in the formation of the Cone(C) and Spokes groups. The star formation history of Mon OB1 and Mon R1 is illustrated in a simple cartoon (Figure~\ref{fig16}). A number of halo stars were found around Mon OB1, and they seem to be escaping from the association (Figure~\ref{fig15}). Most of them might thus have formed in Mon OB1 as the first generation of stars. The fact that the halo stars have almost the same ages as the older populations in Mon OB1 supports this argument. Therefore, the eastern halo might have been formed by stars escaping from the association rather than by {\it in-situ} star formation as proposed by \citet{SB10}. The systemic motion of Mon R1 may provide clues to the formation of this association. If this association had formed in clouds compressed by feedback from the O-type binary S Monocerotis, which is located to the east of Mon R1, its members would be moving westward. However, they do not show such a systematic motion toward the west (Figure~\ref{fig15}). Furthermore, there is no significant age difference between the S Mon group and Mon R1. The stellar groups in Mon R1 might thus have formed spontaneously at about the same epoch as the S Mon group in Mon OB1. Herschel images at submillimeter wavelengths reveal networks of filaments in many molecular clouds \citep{A15}. The presence of filamentary structures in molecular clouds has been accepted as a ubiquitous feature. Turbulence could play a key role in the formation of filamentary structures \citep{L81,PJG01}, and there is increasing evidence that magnetic fields significantly contribute to the formation of filaments on small scales \citep[e.g.][]{WLE19,DHF20}. Cores and protostars form after the gravitational fragmentation of filaments \citep{AMB10}. Filament hubs with high densities are the sites of stellar cluster formation \citep{SCH12,GLZ13, TFS19}.
The relics of such filaments have been found toward the Vela OB2 association and the Orion region \citep{JBB19,BBJ20,PYT21}. In our survey region, SFRs including Mon OB1 and Mon R1 are distributed along a large arc-like structure in a hierarchical way. The formation of structures on different spatial scales by the combined action of turbulence, gravity, and magnetic fields may be the essential formation process of Mon OB1 and Mon R1. \section{Summary}\label{sec:6} In this study, we investigated the spatial distributions and kinematic properties of the young stellar population in the two stellar associations Mon OB1 and Mon R1 to understand their star formation process and physical association. We first isolated member candidates in a $6^{\circ} \times 6^{\circ}$ survey region using published data sets. A total of 728 members was then selected using criteria based on the Gaia parallaxes and PMs. The spatial distributions of these stars show substructures that are kinematically distinct. Mon OB1 contains three optically visible stellar groups: the S Mon, Cone, and THF15 groups. We also suggested the possibility that two embedded groups (Spokes and Cone(C)) lie behind the optically visible groups. Mon R1 hosts the open cluster IC 447 and two partially embedded groups (N2245/47 and IC 446). In addition, some stars were found in the halo region. The stellar groups, except for \object{THF15}, show patterns of expansion as seen in many associations. In addition, the signature of rotation was detected for the S Mon and Cone groups. Interestingly, these groups are rotating in opposite directions, which could be a trace of clouds having merged in the past. We analyzed the CMD of members to infer the star formation history in the survey region. The members of Mon OB1 have ages ranging from 2 Myr to $\gtrsim$ 5 Myr. The ages of the Mon R1 members (2$-$3 Myr) are similar to those of the younger population in Mon OB1, while the halo stars ($\gtrsim$ 5 Myr) have ages similar to those of the older population. Furthermore, the motions in Mon R1 are not pointing away from Mon OB1. This suggests that Mon OB1 and Mon R1 might have formed independently in a giant molecular cloud. In addition, Mon OB1 and Mon R1 belong to a large-scale arc-like structure comparable in size to typical giant molecular clouds. More small-scale star formation activity is found along this large structure, forming a hierarchy: isolated stars, clusters, and associations. Hence, these two associations might have formed within the same cloud in a hierarchical way. Finally, the expansion of stellar groups plays a crucial role in the formation of the halo population. \begin{acknowledgments} This paper has made use of data obtained under the K-GMT Science Program (PID: GEMINI-KR-2020A-003 and Gemini program number: GS-2020A-Q-239) funded through the Korean GMT Project operated by the Korea Astronomy and Space Science Institute (KASI) and from the European Space Agency (ESA) mission {\it Gaia} (https://www.cosmos.esa.int/gaia), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has also made use of the SIMBAD database, operated at CDS, Strasbourg, France.
Based on observations obtained at the international Gemini Observatory, a program of NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnolog\'ia e Innovaci\'on (Argentina), Minist\'erio da Ci\^encia, Tecnologia, Inova\c c\~oes e Comunicações (Brazil), and Korea Astronomy and Space Science Institute (KASI) (Republic of Korea). This work used the Immersion Grating Infrared Spectrometer (IGRINS), which was developed under a collaboration between the University of Texas at Austin and KASI with the financial support of the US National Science Foundation under grants AST-1229522 and AST-1702267, of the University of Texas at Austin, and of the Korean GMT Project of KASI. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (Grant Nos. NRF-2019R1C1C1005224 and 2022R1C1C2004102). Y.N. acknowledges support from the Fonds National de la Recherche Scientifique (Belgium), the European Space Agency (ESA), and the Belgian Federal Science Policy Office (BELSPO) in the framework of the PRODEX Programme (contracts linked to XMM and Gaia). \end{acknowledgments} \vspace{5mm} \facilities{Gemini-South:8.2m} \software{{\tt xcsao} \citep{KM98}, {\tt SPECTRUM} \citep{GC94}, {\tt IGRINS pipeline 2} \citep{LGK17}, {\tt NumPy} \citep{HMvdW20}, {\tt Scipy} \citep{VGO20}} \newpage
\section{Introduction} During their red supergiant (RSG) phase of evolution, massive stars (8 $<$ M $<$ 30 M$_\odot$; \citealt{2012A&A...537A.146E}) experience significant mass loss (\tento{-6} $-$ \tento{-4}\msun\,yr$^{-1}$; \citealt{2006ASPC..353..211V}; \citealt{2010A&A...523A..18D}), which strongly impacts their final mass, and hence the properties of the supernova (SN) progenitor and of the compact remnant that is left behind. Material that is lost in the stellar wind, together with that ejected in the final core collapse, contributes to the chemical enrichment of the interstellar medium. The mass-loss properties of RSGs are, however, poorly constrained, and little is known about the mechanism(s) driving material from the surface. Without this knowledge it is difficult to build reliable theoretical models to predict mass-loss rates, and therefore difficult to deduce the initial masses of SN IIP and IIL progenitors. This situation is problematic, not only from the perspective of understanding massive star evolution and the role of RSGs in stellar feedback, but also with respect to resolving the so-called Red Supergiant Problem (i.e. the lack of detections of progenitors with initial mass $>$ 17\,M$_\odot$; \citealt{2009ARA&A..47...63S}), which may be connected to an underestimated RSG mass loss (\citealt{2012A&A...537A.146E}; \citealt{2012A&A...538L...8G}). Several mechanisms have been proposed that could contribute to RSG mass loss. Most invoke a stellar wind driven by radiation pressure on dust grains, similar to the mechanism proposed for Asymptotic Giant Branch (AGB) stars (\citealt{2008A&A...491L...1H}; \citealt{2016A&A...594A.108H}). However, it is doubtful whether the thermodynamic conditions required to grow grains in the first place are comparable for AGB and RSG stars \citep{2019A&A...623A.158H}. Radiation pressure exerted on spectral lines of molecular species may be an alternative, though at the present time it is unclear whether this provides sufficient driving in the onset region of the flow \citep{2010ASPC..425..181B}. The possibility of an Alfv\'{e}n wave-driven wind has also been explored by \citet{2000ApJ...528..965A}, and a wave-driven mass-loss model for Betelgeuse can be seen in \citet{1984ApJ...284..238H}. A further alternative is that the mass-loss trigger is linked to surface activity, where pulsations and large convective cells upwelling from the sub-photosphere may lower the effective gravity, allowing radiation pressure to launch material \citep{2007A&A...469..671J}. Observations of Antares, one of the closest RSGs, by \citet{2017A&A...605A.108M} with the VLTI/PIONIER instrument revealed that convective cells of various sizes cover the stellar surface, confirming early indications of the presence of such structures by \citet{1997MNRAS.285..529T}, who detected variable hot spots on the stellar surface, and \citet{1990A&A...230..355R}, who suspected an asymmetric brightness profile. Above the surface (out to 1.7 stellar radii), turbulent motion of large clumps of gas was observed using VLTI/AMBER \citep{2017Natur.548..310O}. Moving further away from the star, \citet{2014A&A...568A..17O} detected large clumps containing dust within 40\,$-$\,96 R$_{\star}$ with VLT/VISIR, supporting earlier indications of the presence of dusty clumps by \citet{2001ApJ...548..861M}. To investigate whether there is a link between surface convection and these clumps in the ambient environment, observations of the innermost circumstellar environment are needed.
This is made possible by the high spatial resolution polarimetric capabilities of SPHERE/ZIMPOL at ESO's VLT observatory in Paranal, Chile. This instrument, with an angular resolution down to 23 mas, is capable of resolving the surfaces of the two closest RSGs, Antares and Betelgeuse, allowing the dust in the inner wind to be probed in detail. Betelgeuse has previously been observed using this instrument by \citet{2016A&A...585A..28K}. In this paper we present SPHERE/ZIMPOL observations of the RSG Antares, along with 3D radiative transfer modelling of its ambient surroundings, in order to characterise the spatial distribution and amount of dust near the surface. The observations and data reduction are described in Sect. \ref{sec:obs}, followed by their analysis in Sect. \ref{sec:analysis}. The model setup and results are described in Sect. \ref{sec:modelling}. We discuss our findings in Sect. \ref{sec:discussion} and end with a summary and conclusions in Sect.~\ref{sec:summary}. \section{Observations and data reduction} \label{sec:obs} Antares ($\alpha$\,Sco A, HD\,148478, HR\,6134) is an M0.5\,Iab \citep{1984ApJS...55..657C} RSG star at a distance of $170^{+29}_{-25}$\,pc \citep{2007A&A...474..653V}. As such, it is one of the largest and visually brightest stars in the sky. It has an angular diameter of $37.89 \pm 0.10$ mas at the H$^{-}$ opacity minimum at 1.61\,$\mu$m \citep{2017A&A...605A.108M} and $37.38 \pm 0.06$\,mas in the K-band continuum \citep{2013A&A...555A..24O}. Antares is known to have a companion, the B2.5\,V star $\alpha$\,Sco\,B, at a 2.73" angular separation (in 2006), approximately 224\,AU behind the supergiant \citep{2008A&A...491..229R}. Observations of Antares, and of a corresponding point spread function (PSF) calibrator star, $\epsilon$ Sco, were taken on 25 June 2015. These observations were carried out using SPHERE/ZIMPOL \citep{2019A&A...631A.155B}, a high resolution adaptive optics imaging polarimeter at ESO's Very Large Telescope (VLT). Antares and its calibrator were observed in six filters in the visible. The log of the observations is presented in Table \ref{obstab}. The filter characteristics are given in Table \ref{fluxtab}.
\begin{table*} \caption{Log of SPHERE/ZIMPOL observations of Antares and the reference star $\epsilon$ Sco.} \centering \begin{tabular}{l l l c c c c c } \hline Star & \thead{Time UT \\2015-06-25} & Filter & ND & $\theta$ ["] & DIT[s] $\times$ NDIT & AM & $\theta_\text{PSF}$ [mas] \\ \hline \noalign{\vskip 2mm} Antares & 01:29:08 & CntH$\alpha$ & ND\_2 & 0.61 & 1.2 $\times$ 20 & 1.069 & - \\ & & NH$\alpha$ & ND\_2 & 0.61 & 1.2 $\times$ 20 & 1.069 & - \\ & 01:55:12 & BH$\alpha$ & ND\_2 & 0.61 & 1.2 $\times$ 20 & 1.034 & - \\ & & TiO & ND\_2 & 0.61 & 1.2 $\times$ 20 & 1.034 & - \\ & 02:27:17 & V & ND\_1 & 0.66 & 1.2 $\times$ 20 & 1.009 & - \\ & & KI & ND\_1 & 0.66 & 1.2 $\times$ 20 & 1.009 & - \\ \noalign{\vskip 2mm} $\epsilon$ Sco & 00:40:42 & CntH$\alpha$ & ND\_1 & 0.70 & 1.2 $\times$ 20 & 1.236 & 26 \\ & & NH$\alpha$ & ND\_1 & 0.70 & 1.2 $\times$ 20 & 1.236 & 27 \\ & 00:53:29 & BH$\alpha$ & ND\_1 & 0.83 & 1.2 $\times$ 20 & 1.197 & 25 \\ & & TiO & ND\_1 & 0.83 & 1.2 $\times$ 20 & 1.197 & 24 \\ & 01:05:17 & V & ND\_1 & 0.61 & 1.2 $\times$ 20 & 1.166 & 30 \\ & & KI & ND\_1 & 0.61 & 1.2 $\times$ 20 & 1.166 & 25 \\ \hline\noalign{\vskip 2mm} \end{tabular} \label{obstab} \vspace{1ex}\\ \begin{flushleft} \textbf{Note}: ND indicates which neutral density filter has been used, $\theta$ is the visible seeing, AM gives the airmass, and $\theta_\text{PSF}$ is the FWHM of the PSF images. DIT gives the integration time of each frame and NDIT is the number of integrations. The filter pairs grouped together in time were observed simultaneously using the two arms of the detector. The characteristics of the filters are given in Table \ref{fluxtab}. \end{flushleft} \end{table*} \begin{table} \caption{Filter characteristics and calculated photometry of Antares.} \centering \begin{tabular}{l l l c} \hline Filter & $\lambda$ [nm] & $\Delta\lambda$ [nm] & \thead{Flux \\ $10^{-8}$ W m$^{-2}$ $\mu$m$^{-1}$} \\ \hline \noalign{\vskip 2mm} V & 554 & 80.6 & 1.555 $^{+0.642}_{-0.561}$\\ \noalign{\vskip 1.2mm} CntH$\alpha$ & 644.9 & 4.1 & 2.894 $^{+1.003}_{-0.844}$ \\ \noalign{\vskip 1.2mm} BH$\alpha$ & 655.6 & 5.5 & 2.556 $^{+0.852}_{-0.722}$\\ \noalign{\vskip 1.2mm} NH$\alpha$ & 656.34 & 0.97 & 2.964 $^{+0.968}_{-0.831}$ \\ \noalign{\vskip 1.2mm} TiO & 716.8 & 19.7 & 2.545 $^{+0.794}_{-0.689}$\\ \noalign{\vskip 1.2mm} KI & 770.2 & 21.2 & 3.679 $^{+1.053}_{-0.922}$\\ \noalign{\vskip 1.2mm} \hline \end{tabular} \label{fluxtab} \vspace{1ex}\\ \begin{flushleft} \textbf{Note}: $\Delta\lambda$ indicates the FWHM of the filter. \end{flushleft} \end{table} Due to atmospheric or instrumental conditions, the adaptive optics loop of the instrument frequently opened during the observations. To eliminate the corrupted frames, we performed a selection on the raw data before executing the instrument pipeline. We determined the average flux of each frame in a circular area centred on the maximum. For each target/filter pair, we rejected the frames for which the mean central flux was below a specific threshold. The threshold was defined as 0.5, 2, or 5 standard deviations from the mean, depending on the dispersion of the flux points in each case. Next, the raw data were processed using the ESO Reflex data reduction pipeline v0.24.0 \citep{2013A&A...559A..96F}. From these data we then computed the total intensity, the degree of linear polarisation (DoLP), the polarised intensity, and the polarisation position angle as described by \citet{2015A&A...578A..77K}.
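A minimal sketch of the frame selection described above (hypothetical names; \texttt{frames} is a cube of raw frames and \texttt{nsigma} is the adopted threshold scale of 0.5, 2, or 5; not the actual reduction code):
\begin{verbatim}
import numpy as np

def select_frames(frames, radius_pix, nsigma):
    """Keep frames whose mean flux within a circular area
    centred on the intensity maximum exceeds the threshold."""
    ny, nx = frames.shape[1:]
    yy, xx = np.mgrid[:ny, :nx]
    flux = []
    for f in frames:
        cy, cx = np.unravel_index(np.argmax(f), f.shape)
        mask = (yy - cy)**2 + (xx - cx)**2 <= radius_pix**2
        flux.append(f[mask].mean())
    flux = np.asarray(flux)
    keep = flux >= flux.mean() - nsigma * flux.std()
    return frames[keep]
\end{verbatim}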
The linear polarised intensity (Fig. \ref{observations}) is defined as $\sqrt{Q^2 + U^2}$, where $Q$ and $U$ are Stokes parameters. The DoLP, the polarised intensity divided by the total intensity, gives the fraction of the light that is linearly polarised. The observations were centred by fitting a Gaussian function to the intensity images to locate the centre of the star. The total intensity images of Antares were deconvolved using the Lucy-Richardson deconvolution algorithm implemented in IRAF \citep{1974AJ.....79..745L}, using the intensity images of $\epsilon$ Sco as the PSF (Fig. \ref{psf}). The number of iterations needed in this process was judged by matching the measured angular diameter of the star to that of the deconvolved image. Beyond five iterations no further changes were measured in the full width at half maximum (FWHM) of the intensity profiles. The SPHERE/ZIMPOL pipeline products are not flux calibrated. In order to derive the observed flux of Antares in each filter, we first matched a stellar atmosphere model from \citet{2003IAUS..210P.A20C} to the calibrator star, $\epsilon$ Sco. As $\epsilon$ Sco is a K1III star \citep{2006AJ....132..161G}, we selected a model with an effective temperature of 4500\,K, $\log g = +2.50$, and solar metallicity, in agreement with \citet{1990ApJS...74.1075M}. We scaled this theoretical spectral energy distribution (SED) using the angular diameter of the star, 5.747 $\pm$ 0.008 mas \citep{2009MNRAS.399..399R}, and compared it to existing photometry \citep{2002yCat.2237....0D} to confirm the suitability of the model. The theoretical flux in each of the observed SPHERE filters was calculated by integrating over the SED after convolving with the SPHERE filter transmissions. We determined the observed flux in the image by summing the intensity over a circular aperture of 184 mas centred on the star. The background flux was estimated using a ring, with inner and outer radii of 184 and 220 mas respectively, and subtracted. From this we obtained a conversion factor that was applied to the Antares data. The uncertainty on the calibrated flux is dominated by that on the effective temperature of the calibrator star. This uncertainty is in turn set by the spacing of the stellar atmosphere model grid, as this sampling is certainly larger than the uncertainty on the effective temperature, leading to a conservative uncertainty estimate of $\pm$250\,K. The flux calibrated intensity images of Antares are presented in Fig. \ref{observations} together with the polarimetric maps. The corresponding images for $\epsilon$ Sco are shown in Fig. \ref{psf}. The deconvolved intensity images of Antares can be seen in Fig.~\ref{decon}. \begin{figure*} \includegraphics[width = 0.87\textwidth]{ant_obs.png} \caption{First column: Intensity images (W m$^{-2}$ $\mu$m$^{-1}$ sr$^{-1}$) of Antares, in square root scale. The filled white circle indicates the beam size and the green circle indicates the size of the photosphere \citep{2013A&A...555A..24O}. Second column: Polarised flux (W m$^{-2}$ $\mu$m$^{-1}$ sr$^{-1}$), in square root scale. Third column: Degree of linear polarisation, in linear scale spanning 0$-$8\%. Last column: Angle of the polarisation vector. The magnitude of the vector is scaled to the strength of the degree of linear polarisation at each point. } \label{observations} \end{figure*} \begin{figure*} \includegraphics[width = 0.87\textwidth]{ant_psf.png} \caption{Same as Fig.
\ref{observations} but for $\epsilon$ Sco.} \label{psf} \end{figure*} \begin{figure*} \includegraphics[width = \textwidth]{deconvolved.png} \caption{The deconvolved intensity images (W m$^{-2}$ $\mu$m$^{-1}$ sr$^{-1}$) of Antares in each filter, shown in square root scale. The photospheric size, measured by \citet{2013A&A...555A..24O} in the near-infrared, is shown in green.} \label{decon} \end{figure*} \section{Data analysis} \label{sec:analysis} \subsection{Intensity} Figure \ref{flux_cal} shows the flux obtained in each of the ZIMPOL filters we observed. As Antares is a semi-regular variable, we also compared our flux with a measurement from the American Association of Variable Star Observers (AAVSO) in the V band, taken within 24 hours of our SPHERE observations (Fig. \ref{flux_cal}), and found our results to be in agreement. They are also broadly consistent with previously measured photometry by \citet{2002yCat.2237....0D}. As the beam size of each observation (see Table \ref{obstab}) is smaller than the projected size of the star, the stellar disk is resolved in all filters. Fitting a two-dimensional Gaussian function to the deconvolved intensity images (Fig. \ref{decon}) reveals that the visible photosphere departs from circular symmetry in all filters, with eccentricities between 0.4 and 0.52, which could indicate temperature variations on the surface of the star. \begin{figure} \centering \includegraphics[width=\hsize]{flux_cal_red.png} \caption{Antares photometry. A stellar atmosphere model from \citet{2003IAUS..210P.A20C} with T$_\text{eff} = 3500$\,K, log g = 0, and solar metallicity, scaled to the angular diameter of Antares and reddened, is shown in grey. Similarly, a model with T$_\text{eff} = 3750$\,K is shown by the dashed line. The grey shaded areas represent our six ZIMPOL filters (NH$\alpha$ and BH$\alpha$ overlap). The horizontal bars on the previously measured photometry by \citet{2002yCat.2237....0D} give the width of the Johnson bands. The blue triangle is a measurement in the V band from AAVSO taken on 26-06-2015 (one day after our observations).} \label{flux_cal} \end{figure} \subsection{Polarised flux} Significant signal can be seen in the polarised flux and DoLP across the six filters (see the 2$^{\rm nd}$ and 3$^{\rm rd}$ columns in Fig.~\ref{observations}). The DoLP that we see in each of the images is considerably higher than the polarisation caused by the instrument itself, which is approximately 0.5\% \citep{2019A&A...631A.155B}. A nonphysical dark lane runs through the centre of all polarisation images; it is due to a beam shift effect introduced by the instrument's mirrors and further detailed by \citet{2018A&A...619A...9S}. Outside of the area plotted in Fig. \ref{observations} and Fig. \ref{psf} the images are dominated by noise (causing the large polarisation vectors at the edges of the images; for the corresponding RMS maps see Fig. \ref{rms}). No significant stellar signal is seen in the DoLP of the calibrator star (Fig. \ref{psf}); the signal present in these images follows the diffraction rings caused by the telescope mirror and is split East-West by the beam shift effect. A large, conspicuous feature can be seen to the south of Antares in the DoLP in all filters. In the plane of the sky, the onset of the feature appears to be right at the stellar surface, with a projected size greater than that of the star.
This polarisation could be caused by the scattering of light off circumstellar dust grains, molecules, or free electrons \citep{2000ApJS..128..245Z}. The latter option seems unlikely given the low ionisation temperature associated with the radiation field from Antares. The polarisation seen in the observations is consistent with the dust hypothesis, as it is present throughout the entire wavelength range, which would not be expected if the polarisation were the result of light scattering off molecules within specific lines. The directions of the polarisation vectors at the location of the feature are tangential to the stellar photosphere, as expected from the scattering of light from the central source by dust in the circumstellar environment. Henceforth, we refer to the feature as the clump. \section{\textsc{MCMax} modelling} \label{sec:modelling} In order to determine whether the dust hypothesis for the polarisation signal around Antares is plausible, and to characterise the dust causing the scattering, we ran radiative transfer models using \textsc{MCMax3D} \citep{2009A&A...497..155M}, a 3-D Monte Carlo radiative transfer code. \textsc{MCMax3D} implements Mie scattering on a distribution of dust grains modelled as hollow spheres. The latter does not imply that the grains are truly hollow spheres; rather, this assumption ensures that the perfect symmetry of solid spherical particles, as in standard Mie theory, is broken \citep{2005A&A...432..909M,2004ldce.conf...28M}. An ensemble of hollow spheres thus better represents the properties of a true distribution of non-spherically symmetric particles. The images of the Stokes vectors (I, Q, and U) are computed taking into account the diameter of the telescope mirror and can therefore be compared directly to the SPHERE observations. Through this modelling of our observations we aim to provide constraints on the spatial distribution and total mass of the dust. We focus the modelling on the conspicuous clump to the south of Antares that is seen in the DoLP. The polarisation signal to the north of the star is several times fainter than that in the south and is not well defined in all filters; for these reasons we concentrate our modelling efforts on the southern clump. We approximate the clump as a sphere of dust with constant density, as the number of resolution elements across the clump is only a few, leaving us no diagnostic tools to constrain the properties of a (spherically symmetric) density structure in the clump. The centre of the clump is placed at different ($x,y,z$) positions relative to the stellar centre, such that if we can constrain its 3D position we may link it to a surface release location, assuming the clump is moving radially outwards. For clarity, $z$ is the position along the line of sight (positive in the direction of the observer), $y$ is the north-south axis (where north is positive), and $x$ the east-west axis (where west is positive). The $x$ coordinate of the clump centre is kept fixed based on the observations. Below we discuss the assumptions regarding the composition and size of the dust particles. If we ignore the latter properties for now, our modelling space consists of four variables: the $z$ and $y$ positions, the radius $R_{\rm clump}$, and the dust mass $M_{\rm clump}$ of the clump. To probe these dimensions we constructed a grid of models with varying step sizes in each dimension. The ranges and step sizes of each of the four parameters are visualised in Fig.~\ref{chi2}.
We note here, first, that $R_{\rm clump}$ was varied between 1 and 5 $R_{\star}$, where $R_{\star} = 680\,$R$_{\odot}$ is the radius of Antares. Second, the parameter controlling the mass of the dust was sampled rather unevenly, to ensure an unbiased density sampling as the volume of the dust sphere varies. Third and finally, the $y$ coordinate was allowed to vary because it is difficult to determine from the observations how far south the centre of the clump lies: the models show that the DoLP signal is not constant across the large clump, i.e. the signal is concentrated in a smaller portion of the dust sphere (see also Sect.~\ref{sec:discussion}). \subsection{Parameters and Assumptions} \subsubsection{Stellar parameters} To determine the stellar energy distribution to use as input for the radiative transfer models, we first applied a reddening law to two stellar atmosphere models from \citet{2003IAUS..210P.A20C} with T$_\text{eff} = 3500$\,K and 3750\,K, and surface gravity $g = 1$\,cm\,s$^{-2}$. We used A$_V$ = 0.43 and R$_V$ = 3.1 from \citet{2013A&A...555A..24O} and followed the law described in \citet{1989ApJ...345..245C}. Both models are plotted in Fig. \ref{flux_cal} and show that the photometry we retrieved from the SPHERE data falls between the two models. As \citet{2013A&A...555A..24O} determined a T$_\text{eff}$ of 3660 $\pm$ 120\,K, this is unsurprising. A $\chi^2$ test determines the model with T$_\text{eff} = 3500$\,K to be the better fit, so we proceed with a stellar atmosphere model at this temperature as the \textsc{MCMax3D} modelling input. A test between two radiative transfer models with T$_\text{eff} = 3500$\,K and 3750\,K showed no significant difference in the DoLP. The luminosity of 62,500 L$_\odot$ was set so that the angular diameter of the model star was consistent with previous interferometric measurements from \citet{2013A&A...555A..24O} at a distance of 170 pc. The star is modelled as a spherical object, though both simulations and observations of RSGs indicate that this is not exactly the case due to their large convective cells. Similarly, the code assumes isotropic light emission. For the scope of this study these assumptions are reasonable, as we focus on the modelling of a large dust clump illuminated by a large fraction of the surface of the star. \subsubsection{Dust composition and grain size} \label{sec:dust_composition} \citet{2017A&A...603A.116A} report that the chemical composition of circumstellar dust grains cannot be reliably determined using observations in the visible regime alone but also requires measurements in the near-IR. We therefore do not attempt to do so here. \citet{2009A&A...498..127V} review studies addressing this topic and provide an overview of likely dust constituents: aluminium oxide (Al$_{2}$O$_{3}$); melilite (Ca$_{2}$Al$_{2}$SiO$_{7}$); olivine (Mg$_{2x}$Fe$_{2-2x}$SiO$_{4}$; $0 \leq x \leq 1$); iron magnesium oxide (MgFeO); metallic iron (Fe), and carbon (C). Aluminium oxide is expected to condense early on in the condensation cycle \citep[e.g.][]{1990fmpn.coll..186T}. A study of dust precursors in AGB star winds by \citet{2019MNRAS.489.4890B} finds Al$_2$O$_3$ to be a potential first species to condense from the gas phase, on the condition that the monomer (Al$_2$O$_3$)$_{n=1}$ forms. Depending on the local density, cluster formation starts at temperatures as high as 1600$-$2200\,K, the highest of all species trialled by them.
As the dust in the observations appears close to the stellar surface, and therefore at high temperatures, for this study we used dust composed of Al$_2$O$_3$, adopting the optical properties from \citet{1997ApJ...476..199B} derived in the range 7.8\,$-$\,200 $\mu$m. For shorter wavelengths, the optical constants are extrapolated following \citet{1998asls.book.....B}. This extrapolation implies that the alumina grains are almost transparent at optical wavelengths, in line with findings by \citet{1995Icar..114..203K}. A study by \citet{2016A&A...594A.108H} explores Al$_2$O$_3$ formation around M-type AGB stars and shows that Al$_2$O$_3$ forms closer to the star than silicates and may act as seed particles for the condensation of silicates further out. Two other compositions were trialled: a mixture of MgSiO$_3$ and amorphous carbon, and the dust composition found by \citet{2009A&A...498..127V}, comprising mostly melilite with smaller amounts of olivine, alumina, and carbon. We found that the DoLP is not strongly sensitive to composition, though the best-fit parameters ($x$, $y$, $R_{\rm clump}$, $M_{\rm clump}$) will differ somewhat from those derived using aluminium oxide. Specifically, these other compositions produce slightly less polarisation. These differences are, however, so small that they do not impact our conclusions. We do point out that these alternative grains are more opaque than Al$_{2}$O$_{3}$, and therefore their temperatures are higher. We established that if the grain composition is actually a mixture of species -- which is very likely -- part of the volume of the best solution for our aluminium-oxide-only model would be too hot for these other grains to exist, either as thermally isolated species or as species in thermal contact. For the \citet{2009A&A...498..127V} mixture this volume fraction is about 8 percent (10 percent when T$_\text{eff}$ = 3750\,K) if a condensation temperature of 1500\,K is assumed, a result that has a modest impact on the assumption for the dust-to-gas ratio of the clump (see Sect.~\ref{sec:cloud_properties}). One should note that this estimate does not take into account the presence of a warm chromosphere (see e.g. \citealt{2001ApJ...551.1073H} and references therein) around an RSG such as Antares. Recently, \citet{2020A&A...638A..65O} have shown that the interaction between the warm chromosphere at several thousand Kelvin and the cool gas that allows dust condensation at the same location is very complex: the detection of one or the other is highly dependent on the wavelength used for the observations. It is likely that the warm gas is not dense and plays a limited role in the dust condensation sequence. Therefore, as \textsc{MCMax3D} cannot take this temperature profile into account, we do not include it in our models. In conclusion: we expect the composition (in the bulk of the clump) to be a mixture typical of RSG outflows; however, we adopt the optical properties of aluminium oxide in our modelling to avoid partial dust condensation issues in the clump. Derived dust masses represent those of the actual dust mixture. The DoLP is sensitive to the size distribution of the grain population, with large grains producing less polarisation than small grains in the wavelength range of our observations.
Our observational data do not allow us to place firm constraints on these properties, and we limit our investigation to assessing whether the wavelength dependence of the DoLP in the best-fit model is consistent with our measurements (see Sect.~\ref{sec:modelling} and Fig.~\ref{dolp_wave}). We follow \citet{2009A&A...498..127V} and adopt an MRN distribution of sizes described in \citet{1977ApJ...217..425M}, $n(a) \propto a^{-3.5}$, characteristic of interstellar particles, with sizes $a$ in the range 0.01$-$1 $\mu$m. We prefer this approach over adopting a single particle size (e.g. \citealt{2001A&A...368..950V}, \citealt{2016MNRAS.463.1269B}), as stochastic processes likely play a role in dust formation and growth. Our adopted size distribution is shifted to slightly larger grains in comparison with \citet{2012ApJ...759...20K} and \citet{2014A&A...568A..17O}. Micron-sized grains in the circumstellar environment of Antares are reported by \citet{1987ApJ...321..921S}, giving weight to theoretical considerations of dust driving in cool star outflows, which seem to favour fairly large grains \citep{2008A&A...491L...1H}. \subsection{Modelling results} \label{sec:modellingres} To determine the best-fitting model to the observations we used a $\chi^2$ minimisation technique. First we compute $\chi^2$ for each of our models using: \begin{equation} \chi^2=\sum^{n}_{i=1} \frac{(O_i - E_i)^2}{\sigma_i^2} \end{equation} \noindent where $O_i$ and $E_i$ are the fluxes in pixel $i$ of our observations and models, respectively, $\sigma_i$ is the error in the observed data (see Fig. \ref{rms}), and $n$ is the total number of pixels in our cubes ($n = 33620$). In this case we directly compared the DoLP images output by our models to the observed images in all filters, by summing over the total number of pixels after cutting the images to a 300 $\times$ 300 mas field of view. In order to calculate the confidence intervals on the parameter ranges we use the same method as outlined in \citet{2011ApJ...741L...8T} and \citet{2019ApJ...880..115A}. The $\chi^2$ values from the grid were normalised such that the best-fitting model had a $\chi^2_{\text {red}}$ value of 1. From this the P-value was calculated; all models with a P-value higher than 0.05 (therefore within the 95\% confidence interval) are deemed acceptable models that are statistically indistinguishable from each other. It is from these models that we take our parameter ranges for the position, radius, and dust mass of the clump. Due to computational limitations we had to limit the sampling rate. To account for this, we perform a linear interpolation to the upper P-value points to give a better estimate of the confidence interval on each parameter. Figure \ref{chi2} shows the $\chi^2$ distribution for our grid of models. For each of our variable parameters a relatively clear minimum can be seen in the $\chi^2$ distribution. Table \ref{model_params} shows the parameters of the best-fitting models from the grid using Al$_2$O$_3$ for the dust composition. As can be seen in Fig. \ref{chi2}, all of these models place the dust behind the plane of the star, in close proximity to the photosphere. Table \ref{model_params} also shows that, while the mass and radius of the dust clump display large confidence intervals, the dust density in the clump remains relatively constant among our best-fitting models. Figure \ref{model} shows the intensity, polarised flux, and DoLP for the best-fitting model. A side-by-side comparison of the observations and the best-fitting model can be seen in Fig.
\ref{sidebyside}. At longer wavelengths the DoLP diminishes relative to the observations. However, the fit remains within the uncertainties of the (polarised-)flux calibration (see Fig.~\ref{dolp_wave}). \begin{table} \caption{Summary of adopted stellar parameters and fitted clump parameters. \label{model_params}} \begin{tabular}{lrcl} \hline \hline\\[-9pt] Parameter & Value & Confidence interval & Unit \\ \hline\\[-9pt] $T_{\rm eff}$ & 3500 & & K \\ $g$ & 1 & & cm/s$^{2}$ \\ $R_{\star}$ & 680 & & R$_{\odot}$ \\ \hline\\[-9pt] $x$ & -0.3 & & $R_{\star}$ \\ $y$ & -4.4 & $-5.0$ to $-3.2$ & $R_{\star}$ \\ $z$ & -2.5 & $-3.1$ to $-1.3$ & $R_{\star}$ \\ $d_{\rm clump}$ & 5.1 & 4.1 to 5.2 & $R_{\star}$ \\ $\rho_{\rm clump}$ & 1.07 & 0.70 to 1.10 & $10^{-18}$\,g\,cm$^{-3}$ \\ $R_{\rm clump}$ & 3.8 & 2.6 to 4.4 & $R_{\star}$ \\ $M_{\rm clump}$ & 1.3 & 0.3 to 1.5 & $10^{-8}\,$M$_{\odot}$ \\ \hline \end{tabular} \vspace{1ex}\\ \begin{flushleft} \textbf{Note}: $d_{\rm clump}$ is the distance from the centre of the star to the centre of the clump. \end{flushleft} \end{table} \begin{figure*} \includegraphics[width = \textwidth]{chi2.png} \caption{$\chi^2$ values derived from the comparison of the degree of linear polarisation from the ZIMPOL observations and the \textsc{MCMax3D} simulations, plotted against the four variables. Here, $z$ is along the line of sight and $y$ is the north-south axis. The solid line indicates where the best-fitting model falls and the shaded region shows the confidence intervals.} \label{chi2} \end{figure*} \begin{figure*} \includegraphics[width = 0.7\textwidth]{bestmodel.png} \caption{Best-matched \textsc{MCMax3D} model as determined by comparison of the DoLP to the observations. First column: Intensity images (W m$^{-2}$ $\mu$m$^{-1}$ sr$^{-1}$). The green circle indicates the size of the photosphere. Second column: Polarised flux (W m$^{-2}$ $\mu$m$^{-1}$ sr$^{-1}$). Third column: Degree of linear polarisation. } \label{model} \end{figure*} \begin{figure} \centering \includegraphics[width=\hsize]{sbs_comparison.pdf} \caption{Comparison of the observations (left) to the models (right) in the V filter. First row: Intensity images (W m$^{-2}$ $\mu$m$^{-1}$ sr$^{-1}$). The green circle indicates the size of the photosphere. Second row: Polarised flux (W m$^{-2}$ $\mu$m$^{-1}$ sr$^{-1}$). Third row: Degree of linear polarisation.} \label{sidebyside} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{wavelength_dep.png} \caption{The average DoLP within the southern clump of our best-fit model (see Table~\ref{model_params}) as a function of wavelength. Blue circles and the blue homogeneous background denote the observations and their uncertainties.} \label{dolp_wave} \end{figure} \section{Discussion} \label{sec:discussion} \subsection{Properties of the dusty clump} \label{sec:cloud_properties} For our homogeneous spherical distribution of dust grains, the best fit to the DoLP images places the centre of the clump at a distance of 5.1 $R_{\star}$ from the star and yields a clump radius of 3.8\,$R_{\star}$. Figure~\ref{dust_schematic} provides a schematic of the geometry of the system. The clump is considerably larger than the star itself, with smaller model clumps failing to produce as much polarisation signal. The reason for this is the combined effect of the non-isotropic nature of the scattering of light off dust grains (which has maximum polarisation for a 90$^{\circ}$ scattering angle) and the marginally optically thick nature of the clump at optical wavelengths.
One therefore most prominently observes polarisation from the part of the clump that is closest to the star. The extinction along a line of sight through the centre of the clump is about $A_{\rm V} = 3.6$ mag. A similar clump perfectly aligned in front of the stellar disk would have reduced the visual light flux by a large factor ($\simeq$ 35). This could be an explanation for what happened to Betelgeuse in 2019$-$2020, when a significant dimming was observed in visible light (photo release ESO/Montarg\`{e}s et al.\footnote{https://www.eso.org/public/news/eso2003/}; \citealt{2020ApJ...891L..37L}). Given the dimensions of the clump, it fills 18 percent of the total sky as seen from Antares, i.e. a minimum of about 5$-$6 such clumps would be needed to cover the entire sky around the star. \citet{2011A&A...526A.156M} assume the gas-to-dust ratio in the circumstellar environment of red supergiants to be $\psi = 200$ in the limit of full condensation of refractory metals. However, a general value of $\psi$ for RSGs remains highly uncertain. For aluminium oxide, the dust temperatures in the clump range from approximately 40 to 1000\,K. In Sect.~\ref{sec:dust_composition} we pointed out that about 8\% of the clump is, however, too close to the surface to allow silicate or carbon-based grains to survive. Assuming rapid formation (for time-scales refer to \citealt{2019MNRAS.489.4890B}) of these species once the local temperature drops below the condensation temperature as the clump moves away from the surface, we derive a mean gas-to-dust ratio in the clump using $1/\psi = 0.92/200 + 0.08/7000$, i.e. $\psi \simeq 215$, applying a gas-to-dust ratio of 7000 for fully condensed alumina in a solar abundance mixture \citep{2009ARA&A..47..481A} and a gas-to-dust ratio of 200 for the rest of the clump. Using a total mass of grains of $1.3 \times 10^{-8}\,$M$_{\odot}$, this implies a total mass of the clump of $\sim 2.8 \times 10^{-6}\,$M$_{\odot}$. We lack velocity information for the conspicuous nearby dusty clump. Obtaining such kinematic information through resolved spectroscopy is crucially important for identifying the launching and/or driving mechanism of the clump and the time of ejection. Using a speculative velocity of 30\,km\,s$^{-1}$ (inspired by findings of \citealt{2014A&A...568A..17O}; see below), which is lower than the local escape velocity of 43 km\,s$^{-1}$ at $r = 5.1\,R_{\star}$, the clump would have been ejected about 2 years prior to our observations. Note that within this hypothesis the clump would fall back onto the star and sublimate if the radiative pressure on the newly formed dust does not succeed in accelerating it further. \begin{figure} \centering \includegraphics[width=\hsize]{dust_schematic.pdf} \caption{Schematic showing the dust clump and star relative to the direction toward the observer. The furthest edge of the clump from the star is at 165 mas or 8.9\,$R_{\star}$, corresponding to a dynamical flow time from Antares of about 4 years assuming a radial wind speed of 34\,km\,s$^{-1}$. Only part of the clump (approximated by the shaded area) is prominently visible in polarised light. See the text for a discussion.} \label{dust_schematic} \end{figure} \subsection{Dusty clumps further away from Antares} \label{sec:furtherclouds} \citet{2014A&A...568A..17O} use VLT/VISIR at a spatial resolution of 0.5" to probe the $6" \times 6"$ surroundings of Antares in the Q1 filter at 17.7\,$\mu$m.
They identify six dusty clumps (at distances ranging from 40 - 96 R$_{\star}$), as well as unresolved emission from the innermost region -- i.e. the region that is probed here in more detail. Assuming the dust consists of an astronomical silicate mixture \citep{2007ApJ...657..810D} they find typical dust masses of their resolved clumps of $(3 - 6) \times 10^{-9}\,$M$_{\sun}$, hence total clump masses of $(0.6 - 1.2) \times 10^{-6}\,$M$_{\sun}$ adopting $\psi = 200$. These values are similar to the total mass we derive for the near-surface clump that is scrutinised here. The clumps studied by \citet{2014A&A...568A..17O} are spatially unresolved or only marginally resolved; therefore we cannot directly compare clump sizes. Assuming all clumps have a similar size when ejected by the star, the VISIR image implies that internal expansion may have increased their radii by at most a factor of a few. By comparing to an earlier image of Antares' nearby environment by \citet{2001ApJ...548..861M}, taken with the MIRLIN focal-plane array camera at Keck\,II, \citet{2014A&A...568A..17O} estimate the clumps to move out with projected velocities of 13\,$-$\,40 km s$^{-1}$, which the authors conclude is not consistent with a simple monotonically accelerating outflow. \subsection{Modes and potential driving mechanisms of mass loss from Antares} \citet{2012A&A...546A...3B} combine measurements of notably Zn\,{\sc ii} absorption line strengths in the line of sight toward the companion Antares B and hydrodynamical simulations of the way in which the B2.5\,V star creates density perturbations in a radial wind from Antares, as well as an H\,{\sc ii} region, to derive the mass loss in gas. They find $\dot{M} = (2.0 \pm 0.5) \times 10^{-6} $\,M$_{\odot}$\,yr$^{-1}$. How does this total mass loss rate compare to a mean mass loss in ejected dusty clumps? \citet[][see Sect. \ref{sec:furtherclouds}]{2014A&A...568A..17O} estimated the six clumps they observed around Antares to have a mean projected outward directed dusty clump velocity of 34 km\,s$^{-1}$, implying a dynamical crossing time of the projected zone from the closest observed clump to the star to the most distant (40 - 96 R$_{\star}$) of 24.7 years. This yields a typical clump ejection timescale of $\sim$5 yrs and a mean mass loss rate in clumps of approximately $1.5 \times 10^{-7}$\,M$_{\odot}$\,yr$^{-1}$ adopting $\psi = 200$. However, this is a lower limit to the clump mass-loss as it relies on a velocity estimation based on the clump motion in the VISIR field of view. The latter captures only the displacement in the plane of the sky and neglects the motion in the line of sight. Consequently, the actual velocity of each clump could be higher, leading to a larger mass-loss rate. Additionally, the adopted gas-to-dust ratio of 200 is highly uncertain, which would significantly affect our calculation of the total mass lost through clumps. The clump in this present study had likely not yet been ejected at the time of \citeauthor{2014A&A...568A..17O}'s 2010 observations; however, its dust mass is comparable to those of the outer clumps studied by \citet{2014A&A...568A..17O}. Taken at face value, the findings of \citeauthor{2014A&A...568A..17O} and the present results suggest that the mass-loss in ejected clumps contributes significantly to the total mass-loss. To establish whether it represents the main mass-loss mechanism or is one of several contributors would require further knowledge of the 3D kinematics of the dusty clumps.
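As an illustrative cross-check, the travel time of the near-surface clump, its mean gas-to-dust ratio, and the clump mass-loss estimate above can be reproduced with a few lines of Python. This is a sketch only: the stellar radius, velocities, and fractions are the values quoted in the text, and the typical clump mass of $0.75 \times 10^{-6}\,$M$_{\odot}$ is an assumed representative value within the quoted $(0.6 - 1.2) \times 10^{-6}\,$M$_{\odot}$ range.
\begin{verbatim}
R_sun, yr = 6.957e10, 3.156e7      # cm, s
R_star = 680 * R_sun               # adopted stellar radius (cm)

# Travel time from the surface (1 R_star) to r = 5.11 R_star at 30 km/s
print((5.11 - 1) * R_star / 30e5 / yr)       # ~2 yr

# Mean gas-to-dust ratio (92% at psi = 200, 8% alumina-only at 7000)
psi = 1 / (0.92 / 200 + 0.08 / 7000)
print(psi, 1.3e-8 * psi)                     # ~215, ~2.8e-6 M_sun

# Crossing time of the 40-96 R_star zone at 34 km/s and the implied
# mean mass-loss rate in clumps (six clumps, five ejection intervals)
t_cross = (96 - 40) * R_star / 34e5 / yr     # ~24.7 yr
M_clump = 0.75e-6                            # assumed clump mass (M_sun)
print(t_cross, M_clump / (t_cross / 5))      # ~24.7 yr, ~1.5e-7 M_sun/yr
\end{verbatim}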
At present, we may only speculate as to the mechanism ejecting clumps of material from the surface of Antares. Variability in both light and radial velocity reveals two preferred characteristic timescales for the star, one of 100\,$-$\,350 days and one of about 6 yrs (e.g. \citealt{2010ApJ...725.1170S}; \citealt{2013AJ....145...38P,2013ApJ...777...10P}). The former has been associated with the typical lifetime of convective cells at the surface \citep[e.g][]{2011A&A...535A..22C} and with fundamental mode or first-order overtone pulsations \citep[e.g.][]{1969ApJ...156..541S}. The latter may be connected to stochastic oscillations, presumably due to the interaction of convection and pulsations \citep{2006MNRAS.372.1721K}, or to the turnover time of convective motions \citep{2010ApJ...725.1170S}. The typical timescale for clump ejection seems to agree best with the latter, longer timescale. 3D hydrodynamical simulations show that the surfaces of red supergiant stars are covered by only a few large convective cells \citep[e.g.][]{2015ASPC..497...11C}, with timescales dependent on the size and depth of the cell \citep{1975ApJ...195..137S}. If gas were released over the full extent of such a cell -- provided conditions were right -- the surface covering factor would be in line with the 18 per cent derived for the clump that is studied here. Given that not every surfacing convective cell releases a cloud of gas, suitable conditions for launching material may depend on the interplay of multiple processes, likely including convection and pulsations. Does dust also form in the (initially) radial outflow from Antares as e.g. probed and modelled by \citet{2012A&A...546A...3B}? The top four panels of Fig.~\ref{outflow} show predictions of the DoLP for a radial flow from the star with a mass-loss rate of $2 \times 10^{-6}\,$ \msun\,yr$^{-1}$ and a terminal velocity of 30\,km\,s$^{-1}$. The gas-to-dust ratio is set to $\psi = 1000, 1500, 2000$ and 4000 (models with gas-to-dust ratios below these values would show a clear detection in the SPHERE images), and silicate dust is assumed to form instantaneously at 5\,R$_{\star}$. Silicate dust is chosen for the outflow as we expect the lower temperature at this distance to allow for the condensation of less temperature sensitive dust species. The bottom panels show the DoLP predicted by these models divided by the root mean square (RMS) map of the observations. The models were run for the V-filter as it has the lowest uncertainties (being the widest of the six filters used). In these bottom plots the blue areas correspond to where the detection is below 1$\sigma$ and the red areas are above 1$\sigma$. We conclude that the minimum gas-to-dust ratio is $\psi_{\rm min} \sim 2000$ given this mass loss rate, as otherwise we would have detected a polarised signal at the 1$\sigma$ level. These results clearly point to at most partial dust formation in the radially streaming wind from the star. Dust nucleation computations by \citet{2019MNRAS.489.4890B} indicate a critical density $n_{\rm H} \sim 5 \times 10^{10}$\,cm$^{-3}$ for the onset of dust nucleation in a galactic environment. This is about three orders of magnitude larger than the density at 5\,R$_{\star}$ in Antares's flow of $2 \times 10^{-6}\, $ \msun\,yr$^{-1}$, and may help explain the high value for the lower limit $\psi_{\rm min}$ that we report here.
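The comparison with this nucleation threshold can be made explicit with a minimal sketch, assuming mass continuity in a smooth spherical wind and a mean particle mass of $1.4\,m_{\rm H}$ per hydrogen atom (an assumption; the remaining inputs are the values quoted above):
\begin{verbatim}
import numpy as np

M_sun, yr, m_H = 1.989e33, 3.156e7, 1.6726e-24   # g, s, g
R_star = 680 * 6.957e10                          # cm

Mdot = 2e-6 * M_sun / yr                         # g/s
r, v = 5 * R_star, 30e5                          # cm, cm/s

rho = Mdot / (4 * np.pi * r**2 * v)              # mass continuity
n_H = rho / (1.4 * m_H)                          # ~3e7 cm^-3
print(n_H, 5e10 / n_H)          # threshold higher by ~3 orders of magnitude
\end{verbatim}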
\citet{2016A&A...594A.108H} constrain the minimum density of dust grains in AGB outflows for a dust-driven outflow to develop to $n_{\rm d} \geq 4 \times 10^{-6}$\,cm$^{-3}$. For a population of 0.1\,$\mu$m silicate grains this converts to a maximum gas-to-dust ratio for dust driving to occur of $\psi_{\rm max-dd} \sim 500$. Given their higher luminosities, this value may be somewhat higher for RSGs. Still, the derived $\psi_{\rm min}$ and estimated $\psi_{\rm max-dd}$ seem to suggest that the amount of solid state material that may actually form in the radial outflow from Antares is too little to efficiently power a dust-driven wind. If so, the radial outflow requires an altogether different driving mechanism. However, it should be noted that our observations and models here can only account for the inner wind (as pictured in Fig. \ref{outflow}) and may not be representative of what is happening further out from the star. Modelling of ISO-SWS spectra of Antares by \citet{1999A&A...345..605J} also gives a high minimum gas-to-dust ratio in the wind of 600. Dust driving may then only be relevant for the episodic ejection of clumps of gas, in which dust apparently condenses efficiently, possibly because associated shocks and turbulent eddies produce significant small-scale over-densities. \begin{figure*} \includegraphics[width = \textwidth]{outflow.png} \caption{Models of the DoLP produced for a dusty radial outflow in the V-filter. The mass loss rate is set at $2 \times 10^{-6}\,$M$_{\odot}$\,yr$^{-1}$ and the terminal flow velocity at 30\,km\,s$^{-1}$. Dust condensation is assumed to start at 5\,$R_{\star}$. From left to right, the gas-to-dust ratio is 1000, 1500, 2000, and 4000. Top panels show the predicted DoLP in the model. Bottom panels show the predicted DoLP divided by the RMS map of the observations. Blue indicates a less than 1$\sigma$ detection; red a detection that is above 1$\sigma$. As we do not detect significant polarisation in the region outside of the clump, this yields a lower limit to the gas-to-dust ratio of the radial outflow in this region of $\psi_\text{min}$ = 2000.} \label{outflow} \end{figure*} \subsection{Comparison to other RSGs} Antares is not the only RSG that has shown evidence for a more inhomogeneous wind. The few RSGs of which the circumstellar environment has been studied show a wide range of characteristics possibly connected to their evolutionary stage. The RSG VY CMa has shown extreme episodic mass-loss (\citealt{2015A&A...584L..10S}; \citealt{2019A&A...627A.114K}). A study by \citet{2019A&A...627A.114K} using ALMA has shown a dusty envelope containing several large clumps. Modelling of the observations suggests that it is these clumps -- being episodically expelled into the interstellar medium -- that are responsible for the high mass-loss (instead of it being the result of a steady spherical wind). NOEMA observations of the CO J=2-1 line \citep{2019MNRAS.485.2417M} of the RSG $\mu$ Cep also suggest that the ejection of clumps into the circumstellar environment is a large contributor ($\geq$ 25\%) to the mass-loss. SPHERE/ZIMPOL observations of Betelgeuse \citep{2016A&A...585A..28K} also show a patchy and clumpy inner circumstellar environment and a departure from spherical symmetry in the visible. The clumpy nature of the environment of Betelgeuse has also been observed out to tens of stellar radii with VLT/VISIR by \citet{2011A&A...531A.117K}.
A study of the variations of the silicate feature by comparison of IRAS LRS spectra and other ground-based spectra of RSGs spanning 25 years by \citet{1999ApJ...521..261M} shows that the mass-loss characteristics and dust signatures can vary over both short and long timescales. Multiple modes of mass loss, i.e. a clumpy and dusty episodic mass loss and a dust-poor radial outflow of gas, are therefore likely a general phenomenon among RSGs. Which of these modes is dominant may depend on stellar properties and surface conditions. \section{Summary and conclusions} \label{sec:summary} The SPHERE/ZIMPOL observations of Antares show a strong localised signal in the DoLP which indicates the presence of a large dusty clump in the inner circumstellar environment. Modelling the observations using the radiative transfer code \textsc{MCMax3D} shows that the clump is 2.6 - 4.4 $\text{R}_{\star}$ in size with a dust mass of about $(0.3 - 1.5) \times 10^{-8}$ M$_\odot$. Our models place the edge of the clump beyond the plane of the sky through the centre of Antares (so, the dusty clump is `behind' the star) and its inner edge within 0.5 $ \text{R}_{\star}$ from the stellar surface. Adopting full condensation of solids in the dusty clumps ($\psi \sim 200$) and incorporating findings by \citet{2014A&A...568A..17O}, we find a minimum mass-loss rate from clumps of $1.5 \times 10^{-7}$ \msun\,yr$^{-1}$. No significant polarisation is measured in the rest of the probed ambient environment, placing constraints on the abundance of dust in a radially streaming stellar wind. Using the canonical value for this radial mass loss of $2 \times 10^{-6}\,$ \msun\,yr$^{-1}$ \citep{2012A&A...546A...3B}, the gas-to-dust ratio in this flow must be at least $\psi_{\rm min} = 2000$ in the field of view of ZIMPOL. This suggests that the inner region of the radial flow is not dust driven. The surface covering factor of the dusty clump (18\%) agrees quite well with the expected size of surfacing convective cells in RSGs. Moreover, the estimated typical ejection timescale for clumps (of $\sim$5 yrs; \citealt{2014A&A...568A..17O}) matches well with a characteristic timescale for photometric and radial velocity variability (of $\sim$6 yrs) that has been associated with (an interplay of) pulsational and convective behaviours. This points towards convection and pulsation playing a role in the launching mechanism of the dusty clumps. The methodology developed here, i.e. constraining the 3D position of recently ejected dusty clumps, in principle allows an empirical study of a possible connection of this mode of mass loss with surface activity. This requires long-term simultaneous interferometric monitoring of surface structures and the direct stellar surroundings. Supplementing this with kinematic information through resolved spectroscopy may further aid in establishing or rejecting such a connection. \section*{Acknowledgements} The authors acknowledge funding from the KU Leuven C1 grant MAESTRO C16/17/007. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sk\l{}odowska-Curie Grant agreement No. 665501 with the research Foundation Flanders (FWO) ([PEGASUS]$^2$ Marie Curie fellowship 12U2717N awarded to M.M.). L.D. acknowledges support from the ERC consolidator grant 646758 AEROSOL.
This work has made use of the SPHERE Data Centre, jointly operated by OSUG/IPAG (Grenoble), PYTHEAS/LAM/CESAM (Marseille), OCA/Lagrange (Nice), Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon. We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. This research made use of IPython \citep{PER-GRA:2007}, Numpy \citep{5725236}, Matplotlib \citep{Hunter:2007}, SciPy \citep{2020SciPy-NMeth}, Astropy\footnote{Available at \url{http://www.astropy.org/}}, a community-developed core Python package for Astronomy \citep{2013A&A...558A..33A}, and Uncertainties\footnote{Available at \url{http://pythonhosted.org/uncertainties/}}: a Python package for calculations with uncertainties. \section*{Data availability} The data underlying this article are available from the ESO archive under programme ID 095.D-0458. \bibliographystyle{mnras}
\section{Introduction} \label{sec:intro} Clustering is a fundamental task with applications in medical imaging, social network analysis, bioinformatics, computer graphics, etc. Applying classical clustering methods directly to high dimensional data may be computationally inefficient and suffer from instability. Many recent papers have shown that clusters for high dimensional data lie only in subsets of the full space and good data representations are beneficial to clustering \cite{Xie, AE-Data-Song,Yang,Li,Bo}. Deep Embedded Clustering (DEC) was proposed to jointly learn feature representations and assign clusters using a class of feedforward artificial neural networks \cite{Xie}. It achieved impressive performances on clustering tasks and it is often treated as the baseline for deep clustering methods. With the same motivation, the authors of \cite{Yang} proposed Joint Unsupervised LEarning (JULE), combining agglomerative clustering with a Convolutional Neural Network (CNN) and formulating them as a recurrent process. In \cite{Bo}, an identification criterion was proposed to address identifiability issues for nonlinear mixture models. Variational Deep Embedding (VaDE) applied a mixture of Gaussian models as the prior distribution of latent representations in Variational Autoencoders (VAEs), therefore modeling the data generative procedure jointly with the clusters' distribution \cite{Vade}. On the other hand, lossy compression achieves a high compression ratio to reduce computation and communication costs. In recent works, VAEs were applied to lossy image compression and achieved comparable results \cite{Zhou,Johannes,Lucas}. The authors of \cite{Zhou} presented an end-to-end image compression framework for low bit-rate compression by applying VAEs with a Laplacian distribution. A similar method was described for effectively capturing spatial dependencies in the latent representations based on VAEs in \cite{Johannes}. It has been shown that VAEs have the potential to address an increasing need for flexible image compression \cite{Lucas}. Taken together, prior research provides evidence that a better model fit leads to better compression performance, and consequently enables a more accurate clustering assignment. Performing clustering on compressed data is a potential solution to problems arising in storing, computing on, and communicating unstructured and unlabelled image collections. In contrast with previous works, where the proposed methods learned the data representation either specifically for clustering or for compression, we explore both tasks simultaneously with a new method, namely joint Variational Autoencoders and Bernoulli mixture models (VAB). This performs deep clustering on binary representations of data with state-of-the-art performance in a high-compression regime. The model is trained in two steps: First, Variational Autoencoders (VAEs) are jointly trained with Bernoulli mixture models (BMMs), where a mixture of Bernoulli models provides a probabilistic distribution of the latent representations. Subsequently, the classifier is updated by the Bernoulli-distributed samples produced in the first step. This is optimized by a loss function that consists of a reconstruction loss and a clustering loss. Learning discrete representations with neural network architectures is challenging because of the inability to backpropagate through non-differentiable samples.
In our work, we propose to use the Gumbel-Softmax \cite{cVAE}, which provides a differentiable sampling mechanism that trains the neural network with a categorical reparameterization trick. This framework explores the connection between directed probabilistic models and compressed data representations, thereby making it possible to obtain interpretable and computationally efficient binary codes. To the best of our knowledge, what we present is the first methodology for simultaneous data compression and clustering in compressed domains. \section{Methods} \label{sec:method} \subsection{The generative model} Considering the dataset ${\mathbf{x}}$ with $N$ independently and identically distributed (i.i.d.) samples $\{x_i\}_{i=1}^N$ and $x_i\in \mathbf{R}^d$, we assume that the data is generated by some random process involving an unobserved Bernoulli random variable ${\mathbf{z}}$ associated with one of $k$ classes. The joint distribution is formulated as \begin{equation} p_{\boldsymbol{\theta}}({\mathbf{x}},{\mathbf{z}},{\textnormal{c}})=p_{\boldsymbol{\theta}}({\textnormal{c}})p_{\boldsymbol{\theta}}({\mathbf{z}}|{\textnormal{c}})p_{\boldsymbol{\theta}}({\mathbf{x}}|{\mathbf{z}},{\textnormal{c}}), \end{equation} where $\boldsymbol{\theta}$ stands for the generative model parameters. That is, an observation ${\mathbf{x}}$ is generated from a latent variable ${\mathbf{z}}$, and ${\mathbf{z}}$ follows a mixture distribution with respect to (w.r.t.) the class variable ${\textnormal{c}}$. Their distributions are described as: \begin{align} {\mathbf{x}} \sim Bernoulli(\boldsymbol{\mu_{x}}) \;\text{or}\;{\mathbf{x}} \sim N(\boldsymbol{\mu_{x}},\boldsymbol{\sigma_{x}}\mathbf{I}) \\ {\mathbf{z}} \sim Bernoulli(\boldsymbol{\mu_{z}})\\ {\textnormal{c}} \sim Categorical(\boldsymbol{\pi}). \end{align} Along with this generative process, we assume \begin{equation} p_{\boldsymbol{\theta}}({\mathbf{x}}|{\mathbf{z}},{\textnormal{c}})=p_{\boldsymbol{\theta}}({\mathbf{x}}|{\mathbf{z}}), \end{equation} which means that ${\mathbf{x}}$ and ${\textnormal{c}}$ are conditionally independent given ${\mathbf{z}}$. We define a recognition model $q_{\boldsymbol{\phi}}({\mathbf{z}},{\textnormal{c}}|{\mathbf{x}})$ as the variational approximation under the KL-divergence to the intractable posterior $p_{\boldsymbol{\theta}}({\mathbf{z}},{\textnormal{c}}|{\mathbf{x}})$, where $\boldsymbol{\phi}$ stands for the recognition model parameters. \subsection{The variational lower bound} The log-likelihood of the $N$ observed data ${\mathbf{x}}$ is \begin{align*} \log p({\mathbf{x}}^{(1)},{\mathbf{x}}^{(2)},\cdots,{\mathbf{x}}^{(N)})=\sum_{i=1}^N\log p({\mathbf{x}}^{(i)}).
\end{align*} Each term of the log-likelihood of the observed data can be written as \begin{align*} \log p({\mathbf{x}}^{(i)})=D_{\textsc{KL}}(q_{\phi}({\mathbf{z}},{\textnormal{c}}|{\mathbf{x}}^{(i)})||p_{\boldsymbol{\theta}}({\mathbf{z}},{\textnormal{c}}|{\mathbf{x}}^{(i)})) + \mathcal{L}(\boldsymbol{\theta},\boldsymbol{\phi};{\mathbf{x}}^{(i)}), \end{align*} where the first term is the KL-divergence of the learned posterior distribution $q_{\phi}({\mathbf{z}},{\textnormal{c}}|{\mathbf{x}}^{(i)})$ from the true $p_{\boldsymbol{\theta}}({\mathbf{z}},{\textnormal{c}}|{\mathbf{x}}^{(i)})$, and the second term $\mathcal{L}(\boldsymbol{\theta},\boldsymbol{\phi};{\mathbf{x}}^{(i)})$ is \begin{align} \label{eq:elbo} E_{q_{\phi}({\mathbf{z}},{\textnormal{c}}|{\mathbf{x}}^{(i)})}\left[\log p_{\theta}({\mathbf{x}}^{(i)},{\mathbf{z}},{\textnormal{c}})-\log q_{\phi}({\mathbf{z}},{\textnormal{c}}|{\mathbf{x}}^{(i)})\right], \end{align} known as the evidence lower bound. Since the KL-divergence is non-negative and the value of $\log p_{\boldsymbol{\theta}}({\mathbf{x}}^{(i)})$ does not depend on $\boldsymbol{\phi}$, minimizing the KL-divergence amounts to maximizing the evidence lower bound. \subsection{The reparameterization method with Gumbel-softmax} The reparameterization method proceeds in the following two steps. In step 1, we reparameterize the random variable ${\mathbf{z}} \sim q_{\boldsymbol{\phi}}({\mathbf{z}}|{\mathbf{x}}^{(i)})$ with a deterministic and differentiable transformation $g_{\boldsymbol{\phi}}(\boldsymbol{\epsilon},{\mathbf{x}}^{(i)})$ of a noise variable $\boldsymbol{\epsilon}$:\begin{equation} {\mathbf{z}}=g_{\boldsymbol{\phi}}(\boldsymbol{\epsilon},{\mathbf{x}}^{(i)}) \quad \text { where }\quad \boldsymbol{\epsilon} \sim p(\boldsymbol{\epsilon}). \end{equation} In step 2, we estimate the expectation of some function $h({\mathbf{z}})$ w.r.t. $q_{\boldsymbol{\phi}}({\mathbf{z}}|{\mathbf{x}}^{(i)})$ by \begin{multline} \mathbb{E}_{q_{\boldsymbol{\phi}}({\mathbf{z}}|{\mathbf{x}}^{(i)})}[h({\mathbf{z}})]=\mathbb{E}_{p(\epsilon)}\left[h\left(g_{\phi}\left(\boldsymbol{\epsilon}, {\mathbf{x}}^{(i)}\right)\right)\right] \\= \frac{1}{L} \sum_{l=1}^{L} h\left(g_{\phi}\left(\boldsymbol{\epsilon}^{(l)}, {\mathbf{x}}^{(i)}\right)\right)+o_{p}(1) \end{multline} where $\boldsymbol{\epsilon}^{(l)}$ are samples generated from $p(\boldsymbol{\epsilon})$. In training the recognition model $q_{\boldsymbol{\phi}}$, non-differentiable categorical samples ${\bm{z}}$ are replaced with Gumbel-Softmax estimators ${\bm{y}}$. This amounts to approximating $\nabla_{\theta} {\mathbf{z}}$ with $\nabla_{\theta} {\mathbf{y}}$ in the backward pass. It has been shown that, as the temperature $\tau \to 0$, samples ${\bm{y}}$ become one-hot and the Gumbel-Softmax distribution converges to the Categorical distribution \cite{cVAE}. $g_{\boldsymbol{\phi}}(\boldsymbol{\epsilon},{\mathbf{x}}^{(i)})$ is given as \begin{align*} g_{\boldsymbol{\phi}}(\boldsymbol{\epsilon},{\mathbf{x}}^{(i)})=\frac{\exp \left(\left(\log \left(\mu_{i}\right)+\epsilon_{i}\right) / \tau\right)}{\sum_{j=1}^{k} \exp \left(\left(\log \left(\mu_{j}\right)+\epsilon_{j}\right) / \tau\right)}, \end{align*} where $\epsilon_1,\ldots,\epsilon_k$ are i.i.d. samples drawn from the Gumbel(0,1) distribution, $\mu_i$ is the probability of belonging to class $i$ conditioned on ${\mathbf{x}}^{(i)}$, and $\tau$ is the softmax temperature.
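For concreteness, the sampling step defined by $g_{\boldsymbol{\phi}}$ can be sketched in a few lines of NumPy. This is an illustrative standalone version; in the actual model the same computation is carried out inside the network so that gradients can flow through the relaxed samples:
\begin{verbatim}
import numpy as np

def gumbel_softmax_sample(log_mu, tau, rng=np.random.default_rng()):
    """One relaxed categorical sample y from class log-probabilities.

    log_mu : array of log(mu_i); tau : softmax temperature.
    y approaches a one-hot vector as tau -> 0.
    """
    u = rng.uniform(size=log_mu.shape)
    g = -np.log(-np.log(u))          # Gumbel(0,1) noise, inverse transform
    z = (log_mu + g) / tau
    z = z - z.max()                  # for numerical stability
    y = np.exp(z)
    return y / y.sum()

# Example: k = 3 classes; a small tau yields a nearly one-hot sample
print(gumbel_softmax_sample(np.log([0.2, 0.5, 0.3]), tau=0.1))
\end{verbatim}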
When $\tau \to 0$, \begin{align*} p(\boldsymbol{\epsilon}) \prod_i d\epsilon_i =q_{\boldsymbol{\phi}}({\mathbf{y}}|{\mathbf{x}}) \prod_i d{\textnormal{y}}_i \to q_{\boldsymbol{\phi}}({\mathbf{z}}|{\mathbf{x}}) \prod_i d{\textnormal{z}}_i \end{align*} and \begin{multline} \label{eq:etm} \int q_{\phi}({\mathbf{y}} | {\mathbf{x}}) f({\mathbf{y}}) d {\mathbf{y}} =\int p(\boldsymbol{\epsilon}) f\left(g_{\phi}(\boldsymbol{\epsilon}, {\mathbf{x}})\right) d \boldsymbol{\epsilon}\\= \frac{1}{L} \sum_{l=1}^{L} f\left(g_{\phi}\left(\boldsymbol{\epsilon}^{(l)}, {\mathbf{x}}\right)\right) +o_p(1). \end{multline} for some function $f({\mathbf{y}})$ corresponding to $h({\mathbf{z}})$. \subsection{Clustering with variational models} \label{sec:dnns} To perform clustering embedded in the training of VAEs, we optimize the lower bound $\mathcal{L}(\boldsymbol{\theta},\boldsymbol{\phi};{\mathbf{x}}^{(i)})$ w.r.t. the recognition model parameters $\boldsymbol{\phi}$ and generative parameters $\boldsymbol{\theta}$ and assign clusters simultaneously. The value of the evidence lower bound (\ref{eq:elbo}) can be written as \begin{multline} \label{eq:elbbo} \mathcal{L}(\boldsymbol{\theta},\boldsymbol{\phi};{\mathbf{x}}^{(i)})=E_{q_{\boldsymbol{\phi}}({\mathbf{z}},{\textnormal{c}}|{\mathbf{x}}^{(i)})}[\log p_{\boldsymbol{\theta}}({\mathbf{x}}^{(i)}|{\mathbf{z}}) +\log p_{\boldsymbol{\theta}}({\mathbf{z}}|{\textnormal{c}})\\ + \log p_{\boldsymbol{\theta}}({\textnormal{c}}) - \log q_{\boldsymbol{\phi}}({\mathbf{z}}|{\mathbf{x}}^{(i)}) - \log q_{\boldsymbol{\phi}}({\textnormal{c}}|{\mathbf{z}})]. \end{multline} With the approximation (\ref{eq:etm}) applied to (\ref{eq:elbbo}), (\ref{eq:elbo}) is estimated with Stochastic Gradient Variational Bayes (SGVB) estimators and optimized by the Auto-Encoding Variational Bayes (AEVB) algorithm \cite{VAE}. The recognition model $q_{\boldsymbol{\phi}}({\bm{z}},c|{\mathbf{x}}^{(i)})$ and generative model $p_{\boldsymbol{\theta}}({\mathbf{x}}^{(i)}|{\mathbf{z}})$ are jointly trained by the encoder and the decoder, respectively. For each generated sample ${\bm{y}}^{(i,l)}$ corresponding to each input ${\mathbf{x}}^{(i)}$, we update the classes by \begin{equation} q_{\boldsymbol{\phi}}({\textnormal{c}}|{\bm{y}}^{(i,l)})=\frac{p_{\boldsymbol{\theta}}({\textnormal{c}} )p_{\boldsymbol{\theta}}({\bm{y}}^{(i,l)}|{\textnormal{c}})}{\sum_{c=1}^kp_{\boldsymbol{\theta}}({\textnormal{c}} )p_{\boldsymbol{\theta}}({\bm{y}}^{(i,l)}|{\textnormal{c}})}. \end{equation} Note that the parameters $\boldsymbol{\pi}$ in $p_{\boldsymbol{\theta}}(c)$ and $\boldsymbol{\mu}_z$ in $p_{\boldsymbol{\theta}}({\mathbf{z}}|{\textnormal{c}})$ are trained as model parameters. Finally, we construct an estimator of the marginal likelihood lower bound of the full $N$-sample data set based on mini-batches of size $M$: \begin{align} \begin{split} \mathcal{L}(\boldsymbol{\theta}, \boldsymbol{\phi} ; {\mathbf{x}}) &= \widetilde{\mathcal{L}}\left(\boldsymbol{\theta}, \boldsymbol{\phi} ; {\mathbf{x}}\right)+o_p(1)\\&=\frac{N}{M} \sum_{i=1}^{M} \widetilde{\mathcal{L}}\left(\boldsymbol{\theta}, \boldsymbol{\phi} ; {\mathbf{x}}^{(i)}\right)+o_p(1) \end{split} \end{align} with the mini-batch ${\mathbf{x}}^M = \{{\mathbf{x}}^{(i)}\}_{i=1}^M$ randomly drawn from the full data set ${\mathbf{x}}$. It is noted that the number of samples $L$ for each data point can be set to 1 as long as the mini-batch size $M$ is large enough, e.g. $M = 100$.
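As an illustration of this class update, the posterior $q_{\boldsymbol{\phi}}({\textnormal{c}}|{\bm{y}}^{(i,l)})$ for a Bernoulli mixture can be evaluated in log space as follows. This is a hypothetical minimal sketch: the arrays \texttt{mu} and \texttt{pi} stand in for the trained parameters $\boldsymbol{\mu}_z$ and $\boldsymbol{\pi}$.
\begin{verbatim}
import numpy as np

def update_class_posterior(y, mu, pi):
    """q(c|y) for a Bernoulli mixture, computed in log space.

    y : (d,) binary (or relaxed) latent sample
    mu: (k, d) Bernoulli means for each of the k classes
    pi: (k,) prior class probabilities
    """
    # log p(y|c) = sum_j [y_j log mu_cj + (1 - y_j) log(1 - mu_cj)]
    log_lik = y @ np.log(mu).T + (1 - y) @ np.log(1 - mu).T
    log_post = np.log(pi) + log_lik
    log_post -= log_post.max()           # stabilise the normalisation
    post = np.exp(log_post)
    return post / post.sum()

mu = np.array([[0.9, 0.8, 0.1, 0.2], [0.1, 0.2, 0.9, 0.8]])
pi = np.array([0.5, 0.5])
print(update_class_posterior(np.array([1., 1., 0., 0.]), mu, pi))
\end{verbatim}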
\section{Experiments} Our work, to the best of our knowledge, is the only one in the literature performing deep clustering on binary data representations. The most related work is VaDE \cite{Vade}, a deep clustering method that also trains VAEs with an embedded mixture model but focuses on jointly optimizing clustering and generation. The performance of our method will be compared with the classical clustering methods K-means and Gaussian mixture models (GMMs), as well as with deep clustering methods, on the hand-written digit image dataset MNIST~\cite{mnist}. \subsection{Evaluation Metric} It is not trivial to evaluate the performance of a clustering algorithm. We follow the same evaluation metric mentioned in \cite{Xie, Vade} to perform a comparison. With a given number of clusters, the clustering accuracy (ACC) is defined as \[ ACC=\max_{m\in\mathcal{M}}\frac{\sum_{i=1}^NI\{l_i=m(c_i)\}}{N}, \] where $N$ is the total number of samples, $l_i$ is the $i$-th ground-truth label, $c_i$ is the $i$-th cluster assignment obtained by the model, and $\mathcal{M}$ ranges over all possible mappings between predicted labels and true labels. The best mapping can be efficiently computed by the Hungarian algorithm \cite{doi:10.1002/nav.3800020109}. ACC values vary from 0 to 1, and a higher ACC value indicates a more accurate clustering. To evaluate the compression quality, the peak signal-to-noise ratio (PSNR) is generally used, measuring the distance of the reconstructed image from the original image. The higher the PSNR, the better the quality of the reconstruction. It is noted that acceptable PSNR for wireless transmission is from 20 dB to 25 dB \cite{PSNR}. To see the advantage of our model in the low-bit-rate scenario, we will evaluate both clustering performance and compression quality in terms of bits per pixel (BPP). Here, BPP is defined as the number of bits of information stored per pixel. A higher BPP indicates that more memory is required to store or display the image. \subsection{Experiment Settings} We train the model on the training set and then test the performance of our best model on the test set, so that the reported performance is convincing and generalizable. In training, we use feedforward artificial neural networks as the encoder and the decoder. All layers are fully connected and followed by a rectified linear unit (ReLU). As the optimizer, we use Adam \cite{Adam} to jointly optimize the full set of parameters with $\beta=(0.9, 0.999)$. The learning rate is initialized as $0.001$ and decreases every 10 epochs with a decay rate of 0.9 down to a minimum of $0.0002$. The true number of classes $K=10$ is assumed known. We repeat the experiments for different values of BPP, which represent different compression rates. The BPP value is determined by the dimension of the latent ${\mathbf{z}}$. For example, with $\dim({\mathbf{z}})=28$, one gray-scale input image generates a binary code of size $(1, 28)$ after compression, and we thus have $28/1024 = 0.02734375$ BPP in this compression step. The classical clustering methods K-means and GMMs are applied directly to raw image pixels with default settings. The results of VaDE are reported by re-running the code released with the original paper; the results we obtain differ somewhat from the reported ones because of different experimental settings and random seeds.
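For reference, both evaluation metrics can be implemented directly; the optimal ACC mapping is computed with the Hungarian algorithm as implemented in SciPy. The following is a minimal sketch assuming integer labels in $\{0,\ldots,K-1\}$ and images scaled to $[0,1]$:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(labels, clusters):
    """ACC: best accuracy over all mappings of cluster ids to labels."""
    k = max(labels.max(), clusters.max()) + 1
    count = np.zeros((k, k), dtype=int)   # count[i, j]: cluster i, label j
    for c, l in zip(clusters, labels):
        count[c, l] += 1
    row, col = linear_sum_assignment(-count)   # maximise matched count
    return count[row, col].sum() / labels.size

def psnr(original, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = np.mean((original - reconstructed) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

labels = np.array([0, 0, 1, 1, 2, 2])
clusters = np.array([1, 1, 0, 0, 2, 2])   # a relabelled perfect clustering
print(clustering_accuracy(labels, clusters))   # 1.0
\end{verbatim}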
\begin{table*}[ht] \centering \begin{tabular}{c|cccc} \hline Method& K-means&GMM&VaDE&VAB \\ Best Clustering Accuracy (\%)&55.37&42.22&95.30&71.69 \\ \hline \end{tabular} \caption{The clustering performance is compared on the MNIST test data. The performance of VAB is much better than that of the classical methods, K-means and GMMs. Although it is not comparable with the performance of VaDE, VAB achieves this result at much lower bits per pixel, as shown in Fig. 1(b), making it more suitable for compressed data.} \label{tab:my_label} \end{table*} \subsection{Experimental Results} \begin{figure*}[tb] \centering \subfloat[Subfigure 1 list of figures text][Clustering performance of VaDE and VAB against BPP]{ \includegraphics[width=0.5\textwidth]{cluster_acc.png} \label{cluster_acc}} \subfloat[Subfigure 2 list of figures text][Compression performance of VaDE and VAB against BPP]{ \includegraphics[width=0.5\textwidth]{psnr.png} \label{psnr}} \caption{The clustering performance and compression performance are shown in (a) and (b) respectively. All results are averaged over 10 experiments; the solid and dash-dotted lines show our method (VAB) and VaDE, respectively, in both panels. The grey area between two dashed lines shows the standard errors of the mean from 10 replications. Panel (a) shows that in the low BPP regime the clustering accuracy of VAB is comparable with VaDE, while its compression performance is much better than VaDE.} \label{fig} \end{figure*} We report all results averaged over 10 experiments corresponding to different BPP values. Figure \ref{fig} shows the clustering performance and the compression quality against BPP. The solid and dash-dotted lines stand for the mean values of our method (VAB) and VaDE, respectively, in both panels, while the filled grey area represents the standard error of the mean over 10 runs. It can be seen that our method can achieve comparable clustering accuracy at a very low bit rate. Meanwhile, the compression quality of our method outperforms the VaDE framework as shown in Figure \ref{fig}(b). Table \ref{tab:my_label} presents the clustering accuracy of our method alongside other baselines. Results indicate that our model VAB is more suitable for unsupervised clustering on compressed data, when compared with the state-of-the-art methods. \section{Discussion} In this paper, we proposed a new methodology that enables deep clustering in the compressed data domain. The method is presented as a novel amalgamation of Variational Autoencoders (VAEs) with Bernoulli mixture models (BMMs), where VAEs compress the raw data into representations with generative probability models and BMMs update clusters of the sampled binary representations. The Gumbel-Softmax distribution is applied to address the issues caused by discrete values. In the algorithm, we utilized a deep learning architecture to jointly train the model. The optimization target can be treated as a \textit{two-part loss function} consisting of a reconstruction loss and a clustering loss. The learned model greatly improves clustering accuracy compared with well-known clustering methods. Through an approximate mixture of discrete probability models, the proposed solution requires less storage complexity and has the potential to reduce transmission bandwidths. A direction for future work is to develop more learning tools and applications based on compressed data, such as high-dimensional binary data.
\section{Acknowledgements} The authors thank Professor Yuhong Yang and Feng Qian from the University of Minnesota for valuable discussions. \balance \bibliographystyle{IEEEbib}
\section{Introduction} One of the most surprising discoveries is that black holes behave as thermodynamic objects. Original forms of the first law of black hole thermodynamics do not include a pressure term, while the mass of a black hole is interpreted as the internal energy of spacetime. However, in this case, the Smarr relation [1] is no longer satisfied for a black hole with a cosmological constant. To obtain the generalized first law, some works suggested that the cosmological constant should be considered as a thermodynamic variable [2-6]. However, to the best of our knowledge, no physical interpretation was ever given. In 2009, Kastor \emph{et al.} [7] proposed that the cosmological constant can be thought of as a pressure. From geometric derivations of the Smarr formula for static anti-de Sitter (AdS) black holes, the mass of an AdS black hole can naturally be interpreted as the enthalpy of the spacetime. It should be noted that an extended version of the first law is completely consistent with the general Smarr formula [8], which is obtained by a scaling argument. Of especial interest is that, in the extended phase space including pressure, the equation of state for the charged AdS black hole has the behaviour of the van der Waals (vdW) fluid [9]. This has led to a number of investigations of black hole phase transitions from the viewpoint of traditional thermodynamics and chemistry. Two excellent review papers have been written on recent developments in this field [10,11]. However, despite extensive discussions, exact thermodynamical solutions for the black hole phase transition are still lacking. In 1982, Lekner [12] provided a parametric solution of the vdW liquid-vapor coexistence curve, which has been applied to the study of various phase transitions [13-15]. Johnston [16] described it as an elegant and very useful solution, and made a significant extension. It is found that the phase transitions of a charged AdS black hole are similar to those of the vdW fluid, but of course they differ fundamentally. For example, in a black hole system, the Maxwell construction is effective in the pressure-thermodynamic volume plane rather than the pressure-specific volume plane. This complicates the study of the phase transition for a charged AdS black hole. In this paper, the first-order phase transition of charged AdS black holes is discussed, and parametric solutions for the small-large black hole (SBH-LBH) coexistence curve are given. Once the analytical solution is obtained, different thermodynamic quantities can be calculated. In the next section we discuss the behaviour of the equation of state for a charged AdS black hole, and derive the coupled equations for the thermodynamic volumes of the SBH/LBH phases from the equation of state, with the aid of the Maxwell construction and the Clausius-Clapeyron equation. In section 3 we first introduce a new parameter $\omega$, and derive the decoupled reduced volume equation. We then give an exact solution of this equation, and show that all thermodynamic quantities in the SBH and LBH phases can be obtained from the solution. In section 4 we discuss the thermodynamic behaviors near the critical point, and show that, for large $Q$, the thermodynamic quantities have similar behaviors. Finally, section 5 contains further discussion and some concluding remarks.
\section{Volume equations for SBH/LBH phase transition} In this section, the entropy dependence and the charge dependence of the thermodynamic volumes along the SBH-LBH coexistence curve are discussed, and the equations for the volumes in terms of the entropy and charge are derived. We consider the proposal that, in four dimensions, the thermodynamic pressure is given by $p=-\Lambda/8\pi$, where $\Lambda$ is the cosmological constant. Then the equation of state for a charged AdS black hole reads [9] (we work in geometric units) \begin{equation} p=\frac{T}{v}-\frac{1}{2\pi v^{2}}+\frac{2Q^{2}}{\pi v^{4}}, \end{equation} where \begin{equation} v=2r_{+}=2(\frac{3V}{4\pi})^{1/3}. \end{equation} Here $v$ and $V$ are respectively the specific volume and the thermodynamic volume, given in terms of the event horizon radius $r_{+}$, $T$ is the black hole temperature, and $Q$ its charge. It has been shown that the equation of state (2.1) for fixed charge $Q$ has the behaviour of the vdW fluid. According to ordinary thermodynamics, when two phases are in equilibrium, their pressures and chemical potentials are equal. Unfortunately, it is impossible to define the chemical potential (Gibbs function per particle) for a black hole system. Therefore, the Maxwell construction is effective in the $p-V$ plane rather than the $p-v$ plane. In order to use the Maxwell construction, it is necessary to rewrite the equation of state in terms of the thermodynamic volume. Using $V=\pi v^{3}/6$, Eq.(2.1) becomes \begin{equation} p=\frac{1}{6}(\frac{4\pi}{3})^{1/3}[\frac{3T}{V^{1/3}}-(\frac{3}{4\pi})^{2/3}\frac{1}{V^{2/3}}+\frac{Q^{2}}{V^{4/3}}]. \end{equation} As in the $p-v$ plane, there exists a critical point: \begin{equation} P_{c}=\frac{1}{96\pi Q^{2}},\quad V_{c}=8\sqrt{6}\pi Q^{3},\quad T_{c}=\frac{1}{3\sqrt{6}\pi Q}, \end{equation} and hence $P_{c}V_{c}/T_{c}=3\pi Q^{2}/2$ (note that this differs from the vdW value $3/8$). The isotherm that passes through this point has an inflexion there. At temperatures $T<T_{c}$, each isotherm has a maximum and a minimum, while it becomes monotonic at temperatures $T>T_{c}$. A straight horizontal segment intersecting the isotherm corresponds to a first-order SBH/LBH phase transition, which terminates at the critical point with increasing temperature, where it converts to a second-order phase transition. In an extended phase space, the first law of black hole thermodynamics is given by \begin{equation} dM=TdS+Vdp+\Phi dQ. \end{equation} Here $\Phi$ and $M$ stand for the horizon electrostatic potential and black hole mass, and $S$ is its entropy. The Gibbs free energy can be obtained by a Legendre transformation as \begin{equation} dG=-SdT+Vdp+\Phi dQ. \end{equation} As for the vdW fluid, the first-order SBH/LBH phase transition for fixed charge $Q$ satisfies the conditions $T_{s}=T_{l}$, $p_{s}=p_{l}=P$, and $G_{s}=G_{l}$, where the subscripts "$s$" and "$l$" denote the thermodynamic variables in the SBH and LBH phases. By integrating $dG$ along the isotherm from one intersection ($V_{s}$) with the abscissa axis to the other ($V_{l}$) as shown in Fig.1, we find \begin{figure} \centering \includegraphics[width=1.2\textwidth,trim=100 320 0 320,clip]{fig1.pdf} \caption{\label{fig:i} The isotherm of a charged AdS black hole at temperature $T=0.95T_{c}$ with $Q=1$. The origin is located at $(V=0, p=P)$.} \end{figure} \begin{equation} P=\frac{1}{V_{l}-V_{s}}\int_{V_{s}}^{V_{l}}pdV, \end{equation} which is known as the Maxwell construction.
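As an aside, the critical point (2.4) can be recovered symbolically from the equation of state (2.1) by imposing $(\partial p/\partial v)_{T}=(\partial^{2}p/\partial v^{2})_{T}=0$. The following sketch uses the SymPy library; it is a consistency check rather than part of the derivation:
\begin{verbatim}
import sympy as sp

T, v, Q = sp.symbols('T v Q', positive=True)
p = T/v - 1/(2*sp.pi*v**2) + 2*Q**2/(sp.pi*v**4)

# inflexion point of the critical isotherm
sol = sp.solve([sp.diff(p, v), sp.diff(p, v, 2)], [T, v], dict=True)[0]
v_c, T_c = sol[v], sol[T]
P_c = sp.simplify(p.subs({v: v_c, T: T_c}))

print(v_c)                          # 2*sqrt(6)*Q
print(sp.simplify(T_c))             # sqrt(6)/(18*pi*Q) = 1/(3*sqrt(6)*pi*Q)
print(P_c)                          # 1/(96*pi*Q**2)
print(sp.simplify(P_c*(sp.pi*v_c**3/6)/T_c))   # 3*pi*Q**2/2
\end{verbatim}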
When Eq.(2.3) is substituted into Eq.(2.7) and the integration is carried out, the Maxwell construction can be written in the form \begin{equation} P(V_{l}-V_{s})=(\frac{\pi}{6})^{1/3}[\frac{3}{2}T(V_{l}^{2/3}-V_{s}^{2/3})-(\frac{3}{4\pi})^{2/3}(V_{l}^{1/3}-V_{s}^{1/3})-Q^{2}(V_{l}^{-1/3}-V_{s}^{-1/3})]. \end{equation} In terms of Eq.(2.3), the condition $P=p_{s}=p_{l}$ reads \begin{equation} P=\frac{1}{6}(\frac{4\pi}{3})^{1/3}[\frac{3T}{V_{s}^{1/3}}-(\frac{3}{4\pi})^{2/3}\frac{1}{V_{s}^{2/3}}+\frac{Q^{2}}{V_{s}^{4/3}}] =\frac{1}{6}(\frac{4\pi}{3})^{1/3}[\frac{3T}{V_{l}^{1/3}}-(\frac{3}{4\pi})^{2/3}\frac{1}{V_{l}^{2/3}}+\frac{Q^{2}}{V_{l}^{4/3}}]. \end{equation} We can use these to eliminate $P$, obtaining \begin{equation} T=\frac{1}{3}(\frac{1}{V_{l}^{1/3}}+\frac{1}{V_{s}^{1/3}})[(\frac{3}{4\pi})^{2/3}-Q^{2}(\frac{1}{V_{l}^{2/3}}+\frac{1}{V_{s}^{2/3}})]. \end{equation} Equations (2.8)-(2.10) together yield the desired equation \begin{equation} (\frac{v_{l}}{2Q}\frac{v_{s}}{2Q}-1)^{2}-(\frac{v_{l}}{2Q}+\frac{v_{s}}{2Q})^{2}=1, \end{equation} where $v_{l}=2(\frac{3V_{l}}{4\pi})^{1/3}$ and $v_{s}=2(\frac{3V_{s}}{4\pi})^{1/3}$ are the specific volumes in the LBH and SBH phases, respectively. On the other hand, as shown in Fig.1, $V_{l}$ and $V_{s}$ depend on the temperature $T$. We differentiate (2.7) and have \begin{equation} (\frac{dp}{dT})_{Q}=\frac{1}{V_{l}-V_{s}}\int_{V_{s}}^{V_{l}}\frac{\partial p}{\partial T}dV. \end{equation} Comparing this with the standard form of the Clausius-Clapeyron equation [17], \begin{equation} (\frac{dp}{dT})_{Q}=\frac{\Delta S}{V_{l}-V_{s}}, \end{equation} where $\Delta S=S_{l}-S_{s}$, we find from Eqs.(2.12) and (2.13) that the change of entropy $\Delta S$ can be written as \begin{equation} \Delta S=\int_{V_{s}}^{V_{l}}\frac{\partial p}{\partial T}dV. \end{equation} Integration then yields \begin{equation} (\frac{v_{l}}{2\sqrt{\Delta S/\pi}})^{2}-(\frac{v_{s}}{2\sqrt{\Delta S/\pi}})^{2}=1. \end{equation} When an SBH crosses the coexistence curve and becomes an LBH, the thermodynamic volumes are not merely allowed, but required, to satisfy Eqs.(2.11) and (2.15). In this paper these equations are called the volume equations at the SBH/LBH transition. \section{Exact solutions to volume equations} In this section, we discuss the analytical solutions of the volume equations (2.11) and (2.15). For this purpose, we introduce a new parameter $\omega$ defined by \begin{equation} \omega=(\frac{\Delta S}{\pi Q^{2}})^{2}. \end{equation} Then the decoupled equation for the volume $\widehat{V_{s}}$ can be written in terms of the parameter $\omega$ as \begin{equation} 1296\widehat{V_{s}}^{8/3}+432(\sqrt{\omega}-2)\widehat{V_{s}}^{6/3}+36(\omega-6\sqrt{\omega}-12)\widehat{V_{s}}^{4/3}-12(\omega+6\sqrt{\omega})\widehat{V_{s}}^{2/3}+\omega=0, \end{equation} where $\widehat{V_{s}}=V_{s}/V_{c}$ is called the reduced volume. In general, reduced variables are obtained by dividing a quantity by its value at the critical point. Equation (3.2) is a quartic equation in $\widehat{V_{s}}^{2/3}$.
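Before writing down the closed-form solution, it is instructive to verify Eq.(2.11) numerically by solving the coexistence conditions (2.8) and (2.9) on a subcritical isotherm. The following is a sketch using SciPy; the initial guesses for $v_{s}$ and $v_{l}$ are assumptions and may need tuning so that the root finder does not collapse onto the trivial solution $v_{s}=v_{l}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

Q = 1.0
vc = 2*np.sqrt(6)*Q                      # critical specific volume
Tc = 1/(3*np.sqrt(6)*np.pi*Q)

def p(v, T):                             # equation of state (2.1)
    return T/v - 1/(2*np.pi*v**2) + 2*Q**2/(np.pi*v**4)

def F(V, T):                             # antiderivative of p dV, cf. (2.8)
    return (np.pi/6)**(1/3)*(1.5*T*V**(2/3)
            - (3/(4*np.pi))**(2/3)*V**(1/3) - Q**2*V**(-1/3))

def coexistence(x, T):                   # equal pressures + Maxwell rule
    vs, vl = x
    Vs, Vl = np.pi*vs**3/6, np.pi*vl**3/6
    P = p(vs, T)
    return [p(vl, T) - P, P*(Vl - Vs) - (F(Vl, T) - F(Vs, T))]

T = 0.9*Tc
vs, vl = fsolve(coexistence, [0.5*vc, 2.0*vc], args=(T,))
a, b = vl/(2*Q), vs/(2*Q)
print((a*b - 1)**2 - (a + b)**2)         # Eq.(2.11): should return ~1
\end{verbatim}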
According to algebraic theory, the solution is \begin{equation}\large \widehat{V_{s}}^{2/3}=\left\{\begin{array}{lll}\frac{1}{12}(2-\sqrt{\omega})+\frac{1}{12\sqrt{3}}\sum ^{1}_{k=-1}[\omega+36+(-1)^{k}2\sqrt{\omega^{2}+72\omega+144}\cos\theta_{k}]^{1/2}, \\\ \textmd{if}\ \omega\leq12(2\sqrt{3}-3),\\&\\ \frac{1}{12}(2-\omega+\frac{1}{\sqrt{3}}\sqrt{\omega+\Omega_{+}+36})+\frac{1}{6\sqrt{3}}[(\omega-\frac{\Omega_{+}}{2}+36)^{2}+\frac{3}{4}\Omega_{-}^{2}]^{1/4}\cos\frac{\theta}{2}, \\\ \textmd{if}\ \omega\geq12(2\sqrt{3}-3), \end{array} \right. \end{equation} where \begin{equation} \theta_{k}=k\frac{\pi}{3}+\frac{1}{3}\arccos\frac{\omega^{3}+108\omega^{2}+2160\omega-1728}{(\omega^{2}+72\omega+144)^{3/2}}, \\\ (k=-1, 0, 1), \end{equation} \begin{equation} \theta=\arctan\frac{\sqrt{3}\Omega_{-}}{2(\omega+36)-\Omega_{+}}, \end{equation} \begin{eqnarray}\large \Omega_{\pm}&=&[\omega^{3}+108\omega^{2}+2160\omega-1728+96\sqrt{3\omega(\omega^{2}+72\omega-432)}]^{1/3}\nonumber\\&&\pm [\omega^{3}+108\omega^{2}+2160\omega-1728-96\sqrt{3\omega(\omega^{2}+72\omega-432)}]^{1/3}. \end{eqnarray} It should be noted that the other solutions of Eq.(3.2) are unphysical. Substituting Eqs.(2.2) and (2.3) into Eq.(2.15), the expression for the reduced volume $\widehat{V_{l}}=V_{l}/V_{c}$ can be obtained, \begin{equation} \widehat{V_{l}}^{2/3}=\widehat{V_{s}}^{2/3}+\frac{1}{6}\sqrt{\omega}. \end{equation} \begin{figure} \centering \includegraphics[width=1.2\textwidth,trim=100 320 0 300,clip]{fig2.pdf} \caption{\label{fig:i}The behaviour of $\widehat{V_{l}}^{2/3}$ and $\widehat{V_{s}}^{2/3}$ as a function of parameter $\omega$. The curve above the abscissa axis is for $\widehat{V_{l}}^{2/3}$, while the one below is for $\widehat{V_{s}}^{2/3}$. The origin is located at $(\omega=24\sqrt{3}-36, \widehat{V}^{2/3}=1)$.} \end{figure} From Eqs.(3.3) and (3.7), it is clear that the reduced volumes in the SBH and LBH phases are determined only by the parameter $\omega=(\Delta S/\pi Q^{2})^{2}$. Interestingly enough, there exists a demarcation point, $\omega_{d}=12(2\sqrt{3}-3)$, and each reduced volume is given by a different function on either side of the demarcation point. As shown in Fig.2, the curves in the different quadrants correspond to the different functions. In particular, each reduced volume is continuous at $\omega_{d}$, but its derivative is not. Note that expressions (3.3) and (3.7) are exact, and all other thermodynamic quantities in the SBH and LBH phases can be derived from these two fundamental relations. \section{Thermodynamic behaviors near the critical point} The physical content of the critical point is very rich. The phase equilibrium curve terminates at the critical point in the $p-T$ plane, while the SBH and LBH coexisting states become identical there, etc. It is therefore very important to investigate the thermodynamic behaviors near the critical point. In this section, we give approximate expressions for some thermodynamic functions near the critical point. When the black hole system passes through the critical point, the parameter $\omega$ vanishes (due to $\Delta S=0$).
Using Eqs.(2.4), (2.9), (2.10), (3.3), (3.7), and the Taylor expansion about $\omega=0$, we find the approximate values of the reduced variables $\widehat{P}$, $\widehat{T}$, $\widehat{V_{s}}$ and $\widehat{V_{l}}$ near the critical point as follows: \begin{equation} \widehat{P}=\frac{P}{P_{c}}=1-\frac{1}{432}\omega+\frac{5}{373248}\omega^{2}+O(\omega^{3}), \end{equation} \begin{equation} \widehat{T}=\frac{T}{T_{c}}=1-\frac{1}{1152}\omega+\frac{11}{2654208}\omega^{2}+O(\omega^{3}), \end{equation} \begin{equation} \widehat{V_{s}}=\frac{V_{s}}{V_{c}}=1-\frac{1}{8}\omega^{1/2}+\frac{11}{1152}\omega+O(\omega^{3/2}), \end{equation} \begin{equation} \widehat{V_{l}}=\frac{V_{l}}{V_{c}}=1+\frac{1}{8}\omega^{1/2}+\frac{11}{1152}\omega+O(\omega^{3/2}). \end{equation} Similarly, one can write the Clausius-Clapeyron equation (2.13) in terms of the reduced variables as \begin{equation} (\frac{d\widehat{P}}{d\widehat{T}})_{Q}=\frac{8}{3}-\frac{7}{1296}\omega+\frac{299}{8957952}\omega^{2}+O(\omega^{3}). \end{equation} On the other hand, Ref.[18] introduced the number density, defined as $n=1/v$, to investigate the microscopic structure of charged AdS black hole phase transitions. When one black hole phase changes into another, the number density also suffers a sudden change. Of interest are the ratio of the LBH and SBH reduced densities, \begin{equation} \frac{\widehat{n_{l}}}{\widehat{n_{s}}}=(\frac{V_{s}}{V_{l}})^{1/3}=1-\frac{1}{12}\omega^{1/2}+\frac{1}{288}\omega+O(\omega^{3/2}), \end{equation} the reduced density average, \begin{equation} <\widehat{n}>=\frac{1}{2}(\widehat{n_{l}}+\widehat{n_{s}})=\frac{1}{2}(\widehat{V_{l}}^{-1/3}+\widehat{V_{s}}^{-1/3})=1+\frac{1}{3456}\omega-\frac{37}{23887872}\omega^{2}+O(\omega^{3}), \end{equation} and the reduced density difference, \begin{equation} \Delta \widehat{n}=\widehat{n_{s}}-\widehat{n_{l}}=\frac{1}{12}\omega^{1/2}-\frac{1}{4608}\omega^{3/2}+\frac{125}{95551488}\omega^{5/2}+O(\omega^{7/2}). \end{equation} Equations (4.1)-(4.8) describe the thermodynamic behaviors as $\omega\rightarrow0$, for which there are two cases: $\Delta S\rightarrow0$ and $Q\rightarrow\infty$. This shows that the behaviors of some reduced variables for large $Q$ are the same as those near the critical point. \section{Discussion and conclusion} We have investigated the parametric solution of the SBH/LBH coexistence curve. When an SBH crosses the coexistence curve in the $p-T$ plane and becomes an LBH, the volumes satisfy Eqs.(2.11) and (2.15), which have elegant algebraic structures. Actually Eq.(2.15) confirms what we expected: the entropy change is one quarter of the area difference of the LBH and SBH. The physical solutions of Eqs.(2.11) and (2.15) are given by expressions (3.3) and (3.7). All properties of the coexistence curve in terms of the parameter $\omega=(\Delta S/\pi Q^{2})^{2}$ can be studied from these expressions. It should be noted that each thermodynamic quantity is described by a piecewise analytic function with the demarcation point located at $\omega_{d}=12(2\sqrt{3}-3)$. The physical interpretation of this point is still an open question. We have also considered the thermodynamic behaviors as $\omega\rightarrow0$. From Eqs.(4.1)-(4.8), one can easily obtain some critical exponents and amplitudes for SBH-LBH phase transitions. For example, inverting (4.2) gives $\omega \simeq 1152(1-\widehat{T})$ to leading order, so that (4.8) yields $\Delta \widehat{n}\rightarrow2\sqrt{2}(1-\widehat{T})^{1/2}$, which shows that the $\beta$ exponent and $b$ amplitude, defined by [16] $\Delta \widehat{n}=b(1-\widehat{T})^{\beta}$, take the values $1/2$ and $2\sqrt{2}$, respectively.
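This amplitude can be checked directly from the expansions (4.2) and (4.8); the following sketch uses the SymPy library, with $t=\omega^{1/2}$ as the small parameter:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', positive=True)            # t = omega**(1/2)
That = 1 - t**2/1152 + 11*t**4/2654208        # reduced temperature, Eq.(4.2)
dn = t/12 - t**3/4608 + 125*t**5/95551488     # density difference, Eq.(4.8)

# amplitude b in dn = b*(1 - That)**(1/2) as omega -> 0
print(sp.limit(dn / sp.sqrt(1 - That), t, 0))   # 2*sqrt(2)
\end{verbatim}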
The exponent is exactly the same as for the vdW fluid, but the amplitude is different. These near-critical properties are the same as those for large $Q$. \\\\{\bf Acknowledgments} \\\\This work was supported by the National Natural Science Foundation of China under Grants No. 11475148 and No. 11075141.
\section{Introduction} \emph{Weak type estimates} play a fundamental role in harmonic analysis. A prevalent example of a weak type estimate is one satisfied by the Hardy-Littlewood maximal operator $M_{HL}$. Recall that this operator is defined on $L^1(\mathbb{R}^n)$ by $$M_{HL}f(x) = \sup_{x \in B}\frac{1}{|B|}\int_B |f|\;,$$ where the supremum is over balls $B$ in $\mathbb{R}^n$ containing $x$. $M_{HL}$ satisfies the weak type $(1,1)$ inequality $$\left|\left\{x \in \mathbb{R}^n : M_{HL}f(x) > \alpha\right\}\right| \leq C_n \frac{1}{\alpha} \int_{\mathbb{R}^n}|f|\;.$$ The \emph{strong maximal operator} $M_{str}$ is defined on $L^1(\mathbb{R}^n)$ by $$M_{str} f(x) = \sup_{x \in R}\frac{1}{|R|}\int_R |f|\;,$$ where the supremum is over all rectangular parallelepipeds in $\mathbb{R}^n$ whose sides are parallel to the coordinate axes. $M_{str}$ does not enjoy a weak type $(1,1)$ inequality; instead it satisfies the weak type $(L (\log L)^{n-1}, L^1)$ estimate \begin{equation}\label{e000}\left|\left\{x \in \mathbb{R}^n : M_{str} f(x) > \alpha\right\}\right| \leq C_n \int_{\mathbb{R}^n}\frac{|f|}{\alpha}\left(1 + \log^{+}\frac{|f|}{\alpha}\right)^{n-1}\;.\end{equation} Proofs of these estimates may be found in, e.g., \cite{guzman}. Given a collection of sets $\mathcal{B}$ in $\mathbb{R}^n$, we may define the associated maximal operator $M_\mathcal{B}$ by $$M_\mathcal{B}f(x) = \sup_{x \in R \in \mathcal{B}} \frac{1}{|R|}\int_R |f|\;.$$ If $\mathcal{B}$ is a proper subset of the collection of rectangular parallelepipeds, we refer to $\mathcal{B}$ as a \emph{rare basis} of parallelepipeds. We would expect smaller collections $\mathcal{B}$ to be associated to better optimal weak type estimates for $M_\mathcal{B}$. Indeed, if $\mathcal{B}$ were the collection of $n$-dimensional cubes in $\mathbb{R}^n$ we would have that $M_\mathcal{B}$ behaves like the Hardy-Littlewood maximal operator and satisfies a weak type $(1,1)$ inequality; if $\mathcal{B}$ were the collection of all rectangular parallelepipeds in $\mathbb{R}^n$ with sides parallel to the axes we would have that $M_\mathcal{B}$ is the strong maximal operator $M_{str}$ and satisfies the $(L (\log L)^{n-1}, L^1)$ estimate indicated above. It is the case where $\mathcal{B}$ is an intermediate collection that most interests us here. The most satisfying result to date along these lines is due to Stokolos, who in \cite{stokolos1988} (see also \cite{stokolos2005, stokolos2006}) proved the following. \begin{prop}\label{prop2} Let $\mathcal{B}$ be a translation invariant basis of rectangles in $\mathbb{R}^2$ whose sides are parallel to the coordinate axes. If $M_\mathcal{B}$ does not satisfy the weak type $(1,1)$ estimate $$|\{x \in \mathbb{R}^2 : M_\mathcal{B} f(x) > \alpha\}| \leq C \int_{\mathbb{R}^2} \frac{|f|}{\alpha}$$ then $M_\mathcal{B}$ satisfies the weak type estimate $$\left|\left\{x \in \mathbb{R}^2 : M_\mathcal{B} f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^2} \frac{|f|}{\alpha} \left(1 + \log^+ \frac{|f|}{\alpha}\right)\;$$ but does not satisfy a weak type estimate of the form $$|\{x \in \mathbb{R}^2 : M_\mathcal{B} f(x) > \alpha\}| \leq C \int_{\mathbb{R}^2} \phi\left(\frac{|f|}{\alpha}\right)$$ for any nonnegative convex increasing function $\phi$ such that $\phi(x) = o(x \log x)$ as $x$ tends to infinity.
\end{prop} This result tells us that there is a certain ``discreteness'' associated to optimal weak type estimates for maximal operators associated to rare bases of rectangles in $\mathbb{R}^2$; the optimal weak type estimate must be of weak type $(1,1)$ or of weak type $(L \log L, L^1)$, but not of the form, say, $(L(\log L)^{1/2}, L^1)\;.$ At the present time there are no satisfactory analogues of Proposition \ref{prop2} for rare bases of rectangular parallelepipeds whose sides are parallel to the coordinate axes in $\mathbb{R}^n$ for $n \geq 3$. We conjecture, however, the following. \begin{con}\label{con1} Let $\mathcal{B}$ be a translation invariant collection of rectangular parallelepipeds in $\mathbb{R}^n$ whose sides are parallel to the coordinate axes. Then there exists an \emph{integer} $1 \leq k \leq n$ such that $M_\mathcal{B}$ satisfies the weak type estimate \begin{equation}\label{e0}\left|\left\{x \in \mathbb{R}^n : M_\mathcal{B}f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^n} \frac{|f|}{\alpha}\left(1 + \log^+ \frac{|f|}{\alpha}\right)^{k-1}\end{equation} but such that $M_\mathcal{B}$ satisfies no estimate of the form \begin{equation}\label{e1}\left|\left\{x \in \mathbb{R}^n : M_\mathcal{B}f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^n} \phi\left(\frac{|f|}{\alpha}\right)\end{equation} whenever $\phi: [0, \infty) \rightarrow [0, \infty)$ is a convex increasing function satisfying $\phi(x) = o(x (\log x)^{k-1})$ as $x$ tends to infinity. \end{con} A significant contributor to the lack of progress on Conjecture \ref{con1} in dimensions $n=3$ and higher is the relative lack of meaningful classes of rare bases exhibiting known sharp weak type estimates. In \cite{dm2017}, D'Aniello and Moonens provided a sufficient condition on a basis $\mathcal{B}$ of rectangular parallelepipeds in $\mathbb{R}^n$ so that estimate (\ref{e0}) is optimal for $k = n$. Using the Fubini theorem and the weak type estimate (\ref{e000}) for the strong maximal operator one can show that if the basis $\mathcal{B}$ consists of all rectangular parallelepipeds in $\mathbb{R}^3$ with sidelengths of the form $s, s, t$, the associated maximal operator $M_\mathcal{B}$ satisfies estimate (\ref{e0}) for $k=2$ but not estimate (\ref{e1}) for any nonnegative convex increasing function $\phi$ satisfying $\phi(x) = o(x (\log x))$. (See a related discussion of this basis in the seminal paper \cite{zygmund1967} of Zygmund, which initiated the topic of rare bases in the subject of differentiation of integrals.) In the more involved argument in \cite{soria}, Soria proved that if $\mathcal{B}$ consists of all rectangular parallelepipeds in $\mathbb{R}^3$ with sidelengths of the form $s, \frac{1}{s}, t$, then $M_\mathcal{B}$ also satisfies estimate (\ref{e0}) for $k=2$ but not estimate (\ref{e1}) for any nonnegative convex increasing function $\phi$ satisfying $\phi(x) = o(x (\log x))$. \footnote{ It was in this same paper that Soria disproved the \emph{Zygmund Conjecture}. \cite{fefbeijing} provides a good introduction to the Zygmund Conjecture for the interested reader. A recent class of counterexamples to the Zygmund Conjecture due to Rey may be found in \cite{rey2020}. Important contexts where the Zygmund Conjecture does hold are due to A. C\'ordoba \cite{cordoba}; extensions of C\'ordoba's associated covering lemma techniques to higher dimensions due to R.
Fefferman and Pipher may be found in \cite{fp2005}.} The most recent result to date along these lines is due to Dmitrishin, Hagelstein, and Stokolos, who proved in \cite{dms2021} that if $\mathcal{B}$ is a collection of rectangular parallelepipeds in $\mathbb{R}^3$ whose sides are parallel to the coordinate axes and such that $\mathcal{B}$ contains parallelepipeds with sidelengths of the form $s, \frac{2^N}{s} , t $, where $s, t > 0$ and $N$ lies in an infinite subset of the integers, then the associated geometric maximal operator $M_\mathcal{B}$ satisfies the weak type estimate (\ref{e0}) for $k=3$ but does not satisfy the estimate (\ref{e1}) for any nonnegative convex increasing function $\phi$ satisfying $\phi(x) = o(x (\log x)^2)$. The argument in the latter paper is quite delicate and utilizes the concept of \emph{crystallization}, introduced by Stokolos in \cite{stokolos1988} and developed further in \cite{dms2021, hs2011, stokolos2005,stokolos2006}. The purpose of this paper is to provide another class of natural examples of rare bases in $\mathbb{R}^3$ for which Conjecture \ref{con1} holds. The associated proof also involves crystallization, but in many respects it is more straightforward than the argument in \cite{dms2021} as we are here able to exploit the \emph{homothecy invariance} of the bases considered. It is our hope that the theorem is not only of intrinsic interest, but that the techniques of proof might be used in the future to help resolve Conjecture \ref{con1} in the special but important case of homothecy invariant bases. Our main result is the following. \begin{thm}\label{t1} Let $\mathcal{B}$ be a homothecy invariant collection of rectangular parallelepipeds in $\mathbb{R}^3$ whose sides are parallel to the coordinate axes and with sidelengths of the form $s, 2^j s, t$, where $j$ lies in a nonempty set $S \subset \mathbb{Z}$. If $S$ is a finite set, then the associated geometric maximal operator $M_\mathcal{B}$ satisfies a weak type estimate of the form \begin{equation} \label{e2}\left|\left\{x \in \mathbb{R}^3 : M_{\mathcal{B}}f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^3} \frac{|f|}{\alpha}\left(1 + \log^+ \frac{|f|}{\alpha}\right)\;\end{equation} but does not satisfy an estimate of the form \begin{equation}\label{e5}\left|\left\{x \in \mathbb{R}^3 : M_{\mathcal{B}}f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^3} \phi\left(\frac{|f|}{\alpha}\right)\end{equation} for any convex increasing function $\phi: [0, \infty) \rightarrow [0, \infty)$ satisfying the condition \begin{equation}\label{e6}\lim_{x \rightarrow \infty}\frac{\phi(x)}{x (\log(1 + x))} = 0\;.\end{equation} \\ If $S$ is an infinite set, then the associated geometric maximal operator $M_\mathcal{B}$ satisfies a weak type estimate of the form $$\left|\left\{x \in \mathbb{R}^3 : M_{\mathcal{B}}f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^3} \frac{|f|}{\alpha} \left(1 + \log^+ \frac{|f|}{\alpha}\right)^{2}$$ \noindent but does not satisfy an estimate of the form $$\left|\left\{x \in \mathbb{R}^3 : M_{\mathcal{B}}f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^3} \phi\left(\frac{|f|}{\alpha}\right)$$ for any convex increasing function $\phi: [0, \infty) \rightarrow [0, \infty)$ satisfying the condition $$\lim_{x \rightarrow \infty}\frac{\phi(x)}{x (\log(1 + x))^2} = 0\;.$$ \end{thm} Note that taking $S = \{0\}$ recovers the basis of parallelepipeds with sidelengths of the form $s, s, t$ discussed above, consistent with the finite case of the theorem. The remainder of the paper is devoted to a proof of this theorem.
We remark that the statement of Theorem \ref{t1} is very similar to that of Theorem 1 of \cite{dms2021}, but there are nonetheless significant differences between these two results. In particular, the basis $\mathcal{B}$ in the latter paper consists of rectangular parallelepipeds whose sides are parallel to the coordinate axes with sidelengths of the form $s, \frac{2^N}{s}, t$, where $N$ lies in a nonempty set $S$ of integers. To the best of our understanding neither result follows from the other, in large part because the latter basis lacks the \emph{dilation invariance} enjoyed by the former, and because, even for finite sets $S$, the possible ratios of the first two sidelengths of parallelepipeds in the latter basis range over all of $(0,\infty)$. \\ {{\bf{Acknowledgment:} } We wish to thank the referee for helpful suggestions regarding this paper.} \section{Proof of Theorem \ref{t1}} \begin{proof}[Proof of Theorem \ref{t1}] If $S$ is finite, we may itemize the elements of $S$ as $j_1 , \ldots, j_N$. Subsequently we may express $M_\mathcal{B}$ as a supremum of maximal functions of the form $M_{\mathcal{B}_k}$, where $\mathcal{B}_k$ consists of all parallelepipeds in $\mathbb{R}^3$ with sidelengths of the form $s, 2^{j_k}s, t$. From the paper \cite{fava1972} of Fava it readily follows that estimate (\ref{e2}) is satisfied. By testing the maximal operator $M_\mathcal{B}$ on characteristic functions of cubes in $\mathbb{R}^3$, one can readily show that estimate (\ref{e5}) does not hold for any convex increasing function $\phi:[0,\infty) \rightarrow [0, \infty)$ satisfying the limit (\ref{e6}). \\ We now turn to the case in which $S$ is an infinite set. First we note that $M_\mathcal{B}$ satisfies the estimate $$\left|\left\{x \in \mathbb{R}^3 : M_{\mathcal{B}}f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^3} \frac{|f|}{\alpha} \left(1 + \log^+ \frac{|f|}{\alpha}\right)^{2}$$ as it is dominated by the strong maximal operator $M_{str}$ acting on measurable functions in $\mathbb{R}^3$. It remains to show that $M_\mathcal{B}$ does not satisfy an estimate of the form $$\left|\left\{x \in \mathbb{R}^3 : M_{\mathcal{B}}f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^3} \phi\left(\frac{|f|}{\alpha}\right)$$ for any convex increasing function $\phi: [0, \infty) \rightarrow [0, \infty)$ satisfying the condition $$\lim_{x \rightarrow \infty}\frac{\phi(x)}{x (\log(1 + x))^2} = 0\;.$$ This is the primary difficulty we face. Note that we cannot show that $M_\mathcal{B}$ fails to satisfy such an estimate simply by testing $M_{\mathcal{B}}$ on the characteristic function of a cube. Instead we need to utilize ideas involving \emph{crystallization} introduced by Stokolos in \cite{stokolos1988} and further developed by Dmitrishin, Hagelstein and Stokolos in \cite{dms2021, hs2011, stokolos2005, stokolos2006}. \\ Let $N > 1$ be a positive integer. Since $S$ is an infinite set contained in the integers, we may assume without loss of generality that there exist natural numbers \mbox{$j_1 > j_2 > \cdots > j_N$} so that any parallelepiped in $\mathbb{R}^3$ whose sides are parallel to the coordinate axes with sidelengths of the form $ 2^{-k + 1}, 2^{-j_k}, t$ lies in $\mathcal{B}$. Moreover, for technical reasons that will become apparent later, we also assume without loss of generality that $j_{k-1} > N + j_{k}$.
Let \begin{align} R_1 &= [0,1] \times [0, 2^{-j_1}],\notag \\R_2 &= [0, \frac{1}{2}] \times [0, 2^{-j_2}], \notag \\ &\vdots \notag \\R_N &= [0, 2^{-N+1}] \times [0, 2^{-j_N}]\;.\notag \end{align} We define the Rademacher function $r_0(t)$ by $$r_0(t) = \chi_{[0, \frac{1}{2}]}(t) - \chi_{(\frac{1}{2},1)}(t)\;,$$ where we extend $r_0(t)$ to be periodic by $r_0(t+1) = r_0(t)\;.$ Let $C_{j_1, \ldots, j_N}$ be defined by $$C_{j_1, \ldots, j_N} = \left\{t \in [0,1] : \sum_{k=1}^N r_0(2^{j_k}t) = N\right\}\;.$$ Observe that $m_1(C_{j_1, \ldots, j_N}) = 2^{-N}$, where we let $m_k(A)$ denote the $k$-dimensional Lebesgue measure of a set $A \subset \mathbb{R}^k$. (That this holds can be seen by noting that the sets $\{t \in [0,1]: r_0(2^{j_k}t) = 1\}$ correspond to $N$ mutually independent events all of probability $\frac{1}{2}$.) Define now the set $E_{j_1, \ldots, j_N}$ (which we will abbreviate as $E_N$) in $\mathbb{R}^2$ by $$E_{j_1, \ldots, j_N} = [0, 2^{-N+1}] \times C_{j_1, \ldots, j_N}\;.$$ \\ \begin{figure}[ht] \centering \def\svgwidth{300pt} \begingroup% \makeatletter% \providecommand\color[2][]{% \errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}% \renewcommand\color[2][]{}% }% \providecommand\transparent[1]{% \errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}% \renewcommand\transparent[1]{}% }% \providecommand\rotatebox[2]{#2}% \newcommand*\fsize{\dimexpr\f@size pt\relax}% \newcommand*\lineheight[1]{\fontsize{\fsize}{#1\fsize}\selectfont}% \ifx\svgwidth\undefined% \setlength{\unitlength}{299.61127461bp}% \ifx\svgscale\undefined% \relax% \else% \setlength{\unitlength}{\unitlength * \real{\svgscale}}% \fi% \else% \setlength{\unitlength}{\svgwidth}% \fi% \global\let\svgwidth\undefined% \global\let\svgscale\undefined% \makeatother% \begin{picture}(1,1)% \lineheight{1}% \setlength\tabcolsep{0pt}% \put(0,0){\includegraphics[width=\unitlength,page=1]{p45figddesktop.pdf}}% \put(0.28200542,0.40657373){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$R_3$\end{tabular}}}}% \put(0.53114208,0.14186605){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$R_2$\end{tabular}}}}% \put(0.20415019,0.73356564){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$E_{5,3,1}$\end{tabular}}}}% \put(0.82699191,0.04843978){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$R_1$\end{tabular}}}}% \put(0,0){\includegraphics[width=\unitlength,page=2]{p45figddesktop.pdf}}% \end{picture}% \endgroup% \caption{The set $E_{5,3,1}$ together with rectangles $R_1$, $R_2$, and $R_3$} \label{aa3} \end{figure} Figure \ref{aa3} provides an illustration of the case that $j_1 = 5$, $j_2 = 3$, and $j_3 = 1$, indicating the set $E_{5,3,1}$ together with associated rectangles $R_1$, $R_2$, and $R_3$. \\ Recall that we are assuming $j_{k-1} > N + j_{k}$. \\ Note that $$\frac{1}{m_2(R_k)}\int_{R_k}\chi_{E_N} = 2^{-N}$$ for each $k$.
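To illustrate the independence argument in a small case (purely illustrative; this particular choice of exponents is not used in the sequel), take $N = 2$ and $(j_1, j_2) = (3,1)$. Up to sets of measure zero we have $$\left\{t \in [0,1] : r_0(2t) = 1\right\} = \left[0,\frac{1}{4}\right] \cup \left[\frac{1}{2},\frac{3}{4}\right], \qquad \left\{t \in [0,1] : r_0(2^3 t) = 1\right\} = \bigcup_{m=0}^{7}\left[\frac{m}{8}, \frac{m}{8} + \frac{1}{16}\right],$$ so that $$C_{3,1} = \left[0,\frac{1}{16}\right] \cup \left[\frac{1}{8}, \frac{3}{16}\right] \cup \left[\frac{1}{2}, \frac{9}{16}\right] \cup \left[\frac{5}{8}, \frac{11}{16}\right]$$ and $m_1(C_{3,1}) = 4 \cdot \frac{1}{16} = 2^{-2}$, as claimed.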
For each $k$, we let $\{R_{k,j}\}_j$ be the collection of all the vertical translates $ R^\prime$ in $\mathbb{R}^2$ of the rectangle $R_k$ that are dyadic rectangles and such that $$\frac{1}{m_2(R^\prime)}\int_{R^\prime} \chi_{E_N} = 2^{-N}\;.$$ Note that the number of such translates is \begin{align} \#\{R_{k,j}\} &= \frac{m_2(E_N)}{m_2(E_N \cap R_k)} \notag \\&= \frac{2^{-N+1}\cdot 2^{-N}}{2^{-N+1}\cdot\left(2^{-k}\cdot 2^{-j_k}\right)}\notag \\&= 2^{-N +k + j_k}\;\notag \end{align} and moreover these translates, for fixed $k$, are a.e. pairwise disjoint. (For instance, in the configuration of Figure \ref{aa3}, where $N = 3$ and $(j_1, j_2, j_3) = (5,3,1)$, this count gives $\#\{R_{2,j}\} = 2^{-3+2+3} = 4$.) \\ It will now be convenient to have notation indicating homothecies of a rectangle $R$ that share the lower left corner of $R$. This is done as follows. Given a rectangle $R = [a, a + \Delta_1]\times[b, b + \Delta_2]\subset \mathbb{R}^2$ and $\delta > 0$, we let $$\delta R = [a, a + \delta \Delta_1]\times[b, b + \delta\Delta_2]\;.$$ Given $s \in \mathbb{R}$, let $\tau_s R$ denote the vertical translate of the rectangle $R \subset \mathbb{R}^2$ given by $$\chi_{\tau_s R}(x_1, x_2) = \chi_R(x_1, x_2 - s)\;.$$ Here $\chi_A$ denotes the usual indicator function of the set $A$. Given $k$, $1 \leq j \leq 2^{-N + k + j_k}$, a nonnegative integer $l$, and $1 \leq i \leq 2^l$, we define $R_{k, j, 2^l, i}$ (a translate of a dilate of $R_k$) by $$R_{k, j, 2^l, i} = \tau_{(i-1)\cdot 2^{-l}\cdot 2^{-j_k}}(2^{-l}R_{k,j})\;.$$ (Note that we can recognize that $-N + k + j_k$ is nonnegative since the conditions \mbox{$j_1 > j_2 > \cdots > j_N$} together with the fact that each $j_i$ is a natural number guarantee that $j_k \geq N - k + 1$.) One can compute that $$\frac{1}{m_2(R_{k, j, 2^0, 1})}\int_{R_{k, j, 2^0, 1}} \chi_{E_N} = 2^{-N}\;,$$ $$\frac{1}{m_2(R_{k, j, 2^1, 1})}\int_{R_{k, j, 2^1, 1}} \chi_{E_N} = 2^{-N+2}\;,$$ and more generally $$\frac{1}{m_2(R_{k,j,2^l, i})}\int_{R_{k,j,2^l, i}}\chi_{E_N} = 2^{-N + l + 1}$$ for $i = 1, \ldots, 2^{l-1}$, \emph{provided} $1 \leq l$ and $2^{-N + 1} \leq 2^{-l}\cdot 2^{-k+1}$, i.e., provided $1\leq l \leq N-k$. It is for these estimates that we require the sparseness condition $j_{k-1} > N + j_{k}$, as these enable us to have that, for $1 \leq l \leq N-k$ and fixed $j,k$, the intersections of $E_N$ with the left hand sides of the $R_{k,j,2^l, i}$ are vertical translates of one another for $1 \leq i \leq 2^{l-1}$. \\ It is important here to recognize that each $R_{k, j, 2^l, i}$ is indeed a translate of a dilate of $R_{k}$, so rectangular parallelepipeds of the form $R_{k,j,2^l, i}\times [0,t]$ lie in the basis $\mathcal{B}$ since $\mathcal{B}$ is \emph{homothecy invariant.} This is a point where the argument we provide here differs significantly from the crystallization argument provided in the related paper \cite{dms2021}. \\ Define now the set $Z_{j_1, \ldots, j_N}$ (which we will refer to simply as $Z_N$) in $\mathbb{R}^3$ by $$Z_{j_1, \ldots, j_N} = E_{j_1, \ldots, j_N} \times [0,1]\;.$$ Given a rectangle $R \subset \mathbb{R}^2$ whose sides are parallel to the coordinate axes, we let $rh(R)$ denote the right half of $R$. The above averages over the $R_{k,j,2^l, i}$ yield the inclusion $$ \left\{x \in \mathbb{R}^3 : M_\mathcal{B}\chi_{Z_N}(x) \geq 2^{-N} \right\} \supset \bigcup_{j,k,2^l,i \atop 1 \leq l \leq N - k}^\cdot (rh(R_{k,j,2^l,i})) \times [2^{l}, 2^{l+1}]\;,$$ where the sets in the union are a.e. pairwise disjoint.
This yields that \begin{align} \left| \left\{x \in \mathbb{R}^3 : M_\mathcal{B}\chi_{Z_N}(x) \geq 2^{-N} \right\} \right| \notag &\geq \sum_{k = 1}^N \sum_{j = 1}^{2^{-N + k + j_k}}\sum_{l=1}^{N-k}\sum_{i = 1}^{ 2^{l-1}} \left|(rh(R_{k,j,2^l,i})) \times [2^{l}, 2^{l+1}]\right|\; \\&= \sum_{k = 1}^N \sum_{j = 1}^{2^{-N + k + j_k}}\sum_{l=1}^{N-k}\sum_{i = 1}^{ 2^{l-1}}\frac{1}{2}2^{-2l}|R_k|\cdot 2^l \notag \\&= \sum_{k = 1}^N \sum_{j = 1}^{2^{-N + k + j_k}}(N-k)\cdot2^{-k-j_k-1}\notag \\&= \sum_{k = 1}^N (N-k)\cdot 2^{-N + k + j_k}\cdot 2^{-k-j_k-1}\notag \\&\gtrsim \sum_{k=1}^N (N-k)\cdot 2^{-N}\notag \\&\gtrsim N^2 2^{-N}\;. \notag \end{align} As $|Z_N| = 2^{-2N + 1}$, we see $M_\mathcal{B}$ does not satisfy an estimate of the form \begin{equation}\label{e21}\left|\left\{x \in \mathbb{R}^3 : M_{\mathcal{B}}f(x) > \alpha\right\}\right| \leq C \int_{\mathbb{R}^3} \phi\left(\frac{|f|}{\alpha}\right) \end{equation} for any convex increasing function $\phi: [0, \infty) \rightarrow [0, \infty)$ satisfying the condition $$\lim_{x \rightarrow \infty}\frac{\phi(x)}{x (\log(1 + x))^2} = 0\;.$$ To see this, suppose $\phi$ satisfies (\ref{e21}). Setting $f = \chi_{Z_N}$ and $\alpha = 2^{-N}$ then yields that $$N^2 2^N \leq C \phi(2^N)\;.$$ Since the above limit condition forces $\phi(2^N)/(N^2 2^N) \rightarrow 0$ as $N \rightarrow \infty$, this is a contradiction, providing the desired result. \end{proof} \begin{bibsection} \begin{biblist} \bib{favacapri}{article}{ author = {O. N. Capri}, author = {N. A. Fava}, journal = {Studia Math.}, volume = {78}, year = {1984}, title = {Strong differentiability with respect to product measures}, pages = {173--178}, review ={\MR{0766713}}, } \bib{cordoba}{article}{ author = {A. C\'ordoba}, journal = {Harmonic analysis in Euclidean spaces (Proc. Sympos. Pure Math., Williams Coll., Williamstown, Mass., 1978) Part 1}, venue = {Williams Coll., Williamstown, Mass.}, volume = {35}, year = {1979}, title = {Maximal functions, covering lemmas and Fourier multipliers}, pages = {29--50}, review ={\MR{0545237}}, } \bib{cf1975}{article}{ author = {A. C\'ordoba}, author = {R. Fefferman}, journal = {Ann. of Math.}, volume = {102}, year = {1975}, title = {A geometric proof of the strong maximal theorem}, pages = {95--100}, review={\MR{0379785}}, } \bib{dm2017}{article}{ author = {E. D'Aniello}, author = {L. Moonens}, journal = {Ann. Acad. Sci. Fenn. Math.}, volume = {42}, year = {2017}, pages = {119--133}, title = {Averaging on $n$-dimensional rectangles}, review = {\MR{3558519}}, } \bib{dms2021}{article}{ author = {D. Dmitrishin}, author = {P. Hagelstein}, author = {A. Stokolos}, title = {Sharp weak type estimates for a family of Soria bases}, journal = {submitted for publication}, eprint = {2101.08736}, } \bib{fava1972}{article}{ author = {N. Fava}, journal = {Studia Math.}, volume = {42}, year = {1972}, title = {Weak type inequalities for product operators}, pages = {271--288}, review = {\MR{308364}}, } \bib{fefbeijing}{article}{ author={R. Fefferman}, title={Multiparameter Fourier analysis}, journal={Beijing lectures in harmonic analysis (Beijing, 1984), Ann. of Math. Stud.}, volume={112}, pages={47--130}, publisher={Princeton Univ. Press}, review={\MR{0864655}}, } \bib{fp2005}{article}{ author = {R. Fefferman}, author = {J. Pipher}, title = {A covering lemma for rectangles in $\mathbb{R}^n$}, journal = {Proc. Amer. Math. Soc.}, year = {2005}, volume={133}, pages = {3235--3241}, review = {\MR{2161145}}, } \bib{guzman1974}{article}{ author = {M.
de Guzm\'an}, journal = {Studia Math.}, volume = {49}, year = {1974}, pages = {188--194}, title = {An inequality for the Hardy-Littlewood maximal operator with respect to a product of differentiation bases}, review = {\MR{0333093}}, } \bib{guzman}{book}{ author = {M. de Guzm\'an}, title = {Differentiation of integrals in $\mathbb{R}^n$}, series = {Lecture Notes in Mathematics}, volume = {481}, publisher = {Springer-Verlag}, year = {1975}, review = {\MR{0457661}}, } \bib{hs2011}{article}{ author = {P. Hagelstein}, author = {A. Stokolos}, journal = {New York J. Math.}, volume = {17}, year = {2011}, title = {Weak type inequalities for maximal operators associated to double ergodic sums}, pages = {233--250}, review = {\MR{2781915}}, } \bib{rey2020}{article}{ author={G. Rey}, title={Another counterexample to Zygmund's conjecture}, journal={Proc. Amer. Math. Soc.}, volume ={148}, year={2020}, pages={5269--5275}, review={\MR{4163839}}, } \bib{soria}{article}{ author = {F. Soria}, journal = {Ann. of Math.}, volume = {123}, title = {Examples and counterexamples to a conjecture in the theory of differentiation of integrals}, year = {1986}, pages = {1--9}, review={\MR{0825837}}, } \bib{stokolos1988}{article}{ author = {A. M. Stokolos}, journal = {Studia Math.}, volume = {88}, title = {On the differentiation of integrals of functions from $L \phi(L)$}, year = {1988}, pages = {103--120}, review = {\MR{931036}}, } \bib{stokolos2005}{article}{ author = {A. M. Stokolos}, journal = {Ann. Inst. Fourier (Grenoble)}, title = {Zygmund's program: some partial solutions}, volume = {55}, year = {2005}, pages = {1439--1453}, review = {\MR{2172270}}, } \bib{stokolos2006}{article}{ author = {A. M. Stokolos}, journal = {Colloq. Math.}, title = {On weak type inequalities for rare maximal functions in $\mathbb{R}^n$}, volume = {104}, year = {2006}, pages = {311--315}, review = {\MR{2197080}}, } \bib{zygmund1967}{article}{ author = {A. Zygmund}, journal = {Colloq. Math.}, volume = {16}, year = {1967}, title = {A note on the differentiability of integrals}, pages = {199--204}, review = {\MR{0210847}}, } \end{biblist} \end{bibsection} \end{document}
\section{Introduction} \IEEEPARstart{I}{nternet} of things (IoT) devices, such as mobile sensors, drones, and vehicles, are widely used in emerging applications. Reference \cite{iot_device} mentions that the number of active IoT devices is expected to be over 75 billion by 2025. The massive data generated from these devices are commonly collected and stored in a distributed manner. It is often impractical or inefficient to send all data to a centralized location due to the limitations of communication costs or latency \cite{big_data2}. Thus, data processing close to the sources or devices plays a pivotal role in avoiding high latency and communication costs. Contrary to cloud computing with centralized data processing, edge/fog computing with distributed data processing is an alternative solution for data analysis, especially for large-scale machine learning models. Commonly, machine learning with distributed computing can be formulated in the following form, in which $N$ agents cooperatively solve one optimization problem: \begin{equation}\label{eq:main_problem1} \min_{x}~ \sum_{i=1}^{N}f_i(x;\mathcal{D}_i), \end{equation} where $f_i:\mathbbm{R}^{p\times d}\to\mathbbm{R}$ is the local loss function of agent $i$, and $\mathcal{D}_i$ is the private dataset at agent $i$. The variable $x$ is shared among all agents. Distributed machine learning has recently received growing attention from both academia and industry. In \cite{wadmm, pwadmm, wpg, DGD, EXTRA, COCA, DADMM}, a few distributed algorithms have been developed to address optimization problem (\ref{eq:main_problem1}). Currently, primal and primal-dual methods are the two main widely used solutions, which include, e.g., gradient descent (GD) based methods and alternating direction method of multipliers (ADMM) based methods, respectively. In general, compared to GD, ADMM is better suited for decentralized optimization and has been demonstrated to have fast convergence in many applications, such as smart grids \cite{smart_grid}, wireless sensor networks (WSNs) \cite{wsn}, and cognitive radio networks \cite{DADMM}. The performance of distributed consensus optimization as in (\ref{eq:main_problem1}) is commonly measured by computation time and communication costs. In state-of-the-art approaches, agents exchange information with all, or a subset of, their one-hop neighbors. Existing distributed optimization schemes, such as decentralized gradient descent (DGD), EXTRA, decentralized ADMM, and Jacobi-Proximal ADMM, proposed in \cite{DGD, EXTRA, DADMM, jacobi_admm}, have good convergence rates with respect to the number of iterations (corresponding to the computation time). However, for large-scale machine learning problems, such as distributed systems with unstable links in federated learning \cite{feder_learning}, the impact of communication costs becomes pronounced while computation is relatively cheap. The methods in \cite{DGD, EXTRA, DADMM, jacobi_admm} are not communication efficient since multiple agents are active in parallel, and multiple communication links are used for information sharing in each iteration. Thus, alternative techniques, such as the distributed ADMM (D-ADMM) in \cite{d-admm}, the communication-censored ADMM (COCA) in \cite{COCA}, and Group ADMM (GADMM) in \cite{gadmm}, have been proposed to limit the overall communication load in each iteration. Specifically, for reducing communication costs, eliminating less informative message sharing is preferred.
In \cite{COCA}, the proposed COCA was able to adaptively determine whether or not a message is informative during the optimization process. Following COCA, communication-censored linearized ADMM (COLA) was introduced in \cite{cola} to take into account hardware or time constraints in applications such as an IoT network equipped with cheap computation units or in a rapidly changing environment. Furthermore, incremental learning methods have also been recognized as a promising approach to reduce communication costs; they activate one agent and one link at any given time in a cyclic or a random order whilst keeping all other agents and links idle. W-ADMM \cite{wadmm}, PW-ADMM \cite{pwadmm}, and WPG \cite{wpg} are typical examples of the incremental method. Moreover, due to the limited communication bandwidth, transmitting compressed messages via quantization \cite{qsgd, quantized_admm} or sparsification \cite{sparsified_sgd, qsparse} is also an effective method to alleviate the communication burden. Following this rationale, quantized stochastic GD (SGD) and quantized ADMM were proposed in \cite{qsgd} and \cite{quantized_admm}, respectively. In \cite{qsparse}, the Qsparse-local-SGD algorithm was proposed, which combines aggressive sparsification with quantization and local computation along with error compensation. However, in these methods, accuracy is sacrificed to achieve lower communication costs \cite{compressed_commu}. Apart from the communication bottleneck, the challenge of straggler nodes is also significant due to the possible presence of slow or unresponsive agents in distributed machine learning. To address this problem, error control coding has been applied to distributed edge computing and machine learning algorithms via computational redundancy. Coded distributed machine learning schemes, e.g., those based on matrix multiplication \cite{speed-up} and GD \cite{gradient_coding, jingyue, rscode}, have gained substantial research attention in various aspects. For instance, in \cite{gradient_coding}, gradient coding (GC) based on maximum distance separable (MDS) codes was first proposed to mitigate the effect of stragglers in distributed GD. In \cite{rscode}, the authors proposed a novel framework based on Reed-Solomon (RS) codes accompanied by an efficient decoder, which was used to recover the full gradient update from a fixed number of responding machines. In \cite{jingyue}, fountain code based schemes were developed for large-scale networks, especially when the quality of communication links is relatively low. Most existing GC schemes aim at recovering the full gradient. However, in practical large-scale distributed machine learning systems, when the amount of data is tremendously large, an approximate gradient via cheap unreliable nodes is more appealing as it exhibits a low computational complexity by recovering an inexact gradient in each iteration \cite{cyclic_mds, sgc_straggler, wang2019erasurehead, ldgm_code}. Approximate gradient codes (AGCs) were first analyzed in \cite{cyclic_mds}. Stochastic gradient coding (SGC) was proposed for situations when the stragglers are random in \cite{sgc_straggler}. In \cite{ldgm_code}, a low density generator matrix (LDGM) code based distributed SGD scheme was proposed to recover the gradient information in the presence of slow-running machines. To the best of our knowledge, however, there is no result applying error-control coding to ADMM.
In addition, there are many research activities on mini-batch stochastic optimization in distributed settings, e.g., \cite{ouyang_admm, lian2018asynchronous, amiri2019computation, ferdinand2020anytime}. Notably, in \cite{lian2018asynchronous}, the proposed asynchronous decentralized parallel stochastic gradient descent (AD-PSGD) enabled wait-free computation and communication. To relieve the impact of stragglers, an online distributed optimization method called Anytime Minibatch was proposed in \cite{ferdinand2020anytime}, which prevented stragglers from holding up the system without wasting the work that stragglers already completed. However, the relation between mini-batch size and stragglers has not been unveiled. Motivated by these observations, we investigate decentralized learning by utilizing ADMM as a parallel optimization tool. We extend our preliminary work in \cite{ye_isit} and investigate the possibility of coding for stochastic incremental ADMM (sI-ADMM) to combat straggler nodes and reduce the communication cost. The main contributions of our work can be summarized as follows: \begin{itemize} \item We propose an inexact proximal stochastic incremental ADMM (sI-ADMM) to solve the decentralized consensus optimization problem, the updating order of which follows a predetermined circulant pattern. Moreover, to reduce the response time of agents, computing resources at the edge are applied to calculate partitioned gradients. \item To provide tolerance to link failures and straggler nodes for edge computing with ADMM, we present the coded stochastic incremental ADMM (csI-ADMM) algorithm, which uses coding strategies to exploit the redundancy over the partitioned gradients computed by edge nodes. \item The convergence and communication properties of the sI-ADMM algorithms are provided through theoretical analysis and experiments. We show that our proposed csI-ADMM has a $O(\frac{1}{\sqrt{k}})$ convergence rate and a $O(\frac{1}{\upsilon ^2})$ communication cost, where $\upsilon$ is the target mean deviation. Besides, the trade-off between the convergence rate and the number of straggler nodes, as well as the relation between mini-batch size and stragglers, are theoretically analyzed. Numerical results from experiments reveal that the proposed method is communication efficient, rapidly responding, and robust against straggler nodes. \end{itemize} The rest of the paper is organized as follows. Section \ref{system_model} presents the problem statement. We provide the description of the stochastic incremental ADMM algorithms in Section \ref{proposed_algorithm}, and the performance analyses are presented in Section \ref{section:analysis}. To validate the efficiency of the proposed methods, we provide numerical experiments in Section \ref{sec:results}. Finally, we conclude the paper in Section \ref{conclusion}. \subsection*{Notation} Throughout the paper, we adopt the following notation: $\mathbb{E}\left[\cdot\right]$ denotes the expectation with respect to a set of variables $\bm{\xi}_i^k=\{\xi_{i,l}^k\}_{M}$. $|\cdot|$ is the absolute value. $\norm{\cdot}$ denotes the Euclidean norm $\norm{\cdot}_2$. $| \bm{\xi}_{i,j}|$ represents the cardinality of the set $\bm{\xi}_{i,j}$. $\lfloor{\cdot} \rfloor$ is the floor function. $\nabla f(\cdot)$ denotes the gradient of a function $f$. $\langle \cdot, \cdot \rangle $ denotes the inner product in a finite dimensional Euclidean space. $x^*\in \mathcal{X}$ denotes the optimal solution to (\ref{eq:main_problem1}), where $\mathcal{X}$ is the domain.
Besides, we define $ D_{\mathcal{X}} \overset{\Delta}{=} \mathop{ \text{sup}}\nolimits_{x_a, x_b \in \mathcal{X}} \norm{x_a - x_b}$. \section{System Model and Problem Formulation}\label{system_model} As depicted in Fig. \ref{fig_network}, we consider a distributed computing network consisting of dispersed network elements (usually called agents in multi-agent collaborative systems) that are connected with several edge computing nodes (ECNs). Agents can communicate with each other. ECNs are capable of processing data collected from sensors and transferring desired messages (e.g., gradient updates) back to the connected agent. Denote the decentralized network as $\mathcal{G}=(\mathcal{N},\mathcal{E})$, where $\mathcal{N}=\{1,...,N\}$ is the set of agents and $\mathcal{E}$ is the set of links connecting agents. Based on the agent coverage and computing resources, the ECNs connected to agent $i(\in \mathcal{N} )$ are denoted as $\mathcal{K}_i = \{1,...,K_i\}$. This architecture is common in wireless sensor networks (WSNs), such as smart home systems. \begin{figure} [t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=86mm]{model1112.pdf}} \caption{Traversing patterns over different network topologies: (a) Hamiltonian network; (b) The shortest path cycle based network.} \label{fig_network} \end{center} \vskip -0.2in \end{figure} We consider training a machine learning model of interest over this network, where agents can collaboratively learn a shared model parameter while keeping all the data locally. For the agents, we make the following assumptions: 1) local views, no agent has a global view of the whole system, but behaves solely on the basis of local information (i.e., local data and model parameter); 2) decentralization, no agent controls the other agents as a master server; and 3) one-hop communication, each agent only exchanges global model parameter information with directly connected neighboring agents. Multi-agent systems can obtain complex behaviors (i.e., a global model) based on the interactions among agents, each of which has a simple behavior (local model). The decentralized optimization problem can be formulated as follows. The multi-agent system seeks to find the optimal solution $x^*$ by solving (\ref{eq:main_problem1}), where $\mathcal{D}_i$ is the private dataset, which is collected from sensors such as drones and will be allocated to $K_i$ dispersed ECNs. By defining $ \bm{x}=[x_1,...,x_N]\in\mathbbm{R}^{pN\times d}$ and introducing a global variable $z\in\mathbbm{R}^{p\times d}$, problem (\ref{eq:main_problem1}) can be reformulated as \begin{equation}\label{eq2} \begin{aligned} (\text{P-1}): \min_{\bm{x}, z}~\sum_{i=1}^{N}f_i(x_i;\mathcal{D}_i), ~~~ s.t.~ \mathbbm{1}\otimes z-\bm{x}=\bm{0}, \end{aligned} \end{equation} where $\mathbbm{1}=[1,...,1]^T\in\mathbbm{R}^{N}$, and $\otimes$ is the Kronecker product. In the following, $f_i(x_i,\mathcal{D}_i)$ is denoted as $f_i(x_i)$ for notational simplicity. Our objective is to devise a decentralized algorithm that is both communication efficient and straggler tolerant, such that the agents can collaboratively find an optimal solution through local computations and limited information exchange among neighbors. In our scheme, local gradients are calculated at dispersed ECNs, while the primal, dual, and global variables are updated at the corresponding agent.
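Note that the constraint $\mathbbm{1}\otimes z-\bm{x}=\bm{0}$ forces $x_i = z$ for all $i$, so (P-1) shares its optimal solution with (\ref{eq:main_problem1}). A toy NumPy check of this equivalence for the least squares loss used later in Section \ref{sec:results} is sketched below; the data shapes and values are illustrative only:
\begin{verbatim}
import numpy as np

# Toy check: for least squares losses, the consensus problem (P-1)
# has the same minimizer as the centralized problem, since the
# constraint forces x_1 = ... = x_N = z.
rng = np.random.default_rng(1)
N, p = 5, 3
O = [rng.standard_normal((20, p)) for _ in range(N)]  # agent i's inputs
x_true = np.array([1.0, -2.0, 0.5])
t = [Oi @ x_true + 0.01 * rng.standard_normal(20) for Oi in O]
# Centralized solution of min_x sum_i (1/2)||O_i x - t_i||^2
# via the normal equations:
A = sum(Oi.T @ Oi for Oi in O)
b = sum(Oi.T @ ti for Oi, ti in zip(O, t))
z_star = np.linalg.solve(A, b)  # common value of all x_i at the optimum
\end{verbatim}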
\section{Proposed Stochastic ADMM algorithms} \label{proposed_algorithm} For illustration, we will first review the standard incremental ADMM iterations for decentralized consensus optimization. Then we present the stochastic incremental ADMM with coding in our networks. The augmented Lagrangian function of problem (P-1) is \begin{equation} \label{eq:lagrangian} \mathcal{L}_{\rho}(\bm{x},\bm{y}, z)=\sum _{i=1}^{N}f_i(x_i) + \left\langle \bm{y} ,\mathbbm{1}\otimes z-\bm{x} \right\rangle + \frac{\rho}{2}\norm{\mathbbm{1}\otimes z-\bm{x}}^2, \end{equation} where $\bm{y}=[y_1,...,y_N]\in\mathbbm{R}^{pN\times d}$ is the dual variable, and $\rho>0$ is a penalty parameter. Following our preliminary work in \cite{ye_isit} on incremental ADMM (I-ADMM), and guaranteeing that $\sum_{i=1}^{N}(x_i^1 - \frac{y_i^1}{\rho}) = \bm{0}$ (e.g., by initializing $x_i^1 =y_i^1=\bm{0}$), the updates of $\bm{x}$, $\bm{y}$ and $z$ at the ($k+1$)$-$th iteration follow: \begin{subequations} \begin{align} &x_i^{k+1}:=\left\{\begin{aligned} &\arg \min_{x_i}~f_i(x_i) + \frac{\rho}{2}\norm{z^k-x_i+ \frac{y_i^k}{\rho}}^2, ~i=i_k ;\\ &x_i^{k },~\text{otherwise}; \end{aligned} \right. \label{old_x}\\ &y_i^{k+1}:=\left\{\begin{aligned} & y_i^{k} + \rho \left( z ^{k }-x_{i}^{k +1} \right),~i=i_k ;\\ & y_i^{k },~\text{otherwise} ;\label{old_y} \end{aligned} \right. \\ &z ^{k+1}:= z^{k } + \frac{1}{N}\left[ \left(x_{i_k}^{k+1}- x_{i_k}^{k } \right) -\frac{1}{ \rho} \left (y_{i_k}^{k+1} - y_{i_k}^{k } \right) \right] . \label{old_z} \end{align} \end{subequations} The local loss function $f_i(x_i)$ may be non-differentiable and non-convex, and exactly minimizing the augmented Lagrangian in the $x$-update above may lead to rather high computational complexity. We therefore approximate the loss via a \textit{first-order Taylor} expansion combined with \textit{mini-batch stochastic} optimization, which yields a fast, inexact $x$-update. To stabilize the convergence behavior of the inexact augmented Lagrangian method, a quadratic proximal term with parameter $\tau^k$ is considered. Moreover, we also introduce the updating step-size $\gamma^k$ for the dual update. Both parameters $\tau^k$ and $\gamma^k$ may vary with the iteration $k$. Then, the updates of $\bm{x}$ and $\bm{y}$ at the $(k+1)$-th iteration can be presented as follows: \begin{subequations} \begin{align} &x_i^{k+1}:=\left\{\begin{aligned} &\arg \min_{x_i} ~\mathcal{G}_i(x_i^k;\bm{\xi}_i^k)\left(x_i-x_i^k\right) + \left\langle y_i^k,z^k-x_i \right \rangle \\ &~~~ + \frac{\rho}{2}\norm{z^k-x_i}^2 + \frac{\tau^k }{2} \norm{x_i - x_i^k}^2 , ~i=i_k ;\\ &x_i^{k },~\text{otherwise}; \end{aligned} \right. \label{new_x}\\ &y_i^{k+1}:=\left\{\begin{aligned} & y_i^{k} + \rho \gamma^k \left( z ^{k }-x_{i}^{k +1} \right),~i=i_k ;\\ & y_i^{k },~\text{otherwise} ;\label{new_y} \end{aligned} \right. \end{align} \end{subequations} where $\mathcal{G}_i(x_i^k; \bm{\xi}_i^k)$ is the mini-batch stochastic gradient, which can be obtained through $\mathcal{G}_i(x_i^k; \bm{\xi}_i^k) = \frac{1}{M} \sum _{l=1}^{M} \nabla F_i(x_i^k; \xi_{i,l}^k)$. To be more specific, $M$ is the mini-batch size of sampling data, $\bm{\xi}_i^k=\{\xi_{i,l}^k\}_{M}$ denotes a set of i.i.d. randomly selected samples in one batch and $\nabla F_i(x_i^k; \xi_{i,l}^k)$ corresponds to the stochastic gradient of a single example $\xi_{i,l}^k$.
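Since the objective in (\ref{new_x}) is quadratic in $x_i$, the inexact $x$-update admits the closed form $x_{i_k}^{k+1} = \big(\rho z^k + y_{i_k}^k - \mathcal{G}_{i_k}(x_{i_k}^k;\bm{\xi}_{i_k}^k) + \tau^k x_{i_k}^k\big)/(\rho + \tau^k)$, obtained by setting the gradient of the objective in (\ref{new_x}) to zero. A minimal NumPy sketch of one active-agent iteration built from this closed form together with (\ref{new_y}) and (\ref{old_z}) is given below; variable names and shapes are our own illustrative choices:
\begin{verbatim}
import numpy as np

def si_admm_step(i, x, y, z, g, rho, tau, gamma, N):
    # One sI-ADMM iteration at the active agent i (sketch).
    # x, y: (N, p) arrays of local primal/dual variables; z: (p,) token;
    # g: mini-batch stochastic gradient G_i(x_i^k; xi_i^k), shape (p,).
    x_old, y_old = x[i].copy(), y[i].copy()
    # Closed-form minimizer of the quadratic x-update:
    x[i] = (rho * z + y_old - g + tau * x_old) / (rho + tau)
    # Dual update with step size gamma:
    y[i] = y_old + rho * gamma * (z - x[i])
    # Token (z-)update; z is then passed to the next agent in the cycle:
    z = z + ((x[i] - x_old) - (y[i] - y_old) / rho) / N
    return x, y, z
\end{verbatim}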
\subsection{Edge Computing for Mini-Batch Stochastic I-ADMM} We define the response time as the execution time for updating all variables in each iteration. In the above updates, all steps, including the $x$-update, $y$-update and $z$-update, are assumed to be executed at the agents rather than at the ECNs. In practice, the updates are often computed in tandem, which leads to a long response time. With the fast development of edge/fog computing, it is feasible to further reduce the response time, since computing the local gradients can be dispersed to multiple edge nodes, as shown in Fig. \ref{fig_network}. Each ECN computes a gradient using local data and shares the result with its corresponding agent; no information is directly exchanged among ECNs. For simplicity and analysis convenience, we focus only on the scenarios where agents are activated in a predetermined circulant pattern, e.g., according to a Hamiltonian cycle, and ECNs are activated whenever the connected agent is active, as shown in Fig. \ref{fig_network} (a). A Hamiltonian cycle based activation pattern is a cyclic pattern through a graph that visits each agent exactly once (i.e., $1\rightarrow{} 2\rightarrow{}4\rightarrow{}5\rightarrow{}3$ in Fig. \ref{fig_network} (a)). The scenario of a non-Hamiltonian cycle based traversing pattern, shown in Fig. \ref{fig_network} (b), i.e., the shortest path cycle based walking pattern, will be discussed in Section \ref{sec:results}. Correspondingly, the proposed mini-batch stochastic incremental ADMM (sI-ADMM) is presented in Algorithm \ref{algorithm:1}. At agent $i_k$, the global variable $z^{k+1}$ gets updated and is passed as a token to the next agent $i_{k+1}$ via a pre-determined traversing pattern, as shown in Fig. \ref{fig_network}. Specifically, in the $k$-th iteration with cycle index $m=\lfloor{k/N} \rfloor$, agent $i_k$ is activated. The token $z^k$ is first received, and then the active agent broadcasts the local variable $x_i^k$ to its attached ECNs $\mathcal{K}_i$. According to the batch data with index $I_{i,j}^k$, a new gradient $g_{i,j}$ is calculated at each ECN, followed by the gradient aggregation, $x$-update, $y$-update and $z$-update at agent $i_k$, via steps 20-23 of Algorithm \ref{algorithm:1}. Finally, the global variable $z^{k+1}$ is passed as a token to the neighbor $i_{k+1}$.
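For concreteness, the ECN-side computation (steps 15-18 of Algorithm \ref{algorithm:1}) may be sketched as follows for the least squares loss used in Section \ref{sec:results}; the helper below and its parameter names are illustrative only:
\begin{verbatim}
import numpy as np

def ecn_gradient(x_i, O_part, t_part, m, batch_size):
    # O_part, t_part: this ECN's data partition (inputs, targets);
    # m: current cycle index; batch_size = M / K_i samples per ECN.
    n_batches = len(t_part) // batch_size
    b = m % n_batches                       # batch index I_{i,j}^k (step 16)
    O = O_part[b * batch_size:(b + 1) * batch_size]
    t = t_part[b * batch_size:(b + 1) * batch_size]
    # Mini-batch gradient of the least squares loss (1/2)||O x - t||^2,
    # averaged over the selected batch:
    return O.T @ (O @ x_i - t) / batch_size
\end{verbatim}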
\begin{algorithm}[t] \caption{ Mini-batch Stochastic I-ADMM (sI-ADMM) } \label{algorithm:1} \begin{algorithmic}[1] \STATE \textbf{initialize}: $\{z^1 = x_i^1 = y_i^1=\bm{0}\,|\,i\in\mathcal{N}\}$, batch size $M$; \STATE \ul{\textbf{Local Data Allocation:}} \FOR{agent $i \in \mathcal{N} $} \STATE \textbf{divide} $\mathcal{D}_i$ labeled data into $K_i$ equally disjoint partitions and denote each partition as $\bm{\xi}_{i,j}, j \in \mathcal{K}_i$; \FOR{ECN $j \in \mathcal{K}_i$} \STATE \textbf{allocate} $\bm{\xi}_{i,j}$ to ECN $j$; \STATE \textbf{partition} $\bm{\xi}_{i,j}$ examples into multiple batches with each size $M/K_i$; \ENDFOR \ENDFOR \FOR{$k=1,2,...$} \STATE \ul{\textbf{Steps of Active Agent $i=i_k = (k-1)\mod N +1$:}} \STATE \textbf{receive} token $z^{k }$; \STATE \textbf{broadcast} local variable $x_{i}^k$ to ECNs $\mathcal{K}_i$; \STATE \ul{\textbf{ECN $j\in \mathcal{K}_i$ computes gradient in parallel}:} \STATE \hspace{0.2cm} \textbf{receive} local primal variable $x_i^k$; \STATE \hspace{0.2cm} \textbf{select} batch $I_{i,j}^k= m \mod \lfloor{|\bm{\xi}_{i,j}| \cdot K_i/ M} \rfloor $; \STATE \hspace{0.2cm} \textbf{update} $g_{i,j}$ based on the selected batch data; \STATE \hspace{0.2cm} \textbf{transmit} $g_{i,j}$ to the connected agent; \STATE \textbf{until} all $K_i$ responses are received; \STATE \textbf{update} gradient via gradient summation: \begin{equation} \mathcal{G}_i(x_i^k; \bm{\xi}_i^k)=\frac{1}{K_i}\sum _{j=1}^{K_i} g_{i,j}; \end{equation} \STATE \textbf{update} $\bm{x}^{k+1}$ according to (\ref{new_x}); \STATE \textbf{update} $\bm{y}^{k+1}$ according to (\ref{new_y}); \STATE \textbf{update} $z^{k+1}$ according to (\ref{old_z}); \STATE \textbf{send} token $z^{k+1} $ to agent $i_{k+1}$ via link ($i_k, i_{k+1}$); \STATE \textbf{until} the stopping criterion is satisfied.
\ENDFOR \end{algorithmic} \end{algorithm} \subsection{Coding Schemes for sI-ADMM} \begin{algorithm}[t] \caption{Coded sI-ADMM (csI-ADMM) } \label{algorithm:2} \begin{algorithmic}[1] \STATE \textbf{initialize}: $\{z^1 = x_i^1 = y_i^1=\bm{0}|i\in\mathcal{N}\}$, batch size $\overline{M}$; \STATE \ul{\textbf{Local Data Allocation:}} \FOR{$\text{agent } i \in \mathcal{N} $} \STATE \textbf{divide} $\mathcal{D}_i$ labeled data based on the repetition schemes in \cite{gradient_coding} and denote each partition as $\bm{\xi}_{i,j}, j \in \mathcal{K}_i$; \FOR{$\text{ECN } j \in \mathcal{K}_i$} \STATE \textbf{allocate} $\bm{\xi}_{i,j}$ to ECN $j$; \STATE \textbf{partition} $\bm{\xi}_{i,j}$ examples into multiple batches with each size $(S_i+1)\overline{M}/K_i$; \ENDFOR \ENDFOR \FOR{$k=1,2,...$} \STATE \ul{\textbf{Steps of Active Agent $i=i_k = (k-1)\mod N +1$:}} \STATE \textbf{run} steps 12-13 of Algorithm \ref{algorithm:1} \STATE \ul{\textbf{ECN $j\in \mathcal{K}_i$ computes gradient in parallel}: } \STATE \hspace{0.2cm} \textbf{run} step 15 of Algorithm \ref{algorithm:1} \STATE \hspace{0.2cm} \textbf{select} batch \begin{equation} I_{i,j}^k= m \mod\lfloor{|\bm{\xi}_{i,j}| \cdot K_i/ (S_i+1)\overline{M} } \rfloor ; \end{equation} \STATE \hspace{0.2cm} \textbf{update} $g_{i,j}$ via encoding function $p_{enc}^j(\cdot)$; \STATE \hspace{0.2cm} \textbf{transmit} $g_{i,j}$ to the connected agent; \STATE \textbf{until} the $R_i$-th fastest response is received; \STATE \textbf{update} gradient via decoding function $q_{dec}^i(\cdot)$; \STATE \textbf{run} steps 21-25 of Algorithm \ref{algorithm:1}; \ENDFOR \end{algorithmic} \end{algorithm} With ECNs of limited reliability and computing capability, straggling nodes may become a significant performance bottleneck in learning networks. To address this problem, error control codes have been proposed to mitigate the impact of the straggling nodes without knowing their locations by leveraging data redundancy. One type of optimal linear codes, $(n,k)$ MDS codes $(k< n)$, is adopted to combat stragglers: such a code produces $n$ coded blocks such that all $k$ message blocks can be reconstructed from any $k$ coded blocks. Following the work in \cite{gradient_coding}, two MDS-based coding methods over the real field $\mathbbm{R}$, i.e., the \textit{Fractional} repetition scheme and the \textit{Cyclic} repetition scheme, are adopted and integrated with sI-ADMM to reduce the response time in the presence of straggling nodes. The details of the two schemes can be found in \cite{gradient_coding}. We formally present the proposed coded sI-ADMM (csI-ADMM) in Algorithm \ref{algorithm:2}. Different from Algorithm \ref{algorithm:1}, in Algorithm \ref{algorithm:2}, encoding and decoding processes are introduced to calculate $\mathcal{G}_i(x_i^k; \bm{\xi}_i^k)$ via computation redundancy, thereby reducing the impact of straggling ECNs. Denoting by $R_i$ the minimum required number of responding ECNs, each agent $i$ updates its local variables with the gradients from any $R_i$ out of $K_i$ ECNs per iteration, preventing slow links and straggler nodes from stalling the overall computation. Thus, agent $i$ can tolerate \textbf{any} $S_i=K_i-R_i$ stragglers. Fig. \ref{fig:coded_edge_compute} illustrates an example of how coded edge computing can reduce the response time. Here we assume at most $S_i=1$ ECN may be slow during each iteration (e.g., ECN $2$). The extension to multiple stragglers is straightforward. In Fig.
\ref{fig:coded_edge_compute}, three ECNs have overlapped labeled data allocated privately (i.e., $\tilde{\bm \xi}_{i,1}, \tilde{\bm \xi}_{i,2} \text{ and } \tilde{\bm {\xi}}_{i,3}$) and share the local primal variable $x_i$. For the gradient update, once agent $i$ is activated, the current local variable $x_i$ is broadcast to all connected ECNs. ECN $1$ calculates the gradients of the shared model with $\tilde{\bm \xi}_{i,1} \text{ and } \tilde{\bm \xi}_{i,2}$ separately, where the corresponding gradients are denoted as $\tilde g_{i,1}$ and $\tilde g_{i,2}$, respectively. Denoting $\tilde {\bm g}_i = [\tilde g_{i,1},\tilde g_{i,2},\tilde g_{i,3}]$ and following the $(K_i, R_i)$ MDS codes in \cite{gradient_coding}, ECN~1 then computes the encoded gradient $g_{i,1}=p_{enc}^1(\tilde {\bm g}_i)=\frac{1}{2}\tilde g_{i,1} + \tilde g_{i,2}$, i.e., a linear combination of $\tilde g_{i,1}$ and $\tilde g_{i,2}$. Similarly, ECNs 2 and 3 compute $g_{i,2}=p_{enc}^2(\tilde {\bm g}_i) = \tilde g_{i,2} - \tilde g_{i,3}$ and $g_{i,3}=p_{enc}^3(\tilde {\bm g}_i) = \frac{1}{2}\tilde g_{i,1} + \tilde g_{i,3}$, respectively. Then the three coded gradients, denoted by $\bm g_i=[g_{i,1},g_{i,2},g_{i,3}]$, are transmitted back to the agent, where the first two arriving messages (indeed, any two of the three) suffice to recover the summation $\tilde g_{i,1}+ \tilde g_{i,2}+\tilde g_{i,3}$ (i.e., $\mathcal{G}_i(x_i^k;\bm{\xi}_i^k)$) through the decoding function $q_{dec}^i(\bm g_i)$, followed by the $x$-update, $y$-update and $z$-update for optimizing problem (P-1). Thus, compared with uncoded learning schemes in which the labeled data are disjointly allocated (i.e., sI-ADMM), the response time for updating the local and global variables is decided by the two fastest ECNs instead of the slowest ECN. This may cumulatively reduce the total running time significantly. \begin{figure} [b] \vskip 0 in \begin{center} \centerline{\includegraphics[width=89mm]{model1113.pdf}} \caption{Coded edge computing for mitigating the straggler nodes.} \label{fig:coded_edge_compute} \end{center} \vskip 0 in \end{figure} \section{Algorithm Analyses}\label{section:analysis} In this section, we will first provide the convergence properties of both sI-ADMM and csI-ADMM, in terms of the convergence rate. Then we analyze the communication cost, defined as the amount of communication among agents, as well as the impact of straggling nodes. In the following analysis, communication of variables between a pair of agents is taken as 1 unit of communication. \subsection{Convergence Analysis} We first analyze the convergence properties of the proposed algorithms. Without loss of generality, the updating order for the proposed sI-ADMM based algorithm follows a predetermined pattern, i.e., the Hamiltonian cycle order $1\rightarrow{} 2\rightarrow{}...\rightarrow{}N\rightarrow{}1\rightarrow{}2...$, as for I-ADMM in \cite{ye_isit}. To establish the convergence of the sI-ADMM algorithms, we first make the following assumptions, which are commonly employed in the analysis of stochastic optimization. \begin{assumption}[Connectivity] \label{assump1} The graph $\mathcal{G}$ is connected and there exists at least one Hamiltonian cycle. \end{assumption} \begin{assumption}[Lipschitz continuous gradient] \label{assump2} The local loss function $f_i({x})$ is lower bounded over ${x}$, and is coercive over $x$, i.e., $f_i(x)\to \infty$ if $x\in\mathcal{X}$ and $\|x \|\to \infty$.
$f_{i}(x)$ is $L$-Lipschitz differentiable, i.e., for any $x,y \in\mathbbm{R}^{p\times d}$, \begin{equation} \begin{aligned} \norm{ \nabla f_{i}(x) - \nabla f_{i}(y) } \leqslant L \norm{ x -y }, \forall i \in\mathcal{N}, \end{aligned} \end{equation} which implies \begin{equation} f_i(x) \leqslant f_i(y) + \left\langle \nabla f_i(y), x-y \right\rangle + \frac{L}{2} \norm{x-y}^2. \end{equation} \end{assumption} \begin{assumption}[Unbiased estimation]\label{assump:unbias_g} For the differentiable function $f_{i}(x)$, there exists a stochastic first-order oracle that returns a noisy estimate of the gradient of $f_{i}(x)$, and the unbiased estimate $\nabla F_{i}(x, \xi_{i,l})$ satisfies \begin{equation} \begin{aligned} \mathbbm{E}_{\xi_{i,l} \in \mathcal{D}_i} \big[\nabla F_{i}(x; \xi_{i,l})\big] = \nabla f_i(x). \end{aligned} \end{equation} Let $M$ be the size of mini-batch $\bm{\xi}_i$, i.e., $|\bm{\xi}_i|=M$, and $\bm{\xi}_{i} = \{\xi_{i,1},...,\xi_{i,M}\}\subseteq \mathcal{D}_i$ denotes a set of i.i.d. random variables, and the mini-batch stochastic gradient is given by \begin{equation} \mathcal{G}_i(x, \bm{\xi}_i) = \frac{1}{M} \sum _{l=1}^{M} \nabla F_i(x, \xi_{i,l}). \end{equation} Clearly, we have \begin{equation} \begin{aligned} \mathbbm{E}_{\bm \xi_i \sim \mathcal{D}_i}\big[ \mathcal{G}_i(x, \bm \xi_i )\big]= \nabla f_i(x). \end{aligned} \end{equation} \end{assumption} Then, with these assumptions, we first present the following convergence properties for the sI-ADMM algorithm. \begin{theorem}[Convergence]\label{theorem_1} Under Assumptions \ref{assump1}-\ref{assump:unbias_g}, with $\{ \gamma^k \geq 4N,~\tau^k \geq \frac{2\rho}{\gamma^k } + \frac{L}{2} - \frac{\rho}{2}|k\geq 1 \}$, the iterates $(\bm{x}^k, \bm{y}^k, z^k)$ generated by sI-ADMM satisfy the following properties: \begin{enumerate} \item $ \mathbbm{E}\big[ \mathcal{L}_{\rho}(\bm{x}^{k+1},\bm{y}^{k+1},z^k) - \mathcal{L}_{\rho}(\bm{x}^{k+1},\bm{y}^{k+1},z^{k+1})\big]$\\ $=\frac{N\rho}{2}\left\|z^{k} - z^{k+1} \right\|^2$;\label{statement1} \item $\mathbbm{E}\big[\mathcal{L}_{\rho}(\bm{x}^k,\bm{y}^k,z^k) - \mathcal{L}_{\rho}(\bm{x}^{k+1},\bm{y}^{k+1},z^k)\big]\\ \geq-\frac{1}{\rho \gamma^k } \norm{\bm{y}^{k+1} - \bm{y}^k}^2 + \big(\frac{\rho - L + 2\tau^k}{2}\big)\norm{\bm{x}^{k+1} -\bm{x}^k }^2$;\label{statement2} \item $\mathbbm{E}\big[\mathcal{L}_{\rho}(\bm{x}^k,\bm{y}^k,z^k)-\mathcal{L}_{\rho}(\bm{x}^{k+1},\bm{y}^{k+1},z^{k+1})\big] \\ \geq \big(\frac{\rho - L + 2\tau^k}{2} - \frac{2\rho}{\gamma^k }\big) \norm{ \bm{x}^{k+1} -\bm{x}^k }^2\\ ~~~+ \frac{\rho N (\gamma^k - 4N)}{2\gamma^k } \norm{z^{k+1} - z^k}^2$;\label{statement3} \item $ \{ \mathbbm{E}\big[\mathcal{L}_{\rho}(\bm{x}^k,\bm{y}^k,z^k)\big] \}_{k\geq1} $ is lower bounded.\label{statement4} \end{enumerate} \end{theorem} Hence, with statements $1)-4)$, the sequence $\{\mathbbm{E}\big[\mathcal{L}_{\rho}(\bm{x}^k,\bm{y}^k,z^k)\big] \}_{k\geq1}$ is convergent. \begin{IEEEproof} The convergence proof of sI-ADMM is similar to that of I-ADMM in \cite{ye_isit}. By substituting equation (25) of Lemma 4 in \cite{ye_isit} with $ \mathbbm{E}\big[\mathcal{G}_{i_k}(x_{i_k}^{k};\bm \xi_{i_k}^k ) - y_{i_k}^k\big] =\nabla f_{i_k}(x_{i_k}^{k}) - y_{i_k}^k = \rho(z^k-x_{i_k}^{k+1}) - \tau^k (x_{i_k}^{k+1} - x_{i_k}^k) = \frac{1}{\gamma^k} (y_{i_k}^{k+1} - y_{i_k}^k) - \tau^k (x_{i_k}^{k+1} - x_{i_k}^k )$, Theorem \ref{theorem_1} can be obtained.
\end{IEEEproof} We note that Theorem \ref{theorem_1} provides a sufficient condition to guarantee the convergence of the proposed sI-ADMM. The csI-ADMM algorithm has the same convergence properties as those of sI-ADMM. To obtain the convergence rate for the proposed algorithms, we introduce two more assumptions as follows. \begin{assumption}[Bounded gradient and variance]\label{assump:bounded} The gradient of the local loss function $f_i(x)$ is bounded. That is, there exists a constant $\phi$ such that for all $x$, \begin{equation} \begin{aligned} \max_{1\leqslant i \leqslant N} \text{sup}_{x \in \mathcal{X} } \norm{ \nabla f_i(x)}^2 \leqslant \phi. \end{aligned} \end{equation} Moreover, \begin{equation} \begin{aligned} \mathbbm{E}_{\xi_{i,l} \in \mathcal{D}_i} \left[\norm{ \nabla F_{i}(x; \xi_{i,l}) - \nabla f_i(x) }^2\right] \leqslant \delta^2, \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \mathbbm{E}_{\bm{\xi_i} \sim \mathcal{D}_i}\left [\norm{ \mathcal{G}_i(x, \bm \xi_i ) - \nabla f_i(x) }^2\right] \leqslant \frac{\delta^2}{M}. \end{aligned} \end{equation} \end{assumption} \begin{assumption}[Strong convexity]\label{assump:s_c} The local loss function $f_i(x)$ is $\mu$-strongly convex, satisfying \begin{equation} f_i(x) \geq f_i(y) + \left\langle \nabla f_i(y), x-y \right\rangle + \frac{\mu}{2} \norm{x-y}^2. \end{equation} \end{assumption} Then we characterize the convergence rate of the sI-ADMM algorithm as follows. \begin{theorem}[Convergence rate]\label{theorem1} For $ k=mN + i$ where cycle index $m=0,...,T-1$ and $ i \in\{1,...,N\}$, taking $\tau^k = c_{\tau} \sqrt{k}, \gamma^k = \frac{c_{\gamma}}{\sqrt{k}}$ with constants $c_{\tau}, c_{\gamma} >0$ in Algorithm \ref{algorithm:1}, under Assumptions \ref{assump1}-\ref{assump:s_c} with $\beta > 0$, we obtain the following convergence rate \begin{equation} \label{eq:theorem} \begin{aligned} & \frac{1} {TN} \sum_{m=0}^{T-1} \sum_{i=1}^{N}\mathbbm{E} \left[f_{i}(x_{i}^{k+1})- f_{i}(x_{i}^*)\right] \\&\qquad\qquad\qquad + \beta \mathbbm{E} \left[\norm{ \frac{1}{TN} \sum_{m=0}^{T-1} \sum_{i=1}^{N} \left(z^k - x_{i}^{k+1}\right) } \right]\\ &\qquad\leq \frac{1}{\sqrt{TN}}\left( \frac{c_{\tau}ND_{\mathcal{X}}^2 }{2 } + \frac{2N\beta^2}{\rho c_{\gamma} } + 2\phi + \frac{\delta ^2 }{M }\right), \end{aligned} \end{equation} if the total number of cycles $T$ is sufficiently large (i.e., the iteration number $k$ is sufficiently large), especially with constraints \begin{equation} \begin{aligned} &\mu > 3\rho, ~c_{\tau} > \frac{2}{(N+1){N}},~ \frac{1}{\mu - 3\rho} < c_{\gamma} < \frac{1}{\rho}. \end{aligned} \end{equation} \end{theorem} \begin{IEEEproof} The proof is relegated to Appendix \ref{secondAppendix}. \end{IEEEproof} \begin{remark} Theorem \ref{theorem1} suggests that the sub-linear convergence rate for the sI-ADMM algorithm is $O(\frac{1}{\sqrt{k}})$. As the network size $N$ scales up, the bound indicates that the convergence rate of sI-ADMM may degrade to $O(\frac{N}{\sqrt{k}})$. The batch size $M$ plays a small role in determining the overall convergence speed, although a larger batch size promotes faster convergence. Besides, the rate is also determined by the variance of the stochastic gradients. \end{remark} \subsection{Communication Analysis} Next, we analyze the communication cost based on the sub-linear convergence rate.
\begin{corollary}[Communication cost] \label{corollary} Let $c_{\tau} = \frac{1}{N}$, $ c_{\gamma} = N$ and $ k=mN + i$ where $ m = 0,...,T-1 , i \in \{1,...,N\}$, under the same conditions as those in Theorem \ref{theorem1}, with mean deviation defined by \begin{equation}\label{eq12} \frac{1 }{TN} \sum_{m=0}^{T-1} \sum_{i=1}^{N} \mathbbm{E} \left[\norm{{f_{i}(x_{i}^{k+1})- f_{i}(x_{i}^*)} } \right] \leq \upsilon, \end{equation} the communication cost of the proposed sI-ADMM is $O(\frac{1}{\upsilon ^2})$. \end{corollary} \begin{IEEEproof} From (\ref{eq:theorem}) with $c_{\tau} = \frac{1}{N}, c_{\gamma} = N$, to achieve the mean deviation (\ref{eq12}), it is enough to have \begin{equation} \frac{1}{\sqrt{TN}} \left(\frac{D_{\mathcal{X}}^2}{2} + \frac{2\beta^2}{\rho } +2\phi + \frac{\delta ^2 }{M} \right)\leq \upsilon, \end{equation} which is implied by \begin{equation} k = TN \geq \frac{1}{\upsilon ^2} \left(\frac{D_{\mathcal{X}}^2 }{2} + \frac{2\beta^2}{ \rho } +2\phi + \frac{\delta ^2 }{M} \right)^2. \end{equation} For each cycle $m$, there are $N$ iterations, which incur $O(N)$ communication. Since $\frac{D_{\mathcal{X}}^2 }{2} + \frac{2\beta^2}{\rho } +2\phi + \frac{\delta ^2 }{M} $ can be regarded as constant with respect to the network size (i.e., the agent number $N$), to guarantee (\ref{eq12}), the communication cost is $O(\frac{1}{\upsilon ^2}) $. This completes the proof. \end{IEEEproof} For the csI-ADMM algorithm, both the communication cost and the convergence rate are roughly the same as for sI-ADMM (with the differences outlined in sub-Section \ref{sub-sec:impact} below). \subsection{The Impact of Straggling Nodes} \label{sub-sec:impact} Theorem \ref{theorem1} shows that the proposed (c)sI-ADMM achieves faster convergence with a larger mini-batch size. However, for the csI-ADMM algorithm to tolerate more straggler nodes, the ECNs' capacity, such as memory and storage, limits the maximum allowable mini-batch size per iteration. Under the same ECN computation budget, tolerating more stragglers requires more overlapped (i.e., less disjoint) data to participate in each iteration. The trade-off between the number of tolerated straggler nodes and the mini-batch size can be formulated as \begin{equation} \overline{M} = \frac{M}{S+1}, \label{eq:s_vs_batchsize} \end{equation} where $M$ is the selected mini-batch size for the case without straggler nodes, and $\overline{M}$ is the maximum potential mini-batch size for the case with $S$ straggler nodes. \begin{corollary}[Convergence rate] \label{corollary_2} Under the conditions of Theorem \ref{theorem1}, suppose that there exist $S_i=S$ straggling ECNs connected to each agent $i$. The coded csI-ADMM algorithm roughly achieves a $O( \frac{1}{\sqrt{k}}\cdot\frac{S+M+1}{M})$ convergence rate. \end{corollary} \begin{IEEEproof} By substituting $\overline{M}$ into (\ref{eq:theorem}), we can obtain the desired result. \end{IEEEproof} Corollary \ref{corollary_2} implies that, to combat more straggler nodes, the allowed batch size $\overline{M}$ must be smaller than in the case with fewer straggling ECNs; a smaller batch size, however, degrades the convergence speed of the algorithm. In the following Section \ref{sec:results}, the relation between the number of allowed straggler nodes and the convergence rate will also be verified through numerical experiments.
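To make the encoding and decoding in the example of Fig. \ref{fig:coded_edge_compute} concrete, the following NumPy sketch implements the $(K_i, R_i)=(3,2)$ cyclic repetition code used there; the matrix \texttt{B} collects the encoding coefficients, and the function names, which mirror $p_{enc}^j(\cdot)$ and $q_{dec}^i(\cdot)$, are illustrative only:
\begin{verbatim}
import numpy as np

# Row j of B gives ECN j's linear combination of the partial gradients:
# g_{i,1} = 1/2 g~_{i,1} + g~_{i,2}, g_{i,2} = g~_{i,2} - g~_{i,3},
# g_{i,3} = 1/2 g~_{i,1} + g~_{i,3}.
B = np.array([[0.5, 1.0, 0.0],
              [0.0, 1.0, -1.0],
              [0.5, 0.0, 1.0]])

def p_enc(partial_grads, j):
    # partial_grads: array of shape (3, p); returns ECN j's coded gradient.
    return B[j] @ partial_grads

def q_dec(coded, arrived):
    # Recover g~_{i,1} + g~_{i,2} + g~_{i,3} from any R_i = 2 responses.
    # Solve a^T B[arrived] = [1, 1, 1]; a solution exists for any two rows.
    a = np.linalg.lstsq(B[arrived].T, np.ones(3), rcond=None)[0]
    return a @ coded

# Toy check with scalar partial gradients 1, 2, 3 (their sum is 6):
tg = np.array([[1.0], [2.0], [3.0]])
coded = np.stack([p_enc(tg, j) for j in range(3)])
print(q_dec(coded[[0, 1]], [0, 1]))   # -> [6.]
\end{verbatim}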
As for the response time of both uncoded and coded distributed systems, the relevant analysis can be found in \cite{speed-up}. In practice, the system cannot wait a long time for the slowest ECN. Hence, a maximum delay parameter $\epsilon$ will be considered in the following experiments. \section{Numerical Experiments} \label{sec:results} \begin{figure*}[t] \centering \vskip -0.2in \subfloat[ ]{\hspace*{1mm}\includegraphics[width=60mm]{ham_acc_vs_commu_minibatch.pdf}\label{fig_batch_a}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{ham_test_vs_commu_minibatch.pdf}\label{fig_batch_b}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{ham_acc_vs_comu_compare.pdf}\label{fig_compare_c}} \\ \subfloat[ ]{\hspace*{1mm}\includegraphics[width=60mm]{ham_test_vs_comu_compare.pdf}\label{fig_compare_d}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{ham_ciamm_acc_vs_runtime.pdf}\label{fig_siadmm_e}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{spc_test_comm_compare.pdf}\label{fig_compare_g}} \vskip -0.1in \caption{Performance of different consensus optimization methods on least squares on dataset USPS.} \label{fig_performance_usps} \end{figure*} In this section, both synthetic and real-world datasets are used to evaluate the performance of the proposed stochastic ADMM algorithms for decentralized consensus optimization. We evaluate the convergence performance in terms of mini-batch size, communication cost, running time, as well as the number of straggler nodes. \subsection{Simulation Setup} The experimental network $\mathcal{G}$ consists of $N$ agents and $E = \frac{N(N-1)}{2}\eta$ links, where $\eta$ is the network connectivity ratio. Each agent $i$ is attached to $K_i = K$ ECNs with the same computing capability (e.g., computation power and memory). To reduce the impact of token traversing patterns, both Hamiltonian cycle-based and non-Hamiltonian cycle-based (i.e., shortest path cycle-based \cite{wpg}) token traversing methods are evaluated for the proposed algorithms. For the shortest path cycle-based traversing method, Fig. \ref{fig_network} (b) illustrates the token traversing pattern. The traversing route is determined through the shortest path routing strategy \cite{sp-cycle}, and the cycle is formed by concatenating multiple shortest paths. To investigate the communication efficiency, we compare our approaches with state-of-the-art consensus optimization methods: 1) WADMM in \cite{wadmm}, where the agent activation order follows a random walk over the network; 2) D-ADMM in \cite{d-admm}; 3) DGD in \cite{DGD}; and 4) EXTRA in \cite{EXTRA}, with respect to the relative error, which is defined as \begin{equation} \text{accuracy} = \frac{1}{N} \sum_{i=1}^{N} \frac{\norm{x_i^k - x^*}}{\norm{x_i^1 - x^*}}, \end{equation} where $x^* \in\mathbbm{R}^{p\times d}$ is the optimal solution of (P-1). To demonstrate the robustness against straggler nodes, distributed schemes including the \textit{Cyclic} and \textit{Fractional} repetition methods and the uncoded method are implemented for comparison. For a fair comparison, the algorithm parameters are tuned and kept the same across different experiments. Moreover, unicast is considered among agents and the communication cost per link is 1 unit. The time consumed by each communication among agents is assumed to follow a uniform distribution $\mathcal{U}(10^{-5}, 10^{-4})$ s.
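For reference, the relative error (referred to as accuracy above) can be computed as in the following NumPy sketch, where the array shapes are placeholders:
\begin{verbatim}
import numpy as np

def relative_error(x_k, x_1, x_star):
    """Mean over agents of ||x_i^k - x*|| / ||x_i^1 - x*||.

    x_k, x_1: arrays of shape (N, p, d), current and initial iterates.
    x_star:   array of shape (p, d), the optimal solution of (P-1).
    """
    num = np.linalg.norm(x_k - x_star, axis=(1, 2))
    den = np.linalg.norm(x_1 - x_star, axis=(1, 2))
    return float(np.mean(num / den))

rng = np.random.default_rng(0)
x_star = rng.normal(size=(3, 1))
x_1 = x_star + rng.normal(size=(10, 3, 1))  # N=10 agents, p=3, d=1
print(relative_error(x_1, x_1, x_star))     # 1.0 at the first iterate
\end{verbatim}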
The response time of each ECN is measured by its computation time, and the overall response time of each iteration equals the execution time for updating all variables in that iteration. Moreover, a maximum delay $\epsilon$ for stragglers in each iteration is considered in the simulation. All experiments were performed using Python on an Intel CPU @2.3GHz (16GB RAM) laptop. \begin{figure*}[t] \centering \vskip -0.1in \subfloat[ ]{\hspace*{1mm}\includegraphics[width=60mm]{spc_mini_test1.pdf}\label{fig5_batch_a}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{spc_compare_acc1.pdf}\label{fig5_compare_b}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=60mm]{spc_iadmm_acc1.pdf}\label{fig5_siadmm_c}} \caption{Performance of different consensus optimization methods on least squares on dataset ijcnn1.} \label{fig_performance_ijcnn1} \end{figure*} We consider the decentralized least squares problem, which aims at solving (\ref{eq:main_problem1}) with the local function of each agent \begin{equation} f_i(x_i,\mathcal{D}_i) = \frac{1}{2b_i}\sum_{j=1}^{b_i}\norm{x_i^T o_{i,j} - t_{i,j}}^2, \end{equation} where $\mathcal{D}_i = \{ o_{i,j}, t_{i,j} |j=1,...,b_i \}$ is the disjoint dataset that agent $i$ needs to allocate among its $K_i$ ECNs. In the simulation, both synthetic and real datasets are utilized, as summarized in Table \ref{tab:dataset}. For the synthetic dataset, the entries of $x_o \in \mathbbm{R}^{3 \times 1}$ and of the inputs $o_{i} \in \mathbbm{R}^3$ are generated from independent standard normal distributions. The output measurement $t_{i} \in \mathbbm{R}^1$ follows $t_{i} := x_o^T o_{i} + e_{i} $, where $e_{i} \sim \mathcal{N}(0, \sigma I_1)$ is random noise with variance $\sigma$. Both the USPS and ijcnn1 data are partitioned disjointly across all agents. Among the $K_i$ ECNs, we divide all the local data $\mathcal{D}_i$ equally and disjointly, assigning $(S_i +1)$ partitions to each ECN. \begin{table}[!h] \caption{Simulation Datasets for Decentralized Consensus Optimization} \label{tab:dataset} \centering \fontsize{9}{8}\selectfont \begin{tabular}{|c|c|c|c|c|} \hline datasets & \# training & \# test & \# Dim. (p) & \# Dim. (d) \\ \hline synthetic & 50,400 & 5,040 & 3 & 1 \\ \hline USPS \cite{usps} & 1,000 & 100 & 64 & 10 \\ \hline ijcnn1\cite{libsvm} & 35,000 & 3,500 & 22 & 2\\ \hline \end{tabular} \end{table}
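A minimal sketch of this synthetic-data generation and of the local least-squares loss (the noise variance $\sigma$ is a placeholder; sizes follow Table \ref{tab:dataset}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n_samples, sigma = 50_400, 0.1            # sigma chosen for illustration
x_o = rng.standard_normal((3, 1))         # ground-truth model, N(0, 1) entries
O = rng.standard_normal((n_samples, 3))   # inputs o_i, standard normal
e = rng.normal(0.0, np.sqrt(sigma), size=(n_samples, 1))  # variance sigma
t = O @ x_o + e                           # outputs t_i = x_o^T o_i + e_i

def f_i(x, O_i, t_i):
    """Local loss of agent i on its b_i samples (O_i, t_i)."""
    return np.sum((O_i @ x - t_i) ** 2) / (2 * len(t_i))

print(f_i(x_o, O, t))  # approximately sigma/2 at the true model
\end{verbatim}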
\subsection{Simulation Results} Figs. \ref{fig_performance_usps} and \ref{fig_performance_ijcnn1} show the convergence performance of different consensus optimization methods on the least squares problem using datasets USPS and ijcnn1, respectively. Specifically, for Fig. \ref{fig_performance_usps}, a test network with a Hamiltonian cycle is first considered in sub-figures Fig. \ref{fig_performance_usps} (a)-(e), while one result with the shortest path-based cycle is shown in Fig. \ref{fig_performance_usps} (f). In sub-figures Fig. \ref{fig_performance_usps} (a) and (b), we present the impact of the mini-batch size $M$ on the convergence behavior. With increasing $M$, the proposed algorithms achieve a higher accuracy at the same communication cost, while the test error is lower as well. This agrees with Theorem \ref{theorem1}, which indicates that a larger mini-batch size leads to faster convergence. The accuracy vs. communication cost and the test error vs. communication cost are shown in sub-figures Fig. \ref{fig_performance_usps} (c), (d), and (f), where the test error is defined as the mean square error loss. \begin{figure}[t] \centering \vskip -0.1in \subfloat[ ]{\hspace*{1mm}\includegraphics[width=45mm]{acc_vs_runtime_S.pdf}\label{fig_ham1}} \subfloat[ ]{\hspace*{-2mm}\includegraphics[width=45mm]{testloss_vs_comm_S.pdf}\label{fig_b1}} \caption{Impact of the number of straggler nodes on the convergence rate of the proposed csI-ADMM on the synthetic dataset.} \label{fig_straggler} \end{figure} It is clear that the incremental algorithms, including sI-ADMM and WADMM, are more communication-efficient than the gossip-based benchmarks, such as D-ADMM, DGD and EXTRA. The proposed sI-ADMM leads to communication savings on both Hamiltonian-based and non-Hamiltonian-based token traversing test networks, while keeping the same test error level as the benchmarks. This is because in each iteration only one communication channel is used by the incremental methods. Besides, with a fixed token traversing pattern, the proposed sI-ADMM is more balanced in the visiting frequency of agents than WADMM. Further, we evaluate the robustness against straggler nodes in terms of running time in Fig. \ref{fig_performance_usps} (e). Here, the running time is defined as the experimental time, including both the communication time among agents and the response time for updating all variables. Meanwhile, we consider the existence of $N$ stragglers in the test networks, with $S_i=1$ straggling ECN attached to each agent $i$. As expected, the baseline uncoded scheme (i.e., sI-ADMM) shows worse accuracy performance as the delay increases. The proposed csI-ADMM with the Cyclic or Fractional schemes responds faster in the presence of stragglers and is not influenced by their delay. For larger test networks, Fig. \ref{fig_performance_ijcnn1} presents experimental results based on the ijcnn1 dataset, where the same behavior can be observed. In addition, to investigate the convergence speed vs. straggler nodes trade-off of the proposed csI-ADMM, the impact of the number of straggler nodes on the convergence speed is shown in Fig. \ref{fig_straggler}. To obtain reliable results, we perform 10 independent runs with the same simulation setup on synthetic data and average the outcomes. We can see that the convergence speed decreases as the number of straggler nodes increases. This is because increasing the number of straggler nodes decreases the allowable mini-batch size allocated in each iteration and therefore affects the convergence speed. This is consistent with the analysis in sub-Section \ref{sub-sec:impact}: there exists a trade-off between the number of tolerated stragglers and the convergence speed. For the proposed MDS-based csI-ADMM, pursuing robustness against more straggler nodes degrades the convergence speed. \section{Conclusion} \label{conclusion} We have studied decentralized consensus optimization with ADMM in edge-computing-enabled large-scale networks. An error-control-code-based stochastic incremental ADMM algorithm has been proposed to reduce the communication cost of exchanging intermediate model variables, with tolerance to link failures and straggler nodes. We have theoretically analyzed the convergence and communication properties of the proposed csI-ADMM, showing that it achieves a $O(\frac{1}{\sqrt{k}})$ convergence rate and a $O(\frac{1}{\upsilon ^2})$ communication cost. Moreover, the relation between the convergence speed and the number of straggler nodes has also been presented.
Simulation experiments have shown that, compared with benchmark approaches, the proposed csI-ADMM algorithm is more effective in reducing both response time and communication cost while retaining the same test loss level.
\section{Introduction} \label{intro} Semi-inclusive deep inelastic scattering (SIDIS) processes, where a fast meson is detected, are an important tool for the study of the internal hadron dynamics. Indeed, the detected meson likely originates from the fragmentation of the quark which absorbed the virtual photon, and it opens a valuable window on the motion of quarks inside the parent nucleon before the interaction with the photon. Hence, through SIDIS reactions (see, e.g., \cite{Qian,SIDIS,06010}) one can access the transverse-momentum-dependent parton distributions (TMDs) of nucleons (see, e.g., Ref. \cite{BARONE}). Neutron targets are not available, but, within a non-relativistic approach which includes the final state interaction (FSI) through a distorted spin-dependent spectral function (SF), the actual possibility to get information on the neutron structure from SIDIS experiments on $^3He$ has been shown \cite{Kaptari,DelDotto,Kaptari1}. For a relativistic description of few-body nuclei, we adopt a Poincar\'e covariant spin-dependent SF \cite{DPSS}, built up within the light-front Hamiltonian dynamics (LFHD) for an interacting system with a fixed number of on-mass-shell constituents (see, e.g., \cite{KP}). The LFHD features a subgroup structure of the light-front (LF) boosts (with a separation of the intrinsic motion from the global one) and allows one to give a fully Poincar\'e covariant description of deep inelastic scattering (DIS), SIDIS and deeply virtual Compton scattering. Furthermore, within the LFHD and using the Bakamjian-Thomas (BT) construction of the Poincar\'e generators \cite{Baka}, one can take advantage of the whole successful non-relativistic (NR) phenomenology that has been developed for the nuclear interaction. A distinct feature of our approach is the ability to implement macrocausality, or cluster separability, namely the expected property that if a system is separated into disjoint subsystems by a sufficiently large spacelike separation, then the subsystems behave as independent systems. In Section 2 the procedure to obtain information on the neutron Collins and Sivers asymmetries from SIDIS experiments on $^3He$ is discussed. In Section 3 the LF spin-dependent (SD) SF obtained from the LF wave functions for two- and three-nucleon systems is described, and the generalization to the LF dynamics of our procedure for the extraction of neutron asymmetries is outlined. In Section 4 the LF SF is applied to study the role of relativity in the EMC effect in $^3He$, and preliminary results are presented. In Section 5 conclusions and perspectives are drawn.
\section{Extraction of neutron asymmetries from SIDIS experiments off $^3He$} \vspace{-1mm} The Collins and Sivers asymmetries, $A_{3}^{C(S)}$, can be expressed as follows \vspace{-1mm} \begin{eqnarray} A_{3}^{C(S)} = \frac { \int_{x}^A d\alpha \left[ \Delta \sigma_{C(S)}^n\left (x/\alpha ,Q^2 \right ) {f^{\perp}_n(\alpha ,Q^2)}+ 2\Delta \sigma_{C(S)}^p\left (x / \alpha ,Q^2 \right ) {f^{\perp}_p(\alpha ,Q^2)} \right] } {\int d\alpha\left[ \sigma^n\left (x/\alpha ,Q^2 \right ) {f^{~}_n(\alpha ,Q^2)}+ 2\sigma^p\left (x/\alpha ,Q^2 \right ) {f^{~}_p(\alpha ,Q^2)} \right]} \label{asi} \end{eqnarray} \vspace{-1mm} in terms of the light-cone unpolarized, $f^{~}_N$, and transverse, $f_N^\perp$, momentum distributions (md) \begin{eqnarray} \hspace{-2mm}{f_N^{~(\perp)}(\alpha,Q^2)} = \int dE \int_{p_{m}(\alpha,Q^2)}^{p_{M}(\alpha,Q^2)} {m_N \over E_N} {{P}_N^{~(\perp)}(E, {\bf p})} \, \delta \left ( \alpha - {p\cdot q \over m_N \nu} \right ) \, \theta \left(W_Y^2- \left(m_N+m_\pi \right )^2 \right ) d^3 {\bf{p}} \label{md} \end{eqnarray} with $W_Y$ the invariant mass of the debris $Y$, which hadronizes into a nucleon and, at least, one pseudoscalar meson. The quantities $\Delta \sigma_{C(S)}^{N}$ and $\sigma^{N}$ in Eq. (\ref{asi}) are related to the structure of the bound nucleon: \begin{eqnarray} \hspace{-5mm}\Delta \sigma_{C}^N\left(x,Q^2 \right ) & = & { {1 -y \over 1-y-y^2/2}} \nonumber \\ & \times & \sum_q e_q^2 \int d^2 {{\mbox{\boldmath$\kappa$}}_T} d^2 {\bf k}_T \delta^2 ( {\bf k}_T + {\bf q}_T - {\mbox{\boldmath$\kappa$}}_T ) {{\bf \hat{P}}_{h\,\perp} \cdot {{\mbox{\boldmath$\kappa$}}_T} \over m_h} h_1^{q,N} (x, {\bf k}_T^2 ) H_1^{\perp q,h} (z, (z {{\mbox{\boldmath$\kappa$}}_T})^2 )~, \label{dcoll} \end{eqnarray} \be \hspace{-5mm}\Delta \sigma_{S}^N\left (x,Q^2 \right ) = \sum_q e_q^2 \int d^2 {{\mbox{\boldmath$\kappa$}}_T} d^2 {\bf k}_T \delta^2 ( {\bf k}_T + {\bf q}_T - {{\mbox{\boldmath$\kappa$}}_T} ) { {\bf \hat{P}}_{h\,\perp} \cdot {\bf{k}_T} \over m_N} f_{1T}^{\perp q,N} (x, {\bf{k}}_T^2 ) D_1^{q,h} (z, (z {\mbox{\boldmath$\kappa$}}_T)^2 )~, \label{dsiv} \ee \be \sigma^N\left (x,Q^2 , z\right ) = \sum_q e_q^2 \int d^2 {{\mbox{\boldmath$\kappa$}}_T} d^2 {\bf k}_T \delta^2 ( {\bf k}_T + {\bf q}_T - {{\mbox{\boldmath$\kappa$}}_T} ) f_1^{q,N} (x,{\bf k}_T^2 ) D_1^{q,h} (z, (z {{\mbox{\boldmath$\kappa$}}_T})^2 )~, \label{unpol} \ee where $z = E_h/ \nu$, and models for the parton distributions $ h_1^{q,N}$, $f_{1T}^{\perp q,N}$, $f_1^{q,N}$, and for the fragmentation functions $H_1^{\perp q,h}$, $D_1^{q,h}$ were used (see Ref. \cite{mio}). In Eq. (\ref{md}) ${P}_N^{~}(E, {\bf p})$ is the unpolarized SF (see \cite{cps}), while ${P}^{\, \perp}_N(E,{\bf p})$ is the transverse SF \be {P}^{\, \perp}_N(E,{\bf p})= \Re e \left \{ {P}^{N \, \frac12 -\frac12}_{\frac12 -\frac12}(E,{\bf p}) + {P}^{N \, -\frac12 \frac12}_{\frac12 -\frac12}(E,{\bf p}) \right \} \quad . \label{trspectr}\ee In Ref. \cite{Kaptari1}, the matrix elements of a distorted SD SF, which includes a generalized eikonal approximation (GEA) to take care of the FSI within a NR approach, were introduced: \be {P}^{N \,MM'}_{\lambda\lambda'}(E,{\bf p})= \sum_{f_{23}} \sum \! \!\! \!\! \!\!
\!\int_{~\epsilon^*_{23}}\rho\left( \epsilon^*_{23}\right)\, { \tilde {\cal O}}_{\lambda\lambda'}^{N \, MM' \, f_{23}} (~\epsilon^*_{23},{\bf p}) \, { \delta\left( E+ M_3-m_N-M^*_{23}\right)}~ ~, \label{spectrg} \ee with $\hspace{7mm}$ ${ \tilde {\cal O}}_{\lambda\lambda'}^{N \,M \, M' \, f_{23}} (\epsilon^*_{23},{\bf p})= \langle\, \lambda, {\bf p}; {\hat S_{Gl}} \phi_{\epsilon_{23}^*}^{f_{23}} | \Psi_3^{M}\,\rangle \langle\, \Psi_3^{M'}| \lambda', {\bf p}; {\hat S_{Gl}} \phi_{\epsilon_{23}^*}^{f_{23}} \,\rangle.$ The spin components $M$, $M'$ and $\lambda$, $\lambda'$ are defined with respect to the direction of $\hat{\bf q}$. The operator $\, {\hat S_{Gl}} ({\bf r}_1,{\bf r}_2,{\bf r}_3)= \prod_{i=2,3}\bigl[1-\theta(z_i-z_1) {\Gamma}({\bf b}_1-{\bf b}_i,{ z}_1-{z}_i) \bigr] $ is a Glauber operator which takes care of hadronization and FSI. The model of Ref. \cite{Kope1} for the (generalized) profile function $\Gamma({\bf b},z)$, already successfully applied to $^2H(e,e'p)X$ \cite{Ciofi}, is adopted. In Ref. \cite{mio}, using the NR SF of Ref. \cite{cps} and within the plane wave impulse approximation (IA), i.e., with no interaction between the measured fast $\pi$, the remnant debris and the interacting two-nucleon recoiling system, it was shown that the formula \cite{neutr} \be A_n \simeq {1 \over {p_n} d_n} \left ( {A^{exp}_3} - 2 {p_p} d_p {A^{exp}_p} \right )~, \quad \label{extrac} \ee already widely used to extract neutron asymmetries in DIS from experiments on $^3He$, works also in SIDIS, both for the Collins and Sivers single spin asymmetries. Nuclear effects are hidden in the effective polarizations (EP), $p_p=-0.024$ and $p_n=0.878$, and in the dilution factors $d_{p(n)}$. To investigate whether the formula (\ref{extrac}) can be safely applied even in the presence of the FSI, the GEA distorted spin-dependent SF was adopted in Ref. \cite{Kaptari1}. While $P^{IA}$ depends on ground state properties, $P^{FSI}$ is process dependent, since the Glauber operator depends on the kinematics of the process. Then, for each experimental point ($x, Q^2,\ldots$) a different $P^{FSI}$ has to be evaluated! The SFs ${P}^{IA}$ and ${P}^{FSI}$, as well as the light-cone md $f_N^{IA}$ and $f_N^{FSI}$, can be very different (see Fig. \ref{distr}), and therefore FSIs have a strong effect on the SIDIS cross sections. \begin{figure}[h] \vspace{-0.8cm} \includegraphics[width=0.45\textwidth]{fig6aNew.eps} \includegraphics[width=0.45\textwidth]{fig6bNew.eps} \vspace{-0.3cm} \caption{Neutron unpolarized and transversely polarized distributions in $^3He$ in IA (full lines) and with FSI (dashed lines) for the initial electron energy $\cal{E}$= 8.8 GeV and $Q^2 = 5.73 ~(GeV/c)^2$ (preliminary results).} \label{distr} \end{figure} However, when the FSI is included, the md $f_N$ and $f_N^\perp$ change in the same way, and in the asymmetries the md appear both in the numerator and in the denominator. Furthermore, while FSIs change the effective polarizations $p_{p(n)}$ by 10-15\%, the effects of the GEA-FSI in the dilution factors and in the EP compensate each other to a large extent: i.e., the products $p^{FSI}_{p(n)}~d^{FSI}_{p(n)}$ and $p^{IA}_{p(n)}~d^{IA}_{p(n)}$ are essentially the same \cite{DelDotto}. Then the usual extraction of Eq. (\ref{extrac}) is safe, as shown at ${\cal E}=$ 8.8 GeV in Fig. \ref{asymm}. \begin{figure}[h] \includegraphics[width=14.cm]{fig10.eps} \vskip -5.mm \caption{Neutron asymmetries extracted via Eq. (\ref{extrac}) from the $^3He$ Sivers (left panel) and Collins (right panel) asymmetries, with and without FSI, in the actual kinematics of JLab \cite{SIDIS} (preliminary results, see \cite{Kaptari1}).} \label{asymm} \end{figure}
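For illustration, the extraction in Eq. (\ref{extrac}) amounts to the simple arithmetic sketched below; the effective polarizations are the values quoted above, while the asymmetries and dilution factors are hypothetical placeholders:
\begin{verbatim}
# Effective polarizations quoted in the text.
p_p, p_n = -0.024, 0.878

def extract_neutron(A3_exp, Ap_exp, d_n, d_p):
    """A_n ~ (A3_exp - 2*p_p*d_p*Ap_exp) / (p_n*d_n), Eq. (extrac)."""
    return (A3_exp - 2.0 * p_p * d_p * Ap_exp) / (p_n * d_n)

# Hypothetical inputs, only to show the mechanics of the formula:
print(extract_neutron(A3_exp=-0.02, Ap_exp=0.05, d_n=0.9, d_p=0.9))
\end{verbatim}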
\section{Light-Front Dynamics and the Light-Front Spectral Function} An explicit construction of the 10 Poincar\'e generators that fulfill the proper commutation rules in the presence of interactions was given by Bakamjian and Thomas \cite{Baka}: i) only the mass operator, $M$, contains the interaction, and ii) it generates the dependence upon the interaction of the three dynamical generators in LFHD, namely $P^-$ and the transverse rotations $\vec F_\perp$. The mass operator $M$ is obtained by adding an interaction to the free mass $M_0$ of the system. There are two possibilities: $M^2 = M_0^2 + U$ (then, for two particles, one can easily embed the NR phenomenology) or $M = M_0 + V$. The interaction, $U$ or $V$, must commute with all the kinematical generators and with the non-interacting spin. Then it has to be invariant under translations and rotations, as in the NR case. For the three-body case the mass operator is $M_{BT}(123)= {M_0(123)}+ V^{BT}$, where $M_0(123)= \sqrt{m^2 +k^2_1}+\sqrt{m^2 +k^2_2}+\sqrt{m^2 +k^2_3}$~ is the free mass operator, $V^{BT}$ is a BT two-body and three-body force, and $k_i~(i=1,2,3)$ are intrinsic momenta with ${\bf k}_1 +{\bf k}_2 +{\bf k}_3=0$ \cite{KP}. The NR mass operator is written as $M^{NR}=3m + \sum_{i=1,3} {k^2_i / 2m} +V^{NR}_{12}+V^{NR}_{23}+V^{NR}_{31}+V^{NR}_{123}$ and must obey the commutation rules proper of the Galilean group, leading to translational and rotational invariance. These properties are analogous to the ones in the BT construction. This allows us to consider the standard NR mass operator as a sensible BT mass operator and to embed it in a Poincar\'e covariant approach: $M_{BT}(123) \sim M^{NR}~$. To obtain within the LFHD a Poincar\'e-covariant spin-dependent SF for a three-particle system in the bound state $|\Psi_{0};S, T_z \rangle$, eigenstate of the mass operator $M_{BT}(123)$ and polarized along $\vec{S}$, let us use the LF overlaps $_{LF}\langle \tau_{S},T_{S};\alpha,\epsilon ;J_{z}J;\tau\sigma,\tilde{\bm \kappa}|\Psi_{0}; S, T_z\rangle$ in place of their NR counterparts in the definition of the SF. The state $_{LF}\langle \tau_{S},T_{S};\alpha,\epsilon ;J_{z}J;\tau\sigma,\tilde{\bm \kappa}|$ is the tensor product of a plane wave for the knocked-out constituent (say particle 1), with intrinsic momentum $\tilde{\bm \kappa}$, and a fully interacting intrinsic state for the spectator system (say particles 2 and 3), with energy $\epsilon$, \emph{all moving in the intrinsic reference frame of the cluster (1,23)}. When applications to DIS or SIDIS processes are concerned, the issue of macrocausality has to be considered, i.e., if the subsystems which compose a system are brought far apart, the Poincar\'e generators of the system have to become the sum of the Poincar\'e generators corresponding to the subsystems into which the system is asymptotically separated. The packing operators \cite{KP}, which make it possible to include the macrocausality in the bound state, are not considered in the present approximation. However, we implement macrocausality in the tensor product of a plane wave for the knocked-out constituent times a fully interacting intrinsic state for the spectator pair.
Then, the LF spin-dependent SF for the three-nucleon system ($^3He$ or $^3H$) is \cite{DPSS} \be \hspace{-0.4cm}{{{\cal {P}}^{\tau}_{\sigma'\sigma}(\xi,{\bm \kappa}_\perp,\kappa^-,S)} = \left|{\partial \kappa^+\over \partial \xi}\right| ~\int \! \!\ \! \! \! \! \!\ \! \! \!\! \!\sum d\epsilon~\rho(\epsilon) ~ \delta\left( \kappa^- -M_3+{M^2_S +|{\bm \kappa}_\perp|^2 \over (1-\xi)M_3} \right)} ~ ~\times \nonu \hspace{-0.4cm} \sum_{J J_{z}\alpha}\sum_{T_{S}\tau_{S} } ~ _{LF}\langle \tau_{S},T_{S} , \alpha,\epsilon; J J_{z}; \tau\sigma',\tilde{\bm \kappa}|\Psi_{0}; S,T_z \rangle ~\langle S,T_z; \Psi_0|\tilde{\bm \kappa},\sigma\tau; J J_{z}; \epsilon, \alpha, T_{S}, \tau_{S}\rangle_{LF} \label{LFspf} \ee where $\tau= \pm 1 /2$, $M_3$ is the nucleus mass, $\rho(\epsilon)$ is the density of the two-nucleon eigenstates ($\rho(\epsilon) = \sqrt{\epsilon ~ m} ~ m/2$ for the two-body continuum states and $\rho(\epsilon) = 1$ for the deuteron bound state), $J$ is the spin and $T_{S}$ the isospin of the two-body state, $\alpha$ is the set of quantum numbers needed to completely specify this eigenstate, and $M_S=2\sqrt{m^2 +m\epsilon}$ is its mass. From $\xi, M_S, {\bm \kappa}_\perp$ one can define $\kappa^+=\xi{\cal M}_{0}(1,23)$, where ${\cal M}_0(1,23)$ is the free mass of the cluster (1,23): \be {\cal M}^2_{0}(1,23)={m^2 +|{\bm \kappa}_\perp|^2 \over \xi}+ {M^2_S +|{\bm \kappa}_\perp|^2 \over (1-\xi)} \quad . \ee The overlap $_{LF}\langle \tau_{S},T_{S};\alpha,\epsilon ;J_{z}J;\tau\sigma,\tilde{{\bm \kappa}}|\Psi_{0}; S, T_z\rangle$ is defined as follows \cite{DPSS}: \be \hspace{-6mm}_{{LF}}\langle \tau_{S},T_{S} , \alpha,\epsilon; J_{z} J; \tau\sigma,{{\tilde{\bm \kappa}}}|\Psi_{0}; S, T_z \rangle = \sum_{\tau_2,\tau_3} \int d{\bf k}_{23} \sum_{\sigma'_1}~ D^{{1 \over 2}} [{\cal R}_M (\tilde{\bm k} )]_{\sigma\sigma'_1} ~\sqrt{ {\kappa^+ E_{23} \over k^+ E_S}}~\sqrt{(2 \pi)^3~2E({\bf k})}\times \nonu \hspace{-6mm} ~\sum_{\sigma''_2,\sigma''_3}\sum_{\sigma'_2,\sigma'_3} ~\sum_{\sigma_2}~ D^{{1 \over 2}} [{R}^\dagger_M{ ({\blf k}_{23} )}]_{\sigma''_2\sigma_2}~ D^{{1 \over 2}} [{R}_M {({\blf k}_{2} )}]_{\sigma_2\sigma'_2} ~\sum_{\sigma_3}~ D^{{1 \over 2}} [{R}^\dagger_M{(-{\blf k}_{23} )}]_{\sigma''_3\sigma_3}~ D^{{1 \over 2}} [{R}_M {({\blf k}_{3} )}]_{\sigma_3\sigma'_3} \nonu \times ~ ~~ _{{ {IF}}}\langle \tau_S, T_{S}, \alpha,\epsilon; J_{z} J |{\bf k}_{23};\sigma"_2,\sigma"_3;\tau_2,\tau_3 \rangle \langle \tau_3,\tau_2,\tau; \sigma'_3, \sigma'_2, \sigma'_1; {{\bf k}},{\bf k}_{23}| \Psi_{0}; S,T_z \rangle_{{{IF}}} \quad , \label{overl} \ee where ${\bf k}_{23}$ is the intrinsic momentum of the (23) pair, ${\bf k}$ is the intrinsic nucleon momentum in the (123) system (${\bf k}_\perp={\bm \kappa}_\perp$, since we choose the $^3He$ transverse momentum ${\bf P}_\perp=0$), and $k^+ = \xi~ M_0(123)= \kappa^+ ~M_0(123)/ {\cal M}_0(1,23)$, with $M_0(123)$ the free mass of the three-particle system, \be ~~~~~~~M^2_0(123)={m^2 +|{\bm k}_\perp|^2 \over \xi}+{M^2_{23} + |{\bm k}_\perp|^2 \over (1-\xi)} \ee and $~M^2_{23}= 4 (m^2 +|{\bf k}_{23}|^2)$ the mass of the spectator pair {\em without} interaction. In Eq. (\ref{overl}) one has $k_z= { 1\over 2} ~\left[k^+ -{(m^2+|{\bm \kappa}_\perp|^2 ) / k^+} \right]$, $E_{23}=\sqrt{M^2_{23}+|{\bf k}|^2}$ and $E_S=\sqrt{M^2_S+|{\bm \kappa}|^2}$. Furthermore, $D^{s}_{\sigma,\sigma'}(R^\dagger_{M}(\blf k))$ is the Wigner function, needed for coupling angular momenta in LFHD, and the Melosh rotation $R_{M}(\blf k)$ is the rotation between the rest frames of the particle reached through a LF boost or a canonical, rotationless boost \cite{KP}.
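The kinematical relations just listed can be collected in a short numerical sketch (nucleon mass in GeV; the chosen $\xi$, $|{\bm \kappa}_\perp|$ and $\epsilon$ are arbitrary test values):
\begin{verbatim}
import math

m = 0.939  # nucleon mass in GeV (approximate value)

def cluster_free_mass(xi, kperp, M_S):
    """M0(1,23): M0^2 = (m^2+kperp^2)/xi + (M_S^2+kperp^2)/(1-xi)."""
    return math.sqrt((m**2 + kperp**2) / xi
                     + (M_S**2 + kperp**2) / (1.0 - xi))

def kz_from_plus(kplus, kperp):
    """k_z = [k^+ - (m^2 + kperp^2)/k^+] / 2 for a given plus component."""
    return 0.5 * (kplus - (m**2 + kperp**2) / kplus)

eps = 0.01                             # spectator intrinsic energy (test value)
M_S = 2.0 * math.sqrt(m**2 + m * eps)  # M_S = 2 sqrt(m^2 + m*eps)
xi, kperp = 1.0 / 3.0, 0.1             # test kinematics
kappa_plus = xi * cluster_free_mass(xi, kperp, M_S)  # kappa^+ = xi M0(1,23)
# Note: in Eq. (overl) k_z is evaluated with k^+ = xi*M0(123); here we apply
# the same relation to kappa^+ purely for illustration.
print(M_S, kappa_plus, kz_from_plus(kappa_plus, kperp))
\end{verbatim}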
In our calculations, we identify the instant form (IF) overlaps of Eq. (\ref{overl}) with the NR wave functions for the two-nucleon and the three-nucleon \cite{pisa} systems, corresponding to the NN interaction AV18 \cite{AV18}. We are presently planning to test our extraction procedure of neutron asymmetries from $^3He$ asymmetries using the LF SF, and to include in our LF description the FSI between the jet produced by the hadronizing quark and the two-nucleon spectator system, through an extension to the LF framework of the GEA of Refs. \cite{Kope1,Ciofi}, as we did in the NR case \cite{Kaptari,Kaptari1}. \section{Light-front momentum distribution and preliminary results for the EMC effect} From the LF SF one can obtain the momentum distribution $f^A_{p(n)}(z)$: \be f^A_{\tau}(z) = \int_0^1 d\xi \int d {\bm \kappa}_\perp\int d\kappa^- ~{1 \over 2 (2 \pi)^3 \kappa^+} ~ Tr \left[{{\cal P}^{\tau}(\xi,{\bf k}_\perp, \kappa^-,S)}\right]~ \delta\left(z - {\xi M_A\over m} \right ) \ee which naturally fulfills both the normalization and the momentum sum rule, \vspace{-1mm} \be {\int_0^{M_A/m} dz~f^A_{\tau}(z)=1} \quad \quad \quad {\rm MSR}={1 \over A}\int_0^{M_A/m} dz~z~\left [Z f^A_{p}(z) + (A-Z) f^A_{n}(z)\right] ={ M_A\over A~m} \ee because of the symmetry of the three-body bound state (see \cite{DPSS}). To investigate whether the LF SF can affect the EMC effect, we first evaluated the nuclear structure function $ F^A_2(x)$ ($x=Q^2/2m\nu$) as a convolution of the nuclear SF and of the nucleon structure functions for the proton and the neutron. Then we obtained the ratios \be { R^A_2(x)={~ F^A_2(x)\over Z~F^p_2(x)+(A-Z)~F^n_2(x)}} \ee and $R^{He}_2(x)/R^D_2(x)$. For the two-body channel an exact calculation was performed. In the three-body channel, average values for $k_{23}$ were inserted in Eq. (\ref{overl}). Our preliminary results are shown in Fig. \ref{fig1} and encourage us to perform the full LF calculation. \begin{figure} \vspace{-0.8cm} \centering \includegraphics[width=11.cm]{ect15_2.eps} \vspace{-0.8cm} \caption{$^3He$ EMC effect. Solid line: result for the LF SF, with an exact calculation in the 2-body channel and average energies in the 3-body one: $<k_{23} >$= 113.53 MeV (proton), $< k_{23} >$= 91.27 MeV (neutron), corresponding to the average kinetic energy of the intrinsic motion of the (23) pair in the continuum spectrum. Dotted line: result with the approach of Ref. \cite{Sauer} for the SF. Experimental data are from Ref. \cite{Seely}.} \label{fig1} \end{figure}
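To indicate how such ratios are evaluated, the following schematic sketch performs the convolution with a toy momentum distribution and a toy nucleon structure function (neither is the input actually used for our results), assuming the standard convolution form $F_2^A(x)=\int dz\, f^A(z)\, F_2^N(x/z)$:
\begin{verbatim}
import numpy as np

z = np.linspace(0.6, 1.4, 401)          # light-cone fraction grid
f_z = np.exp(-((z - 1.0) / 0.08) ** 2)  # toy md, peaked at z = 1
f_z /= np.trapz(f_z, z)                 # normalization: int dz f(z) = 1

def F2_nucleon(x):
    """Toy nucleon structure function, only to illustrate the convolution."""
    return np.where((x > 0) & (x < 1), np.sqrt(np.abs(x)) * (1 - x) ** 3, 0.0)

def F2_A(x):
    """Schematic convolution F2^A(x) = int dz f(z) F2^N(x/z)."""
    return np.trapz(f_z * F2_nucleon(x / z), z)

x = 0.5
print(F2_A(x) / F2_nucleon(x))          # a toy EMC-like ratio at x = 0.5
\end{verbatim}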
\section{Conclusions and Perspectives} An investigation of SIDIS processes off $^3He$ beyond the NR, impulse approximation approach is presently being carried out. A Generalized Eikonal Approximation has been used to deal with the FSI effects, and a distorted spin-dependent spectral function, still non-relativistic, has been defined \cite{Kaptari,Kaptari1}. It has been shown that the formula (\ref{extrac}) can be safely used to obtain both the Collins and Sivers neutron asymmetries from the measured Collins and Sivers asymmetries of $^3He$ \cite{Kaptari1}. A Poincar\'e covariant description of A=3 nuclei, based on the LFHD, has been proposed \cite{DPSS}. The BT construction of the Poincar\'e generators allows one to embed the successful NR phenomenology for few-nucleon systems in a Poincar\'e covariant framework. Then a LF SF can be defined that exactly fulfills both the normalization and the momentum sum rule. The nucleon SF for $^3He$ has been evaluated by approximating the IF overlaps in Eq. (\ref{overl}) with their NR counterparts, calculated with the AV18 NN interaction, since the latter fulfills rotational and translational symmetries. Let us stress two important features of our LF spectral function: i) the definition of the nucleon momentum $\tilde{\bm \kappa}$ in the intrinsic reference frame of the cluster (1,23); and ii) the use, for the calculation of the LF spectral function, of the tensor product of a plane wave of momentum $\tilde{\bm \kappa}$ times the state which describes the intrinsic motion of the fully interacting spectator subsystem. These new features allow one to take care of macrocausality and to introduce a new effect of binding in the spectral function. A first test of our approach is the EMC effect for $^3He$. The 2-body contribution to the nucleon SF has been calculated with the full expression, while for the 3-body contribution average values of $<k_{23}^2>$ have been used. In the comparison with experimental data, encouraging improvements clearly appear with respect to the non-relativistic result. Therefore, relativistic effects, generated by the fulfillment of Poincar\'e covariance at the nucleus level, seem to be required to identify unambiguously new, genuine QCD phenomena inside the nucleus itself. Our next steps will be the full calculation of the EMC effect for $^3He$, including the exact 3-body contribution, and the introduction of the FSI through the GEA within the LFHD.
\section{Introduction}\label{sec:intro} Recently, two novel ideas have been proposed to exploit coding in order to speed up distributed computing applications. Specifically, a repetitive structure of computation tasks across distributed computing servers was proposed in~\cite{li2016fundamental,LMA_ISIT16,LMA_all}, enabling coded multicast opportunities that significantly reduce the time to shuffle intermediate results. On the other hand, applying Maximum Distance Separable (MDS) codes to some linear computation tasks (e.g., matrix multiplication) was proposed in~\cite{lee2015speeding,lee-ISIT16}, in order to alleviate the effects of straggling servers and shorten the computation phase of distributed computing. In this paper, we propose a \emph{unified} coded framework for distributed computing with straggling servers, by introducing a tradeoff between ``latency of computation'' and ``load of communication'' for linear computation tasks. We show that the coding schemes of~\cite{li2016fundamental} and~\cite{lee2015speeding} can then be viewed as special instances of the proposed coding framework, obtained by considering the two extremes of this tradeoff: minimizing either the load of communication or the latency of computation individually. Furthermore, the proposed coding framework provides a natural tradeoff between computation latency and communication load in distributed computing, and allows one to systematically operate at any point on that tradeoff. More specifically, we focus on a distributed matrix multiplication problem in which, for a matrix ${\bf A}$ and $N$ input vectors ${\bf x}_1,\ldots,{\bf x}_N$, we want to compute $N$ output vectors ${\bf y}_1={\bf A}{\bf x}_1,\ldots,{\bf y}_N={\bf A}{\bf x}_N$. The computation cannot be performed on a single server node since its local memory is too small to hold the entire matrix ${\bf A}$. Instead, we carry out this computation collaboratively using $K$ distributed computing servers. Each server has a local memory whose size is large enough to store the equivalent of a $\mu$ fraction of the entries of the matrix ${\bf A}$, and it can only perform computations based on the contents stored in its local memory. Matrix multiplication is one of the building blocks for solving data analytics and machine learning problems (e.g., regression and classification). Many such big data analytics applications require massive computation and storage power over large-scale datasets, which are nowadays provided collaboratively by clusters of computing servers, using efficient distributed computing frameworks such as Hadoop MapReduce~\cite{dean2004mapreduce} and Spark~\cite{zaharia2010spark}. Therefore, optimizing the performance of distributed matrix multiplication is of vital importance for improving the performance of distributed computing applications. A distributed implementation of matrix multiplication proceeds in three phases: Map, Shuffle and Reduce. In the Map phase, every server multiplies the input vectors with the locally stored matrix that partially represents the target matrix ${\bf A}$. When a subset of servers finish their local computations such that their Map results are sufficient to recover the output vectors, we halt the Map computation and start to Shuffle the Map results across the servers, where the final output vectors are calculated by specific Reduce functions. Within the above three-phase implementation, the coding approach of \cite{li2016fundamental} aims at minimizing the shuffling load of intermediate Map results.
It introduces a particular repetitive structure of Map computations across the servers, and utilizes this redundancy to enable a specific type of network coding in the Shuffle phase (named coded multicasting) that minimizes the communication load. We term this coding approach the ``Minimum Bandwidth Code''. In~\cite{li2016scalable,LQMA_globecom16}, the Minimum Bandwidth Code was employed in a fully decentralized wireless distributed computing framework, achieving a scalable architecture with a constant load of communication. The other coding approach, that of~\cite{lee2015speeding}, aims at minimizing the latency of Map computations by encoding the Map tasks using MDS codes, so that the run-time of the Map phase is not affected by up to a certain number of straggling servers. This coding scheme, which we term the ``Minimum Latency Code'', results in a significant reduction of the Map computation latency. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{tradeoff.pdf} \caption{The Latency-Load tradeoff, for a distributed matrix multiplication job of computing $N=840$ output vectors using $K=14$ servers each with a storage size $\mu=1/2$.} \label{fig:tradeoff} \end{figure} In this paper, we formalize a \emph{tradeoff} between the computation latency in the Map phase (denoted by $D$) and the communication (shuffling) load in the Shuffle phase (denoted by $L$) for distributed matrix multiplication (in short, the \emph{Latency-Load Tradeoff}), in which, as illustrated in Fig.~\ref{fig:tradeoff}, the above two coded schemes correspond to the two extreme points that minimize $L$ and $D$ respectively. Furthermore, we propose a unified coded scheme that organically integrates both of the coding techniques, and allows one to systematically operate at any point on the introduced tradeoff. For a given computation latency, we also prove an information-theoretic lower bound on the minimum communication load required to accomplish the distributed matrix multiplication. This lower bound is proved by first concatenating multiple instances of the problem with different reduction assignments of the output vectors, and then applying the cut-set bound on subsets of servers. At the two end points of the tradeoff, the proposed scheme achieves the minimum communication load to within a constant factor. We finally note that there is another tradeoff, between the computation load in the Map phase and the communication load in the Shuffle phase of distributed computing, which was introduced and characterized in~\cite{li2016fundamental}. In this paper, we fix the amount of computation load (determined by the storage size) at each server, and focus on characterizing the tradeoff between the computation latency (determined by the number of servers that finish the Map computations) and the communication load. Hence, the considered tradeoff can be viewed as an extension of the tradeoff in~\cite{li2016fundamental} by introducing a third axis, namely the computation latency of the Map phase. \section{Problem Formulation}\label{sec:def} \subsection{System Model} We consider a matrix multiplication problem in which, given a matrix ${\bf A} \in \mathbb{F}_{2^T}^{m \times n}$ for some integers $T$, $m$ and $n$, and $N$ input vectors ${\bf x}_1,\ldots,{\bf x}_N \in \mathbb{F}_{2^T}^n$, we want to compute $N$ output vectors ${\bf y}_1 = {\bf A}{\bf x}_1,\ldots,{\bf y}_N = {\bf A}{\bf x}_N$. We perform the computations using $K$ distributed servers.
Each server has a local memory of size $\mu mnT$ bits (i.e., it can store the equivalent of a $\mu$ fraction of the entries of the matrix ${\bf A}$), for some $\frac{1}{K} \leq \mu \leq 1$.\footnote{Thus enough information to recover the entire matrix ${\bf A}$ can be stored collectively on the $K$ servers.} We allow applying linear codes for storing the rows of ${\bf A}$ at each server. Specifically, Server $k$, $k \in \{1,\ldots,K\}$, designs an encoding matrix ${\bf E}_k \in \mathbb{F}_{2^T}^{\mu m \times m}$, and stores \begin{equation}\label{eq:store} {\bf U}_k = {\bf E}_k {\bf A}. \end{equation} The encoding matrices ${\bf E}_1,\ldots,{\bf E}_K$ are design parameters, and their collection is denoted as the \emph{storage design}. The storage design is performed prior to the computation. \begin{remark} For the Minimum Bandwidth Code in~\cite{li2016fundamental}, each server stores $\mu m$ rows of the matrix ${\bf A}$. Thus, the rows of the encoding matrix ${\bf E}_k$ were chosen as a size-$\mu m$ subset of the rows of the identity matrix ${\bf I}_m$, according to a specific repetition pattern. For the Minimum Latency Code in~\cite{lee2015speeding}, ${\bf E}_k$ was generated randomly such that every server stores $\mu m$ random linear combinations of the rows of ${\bf A}$, achieving a $(\mu m K, m)$ MDS code. $\hfill \square$ \end{remark} \vspace{-2.5mm} \subsection{Distributed Computing Model} \vspace{-1.5mm} We assume that the input vectors ${\bf x}_1,\ldots,{\bf x}_N$ are known to all the servers. The overall computation proceeds in three phases: \emph{Map}, \emph{Shuffle}, and \emph{Reduce}. \noindent {\bf Map Phase:} The role of the Map phase is to compute coded intermediate values according to the locally stored matrices in (\ref{eq:store}), which can be used later to re-construct the output vectors. More specifically, for all $j=1,\ldots,N$, Server $k$, $k =1,\ldots,K$, computes the intermediate vectors \begin{equation}\label{eq:map} {\bf z}_{j,k} = {\bf U}_k {\bf x}_j = {\bf E}_k {\bf A}{\bf x}_j = {\bf E}_k{\bf y}_j. \end{equation} We denote the latency for Server~$k$ to compute ${\bf z}_{1,k},\ldots,{\bf z}_{N,k}$ by $S_k$. We assume that $S_1,\ldots,S_K$ are i.i.d. random variables, and denote the $q$th order statistic, i.e., the $q$th smallest variable of $S_1,\ldots,S_K$, by $S_{(q)}$, for all $q \in \{1,\ldots,K\}$. We focus on a class of distributions of $S_k$ such that \begin{align} \mathbb{E}\{S_{(q)}\} = \mu N g(K,q), \end{align} for some function $g(K,q)$. The Map phase terminates when a subset of servers, denoted by ${\cal Q} \subseteq \{1,\ldots,K\}$, have finished their Map computations in (\ref{eq:map}). A necessary condition for selecting ${\cal Q}$ is that the output vectors ${\bf y}_1\ldots,{\bf y}_N$ can be re-constructed by jointly utilizing the intermediate vectors calculated by the servers in ${\cal Q}$, i.e., $\{{\bf z}_{j,k}: j=1,\ldots,N, k \in {\cal Q}\}$. However, one can allow redundant computations in ${\cal Q}$, since, if designed properly, they can be used to reduce the load of communicating the intermediate results needed by the servers in ${\cal Q}$ to recover the output vectors in the following stages of the computation. \begin{remark} The Minimum Bandwidth Code in~\cite{li2016fundamental} waits for all servers to finish their computations, i.e., ${\cal Q}=\{1,\ldots,K\}$. For the Minimum Latency Code in~\cite{lee2015speeding}, ${\cal Q}$ is the subset of the fastest $\lceil \frac{1}{\mu}\rceil$ servers in performing the Map computations. $\hfill \square$ \end{remark}
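As a toy illustration of the storage rule (\ref{eq:store}) and the Map computation (\ref{eq:map}), the following NumPy sketch uses real-valued random encoding matrices in place of operations over $\mathbb{F}_{2^T}$, in the spirit of the Minimum Latency Code, where stacking the results of any $\lceil 1/\mu \rceil$ servers recovers the outputs:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n, N, K, mu = 20, 8, 4, 6, 0.5   # illustrative sizes
A = rng.standard_normal((m, n))     # stands in for the data matrix A
X = rng.standard_normal((n, N))     # input vectors x_1..x_N as columns

# Storage design: server k keeps U_k = E_k A, with E_k of size (mu*m) x m.
E = [rng.standard_normal((int(mu * m), m)) for _ in range(K)]
U = [Ek @ A for Ek in E]

# Map phase: server k computes z_{j,k} = U_k x_j for every input vector j.
Z = [Uk @ X for Uk in U]            # column j of Z[k] is z_{j,k}

# With random E_k, stacking any ceil(1/mu) = 2 servers' results recovers
# all outputs y_j = A x_j:
E12 = np.vstack([E[0], E[1]])       # 20 x 20, invertible with probability 1
Y = np.linalg.solve(E12, np.vstack([Z[0], Z[1]]))
print(np.allclose(Y, A @ X))        # True
\end{verbatim}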
\begin{definition}[Computation Latency] We define the \emph{computation latency}, denoted by $D$, as the average amount of time spent in the Map phase. $\hfill\Diamond$ \end{definition} After the Map phase, the job of computing the output vectors ${\bf y}_1\ldots,{\bf y}_N$ is continued \emph{exclusively} over the servers in ${\cal Q}$. The final computations of the output vectors are distributed uniformly across the servers in ${\cal Q}$. We denote the set of indices of the output vectors assigned to Server $k$ by ${\cal W}_k$, where $\{{\cal W}_k: k\in {\cal Q}\}$ satisfy 1) ${\cal W}_k \cap {\cal W}_{k'} = \emptyset, \; \forall k \neq k'$, and 2) $|{\cal W}_k| = N/|{\cal Q}|\in \mathbb{N}, \; \forall k \in {\cal Q}$.\footnote{We assume that $N \gg K$, and that $|{\cal Q}|$ divides $N$ for all ${\cal Q} \subseteq \{1,\ldots,K\}$.} \noindent {\bf Shuffle Phase:} The goal of the Shuffle phase is to exchange the intermediate values calculated in the Map phase, to help each server recover the output vectors it is responsible for. To do this, every server $k$ in ${\cal Q}$ generates a message $X_k$ from the locally computed intermediate vectors ${\bf z}_{1,k},\ldots,{\bf z}_{N,k}$ through an encoding function $\phi_k$, i.e., $X_k = \phi_k\left({\bf z}_{1,k},\ldots,{\bf z}_{N,k}\right)$, such that upon receiving all messages $\{X_k: k \in {\cal Q}\}$, every server $k \in {\cal Q}$ can recover the output vectors in ${\cal W}_k$. We assume that the servers are connected by a shared bus link. After generating $X_k$, Server~$k$ multicasts $X_k$ to all the other servers in ${\cal Q}$. \begin{definition}[Communication Load] We define the \emph{communication load}, denoted by $L$, as the average total number of bits in all messages $\{X_k: k \in {\cal Q}\}$, normalized by $mT$ (i.e., the total number of bits in an output vector). $\hfill\Diamond$ \end{definition} \noindent {\bf Reduce Phase:} The output vectors are re-constructed distributedly in the Reduce phase. Specifically, Server $k$, $k \in {\cal Q}$, uses the locally computed vectors ${\bf z}_{1,k},\ldots,{\bf z}_{N,k}$ and the received multicast messages $\{X_k: k \in {\cal Q}\}$ to recover the output vectors with indices in ${\cal W}_k$ via a decoding function $\psi_k$, i.e., \begin{align} \{{\bf y}_j: j \in {\cal W}_k\} = \psi_k({\bf z}_{1,k},\ldots,{\bf z}_{N,k},\{X_k: k \in {\cal Q}\}). \end{align} For such a distributed computing system, we say that a latency-load pair $(D,L) \in \mathbb{R} ^2$ is \emph{achievable} if there exist a storage design $\{{\bf E}_k\}_{k=1}^K$, a Map phase computation with latency $D$, and a shuffling scheme with communication load $L$, such that all output vectors can be successfully reduced. \begin{definition} We define the latency-load region as the closure of the set of all achievable $(D,L)$ pairs. $\hfill \Diamond$ \end{definition} \subsection{Illustrating Example}\label{sec:illustrate-example} In order to clarify the formulation, we use the following simple example to illustrate the latency-load pairs achieved by the two coded approaches discussed in Section~\ref{sec:intro}. We consider a matrix ${\bf A}$ consisting of $m=12$ rows ${\bf a}_1,\ldots,{\bf a}_{12}$. We have $N=4$ input vectors ${\bf x}_1,\ldots,{\bf x}_4$, and the computation is performed on $K=4$ servers, each with a storage size $\mu =\frac{1}{2}$.
We assume that the Map latency $S_k$, $k=1,\ldots,4$, has a shifted-exponential distribution function \begin{equation}\label{eq:dis} F_{S_k}(t) = 1-e^{-(\frac{t}{\mu N}-1)}, \; \forall t \geq \mu N, \end{equation} and by e.g.,~\cite{arnold1992first}, the average latency for the fastest $q$, $1\leq q \leq 4$, servers to finish the Map computations is \begin{equation} D(q)=\mathbb{E}\{S_{(q)}\} = \mu N\Big(1 + \sum_{j=K-q+1}^{K} \tfrac{1}{j}\Big). \end{equation} \begin{figure}[htbp] \centering \subfigure[Minimum Bandwidth Code. Every row of ${\bf A}$ is multiplied with the input vectors twice. For $k =1,2,3,4$, Server $k$ reduces the output vector ${\bf y}_k$. In the Shuffle phase, each server multicasts $3$ bit-wise XORs, denoted by $\oplus$, of the calculated intermediate values, each of which is simultaneously useful for two other servers. \vspace{-2mm}]{\includegraphics[width=0.48\textwidth]{coded_shuffle.pdf} \label{fig:shuffle}} \vspace{-1.5mm} \subfigure[Minimum Latency Code. ${\bf A}$ is encoded into 24 coded rows ${\bf c}_1\ldots,{\bf c}_{24}$. Server 1 and 3 finish their Map computations first. They then exchange enough number (6 for each output vector) of intermediate values to reduce ${\bf y}_1, {\bf y}_2$ at Server~1 and ${\bf y}_3, {\bf y}_4$ at Server~3.]{\includegraphics[width=0.48\textwidth]{coded_map.pdf} \label{fig:map}} \caption{Illustration of the Minimum Bandwidth Code in~\cite{li2016fundamental} and the Minimum Latency Code in~\cite{lee2015speeding}.} \label{fig:extreme} \vspace{-2.5mm} \end{figure} \noindent {\bf Minimum Bandwidth Code~\cite{li2016fundamental}.} The Minimum Bandwidth Code in~\cite{li2016fundamental} repeatedly stores each row of ${\bf A}$ at $\mu K$ servers with a particular pattern, such that in the Shuffle phase, $\mu K$ required intermediate values can be delivered with a single coded multicast message, which results in a coding gain of $\mu K$. We illustrate such coding technique in Fig.~\ref{fig:shuffle}. As shown in Fig.~\ref{fig:shuffle}, a Minimum Bandwidth Code repeats the multiplication of each row of ${\bf A}$ with all input vectors ${\bf x}_1,\ldots,{\bf x}_4$, $\mu K=2$ times across the $4$ servers, e.g., ${\bf a}_1$ is multiplied at Server~1 and~2. The Map phase continues until all servers have finished their Map computations, achieving a computation latency $D(4)=2\times(1+\sum_{j=1}^4 \frac{1}{j})=\frac{37}{6}$. For $k=1,2,3,4$, Server $k$ will be reducing output vector ${\bf y}_k$. In the Shuffle phase, as shown in Fig.~\ref{fig:shuffle}, due to the specific repetition of Map computations, every server multicasts $3$ bit-wise XORs, each of which is simultaneously useful for two other servers. For example, upon receiving ${\bf a}_1{\bf x}_3 \oplus {\bf a}_3{\bf x}_2$ from Server 1, Server 2 can recover $ {\bf a}_3{\bf x}_2$ by canceling ${\bf a}_1{\bf x}_3$ and Server 3 can recover $ {\bf a}_1{\bf x}_3$ by canceling ${\bf a}_3{\bf x}_2$. Similarly, every server decodes the needed values by canceling the interfering values using its local Map results. The Minimum Bandwidth Code achieves a communication load $L = 3 \times 4/12=1$. The Minimum Bandwidth Code can be viewed as a specific type of network coding~\cite{ahlswede2000network}, or more precisely index coding~\cite{birk2006coding,bar2011index}, in which the key idea is to design ``side information'' at the servers (provided by the Map results), enabling multicasting opportunities in the Shuffle phase to minimize the communication load. 
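The latency values used in this example follow directly from the order-statistics formula above; as a quick check (exact fraction arithmetic, our own snippet):
\begin{verbatim}
from fractions import Fraction

def D(q, K=4, mu_N=2):
    """E{S_(q)} = mu*N*(1 + sum_{j=K-q+1}^{K} 1/j), shifted exponential."""
    return mu_N * (1 + sum(Fraction(1, j) for j in range(K - q + 1, K + 1)))

print(D(4))  # 37/6: wait for all servers (Minimum Bandwidth Code)
print(D(2))  # 19/6: wait for the fastest 2 (Minimum Latency Code, below)
\end{verbatim}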
\noindent {\bf Minimum Latency Code~\cite{lee2015speeding}.} The Minimum Latency Code in~\cite{lee2015speeding} uses MDS codes to generate redundant Map computations, and assigns the coded computations across many servers. This type of coding takes advantage of the abundance of servers, so that one can terminate the Map phase as soon as enough coded computations are performed across the network, without needing to wait for the remaining straggling servers. We illustrate this coding technique in Fig.~\ref{fig:map}. For this example, a Minimum Latency Code first has each server $k$, $k=1,\ldots,4$, independently and randomly generate $6$ random linear combinations of the rows of ${\bf A}$, denoted by ${\bf c}_{6(k-1)+1},\ldots,{\bf c}_{6(k-1)+6}$ (see Fig.~\ref{fig:map}). We note that $\{{\bf c}_1,\ldots,{\bf c}_{24}\}$ is a $(24,12)$ MDS code of the rows of ${\bf A}$. Therefore, for any subset ${\cal D} \subseteq \{1,\ldots,24\}$ of size $|{\cal D}|=12$, using the intermediate values $\{{\bf c}_i{\bf x}_j: i \in {\cal D}\}$ one can recover the output vector ${\bf y}_j$. The Map phase terminates once the fastest $2$ servers have finished their computations (e.g., Server~1 and~3), achieving a computation latency $D(2)\!=\! 2\! \times \!(1+\frac{1}{3}+\frac{1}{4})\!=\!\frac{19}{6}$. Then Server~1 continues to reduce ${\bf y}_1$ and ${\bf y}_2$, and Server~3 continues to reduce ${\bf y}_3$ and ${\bf y}_4$. As illustrated in Fig.~\ref{fig:map}, Servers~1 and~3 each unicast the intermediate values that they have calculated and that the other server needs to complete the computation, achieving a communication load $L \!=\! 6\! \times \! 4/12\!=\!2$. From the above descriptions, we note that the Minimum Bandwidth Code uses about twice the time in the Map phase compared with the Minimum Latency Code, and achieves half of the communication load in the Shuffle phase. They represent the two end points of a general latency-load tradeoff characterized in the next section. \section{Main Results} The main results of the paper are 1) a characterization of a set of achievable latency-load pairs, obtained by developing a unified coded framework, and 2) an outer bound on the latency-load region. They are stated in the following two theorems. \vspace{-1mm} \begin{theorem} For a distributed matrix multiplication problem of computing $N$ output vectors using $K$ servers, each with a storage size $\mu \geq \frac{1}{K}$, the latency-load region contains the lower convex envelope of the points \begin{align} \{(D(q),L(q)): q =\lceil \tfrac{1}{\mu}\rceil,\ldots,K\},\label{eq:pair} \end{align} in which \begin{align} D(q) &= \mathbb{E}\{S_{(q)}\} = \mu N g(K,q),\label{eq:latency}\\ L(q) &= N\sum_{j=s_q}^{\lfloor \mu q \rfloor} \tfrac{B_j}{j} + N\min\big\{1-\bar{\mu}-\sum_{j=s_q}^{\lfloor \mu q \rfloor} B_j, \tfrac{B_{s_q-1}}{s_q-1}\big\}, \label{eq:load} \end{align} where $S_{(q)}$ is the $q$th smallest latency of the $K$ i.i.d. latencies $S_1,\ldots,S_K$ with some distribution $F$ to compute the Map functions in (\ref{eq:map}), $g(K,q)$ is a function of $K$ and $q$ computed from $F$, $\bar{\mu} \triangleq \frac{\lfloor \mu q\rfloor}{q}$, $B_j \triangleq \frac{{q-1 \choose j}{K-q \choose \lfloor \mu q \rfloor-j}}{\frac{q}{K} {K \choose \lfloor \mu q \rfloor}}$, and $s_q \triangleq \inf \{s: \sum_{j=s}^{\lfloor \mu q \rfloor} B_j \leq 1-\bar{\mu}\}$. \end{theorem}
We prove Theorem~1 in Section~\ref{sec:scheme}, where we present a unified coded scheme that jointly designs the storage and the data shuffling, achieving the latency in (\ref{eq:latency}) and the communication load in (\ref{eq:load}). \begin{remark} The Minimum Latency Code and the Minimum Bandwidth Code correspond to $q = \lceil \frac{1}{\mu}\rceil$ and $q=K$, and achieve the two end points $(\mathbb{E}\{S_{(\lceil \frac{1}{\mu}\rceil)}\}, N-N/\lceil \frac{1}{\mu}\rceil)$ and $(\mathbb{E}\{S_{(K)}\}, N\frac{1-\lfloor \mu K\rfloor/K}{\lfloor \mu K\rfloor})$ respectively. $\hfill \square$ \end{remark} \begin{figure}[htbp] \centering \includegraphics[width=0.3\textwidth]{region.pdf} \caption{Comparison of the latency-load pairs achieved by the proposed scheme with the outer bound, for computing $N=180$ output vectors using $K=18$ servers each with a storage size $\mu=1/3$, assuming the distribution function of the Map time in (\ref{eq:dis}).} \vspace{-2mm} \label{fig:region} \end{figure} \begin{remark} We numerically evaluate in Fig.~\ref{fig:region} the latency-load pairs achieved by the proposed coded framework, for computing $N\!=\!180$ output vectors using $K\!=\!18$ servers each with a storage size $\mu \!=\!1/3$. The achieved tradeoff exhibits an approximately inverse-linear relationship between the latency and the load. For instance, doubling the latency from 120 to 240 reduces the communication load from 43 to 23, i.e., by a factor of 1.87.$\hfill \square$ \end{remark} \begin{remark} The key idea in achieving $D(q)$ and $L(q)$ in Theorem~1 is to design the concatenation of the MDS code with the repetitive executions of the Map computations, in order to take advantage of both the Minimum Latency Code and the Minimum Bandwidth Code. More specifically, we first generate $\frac{K}{q}m$ MDS-coded rows of ${\bf A}$, and then store each of them $\lfloor \mu q\rfloor$ times across the $K$ servers in a specific pattern. As a result, any subset of $q$ servers has a sufficient amount of intermediate results to reduce the output vectors, and we end the Map phase as soon as the fastest $q$ servers finish their Map computations, achieving the latency in (\ref{eq:latency}). We also exploit coded multicasting in the Shuffle phase to reduce the communication load. In the load expression (\ref{eq:load}), $B_j$, $j \leq \lfloor \mu q \rfloor$, represents the (normalized) number of coded rows of ${\bf A}$ repeatedly stored/computed at $j$ servers. By multicasting coded packets simultaneously useful for $j$ servers, $B_j$ intermediate values can be delivered to a server with a communication load of $\frac{B_j}{j}$, achieving a coding gain of $j$. We greedily utilize the coding opportunities with a larger coding gain until we get close to satisfying the demand of each server, which accounts for the first term in (\ref{eq:load}). The second term then results from two follow-up strategies: 1) communicating the rest of the demands uncodedly, or 2) continuing coded multicasting with a smaller coding gain (i.e., $j=s_q-1$), which may, however, deliver more than what is needed for reduction. $\hfill \square$ \end{remark}
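As a numerical aid (our own sketch, not part of the original analysis), the expressions of Theorem~1 can be traced as follows under the shifted-exponential Map-time model of (\ref{eq:dis}); the guards for the degenerate index $s_q \leq 1$ reflect our reading of the load expression:
\begin{verbatim}
import math
from math import comb

def achievable_pair(q, K, mu, N):
    """(D(q), L(q)) from Theorem 1, shifted-exponential g(K, q)."""
    D = mu * N * (1 + sum(1.0 / j for j in range(K - q + 1, K + 1)))
    r = math.floor(mu * q)                  # floor(mu*q)
    mu_bar = r / q
    B = [comb(q - 1, j) * comb(K - q, r - j) / (q / K * comb(K, r))
         for j in range(r + 1)]             # B_j, j = 0..floor(mu*q)
    # s_q: smallest s with sum_{j=s}^{r} B_j <= 1 - mu_bar
    s_q = next(s for s in range(r + 2)
               if sum(B[s:r + 1]) <= 1 - mu_bar)
    tail = sum(B[s_q:r + 1])
    L = N * sum(B[j] / j for j in range(max(s_q, 1), r + 1))
    if s_q >= 2:                            # smaller-gain multicast possible
        L += N * min(1 - mu_bar - tail, B[s_q - 1] / (s_q - 1))
    else:                                   # leftover demands sent uncoded
        L += N * (1 - mu_bar - tail)
    return D, L

K, mu, N = 18, 1 / 3, 180                   # the setting of the figure above
for q in range(math.ceil(1 / mu), K + 1):
    print(q, achievable_pair(q, K, mu, N))
\end{verbatim}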
\vspace{-2mm} \begin{theorem} The latency-load region is contained in the lower convex envelope of the points \begin{align} \{(D(q),\bar{L}(q)): q =\lceil \tfrac{1}{\mu}\rceil,\ldots,K\}, \end{align} in which $D(q)$ is given by (\ref{eq:latency}) and \begin{align} \bar{L}(q) = N\underset{t=1,\ldots,q-1}{\max} \frac{1-\min\{t\mu, 1\}}{\lceil \tfrac{q}{t}\rceil (q-t)}q.\label{eq:lower} \end{align} \end{theorem} We prove Theorem~2 in Section~V, by deriving an information-theoretic lower bound on the minimum communication load required for a given computation latency, using any storage design and data shuffling scheme. \vspace{-1.3mm} \begin{remark} We numerically compare the outer bound in Theorem~2 and the achieved inner bound in Theorem~1 in Fig.~\ref{fig:region}, from which we make the following observations. \vspace{-1.2mm} \begin{itemize}[leftmargin=4mm] \item At the minimum latency point, i.e., when $q=1 /\mu=3$ servers finish the Map computations, the proposed coded scheme achieves $1.33 \times$ the lower bound on the minimum communication load. In general, when $q= 1/\mu \in \mathbb{N}$, the lower bound in Theorem~2 gives $\bar{L}(\frac{1}{\mu}) = N/ \lceil \frac{q}{t}\rceil |_{t=q-1} = N/\lceil \frac{1}{1-\mu}\rceil = \frac{N}{2}$. The proposed coded scheme, or the Minimum Latency Code in this case, achieves the load $L(\frac{1}{\mu}) =N(1-\mu)$. Thus the proposed scheme always achieves the lower bound to within a factor of 2 at the minimum latency point. \item At the point with the maximum latency, i.e., when all $K=18$ servers finish the Map computations, the proposed coded scheme achieves $2.67 \times$ the lower bound on the minimum communication load. In general, for $q=K$ and $\mu K \in \mathbb{N}$, we demonstrate in the Appendix that the proposed coded scheme, or the Minimum Bandwidth Code in this case, achieves a communication load $L(K) = N(1-\mu)/(\mu K)$ to within a factor of $3+\sqrt{5}$ of the lower bound $\bar{L}(K)$. \item For intermediate latencies from 70 to 270, the communication load achieved by the proposed scheme is within a multiplicative gap of at most $4.2 \times$ from the lower bound. In general, a complete characterization of the latency-load region (or an approximation to within a constant gap for all system parameters) remains open.$\hfill \square$ \end{itemize} \end{remark} \section{Proposed Coded Framework}\label{sec:scheme} In this section, we prove Theorem~1 by proposing and analyzing a general coded framework that achieves the latency-load pairs in (\ref{eq:pair}). We first demonstrate the key ideas of the proposed scheme through the following example, and then give the general description of the scheme. \subsection{Example: $m=20$, $N=12$, $K=6$ and $\mu =\frac{1}{2}$.} We have a problem of multiplying a matrix ${\bf A} \in \mathbb{F}_{2^T}^{m \times n}$ of $m=20$ rows with $N=12$ input vectors ${\bf x}_1,\ldots,{\bf x}_{12}$ to compute $12$ output vectors ${\bf y}_1={\bf A}{\bf x}_1\ldots,{\bf y}_{12}={\bf A}{\bf x}_{12}$, using $K=6$ servers, each with a storage size $\mu =\frac{1}{2}$. We assume that we can afford to wait for $q=4$ servers to finish their computations in the Map phase, and we describe the proposed storage design and shuffling scheme. \noindent {\bf Storage Design.} As illustrated in Fig.~\ref{fig:example-storage}, we first independently generate $30$ random linear combinations ${\bf c}_1,\ldots,{\bf c}_{30} \in \mathbb{F}_{2^T}^n$ of the $20$ rows of ${\bf A}$, achieving a $(30,20)$ MDS code of the rows of ${\bf A}$.
Then we partition these coded rows ${\bf c}_1,\ldots,{\bf c}_{30}$ into $15$ batches each of size $2$, and store every batch of coded rows at a unique pair of servers. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{example-storage.pdf} \caption{Storage design for the case where the Map phase is terminated once $4$ servers have finished their computations.} \label{fig:example-storage} \end{figure} WLOG, due to the symmetry of the storage design, we assume that Servers $1$, $2$, $3$ and $4$ are the first $4$ servers that finish their Map computations. Then we assign the Reduce tasks such that Server $k$ reduces the output vectors ${\bf y}_{3(k-1)+1}$, ${\bf y}_{3(k-1)+2}$ and ${\bf y}_{3(k-1)+3}$, for all $k \in \{1,\ldots,4\}$. After the Map phase, Server~1 has computed the intermediate values $\{{\bf c}_1{\bf x}_j, \ldots,{\bf c}_{10}{\bf x}_j: j=1,\ldots,12\}$. For Server~1 to recover ${\bf y}_1 = {\bf A}{\bf x}_1$, it needs any $10$ of the intermediate values ${\bf c}_i{\bf x}_1$ with $i \in \{11,\ldots,30\}$ from Servers $2$, $3$ and $4$ in the Shuffle phase. Similar data demands hold for all 4 servers and the output vectors they are reducing. Therefore, the goal of the Shuffle phase is to exchange these needed intermediate values to accomplish successful reductions. \noindent {\bf Coded Shuffle.} We first group the 4 servers into 4 subsets of size 3 and perform coded shuffling within each subset. We illustrate the coded shuffling scheme for Servers $1$, $2$ and $3$ in Fig.~\ref{fig:example-shuffle}. Each server multicasts $3$ bit-wise XORs, denoted by $\oplus$, of the locally computed intermediate values to the other two. The intermediate values used to create the multicast messages are the ones known exclusively at two servers and needed by the remaining one. After receiving $2$ multicast messages, each server recovers $6$ needed intermediate values. For instance, Server~1 recovers ${\bf c}_{11}{\bf x}_1$, ${\bf c}_{11}{\bf x}_2$ and ${\bf c}_{11}{\bf x}_3$ by canceling ${\bf c}_{2}{\bf x}_7$, ${\bf c}_{2}{\bf x}_8$ and ${\bf c}_{2}{\bf x}_9$ respectively, and then recovers ${\bf c}_{12}{\bf x}_1$, ${\bf c}_{12}{\bf x}_2$ and ${\bf c}_{12}{\bf x}_3$ by canceling ${\bf c}_{4}{\bf x}_4$, ${\bf c}_{4}{\bf x}_5$ and ${\bf c}_{4}{\bf x}_6$ respectively. \begin{figure}[htbp] \centering \includegraphics[width=0.4\textwidth]{example-shuffle.pdf} \caption{Multicasting 9 coded intermediate values across Servers~1, 2 and 3. Similar coded multicast communications are performed for the other 3 subsets of 3 servers.} \label{fig:example-shuffle} \end{figure} Similarly, we perform the above coded shuffling in Fig.~\ref{fig:example-shuffle} for the other $3$ subsets of $3$ servers. After coded multicasting within the $4$ subsets of $3$ servers, each server recovers $18$ needed intermediate values ($6$ for each of the output vectors it is reducing). As mentioned before, since each server needs a total of $3\times (20-10)=30$ intermediate values to reduce the 3 assigned output vectors, it needs another $30-18=12$ after decoding all multicast messages. We satisfy the residual data demands by simply having the servers unicast enough (i.e., $12 \times 4=48$) intermediate values for reduction. Overall, $9\times 4+48= 84$ (possibly coded) intermediate values are communicated, achieving a communication load of $L= 4.2$.
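The load accounting in this example is easy to check mechanically. The following Python sketch (our illustration only, not part of the proposed scheme; the counts follow the example above, with Servers $1$--$4$ assumed to finish first) recomputes the numbers of coded and uncoded transmissions:
\begin{verbatim}
from itertools import combinations

m, N, K, q = 20, 12, 6, 4
Q = {1, 2, 3, 4}                       # the q fastest servers

# 30 coded rows in 15 batches of size 2; batch B_T is stored at the pair T.
rows_by_overlap = {0: 0, 1: 0, 2: 0}   # |T intersect Q| -> number of coded rows
for T in combinations(range(1, K + 1), 2):
    rows_by_overlap[len(set(T) & Q)] += 2
assert rows_by_overlap == {0: 2, 1: 16, 2: 12}

# Each server in Q computes 10 rows locally, so it still needs
# m - 10 = 10 values for each of its N/q = 3 output vectors.
needed_per_server = (N // q) * (m - 10)        # 30

# Coded multicast within the C(4,3) = 4 subsets of size 3: each subset
# exchanges 9 coded values; every server lies in 3 of these subsets and
# recovers 6 needed values per subset.
coded = 4 * 9                                  # 36 coded values
recovered_per_server = 3 * 6                   # 18

# The residual demands are served by plain unicast.
uncoded = q * (needed_per_server - recovered_per_server)   # 48
print((coded + uncoded) / m)                   # load L = 84/20 = 4.2
\end{verbatim}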
\subsection{General Scheme} We first describe the storage design, the Map phase computation and the data shuffling scheme that achieve the latency-load pairs $(D(q),L(q))$ in (\ref{eq:pair}), for all $q \in \{\lceil \frac{1}{\mu} \rceil, \ldots,K\}$. Given these achieved pairs, we can ``memory share'' across them to achieve their lower convex envelope as stated in Theorem~1. For ease of exposition, we assume that $\mu q \in \mathbb{N}$. Otherwise we can replace $\mu$ with $\bar{\mu}=\frac{\lfloor \mu q \rfloor}{q}$, and apply the proposed scheme for a storage size of $\bar{\mu}$. \noindent {\bf Storage Design.} We first use a $(\frac{K}{q}m,m)$ MDS code to encode the $m$ rows of matrix ${\bf A}$ into $\frac{K}{q}m$ coded rows ${\bf c}_1,\ldots,{\bf c}_{\frac{K}{q}m}$ (e.g., $\frac{K}{q}m$ random linear combinations of the rows of ${\bf A}$). Then as shown in Fig.~\ref{fig:storage}, we evenly partition the $\frac{K}{q}m$ coded rows into ${K \choose \mu q}$ disjoint batches, each containing a subset of $\frac{m}{\frac{q}{K} {K \choose \mu q}}$ coded rows. \footnote{We focus on matrix multiplication problems for large matrices, and assume that $m \gg \frac{q}{K} {K \choose \mu q}$, for all $q \in \{\lceil\frac{1}{\mu}\rceil,\ldots,K\}$.} Each batch, denoted by ${\cal B}_{\cal T}$, is labelled by a unique subset $\mathcal{T} \subset \{1,\ldots,K\}$ of size $|{\cal T}|=\mu q$. That is, \begin{align} \{1,\ldots,\tfrac{K}{q}m\} = \bigcup\left\{\mathcal{B}_{\cal T}: {\cal T} \subset \{1,\ldots,K\}, |{\cal T}|=\mu q \right\}. \end{align} Server~$k$, $k \in \{1,\ldots,K\}$, stores the coded rows in $\mathcal{B}_{\cal T}$ as the rows of ${\bf U}_k$ if $k \in \mathcal{T}$. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{storage.pdf} \caption{General MDS coding and storage design.} \label{fig:storage} \end{figure} In the above example, $q=4$, and $\frac{K}{q}m= \frac{6}{4} \times 20=30$ coded rows of ${\bf A}$ are partitioned into ${K \choose \mu q}={6 \choose 2}=15$ batches each containing $\frac{30}{15}=2$ coded rows. Every server is contained in $5$ subsets of size two, thus storing $5 \times 2=10$ coded rows of ${\bf A}$. \noindent {\bf Map Phase Execution.} Each server computes the inner products between each of the locally stored coded rows of ${\bf A}$ and each of the input vectors, i.e., Server $k$ computes ${\bf c}_i{\bf x}_j$ for all $j=1,\ldots,N$, and all $i \in \{{\cal B}_{\cal T}: k \in {\cal T}\}$. We wait for the fastest $q$ servers to finish their Map computations before halting the Map phase, achieving a computation latency $D(q)$ in (\ref{eq:latency}). We denote the set of indices of these servers as ${\cal Q}$. The computation then proceeds exclusively on the $q$ servers in ${\cal Q}$, each of which is assigned to reduce $\frac{N}{q}$ out of the $N$ output vectors ${\bf y}_1={\bf A}{\bf x}_1,\ldots,{\bf y}_N={\bf A}{\bf x}_N$. For a feasible shuffling scheme to exist such that the Reduce phase can be successfully carried out, every subset of $q$ servers (since we cannot predict which $q$ servers will finish first) should have collectively stored at least $m$ distinct coded rows ${\bf c}_i$ for $i \in \{1,\ldots,\frac{K}{q}m\}$. Next, we explain how the proposed storage design meets this requirement. First, the $q$ servers in ${\cal Q}$ collectively provide a storage size equivalent to $\mu q m$ rows.
Then since each coded row is stored by $\mu q$ out of all $K$ servers, it can be stored by at most $\mu q$ servers in ${\cal Q}$, and thus the servers in ${\cal Q}$ collectively store at least $\frac{\mu q m}{\mu q}=m$ distinct coded rows. \noindent {\bf Coded Shuffle.} For ${\cal S}\subset {\cal Q}$ and $k \in {\cal Q} \backslash {\cal S}$, we denote the set of intermediate values needed by Server $k$ and known \emph{exclusively} by the servers in $\mathcal{S}$ as $\mathcal{V}_{\mathcal{S}}^{k}$. More formally: \begin{equation}\label{eq:V} \mathcal{V}_{\mathcal{S}}^{k} \triangleq \{{\bf c}_i{\bf x}_j: j \in {\cal W}_k, i \in \{{\cal B}_{\cal T}: {\cal T} \cap {\cal Q}={\cal S}\}\}. \end{equation} Due to the proposed storage design, for a particular ${\cal S}$ of size $j$, $\mathcal{V}_{\mathcal{S}}^{k}$ contains $\frac{N}{q}\cdot\frac{{K-q \choose \mu q-j}m}{\frac{q}{K} {K \choose \mu q}}$ intermediate values. In the above example, we have $\mathcal{V}_{\{2,3\}}^1=\{{\bf c}_{11}{\bf x}_j,{\bf c}_{12}{\bf x}_j: j=1,2,3\}$, $\mathcal{V}_{\{1,3\}}^2=\{{\bf c}_{3}{\bf x}_j,{\bf c}_{4}{\bf x}_j: j=4,5,6\}$, and $\mathcal{V}_{\{1,2\}}^3=\{{\bf c}_{1}{\bf x}_j,{\bf c}_{2}{\bf x}_j: j=7,8,9\}$. In the Shuffle phase, servers in ${\cal Q}$ create and multicast coded packets that are simultaneously useful for multiple other servers, until every server in ${\cal Q}$ recovers at least $m$ intermediate values for each of the output vectors it is reducing. The proposed shuffling scheme is \emph{greedy} in the sense that every server in ${\cal Q}$ will always try to multicast coded packets simultaneously useful for the largest number of servers. The proposed shuffle scheme proceeds as follows. For each $j\!=\!\mu q, \mu q-1,\ldots,s_q$, where $s_q \!\triangleq\! \inf \{s: \! \sum_{j=s}^{\mu q} \! \frac{{q-1 \choose j}{K-q \choose \mu q-j}}{\frac{q}{K} {K \choose \mu q}} \!\leq\! 1\!-\!\mu\}$, and every subset $\mathcal{S} \!\subseteq\! {\cal Q}$ of size $j\!+\!1$: \begin{enumerate}[leftmargin=5mm] \item For each $k \in \mathcal{S}$, we evenly and arbitrarily split $\mathcal{V}_{\mathcal{S}\backslash \{k\}}^{k}$ into $j$ disjoint segments $\mathcal{V}^{k}_{\mathcal{S}\backslash \{k\}} \!=\! \{ \mathcal{V}_{\mathcal{S} \backslash \{k\},i}^{k}\!:\! i \in {\cal S} \backslash \{k\}\}$, and associate the segment $\mathcal{V}_{\mathcal{S} \backslash \{k\},i}^{k}$ with the server $i \in {\cal S} \backslash \{k\}$. \item Server $i$, $i \in \mathcal{S}$, multicasts the bit-wise XOR, denoted by $\oplus$, of all the segments associated with it in ${\cal S}$, i.e., Server $i$ multicasts $ \underset{k \in \mathcal{S} \backslash \{i\}}{\oplus} \mathcal{V}^{k}_{\mathcal{S}\backslash \{k\},i}$ to the other servers in ${\cal S} \backslash \{i\}$. \end{enumerate} For every pair of servers $k$ and $i$ in ${\cal S}$, since Server $k$ has computed locally the segments $\mathcal{V}^{k'}_{\mathcal{S}\backslash \{k'\},i}$ for all $k' \in \mathcal{S} \backslash \{i,k\}$, it can cancel them from the message $\underset{k' \in \mathcal{S} \backslash \{i\}}{\oplus} \mathcal{V}^{k'}_{\mathcal{S}\backslash \{k'\},i}$ sent by Server $i$, and recover the intended segment $\mathcal{V}^{k}_{\mathcal{S}\backslash \{k\},i}$. For each $j$ in the above coded shuffling scheme, each server in ${\cal Q}$ recovers ${q-1 \choose j}\frac{{K-q \choose \mu q-j}m}{\frac{q}{K} {K \choose \mu q}}$ intermediate values for each of the output vectors it is reducing.
Therefore, $j=s_q+1$ is the smallest size of the subsets in which the above coded multicasting needs to be performed before enough intermediate values are delivered for reduction. In each subset ${\cal S}$ of size $j$, since each server $i \in {\cal S}$ multicasts a coded segment of size $\frac{|{\cal V}^k_{{\cal S} \backslash \{k\}}|}{j}$ for some $k \neq i$, the total communication load so far, for $B_j = \frac{{q-1 \choose j}{K-q \choose \mu q-j}}{\frac{q}{K} {K \choose \mu q}}$, is \begin{align} \sum_{j=s_q}^{\mu q}{q \choose j+1}\frac{j+1}{j}\cdot \frac{N}{q} \cdot \frac{{K-q \choose \mu q-j}}{\frac{q}{K} {K \choose \mu q}}=\sum_{j=s_q}^{\mu q} N \frac{B_j}{j}. \end{align} Next, we can complete the data shuffling in one of two ways. The first approach is to have the servers in ${\cal Q}$ exchange uncoded intermediate values, until every server has exactly $m$ intermediate values for each of the output vectors it is responsible for. Using this approach, we will have a total communication load of \begin{align} L_1=\sum_{j=s_q}^{\mu q} N \tfrac{B_j}{j} + N(1-\mu-\sum_{j=s_q}^{\mu q}B_j). \end{align} The second approach is to continue the above 2 steps for $j=s_q-1$. Using this approach, we will have a total communication load of $L_2=\sum_{j=s_q-1}^{\mu q} N \frac{B_j}{j}$. Then we take the approach with the smaller communication load, and achieve $L(q)=\min\{L_1,L_2\}$. \begin{remark} The ideas of efficiently creating and exploiting coded multicasting opportunities have been introduced in caching problems~\cite{maddah2014fundamental,maddah2013decentralized,ji2014fundamental}. In this section, we illustrated how to create and utilize such coding opportunities in distributed computing to slash the communication load in the face of straggling servers. $\hfill \square$ \end{remark} \section{Converse}\label{sec:converse} In this section, we prove the outer bound on the latency-load region in Theorem~2. We start by considering a distributed matrix multiplication scheme that stops the Map phase when $q$ servers have finished their computations. For such a scheme, as given by (\ref{eq:latency}), the computation latency $D(q)$ is the expected value of the $q$th order statistic of the Map computation times at the $K$ servers. WLOG, we can assume that Servers $1,\ldots,q$ finish their Map computations first, and they will be responsible for reducing the $N$ output vectors ${\bf y}_1,\ldots,{\bf y}_N$. To proceed, we first partition ${\bf y}_1,\ldots,{\bf y}_N$ into $q$ groups ${\cal G}_1,\ldots,{\cal G}_q$ each of size $N/q$, and define the \emph{output assignment} \begin{align} {\cal A} = \left({\cal W}_1^{\cal A},{\cal W}_2^{\cal A},\ldots,{\cal W}_q^{\cal A}\right), \end{align} where ${\cal W}_k^{\cal A}$ denotes the group of output vectors reduced by Server $k$ in the output assignment ${\cal A}$. Next we choose an integer $t \in \{1,\ldots,q-1\}$, and consider the following $\lceil \frac{q}{t} \rceil$ output assignments, which are circular shifts of $\left({\cal G}_1,\ldots,{\cal G}_q\right)$ with step size $t$: \begin{equation}\label{eq:assign} \begin{aligned} \mathcal{A}_1 &= \left({\cal G}_1,{\cal G}_2,\ldots,{\cal G}_q\right),\\ \mathcal{A}_2 &= \left({\cal G}_{t+1},\ldots,{\cal G}_q, {\cal G}_1,\ldots, {\cal G}_t\right),\\ & \vdots\\ \mathcal{A}_{\lceil \frac{q}{t} \rceil} &= \left({\cal G}_{(\lceil\frac{q}{t} \rceil \!-\!1)t+1},\ldots,{\cal G}_q, {\cal G}_1,\ldots,{\cal G}_{(\lceil \frac{q}{t} \rceil-1) t}\right).
\end{aligned} \end{equation} \begin{remark}\label{independence} We note that by the Map computation in (\ref{eq:map}), at each server all the input vectors ${\bf x}_1,\ldots,{\bf x}_N$ are multiplied by the same matrix (i.e., ${\bf U}_k$ at Server~$k$). Therefore, for the same set of $q$ servers and their storage contents, a feasible data shuffling scheme for one of the above output assignments is also feasible for all other $\lceil \frac{q}{t} \rceil-1$ assignments by relabelling the output vectors. As a result, the minimum communication loads for all of the above output assignments are identical. $\hfill \square$ \end{remark} For a shuffling scheme admitting an output assignment ${\cal A}$, we denote the message sent by Server $k \in \{1,\ldots,q\}$ as $X_k^{\mathcal{A}}$, with a size of $R_{k}^{\mathcal{A}}mT$ bits. Now we focus on Servers $1,\ldots,t$ and consider the compound setting that includes all $\lceil \frac{q}{t} \rceil$ output assignments in (\ref{eq:assign}). We observe that, as shown in Fig.~\ref{fig:compound}, in this compound setting the first $t$ servers should be able to recover all output vectors $({\bf y}_1,\ldots,{\bf y}_N) = ({\cal G}_1,\ldots,{\cal G}_q)$ using their local computation results $\{{\bf U}_k{\bf x}_1,\ldots,{\bf U}_k{\bf x}_N:k=1,\ldots,t\}$ and the received messages in all the output assignments $\{X_k^{{\cal A}_1},\ldots,X_k^{{\cal A}_{\lceil \frac{q}{t}\rceil}}:k=t+1,\ldots,q\}$. Thus we have the following cut-set bound for the first $t$ servers. \begin{equation} \mathrm{rank} \left( \begin{bmatrix} {\bf U}_1 \\ {\bf U}_2 \\ \vdots \\ {\bf U}_{t} \end{bmatrix} \right) NT + \sum \limits_{j=1}^{\lceil \frac{q}{t}\rceil} \sum \limits_{k=t+1}^{K} R_{k}^{\mathcal{A}_j}mT \geq NmT. \end{equation} \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{compound.pdf} \caption{Cut-set of Servers $1,\ldots,t$ for the compound setting consisting of the $\lceil \frac{q}{t} \rceil$ output assignments in (\ref{eq:assign}).} \label{fig:compound} \end{figure} Next we consider $q$ subsets of servers each of size $t$: $\mathcal{N}_i \triangleq \{i, (i+1), \ldots, (i+t-1)\}$, $i = 1,\ldots,q$, where the addition is modulo $q$. Similarly, we have the following cut-set bound for ${\cal N}_i$: \begin{equation} \mathrm{rank} \left( \begin{bmatrix} {\bf U}_i \\ {\bf U}_{i+1} \\ \vdots \\ {\bf U}_{i+t-1} \end{bmatrix} \right) NT + \sum \limits_{j=1}^{\lceil \frac{q}{t}\rceil} \sum \limits_{k \notin \mathcal{N}_i} R_{k}^{\mathcal{A}_j}mT \geq NmT. \end{equation} Summing up these $q$ cut-set bounds, we have \begin{align} NT\! \sum \limits_{i=1}^q \mathrm{rank} \!\! \left(\! \begin{bmatrix} {\bf U}_i \\ {\bf U}_{i+1} \\ \vdots \\ {\bf U}_{i+t-1} \end{bmatrix} \!\right)& \!\! + \! \sum \limits_{i=1}^q \sum \limits_{j=1}^{\lceil \frac{q}{t}\rceil} \sum \limits_{k \notin \mathcal{N}_i} \!\!R_{k}^{\mathcal{A}_j}mT \geq qNmT, \\ \Rightarrow \sum \limits_{j=1}^{\lceil \frac{q}{t}\rceil} \sum \limits_{i=1}^q \sum \limits_{k \notin \mathcal{N}_i} R_{k}^{\mathcal{A}_j} \geq& qN-qN\min\{\mu t,1\},\\ \Rightarrow \lceil \tfrac{q}{t}\rceil (q-t)L \overset{(a)}{\geq}& (1-\min\{t\mu, 1\})qN, \label{eq:sumup} \end{align} where (a) results from the fact mentioned in Remark~\ref{independence} that the communication load is independent of the output assignment. Since (\ref{eq:sumup}) holds for all $t=1,\ldots,q-1$, we have \begin{align} L \geq \bar{L}(q) =N\underset{t=1,\ldots,q-1}{\max} \frac{1-\min\{t\mu, 1\}}{\lceil \tfrac{q}{t}\rceil (q-t)}q.
\end{align} Suppose that the Map phase terminates when $q$ servers have finished their computations with probability $P(q)$, for all $q \in \{\lceil \frac{1}{\mu}\rceil, \ldots,K\}$. Then the communication load for a latency $\mathbb{E}_{q}(D(q))$, which is a convex combination of $\{\mathbb{E}\{S_{(q)}\}: q=\lceil \frac{1}{\mu}\rceil, \ldots,K\}$, is lower bounded by $\mathbb{E}_{q}(\bar{L}(q))$, which is the same convex combination of $\{\bar{L}(q): q=\lceil \frac{1}{\mu}\rceil, \ldots,K\}$. Considering all distributions of $q$, we obtain the lower convex envelope of the points $\{(\mathbb{E}\{S_{(q)}\}, \bar{L}(q)): q=\lceil \frac{1}{\mu}\rceil, \ldots,K\}$ as an outer bound on the latency-load region.
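Both bounds are simple to evaluate numerically. The following Python sketch (our own illustration; it merely evaluates the closed-form expressions of Theorems~1 and~2, with $\mu$ replaced by $\lfloor \mu q\rfloor/q$ as in Section~\ref{sec:scheme}) reproduces the multiplicative gaps quoted in the remark after Theorem~2 for $N=180$, $K=18$ and $\mu=1/3$:
\begin{verbatim}
from math import comb, ceil, floor, inf

def achievable_load(N, K, mu, q):
    # L(q) of Theorem 1, via B_j, s_q, L_1 and L_2.
    t = floor(mu * q)                  # effective storage is t/q
    mu_b = t / q
    B = [comb(q - 1, j) * comb(K - q, t - j) / ((q / K) * comb(K, t))
         if j >= 1 else 0.0 for j in range(t + 1)]
    s = next(s for s in range(1, t + 1) if sum(B[s:]) <= 1 - mu_b)
    L1 = N * sum(B[j] / j for j in range(s, t + 1)) \
         + N * (1 - mu_b - sum(B[s:]))
    L2 = N * sum(B[j] / j for j in range(s - 1, t + 1)) if s > 1 else inf
    return min(L1, L2)

def lower_bound(N, K, mu, q):
    # \bar L(q) of Theorem 2.
    return N * max((1 - min(t * mu, 1)) * q / (ceil(q / t) * (q - t))
                   for t in range(1, q))

N, K, mu = 180, 18, 1 / 3
for q in (3, 9, 18):
    L, Lb = achievable_load(N, K, mu, q), lower_bound(N, K, mu, q)
    print(q, round(L, 1), round(Lb, 1), round(L / Lb, 2))
# q = 3:  120.0,  90.0, 1.33   (minimum latency point)
# q = 18:  20.0,   7.5, 2.67   (maximum latency point)
\end{verbatim}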
\section{Introduction} This paper can be divided into two parts, namely discrete and smooth isoperimetric-type inequalities. In the first part of this paper, we derive a series of sharp higher order discrete Wirtinger inequalities together with their stability results, and use them to obtain sharp bounds for the isoperimetric inequalities for polygons. There is already a large number of results on higher order Wirtinger type inequalities (see for example \cite{alzer1991converses, lunter1994new, milovanovic1997discrete}). However, up to now, the applications of Wirtinger inequalities to isoperimetric type inequalities seem to be restricted only to the first order case, and these inequalities involve only the area, the length (or the sum of the squares of the side lengths), and some special distances, but not the ``curvature'' or its higher ``derivatives''; see for example \cite{block1957discrete, tang1991discrete, zhang1997bonnesen}, or \cite{osserman1979bonnesen, Osserman1978, zhou2011some} for some analogous results in the smooth case. This is not surprising because, for a polygon, the area and the various lengths or distances involve only the zeroth and first order differences. It is therefore desirable to see what kind of higher order discrete Wirtinger inequalities can be constructed to obtain information about the geometry of a polygon, especially information involving more delicate geometric quantities such as the curvature. To this end, we are going to define the curvature of a polygon to be essentially the second order differences, and at the same time construct higher order Wirtinger inequalities which naturally contain the information about the curvature. Indeed, we obtain a family of inequalities ($I_m$) indexed by $m\in \mathbb N$, which involve derivatives up to the $m$-th order. There are two remarkable properties of these inequalities: 1. $I_m$ gives a lower bound for the isoperimetric deficit when $m$ is odd and an upper bound when $m$ is even. 2. Moreover, $I_{m+1}$ measures the stability of $I_m$, in the sense that it gives an upper bound for the deficit of $I_m$. Our approach also has the advantage that all the constants that appear are explicit and sharp. See e.g. \cite{indrei2016sharp}, \cite{indrei2015stability} for another approach based on the spectral theory for circulant matrices, with an implicit constant. For example, when $m=1$, the inequality $I_1$ (Theorem \ref{thm chakerian 2}) is the discrete version of Chakerian's sharpened isoperimetric inequality: \begin{align*} 2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right)\ge\left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}. \end{align*} The equality holds if and only if $P$ is a positively oriented regular $k$-gon. Here $S(P)$ is the sum of the squares of the side lengths of the $k$-sided polygon $P$ and $F(P)$ is the algebraic area enclosed by $P$. The vector $z$ denotes the position of the vertices and $\boldsymbol t$ denotes the ``tangent'' vectors at the vertices.
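As a quick sanity check, this inequality can also be verified numerically. The following Python sketch (our illustration only, not part of the proofs; the definitions of $S$, $F$ and $\boldsymbol t$ follow the conventions introduced below in Sections \ref{sec background} and \ref{sec geom}) tests the displayed $m=1$ inequality on a random $k$-gon with centroid $0$, and confirms that both sides vanish for a positively oriented regular $k$-gon:
\begin{verbatim}
import numpy as np

k = 7
rng = np.random.default_rng(0)

def sides_of_inequality(z):
    # Returns (LHS, RHS) of the displayed m = 1 inequality.
    t = np.roll(z, -1) - z                # "tangent" vectors t_nu = z_{nu+1} - z_nu
    S = np.sum(np.abs(t) ** 2)            # sum of the squares of the side lengths
    F = -0.5 * np.imag(np.vdot(t, z))     # signed area  -1/2 Im<z, t>
    lhs = 2 * np.cos(np.pi / k) ** 2 * (S - 4 * np.tan(np.pi / k) * F)
    rhs = np.sum(np.abs(t - 2j * np.sin(np.pi / k)
                        * np.exp(1j * np.pi / k) * z) ** 2)
    return lhs, rhs

# Random k-gon with centroid 0: the inequality is strict in general.
z = rng.standard_normal(k) + 1j * rng.standard_normal(k)
z -= z.mean()
lhs, rhs = sides_of_inequality(z)
assert lhs >= rhs - 1e-9

# Positively oriented regular k-gon: equality, with both sides zero.
zr = np.exp(2j * np.pi * np.arange(k) / k)
assert np.allclose(sides_of_inequality(zr), (0.0, 0.0), atol=1e-9)
\end{verbatim}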
When $m=2$ (and $k\ge 4$), the inequality $I_2$ in Theorem \ref{thm discrete higher} becomes \begin{align*} &8\left(\sin^{2} \left(\frac{2 \pi}{k}\right)-\sin^{2} \left(\frac{\pi}{k}\right)\right) \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right)\\ \le& 4\left(\sin^{2}\left(\frac{2 \pi}{k}\right)-\sin^{2}\left(\frac{\pi}{k}\right)\right) \left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}+\left\| \boldsymbol{\kappa}+4 \sin^{2}\left(\frac{\pi}{k}\right) z \right\|^{2}. \end{align*} The inequality is sharp and we will characterize the rigidity case. Here $\boldsymbol \kappa$ is the curvature vector, which is essentially a second order difference of the position vector. This is the discrete analogue of the following inequality from \cite{kwong2021higher}: $$ \frac{L}{2 \pi^{2}}\left(L^{2}-4 \pi F\right) \le\int_{C}\left|z-\left(\frac{L}{2 \pi}\right) \boldsymbol n \right|^{2} d s+\frac{1}{3} \int_{C}\left|z+\left(\frac{L}{2 \pi}\right)^{2} \boldsymbol\kappa\right|^{2} d s. $$ It can also be rearranged so that it gives a measure of the deficit of the discrete Chakerian isoperimetric inequality: \begin{equation*} \begin{split} & 2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right)-\left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2} \\ \le& \frac{1}{4 \sin^{2}\left(\frac{2 \pi}{k}\right)}\left[4 \sin^{2}\left(\frac{\pi}{k}\right) \left(2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right)-\left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}\right)\right. \\ &\left. +\left\|\boldsymbol\kappa+4 \sin^{2}\left(\frac{\pi}{k}\right) z\right\|^{2}\right]. \end{split} \end{equation*} The second part of this paper starts in Section \ref{sec smooth}. We will make use of the smooth version of the higher order Wirtinger inequalities to show the smooth counterparts of the isoperimetric type inequalities. Since the main proposition (Proposition \ref{prop higher}) already appears elsewhere (\cite{kwong2021higher}), we will be rather brief on the derivation of the main result, which is the following. \begin{theorem}[Theorem \ref{smooth thm}] Let $C$ be a simple closed $C^{m+1}$ curve in $\mathbb C$ with length $L$ and a unit speed counter-clockwise parametrization $z (s) $ with $s \in[0, L] $. Assume that $C =\partial \Omega$ is the boundary curve of a domain $\Omega \subset \mathbb C$ with area $F$. Let $\boldsymbol{n} $ denote the outward-pointing unit normal on $C$. Assume that $C$ has centroid $0$. Then \begin{equation*} \begin{split} 0 \le & \sum_{l=1}^{m-1} s_{m, l}\left(\frac{L}{2 \pi}\right)^{2 l-1} \int_{C}\left|\left(\frac{d}{d s}\right)^{l-1}\left(z+\left(\frac{L}{2 \pi}\right)^{2} \boldsymbol{\kappa}\right)\right|^{2} d s \\ &-\frac{(-1)^{m}}{2}(m-1) !(m+1) !\left(\frac{1}{\pi}\left(L^{2}-4 \pi F\right)-\frac{2 \pi}{L} \int_{C}\left|z-\left(\frac{L}{2 \pi}\right) \boldsymbol{n}\right|^{2} d s\right). \end{split} \end{equation*} The equality holds if and only if $z(t)=be^{it}$ for some $b\in \mathbb C$. Here, the constants $s_{m, l}$ are explicit. \end{theorem} Finally, we will generalize the Chernoff inequality, which states that for a closed convex curve $\gamma$ in $\mathbb R^{2}$ with area $F$ and width function $w(\theta)$, $$ F \le \frac{1}{2} \int_{0}^{\frac{\pi}{2}} w(\theta) w\left(\theta+\frac{\pi}{2} \right) d \theta.
$$ Since the notation is more involved, we are not going to explain all the notation of our generalized inequality here, but instead give its form (see Theorem \ref{Chernoff} for details): if $m$ is an odd natural number, then \begin{equation*} \begin{split} 0 \le&- \sum_{r=0}^{m-1}(-1)^{r}\binom{m-1}{r} F[\gamma_{(r)}] +\frac{1}{k} \int_{0}^{\frac{\pi}{k}} w_{k}(\theta) w_{k}\left(\theta+\frac{\pi}{k}\right) d \theta\\ &- \sum_{1\le j \le m-1 \atop 2 \mid j}\binom{m}{j} \sum_{r=0}^{m-1}(-1)^{r}\binom{m-1-j}{r} F\left[\gamma_{(r)}, \left(T_{k}^{2} \gamma\right)_{(r)}\right] \\ &+ \sum_{1\le j \le m -1\atop 2 \nmid j}\binom{m}{j} \sum_{r=0}^{m-1}(-1)^{r}\binom{m-1-j}{r} F\left[\gamma_{(r)}, \left(T_{k} \gamma\right)_{(r)}\right]. \end{split} \end{equation*} When $m=1$, the last two terms vanish and the inequality reduces to $$ F \le \frac{1}{k} \int_{0}^{\frac{\pi}{k}} w_{k}(\theta) w_{k}\left(\theta+\frac{\pi}{k}\right) d \theta. $$ This is the Ou--Pan \cite{ou2010some} generalization of Chernoff's inequality, which in turn reduces to the Chernoff inequality when $k=2$. To keep this introduction brief, we will leave the outline of the idea of the discrete and smooth Wirtinger inequalities to the next section, but let us just emphasize that the same approach is used to obtain the smooth isoperimetric type inequalities for plane curves (Section \ref{sec smooth}), higher order Poincare type inequalities for hypersurfaces in Euclidean space of any dimension (which we are not going to cover in this paper; see \cite{kwong2021higher2}), and generalized Chernoff inequalities (Section \ref{chernoff}) that involve a generalized width, higher order loci of curvature centers and the mixed area. The organization of this paper is as follows. In Section \ref{sec idea}, we outline the idea to obtain higher order Wirtinger inequalities on a smooth manifold, which can be easily adapted to the discrete case. In Section \ref{sec background}, we provide the basic knowledge of discrete Fourier analysis, which will then be used to prove discrete Wirtinger inequalities in Section \ref{sec discrete wirtinger}. These inequalities are in turn used to prove geometric discrete isoperimetric type inequalities in Section \ref{sec geom}. We then switch to the smooth case in Section \ref{sec smooth}, in which we prove the smooth analogues of the upper and lower bounds of the isoperimetric deficits. Section \ref{sec smooth} is independent of Sections \ref{sec background}, \ref{sec discrete wirtinger} and \ref{sec geom}, although the underlying idea is the same. Finally, in Section \ref{chernoff}, we prove a generalization of the Chernoff inequality. \textbf{Acknowledgements}: We would like to thank Emanuel Indrei for suggesting this problem and for useful comments. \section{ Basic idea to generate higher order inequalities }\label{sec idea} The idea to generate higher order Wirtinger or Poincare type inequalities is quite straightforward. For the sake of exposition, we will discuss the case of a Riemannian manifold, although the case of a polygon is completely analogous; let us also ignore any convergence or regularity issues in this section, as more details will be given later. (Of course, in the discrete case, there is no such issue.) Let $M$ be a closed Riemannian manifold and $\{T_j\}_{j=1}^{m}: C^{\infty}(M)\to C^{\infty}(M)$ be a family of mutually commuting self-adjoint differential operators.
Then there exists an orthonormal Schauder basis $\{e_n\}$ whose elements are simultaneous eigenfunctions of the $T_j$, with eigenvalues $\{\lambda_{n, j}\}$, i.e. $T_j [e_n] =\lambda_{n, j}e_n$. \begin{example} Let us give some examples of such operators. If $M=\mathbb S^{1}$ is the standard unit circle, examples of such operators include differential operators of the form $ \displaystyle T[h]=\sum_{j=1}^{m}c_j h^{(j)}$. An important case is $T[h]=h+\ddot h$, which gives the reciprocal of the curvature of a convex curve $\gamma$ if $h=h(\theta)$ is its support function, parametrised by the normal angle $\theta$ (\cite[p. 34]{ChouZhu2001}). Another class of such operators consists of the averaged translational operators such as $\displaystyle T_k[h](\theta)=\frac{1}{k} \sum_{j=1}^{k} h\left(\theta+\frac{(2 j-1)\pi}{k}\right)$ for some $\displaystyle k\in \mathbb N$. This operator is a generalization of the Chernoff operator $T_2$. In 1969, Chernoff \cite{chernoff1969area} obtained an area-width inequality for convex plane curves: let $\gamma$ be a closed convex curve in the plane $\mathbb R^{2}$ with area $F$ and width function $w(\theta)$; then $$ F \le \frac{1}{2} \int_{0}^{\pi / 2} w(\theta) w\left(\theta+\frac{1}{2} \pi\right) d \theta, $$ and the equality holds if and only if $\gamma$ is a circle. Indeed, the Chernoff inequality can be regarded as a Wirtinger inequality for $T_2$. By obtaining a first order Wirtinger-type inequality for $T_k$, Ou and Pan \cite{ou2010some} obtained a generalized version of the Chernoff inequality: $$ F \le \frac{1}{k} \int_{0}^{\pi / k} w_{k}(\theta) w_{k}\left(\theta+\frac{1}{k} \pi\right) d \theta, $$ where $w_{k}(\theta)=h(\theta)+h\left(\theta+\frac{2 \pi}{k}\right)+\cdots+h\left(\theta+\frac{2(k-1) \pi}{k}\right)$ and $h$ is the support function. If $M=\mathbb S^{d-1}$, we can consider the family of operators $\{-\Delta -\lambda_j\mathrm{Id}\}_{j=1}^{m}$, where $\Delta$ is the (negative definite) Laplacian and $0=\lambda_{0}<\lambda_{1}<\lambda_{2}<\cdots \rightarrow \infty$ are the eigenvalues of $-\Delta$ on $\mathbb S^{d-1}$. In \cite{kwong2021higher2}, the author considered this family of operators and obtained higher order Poincare inequalities on the sphere $\mathbb S^{d-1} $. As a result, new Minkowski-type inequalities involving higher order mean curvatures for convex hypersurfaces in $\mathbb R^d$ were also obtained. \end{example} Let us go back to the recipe to obtain higher order Wirtinger type inequalities. Suppose that we can construct such a family $\{T_j\}_{j=1}^{m}$ with $\prod_{j=1}^{m} \lambda_{n, j}\ge0$ for all $n$. If $f\in C^{\infty}(M)$ can be expressed in the orthonormal Schauder basis as $f=\sum_{n} a_{n} e_n$, then \begin{align*} \int_M f\cdot\prod_{j=1}^{m} T_j [f] =\sum_{n}\left(\prod_{j=1}^{m} \lambda_{n, j} \right)|a_n|^2 \ge 0. \end{align*} This is our higher order Wirtinger inequality. Although the idea seems simple, when we suitably choose $f$ and the $T_j$, and after some rearrangement, it yields new geometric inequalities and stability results. Often in this paper, the $T_j$ will be the difference of two operators, as with $-\Delta-\lambda_j \mathrm{Id}$ above. \section{Background knowledge: discrete case}\label{sec background} We start with the discrete case. Let us first give the necessary definitions and notation. Let $k\in \mathbb N$, which we are going to fix.
A $k$-gon $P$ in the Euclidean plane is an ordered $k$-tuple of points, $P =\left(z_{0}, z_{1}, \ldots, z_{k-1}\right). $ These points are called vertices, and the line segments joining $z_{0}$ to $z_{1}, z_{1}$ to $z_{2}, \ldots, z_{k-1}$ to $z_{0}$ are called sides. We identify a point on the plane with a complex number. For convenience, we also identify $z_{k-l}$ with $z_{-l}$ (and later on $\zeta_{k-l}$ with $\zeta_{-l}$), so for example $\{z_\nu: |\nu|\le m\}=\{z_\nu: 0\le \nu\le m \textrm{ or }k-m\le \nu\le k-1\}$. We can then define scaling by complex numbers and linear combinations of two polygons. So $\left(r e^{i \theta}\right) P$ is the image of $P$ under a dilatation by the real factor $r$ and a rotation through the angle $\theta$, both about the origin. A linear combination of $k$-gons $P$ and $Q$ is the $k$-gon identified with the linear combination of the corresponding vectors in $\mathbb C^{k}$. Let $\tau^{n} P$ be the $k$-gon obtained from $P$ by a cyclic shift of the entries of $\left(z_{0}, z_{1}, \ldots, z_{k-1}\right) n$ places to the right. That is, \begin{equation*} \begin{split} \tau P =\left(z_{k-1}, z_0, z_{1}, z_{2}, \ldots, z_{k-2} \right), \; \tau^{2} P =\left(z_{k-2}, z_{k-1}, \ldots, z_{k-3}\right), \;\text { etc. } \end{split} \end{equation*} We will denote by $R_{n}$ the special $n$-regular $k$-gon centered at 0 with $z_{0}=1$, namely, $$ R_{n}=\left(1, \omega^{n}, \omega^{2 n}, \ldots, \omega^{(k-1) n}\right) $$ where $\omega=e^{\frac{2 \pi i}{k} } $ is a $k$-th root of unity. Since our results will be proved using finite Fourier series, it is more convenient to work with various Hermitian forms on the vector space $\mathbb C^{k}$ of all $k$-gons. In particular, we call a Hermitian form $I$ which is invariant under $\tau$ (i.e. $I(\tau P, \tau Q)=I(P, Q)$) a polygonal form (\cite[Section 3]{fisher1985perpendicular}). For simplicity we denote a Hermitian form and the quadratic form associated to it by the same symbol. The signed area of $P$ is given by the polygonal form \cite[p. 30]{fisher1985perpendicular} \begin{align}\label{F} F(P):=\frac{1}{4 i} \sum_{j=0}^{k-1}\left(z_{j+1} \bar{z}_{j}-z_{j} \bar{z}_{j+1}\right). \end{align} Another polygonal form that we will consider here is $S=S(P)$, the sum of the squares of the side lengths of the polygon $P$. Thus from the definition \begin{equation}\label{S} S(P)=\sum_{j=0}^{k-1}\left|z_{j+1}-z_{j}\right|^{2}=\sum_{j=0}^{k-1}\left(z_{j+1}-z_{j}\right)\left(\bar{z}_{j+1}-\bar{z}_{j}\right). \end{equation} It turns out that in the Fourier series approach, $S$ is a better quantity to control than the length $L=\sum_{\nu=0}^{k-1}|z_{\nu+1}-z_\nu|$ of the polygon. This is because $S$ is a polygonal form whereas the squared length $L^2$ is not even a Hermitian form. This is analogous to the fact that Fourier series are harder to use to control the $L^p$ norm of a function or its derivatives for $p$ other than $2$. In many instances, the isoperimetric deficit $L^2-4\pi F$ in the smooth case is replaced by the deficit $S-4 \tan \left(\frac{\pi}{k}\right) F$ in the discrete case; see for example Theorem \ref{thm1}. For the convenience of the reader, we now provide some basic knowledge on finite Fourier series. For more details on finite Fourier series, please refer to \cite{schoenberg1950finite}. Let $z= \left(z_0, \cdots, z_{k-1} \right)\in \mathbb C^k$.
The finite Fourier series (FFS) of $z$ is $$ z_{\nu}=\zeta_{0}+\zeta_{1} \omega_{\nu}+\zeta_{2} \omega_\nu^{2}+\cdots+\zeta_{k-1} \omega_{\nu}^{k-1} \quad(\nu=0, 1, \cdots, k-1) $$ where $\omega_\nu=e^{\frac{2 \pi i \nu }{k}}$. It is well-known that the Fourier coefficients $\zeta_\nu$ are given by \begin{align*} \zeta_\nu=\frac{1}{k}\left(z_{0}+z_{1} \bar{\omega}_{\nu}+z_{2} \bar{\omega}_{\nu}^{2}+\cdots+z_{k-1} \bar{\omega}_{\nu}^{k-1}\right)=\frac{1}{k}\langle z, R_\nu\rangle, \end{align*} so that $z=\sum_{\nu=0}^{k-1}\zeta_\nu R_\nu$. We define $\dot{z}$ by \begin{equation*} \dot{z}= \left(z_1-z_0, z_{2}-z_{1}, \cdots, z_0-z_{k-1}\right). \end{equation*} Up to a sign, this is called the first difference; nevertheless, we prefer to call it the ``first derivative'' in order to compare with \cite{kwong2021higher}. Using this notation, \eqref{S} becomes \begin{equation*} S(P)=\|\dot P\|^2=\langle \dot P, \dot P\rangle \end{equation*} and \eqref{F} becomes $$ F(P) =-\frac{1}{2} \mathrm{Im}\langle P, \dot P\rangle, $$ where $\langle\cdot, \cdot\rangle$ is the standard Hermitian inner product given by $\langle z, w\rangle=\sum_{\nu=0}^{k-1} z_{\nu} \overline{w_{\nu}}$. This is the discrete analogue of the area enclosed by a smooth curve $C$: \begin{equation*} 2 \mathrm{Area} =\int_C xdy-ydx =-\int_{C}\mathrm{Im}(z \dot{\bar{z}}) d t. \end{equation*} In terms of FFS, we have \begin{equation*} \begin{split} \dot {z} = \sum_{\nu=0}^{k-1} \zeta_{\nu} \dot R_{\nu} = \sum_{\nu=0}^{k-1} \zeta_{\nu} (\omega_\nu-1)R_{\nu} = \sum_{\nu=1}^{k-1} \zeta_{\nu} (\omega_\nu-1)R_{\nu}. \end{split} \end{equation*} So we have \begin{align*} \left\|\dot{z}\right\|^{2}=k \sum_{\nu=1}^{k-1}\left|\zeta_{\nu}\right|^{2}\left|1-\omega_{\nu}\right|^{2}. \end{align*} Naturally, we define the ``higher derivatives'' by defining $\ddot z$ to be the derivative of $\dot z$, etc. In the literature, these are called the differences of higher order. Denote the $j$-th derivative of $z$ by either $z^{(j)}$ or $D^jz$. Inductively, we have the Parseval identity \begin{align}\label{parseval} \frac{1}{k}\left\|z^{(j)}\right\|^{2} =\sum_{\nu=0}^{k-1}\left|1-\omega_{\nu}\right|^{2 j}\left|\zeta_{\nu}\right|^{2} =\sum_{\nu=0}^{k-1}4^j\sin^{2j}\left(\frac{\nu\pi}{k}\right)\left|\zeta_{\nu}\right|^{2}. \end{align} We will also make frequent use of the fact that for a polygonal form $I$ (\cite[Theorem 3.6]{fisher1985perpendicular}) \begin{align*} I(z)=\sum_{\nu=0}^{k-1}|\zeta_\nu|^2 I(R_\nu). \end{align*} \section{Discrete Wirtinger inequalities}\label{sec discrete wirtinger} We now give a family of sharp discrete Wirtinger inequalities, which involve derivatives of arbitrary order. \begin{proposition}\label{prop1} Let $m \le \left\lfloor\frac{k}{2}\right\rfloor$ be a natural number. Define the $m$-th degree polynomial $Q_m(x)=\prod_{j=1}^{m}\left(x-4 \sin^{2}\left(\frac{j \pi}{k}\right)\right)=\sum_{j=0}^{m} c_{m, j}x^j$. Suppose $z=(z_0, \cdots, z_{k-1})$ satisfies $\sum_{\nu=0}^{k-1}z_\nu=0$. Then \begin{equation}\label{ineq c} \begin{split} \sum_{j=0}^{m} c_{m, j}\left\|z^{(j)}\right\|^{2} \ge 0. \end{split} \end{equation} The equality holds if and only if the Fourier coefficients satisfy $\zeta_\nu=0$ for $m<\nu<k-m$. \end{proposition} \begin{proof} As $\sum_{\nu=0}^{k-1} z_{\nu}=0$, we have $\zeta_{0}=0$, where $z=\sum_{\nu=0}^{k-1} \zeta_{\nu} R_{\nu}$ is its finite Fourier series.
By \eqref{parseval}, \begin{equation*} \begin{split} \frac{1}{k} \sum_{j=0}^{m} c_{m, j}\left\|z^{(j)}\right\|^{2} &=\sum_{\nu=1}^{k-1}\left[\sum_{j=0}^{m} c_{m, j}\left|1-\omega_{\nu}\right|^{2 j}\right]\left|\zeta_{\nu}\right|^{2} \\ &=\sum_{\nu=1}^{k-1} Q_{m}\left(\left|1-\omega_{\nu}\right|^{2}\right)\left|\zeta_{\nu}\right|^{2} \\ &=\sum_{\nu=1}^{k-1}\left[\prod_{j=1}^{m}\left(\left|1-\omega_{\nu}\right|^{2}-4 \sin^{2}\left(\frac{j \pi}{k}\right)\right)\right]\left|\zeta_{\nu}\right|^{2} \\ &=\sum_{\nu=1}^{k-1} \left[\prod_{j=1}^{m}\left(\left|1-\omega_{\nu}\right|^{2}-\left|1-\omega_{j}\right|^{2}\right)\right]\left|\zeta_{\nu}\right|^{2} \\ &=\sum_{\nu=m+1}^{k-m-1}\left[\prod_{j=1}^{m}\left(\left|1-\omega_{\nu}\right|^{2}-\left|1-\omega_{j}\right|^{2}\right)\right]\left|\zeta_{\nu}\right|^{2} \\ & \ge 0. \end{split} \end{equation*} Here, we have used the fact that \begin{equation*} \begin{split} \left|1-\omega_{\nu}\right|^{2} &=\left(\cos \left(\frac{2 \pi \nu}{k}\right)-1\right)^{2}+\sin^{2}\left(\frac{2 \pi \nu}{k}\right) \\ &=2-2 \cos \left(\frac{2 \pi \nu}{k}\right) \\ &=4 \sin^{2}\left(\frac{\nu \pi}{k}\right). \end{split} \end{equation*} Hence for $m \le \left\lfloor\frac{k}{2} \right\rfloor$, $$|1-\omega_1|^2=|1-\omega_{k-1}|^2< |1-\omega_2|^2=|1-\omega_{k-2}|^2<\cdots< |1-\omega_m|^2=|1-\omega_{k-m}|^2$$ and all $|1-\omega_\nu|^2> |1-\omega_m|^2$ for $m<\nu<k-m$. \end{proof} To obtain geometric applications of the inequality \eqref{ineq c}, we rearrange it so that each term has its geometric meaning. We are going to do it in two steps. The form that we are looking for is the inequality \eqref{ineq s} and we are going to give its geometric interpretation in Section \ref{sec geom}. \begin{proposition}\label{prop lambda} Let $m \le \left\lfloor \frac{k}{2}\right\rfloor$ be a natural number. Define $P_1=1$ and, for $m\ge 2$, the $(m-1)$-th degree polynomial $\displaystyle P_{m}(x):=\prod_{j=2}^{m}\left(x-4 \sin^{2}\left(\frac{j \pi}{k}\right)\right)=\sum_{l=0}^{m-1} \lambda_{m, l} x^{l}$. Suppose $\displaystyle z=\left(z_{0}, \cdots, z_{k-1}\right)$ satisfies $\displaystyle \sum_{\nu=0}^{k-1} z_{\nu}=0$. Then $$\sum_{j=0}^{m-1} \lambda_{m, j}\left(\left\|z^{(j+1)}\right\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\left\|z^{(j)}\right\|^{2} \right)\ge0. $$ \end{proposition} \begin{proof} From the relation $Q_{m}(x)=\left(x-4 \sin^{2}\left(\frac{\pi}{k}\right)\right) P_{m}(x)$, we can deduce that $ c_{m, j}=\lambda_{m, j-1}-4 \sin^{2}\left(\frac{\pi}{k}\right) \lambda_{m, j}$, where we use the standard convention that $\lambda_{m, l}=0$ for $l \notin\{0, \cdots, m-1\}$. From this and Proposition \ref{prop1} we have \begin{equation*} \begin{split} 0 \le \sum_{j=0}^{m} c_{m, j}\left\|z^{(j)}\right\|^{2}=& \sum_{j=0}^{m}\left(\lambda_{m, j-1}-4 \sin^{2}\left(\frac{\pi}{k}\right) \lambda_{m, j}\right)\left\|z^{(j)}\right\|^{2} \\ =& \sum_{j=0}^{m-1} \lambda_{m, j}\left(\left\|z^{(j+1)}\right\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\left\|z^{(j)}\right\|^{2} \right). \end{split} \end{equation*} \end{proof} \begin{proposition}\label{prop s} Let $m \le \left\lfloor\frac{k}{2}\right\rfloor$ be a natural number. Define the $(m-2)$-th degree polynomial $\displaystyle \mathcal S_{m}(x)=\frac{P_{m}(x)-P_{m}\left(4 \sin^{2} \left(\frac{\pi}{k}\right)\right)}{x-4 \sin^{2}\left(\frac{\pi}{k}\right)}=\sum_{l=1}^{m-1} S_{m, l} x^{l-1} $ and $S_{m, 0}=P_{m}\left(4 \sin^{2}\left(\frac{\pi}{k}\right)\right)$.
Suppose $z=\left(z_{0}, \cdots, z_{k-1}\right)$ satisfies $\sum_{\nu=0}^{k-1} z_{\nu}=0$. Then \begin{equation}\label{ineq s} \begin{split} 0 & \le S_{m, 0}\left(\|\dot{z}\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\|z\|^{2}\right)+\sum_{l=1}^{m-1} S_{m, l}\left\|\tau z^{(l+1)}+4 \sin^{2}\left(\frac{\pi}{k}\right) z^{(l-1)}\right\|^{2}. \end{split} \end{equation} The equality holds if and only if the Fourier coefficients of $z$ satisfy $\zeta_{\nu}=0$ for $m<\nu<k-m$. \end{proposition} \begin{proof} By definition, \begin{equation*} \begin{split} P_{m}(x) &=\left(x-4 \sin^{2}\left(\frac{\pi}{k}\right)\right) \mathcal S_{m}(x)+P_{m}\left(4\sin^{2} \left(\frac{\pi}{k}\right)\right) \\ &=\left(x-4 \sin^{2}\left(\frac{\pi}{k}\right)\right) \mathcal S_{m}(x)+S_{m, 0} \\ &=\sum_{l=0}^{m-1}\left(S_{m, l}- 4 \sin^{2}\left(\frac{\pi}{k}\right) S_{m, l+1}\right) x^{l}. \end{split} \end{equation*} Therefore \begin{equation}\label{lambda} \lambda_{m, l}=S_{m, l}-4 \sin^{2}\left(\frac{\pi}{k}\right) S_{m, l+1}. \end{equation} Define $I_{l}=\left\|\tau z^{(l+1)}+4 \sin^{2}\left(\frac{\pi}{k}\right)z^{(l-1)}\right\|^{2}$ and $J_{l}=\left\|z^{(l+1)}\right\|^{2}-4\sin^2\left(\frac{\pi}{k}\right)\left\|z^{(l)}\right\|^{2}$. We have \begin{equation}\label{Il} \begin{split} I_{l} &=\left\|\tau z^{(l+1)}+4 \sin^{2}\left(\frac{\pi}{k}\right) z^{(l-1)}\right\|^{2} \\ &=\left\|z^{(l+1)}\right\|^{2}+16 \sin^{4}\left(\frac{\pi}{k}\right)\left\|z^{(l-1)}\right\|^{2}+8 \sin^{2}\left(\frac{\pi}{k}\right)\mathrm{Re}\left\langle\tau z^{(l+1)}, z^{(l-1)}\right\rangle \\ &=\left\|z^{(l+1)}\right\|^{2}+16 \sin^{4}\left(\frac{\pi}{k}\right)\left\|z^{(l-1)}\right\|^{2}-8 \sin^{2}\left(\frac{\pi}{k}\right)\left\|z^{(l)}\right\|^{2} \\ &=\left(\left\|z^{(l+1)}\right\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\left\|z^{(l)}\right\|^{2}\right)-4 \sin^{2}\left(\frac{\pi}{k}\right)\left(\left\|z^{(l)}\right\|^{2}-4\sin^{2}\left(\frac{\pi}{k}\right)\left\|z^{(l-1)}\right\|^{2}\right) \\ &=J_{l}-4 \sin^{2}\left(\frac{\pi}{k}\right) J_{l-1}. \end{split} \end{equation} Here we have used the summation by parts formula $\langle\dot{w}, \dot{w}\rangle=-\left\langle\ddot{w}, \tau^{-1} w\right\rangle=-\langle\tau \ddot{w}, w\rangle$. So by Proposition \ref{prop lambda}, \eqref{lambda} and \eqref{Il}, we have \begin{equation*} \begin{split} 0 \le& \sum_{l=0}^{m-1} \lambda_{m, l} J_{l}=\sum_{l=0}^{m-1}\left(S_{m, l}-4 \sin^{2}\left(\frac{\pi}{k}\right) S_{m, l+1}\right) J_{l} =S_{m, 0} J_{0}+\sum_{l=1}^{m-1} S_{m, l}\left(J_{l}-4 \sin^{2}\left(\frac{\pi}{k}\right) J_{l-1}\right)\\ =&S_{m, 0} \left(\left\|\dot z\right\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\left\|z\right\|^{2}\right)+\sum_{l=1}^{m-1} S_{m, l}\left\|\tau z^{(l+1)}+4 \sin^{2}\left(\frac{\pi}{k}\right) z^{(l-1)}\right\|^{2}. \end{split} \end{equation*} \end{proof} Using the recurrence relations satisfied by the polynomials $Q_m$ and $\mathcal S_{m}$, we can also derive stability results for the inequalities in Proposition \ref{prop1} and Proposition \ref{prop s}. \begin{proposition}\label{prop stab} Let $m \le \left\lfloor\frac{k}{2}\right\rfloor-1$ be a natural number and suppose $z=\left(z_{0}, \cdots, z_{k-1}\right) \in \mathbb{C}^{k}$ satisfies $\sum_{\nu=0}^{k-1} z_{\nu}=0$. With the notation of Propositions \ref{prop1} and \ref{prop s}, the following inequalities hold. \begin{enumerate} \item \begin{equation}\label{stab1} \sum_{l=0}^{m} c_{m, l}\left\|z^{(l)}\right\|^{2} \le \frac{1}{4 \sin^{2}\left(\frac{(m+1) \pi}{k}\right)}\sum_{l=0}^{m} c_{m, l}\left\|z^{(l+1)}\right\|^{2}.
\end{equation} \item \begin{equation}\label{stab2} \begin{split} & S_{m, 0}\left(\|\dot{z}\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\|z\|^{2}\right)+\sum_{l=1}^{m-1} S_{m, l}\left\|\tau z^{(l+1)}+4 \sin^{2}\left(\frac{\pi}{k}\right) z^{(l-1)}\right\|^{2}\\ & \le \frac{1}{4 \sin^{2}\left(\frac{(m+1) \pi}{k}\right)}\left[4\sin^{2}\left(\frac{\pi}{k}\right)S_{m, 0} \left(\|\dot{z}\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\|z\|^{2}\right)+\sum_{l=0}^{m-1} S_{m, l} \|\tau z^{(l+2)}+4 \sin^{2}\left(\frac{\pi}{k}\right)z^{(l)}\|^{2} \right]. \end{split} \end{equation} \end{enumerate} The equality in both inequalities holds if and only if $z=\sum_{0<|\nu|\le m+1} \zeta_\nu R_\nu$. \end{proposition} \begin{proof} By the identity $Q_{m+1}(x)=\left(x-4 \sin^{2}\left(\frac{(m+1) \pi}{k}\right)\right) Q_{m}(x)$, we have the recurrence relation \begin{equation*} c_{m+1, l}=c_{m, l-1}-4 \sin^{2}\left(\frac{(m+1) \pi}{k}\right) c_{m, l}. \end{equation*} Here, we use the standard convention that $c_{m, l}=0$ for $l \notin\{0, \cdots, m\} $. By replacing $m$ with $m+1$, the inequality \eqref{ineq c} then gives \begin{equation*} \begin{split} 0 & \le \sum_{l=0}^{m+1} c_{m+1, l}\left\|z^{(l)}\right\|^{2} \\ &=\sum_{l=0}^{m+1}\left[c_{m, l-1}-4 \sin^{2}\left(\frac{(m+1) \pi}{k}\right) c_{m, l}\right]\left\|z^{(l)}\right\|^{2} \\ &=\sum_{l=0}^{m} c_{m, l}\left\|z^{(l+1)}\right\|^{2}-4 \sin^{2}\left(\frac{(m+1) \pi}{k}\right) \sum_{l=0}^{m} c_{m, l}\left\|z^{(l)}\right\|^{2}. \end{split} \end{equation*} From this \eqref{stab1} follows. The proof of \eqref{stab2} is similar. It requires the recurrence relation $S_{m+1, l}=S_{m, l-1}-4 \sin^{2}\left(\frac{(m+1) \pi}{k}\right) S_{m, l}$ for $l \ge 1$, which is obtained from the identity $\mathcal S_{m+1}(x)=\left(x-4 \sin^{2}\left(\frac{(m+1) \pi}{k}\right)\right) \mathcal S_{m}(x)+S_{m, 0}$. \end{proof} \section{Geometric inequalities}\label{sec geom} Recall that $S=S(P)$ and $F=F(P)$ are the sum of the squares of the side lengths and the signed area of the polygon $P$ respectively. If $P$ is a $k$-gon represented by $z\in \mathbb C^k$, then $$F(P)=-\frac{1}{2} \mathrm{Im}\left\langle z, \dot{z}\right\rangle\quad \textrm{and}\quad S(P)=\|\dot{z}\|^2. $$ For a polygon $z$, we define $\boldsymbol{t}=\dot{z}$ to be its set of tangent vectors (at the corresponding vertices $z_\nu$) and $ \boldsymbol{\kappa} =\tau\ddot z$ to be the discrete curvature vectors (again at the vertices $z_\nu$). Note that the cyclic shift $\tau$ is necessary because $\tau\ddot z$ at the $\nu$-th position corresponds to the change of the tangent vectors across the vertex $z_\nu$, i.e. it is the change from $\overrightarrow{z_{\nu-1}z_\nu}$ to $\overrightarrow {z_\nu z_{\nu+1}}$. Also, unlike in the smooth case, it is not true in general that $\boldsymbol{\kappa}$ is perpendicular to $\boldsymbol{t}$. For a regular polygon with centroid $0$ which is positively oriented, $\boldsymbol{t}=\dot z= 2i e^{i \frac{\pi}{k}} \sin \left(\frac{\pi}{k}\right) z $. Therefore the quantity $\left\|\dot{z}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}$ measures how far a given polygon $P$ deviates from a regular $k$-gon. \begin{proposition}\label{prop4} Let $P$ be a $k$-gon represented by $z$. Then \begin{equation*} \begin{aligned} \|\dot z\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\|z\|^{2} =2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4\tan \left(\frac{\pi}{k}\right) F(P)\right)-\left\|\dot{z}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}.
\end{aligned} \end{equation*} \end{proposition} \begin{proof} By writing $z=\sum_{\nu=0}^{k-1} \zeta_{\nu} R_{\nu}$, we have \begin{equation}\label{pf1} \begin{split} &\left\|\dot{z}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}+\|\dot{z}\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\|z\|^{2} \\ =&2\|\dot{z}\|^{2}-8 \sin \left(\frac{\pi}{k}\right)\left[-\frac{1}{2} \mathrm{Im}\left\langle e^{i \frac{\pi}{k}} z, \dot{z}\right\rangle\right]\\ =& 2\|\dot{z}\|^{2}-8 \sin \left(\frac{\pi}{k}\right)\left[-\frac{1}{2} \sum_{\nu=0}^{k-1}\left|\zeta_{\nu}\right|^{2} \mathrm{Im}\left\langle e^{i \frac{\pi}{k}} R_{\nu}, \dot R_{\nu}\right\rangle\right]. \end{split} \end{equation} By direct computation, $-\frac{1}{2} \mathrm{Im}\left\langle e^{i \frac{\pi}{k}} R_{\nu}, \dot{R}_{\nu}\right\rangle=k \sin \left(\frac{\nu \pi}{k}\right) \cos \left(\frac{(\nu-1) \pi}{k}\right)$ and $-\frac{1}{2} \mathrm{Im}\left\langle R_{\nu}, \dot{R}_{\nu}\right\rangle=$ $F\left(R_{\nu}\right)=k \sin \left(\frac{\nu \pi}{k}\right) \cos \left(\frac{\nu \pi}{k}\right)$. Therefore \begin{equation*} \begin{split} &\left\|\dot{z}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}+\|\dot{z}\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\|z\|^{2}\\ =& 2\|\dot{z}\|^{2}-8 \sin \left(\frac{\pi}{k}\right) \sum_{\nu=0}^{k-1}\left|\zeta_{\nu}\right|^{2} k \sin \left(\frac{\nu \pi}{k}\right) \cos \left(\frac{(\nu-1) \pi}{k}\right) \\ =& 2\|\dot{z}\|^{2}-8 \sin \left(\frac{\pi}{k}\right) \sum_{\nu=0}^{k-1}\left|\zeta_{\nu}\right|^{2} k \sin \left(\frac{\nu \pi}{k}\right)\left(\cos \left(\frac{\nu \pi}{k}\right) \cos \left(\frac{\pi}{k}\right)+\sin \left(\frac{\nu \pi}{k}\right) \sin \left(\frac{\pi}{k}\right)\right) \\ =& 2\|\dot{z}\|^{2}-8 \sin \left(\frac{\pi}{k}\right) \cos \left(\frac{\pi}{k}\right) \sum_{\nu=0}^{k-1}\left|\zeta_{\nu}\right|^{2} F\left(R_{\nu}\right)-2 \sin^{2}\left(\frac{\pi}{k}\right) \sum_{\nu=0}^{k-1}\left|\zeta_{\nu}\right|^{2} S\left(R_{\nu}\right) \\ =& 2 S(P)-8 \sin \left(\frac{\pi}{k}\right) \cos \left(\frac{\pi}{k}\right) F(P)-2 \sin^{2}\left(\frac{\pi}{k}\right) S(P) \\ =& 2 \cos^{2}\left(\frac{\pi}{k}\right) S(P)-8 \sin \left(\frac{\pi}{k}\right) \cos \left(\frac{\pi}{k}\right) F(P) \\ =& 2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right), \end{split} \end{equation*} where we have used $S\left(R_{\nu}\right)=4 k \sin^{2}\left(\frac{\nu \pi}{k}\right)$. \end{proof} In \cite{Chakerian1978}, Chakerian proved the following sharpened isoperimetric inequality for a simple closed curve with centroid $0$: $$ \frac{2 \pi^{2}}{L} \int_{ C }\left| z -\left(\frac{L}{2 \pi}\right) \boldsymbol{n} \right|^{2} d s \le L^{2}-4 \pi F. $$ A discrete analogue of Chakerian's result is given by the following result. Indeed, it is the $m=1$ case of Theorem \ref{thm discrete higher}. We single out this case here in order to compare with Theorem \ref{thm chakerian 2}. \begin{theorem}[Discrete Chakerian's isoperimetric inequality]\label{thm1} Let $k\ge 3$. For any $k$-gon $P$ with centroid $0$, \begin{equation*} S (P)- 4 \tan \left(\frac{\pi}{k}\right) F(P) \ge 2\tan^2\left(\frac{\pi}{k}\right) \left\|z+\frac{i e^{-i \frac{\pi}{k}} \boldsymbol{t}}{2 \sin \left(\frac{\pi}{k}\right)}\right\|^{2}. \end{equation*} The equality holds if and only if $P$ has the form $\zeta_1R_1+\zeta_{k-1}R_{k-1}$. 
\end{theorem} \begin{proof} This follows from Proposition \ref{prop4} and the discrete Wirtinger inequality $\|\dot{z}\|^2\ge 4 \sin^2\left(\frac{\pi}{k}\right)\|z\|^2$, which holds since $z$ has centroid $0$. If the equality holds, by the equality case of Proposition \ref{prop1}, we have $z=\zeta_1 R_1+\zeta_{k-1}R_{k-1}$. \end{proof} \begin{corollary} For an equilateral $k$-gon $P$ with perimeter $L$, we have $L^{2} \ge 4 k \tan \left(\frac{\pi}{k}\right)|F(P)|$, and equality holds if and only if $P$ is a regular $k$-gon. \end{corollary} If we are allowed to weaken the lower bound in Theorem \ref{thm1}, we can obtain a similar result with a better rigidity case. \begin{theorem} [Discrete Chakerian's isoperimetric inequality]\label{thm chakerian 2} Let $k \ge 3$. For any $k$-gon $P$ with centroid 0, $$ S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P) \ge 2 \sin^{2}\left(\frac{\pi}{k}\right)\left\|z+\frac{i e^{-i \frac{\pi}{k}} \boldsymbol{t}}{2 \sin \left(\frac{\pi}{k}\right)}\right\|^{2}. $$ The equality holds if and only if $P$ is a positively oriented regular $k$-gon. \end{theorem} \begin{proof} By \eqref{pf1} and the discrete Wirtinger inequality $\|\dot{z}\|^{2} \ge 4 \sin^{2}\left(\frac{\pi}{k}\right)\|z\|^{2}$, \begin{equation*} \begin{split} \left\|\dot{z}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2} \le& 2\|\dot z\|^{2}-8 \sin \left(\frac{\pi}{k}\right)\left[-\frac{1}{2} \sum_{\nu=1}^{k-1}\left|\zeta_{\nu}\right|^{2} \mathrm{Im}\left\langle e^{i \frac{\pi}{k}} R_{\nu}, \dot{R}_{\nu}\right\rangle\right]\\ =& 2\|\dot z\|^{2}-8 \sin \left(\frac{\pi}{k}\right)\left[\sum_{\nu=1}^{k-1}|\zeta_\nu|^2\cdot k \sin \left(\frac{\nu \pi}{k}\right) \cos \left(\frac{(\nu-1) \pi}{k}\right)\right]. \end{split} \end{equation*} As $F(R_\nu)=k \sin \left(\frac{\nu \pi}{k}\right) \cos \left(\frac{\nu \pi}{k}\right)$, we have $-\frac{1}{2}\mathrm{Im}\left\langle e^{i\frac{\pi}{k}}R_{\nu}, \dot{R_{\nu}}\right\rangle=\frac{\cos \left(\frac{(\nu-1) \pi}{k}\right)}{\cos \frac{\nu \pi}{k}} F\left(R_{\nu}\right)$ unless $\cos \frac{\nu \pi}{k}=0$. In the case where $\cos \frac{\nu \pi}{k}=0$, $k=2\nu$ and so $-\frac{1}{2}\mathrm{Im}\langle e^{i\frac{\pi}{k}}R_\nu, \dot{R_\nu}\rangle =k\sin \left(\frac{\pi}{k}\right)>0$ and $F(R_\nu)=0$. By combining with the $m=1$ case of Proposition \ref{prop1}, we arrive at \begin{equation}\label{ineq1} \begin{split} \left\|\dot{z}-2 i \sin \frac{\pi}{k} e^{i \frac{\pi}{k}} z\right\|^{2} \le 2\|\dot{z}\|^{2}-8 \sin \left(\frac{\pi}{k}\right) \sum_{0<\nu<k, \nu \ne \frac{k}{2}}\left|\zeta_{\nu}\right|^{2} \frac{\cos \left(\frac{(\nu-1) \pi}{k}\right)}{\cos \left(\frac{\nu \pi}{k}\right)} F\left(R_{\nu}\right). \end{split} \end{equation} If $\nu \le \left\lfloor\frac{k}{2}\right\rfloor$ and $\nu\ne \frac{k}{2}$, then $F(R_\nu)\ge 0$ and $$\frac{\cos \frac{(\nu-1) \pi}{k}}{\cos \left(\frac{\nu \pi}{k}\right)}=\cos \left(\frac{\pi}{k}\right)+\tan \left(\frac{\nu \pi}{k}\right) \sin \left(\frac{\pi}{k}\right) \ge \cos \left(\frac{\pi}{k}\right)+ \tan \left(\frac{\pi}{k}\right) \sin \left(\frac{\pi}{k}\right)=\frac{1}{\cos \left(\frac{\pi}{k}\right)}.
$$ If $ \nu>\left\lfloor\frac{k}{2}\right\rfloor$, then $F(R_\nu)<0$, and \begin{equation}\label{ineq3} \begin{split} -\frac{\cos \left(\frac{(\nu-1) \pi}{k}\right)}{\cos \left(\frac{\nu \pi}{k}\right)} F\left(R_{\nu}\right) &=\frac{\cos \left(\frac{(\nu-1) \pi}{k}\right)}{\cos \left(\frac{\nu \pi}{k}\right)}\left|F\left(R_{\nu}\right)\right| \\ &=\left(\cos \left(\frac{\pi}{k}\right)+\tan \left(\frac{\nu \pi}{k}\right) \sin \left(\frac{\pi}{k}\right)\right)\left|F\left(R_{\nu}\right)\right| \\ & \le \left(\cos \left(\frac{\pi}{k}\right)+\tan \left(\frac{\pi}{k}\right) \sin \left(\frac{\pi}{k}\right)\right)\left|F\left(R_{\nu}\right)\right| \\ &=\frac{1}{\cos \left(\frac{\pi}{k}\right)}\left|F\left(R_{\nu}\right)\right| \\ &=-\frac{1}{\cos \left(\frac{\pi}{k}\right)} F\left(R_{\nu}\right). \end{split} \end{equation} In view of \eqref{ineq1}, we have \begin{equation*} \begin{split} \left\|\dot{z}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2} & \le 2\|\dot{z}\|^{2}-8 \sin \left(\frac{\pi}{k}\right) \sum_{\substack{0<\nu<k\\ \nu \ne \frac{k}{2}}}\left|\zeta_{\nu}\right|^{2} \frac{\cos \left(\frac{(\nu-1) \pi}{k}\right)}{\cos \left(\frac{\nu \pi}{k}\right)} F\left(R_{\nu}\right) \\ & \le 2\left\|\dot{z}\right\|^{2}-8 \tan \left(\frac{\pi}{k}\right) \sum_{\nu=1}^{k-1}\left|\zeta_{\nu}\right|^{2} F\left(R_{\nu}\right) \\ &=2\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right). \end{split} \end{equation*} If the equality holds, by the equality case of Proposition \ref{prop1}, we have $z=\zeta_1 R_1+\zeta_{k-1}R_{k-1}$. When $\nu=k-1$, we have $\nu>\left\lfloor \frac{k}{2}\right\rfloor$ and the inequality in \eqref{ineq3} is strict. Therefore \eqref{ineq1} is strict unless $\zeta_{k-1}=0$. Hence $z$ represents a regular convex $k$-gon. \end{proof} In \cite{kwong2021higher}, the authors proved that for a smooth closed curve $C$ on the plane whose centroid is $0$, \begin{equation}\label{KL} \int_{C}\left|z-\left(\frac{L}{2 \pi}\right) \boldsymbol{n}\right|^{2} d s+\frac{1}{3} \int_{C}\left|z+\left(\frac{L}{2 \pi}\right)^{2} \boldsymbol{\kappa}\right|^{2} d s-\frac{L}{2 \pi^{2}}\left(L^{2}-4 \pi F\right) \ge 0. \end{equation} Here $\boldsymbol{n}$ is the unit outward normal, $z$ is the position vector, $L$ is the length of $C $, $F$ is the area enclosed by $C$ and $\boldsymbol{\kappa}$ is the (smooth) curvature vector. In the following, we are going to give the discrete analogue of this result. Let us remark that for a regular $k$-gon whose centroid is $0$, the curvature vector is equal to $-4\sin^2\left(\frac{\pi}{k}\right)z$, and so the term $ \boldsymbol{\kappa}+4 \sin^{2}\left(\frac{\pi}{k}\right) z $ is natural in the following result. \begin{theorem}\label{thm discrete higher} Let $m \le \left\lfloor\frac{k}{2}\right\rfloor$ be a natural number. Suppose $P$ is a $k$-gon represented by $z$ whose centroid is $0$. Then \begin{equation*} \begin{split} 0\le&2 S_{m, 0} \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4\tan \left(\frac{\pi}{k}\right) F(P)\right)-S_{m, 0}\left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2} \\ &+\sum_{l=1}^{m-1} S_{m, l}\left\|D^{l-1}\left(\boldsymbol{\kappa}+4 \sin^{2}\left(\frac{\pi}{k}\right) z\right)\right\|^{2}. \end{split} \end{equation*} Here $S_{m, l}$ are defined in Proposition \ref{prop s}. The equality holds if and only if the Fourier coefficients of $z$ satisfy $\zeta_{\nu}=0$ for $m<\nu<k-m$. \end{theorem} \begin{proof} This follows from Proposition \ref{prop s} and Proposition \ref{prop4}. 
\end{proof} \begin{remark} For $m \le \left\lfloor\frac{k}{2}\right\rfloor$, note that $S_{m, 0}=4^{m-1}\prod_{j=2}^{m}\left(\sin^{2}\left(\frac{\pi}{k}\right)-\sin^{2}\left(\frac{j\pi}{k}\right)\right)$ is positive when $m$ is odd and negative when $m$ is even. So in the case where $m$ is even, Theorem \ref{thm discrete higher} gives the following upper bound: \begin{align*} & 2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right) \\ \le &\left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}-\widetilde{S}_{m, 0}^{-1}\sum_{l=1}^{m-1} S_{m, l}\left\|D^{l-1}\left(\boldsymbol{\kappa}+4 \sin^{2}\left(\frac{\pi}{k}\right) z\right)\right\|^{2}. \end{align*} In the case where $m$ is odd, Theorem \ref{thm discrete higher} gives the following lower bound: \begin{align*} & 2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right) \\ \ge &\left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}- \widetilde{S}_{m, 0}^{-1} \sum_{l=1}^{m-1} S_{m, l}\left\|D^{l-1}\left(\boldsymbol{\kappa}+4 \sin^{2}\left(\frac{\pi}{k}\right) z\right)\right\|^{2}. \end{align*} Here $S_{m, l}$ are defined in Proposition \ref{prop s} and $\widetilde S_{m, 0}=|S_{m, 0}|=4^{m-1} \prod_{j=2}^{m}\left(\sin^{2}\left(\frac{j\pi}{k}\right)-\sin^{2}\left(\frac{ \pi}{k}\right)\right)$. \end{remark} \begin{corollary} Let $m \le \left\lfloor\frac{k}{2} \right\rfloor$ be an even natural number. Suppose $P$ is a $k$-gon represented by $z$ whose centroid is $0$. Then \begin{equation*} \begin{split} & 2\cos^{2}\left(\frac{\pi}{k}\right)\left(L(P)^2-4 k\tan \left(\frac{\pi}{k}\right) F(P)\right) \\ \le & k \left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}- k\widetilde{S}_{m, 0}^{-1} \sum_{l=1}^{m-1} S_{m, l}\left\|D^{l-1}\left(\boldsymbol\kappa +4 \sin^{2}\left(\frac{\pi}{k}\right) z\right)\right\|^{2}. \end{split} \end{equation*} Here $\widetilde{S}_{m, 0}=\left|S_{m, 0}\right|=4^{m-1} \prod_{j=2}^{m}\left(\sin^{2}\left(\frac{j \pi}{k}\right)-\sin^{2}\left(\frac{\pi}{k}\right)\right)$. The equality holds if and only if $P$ is represented by $z=\zeta_\nu R_\nu$ for some $0<|\nu|\le m$. \end{corollary} \begin{proof} The inequality follows from Theorem \ref{thm discrete higher} and the Cauchy-Schwarz inequality $kS(P)\ge L(P)^2$. By the rigidity case of Theorem \ref{thm discrete higher}, the Fourier coefficients of $\dot z=\sum_{\nu=1}^{k-1}\xi_\nu R_\nu$ satisfy $\xi_{\nu}=0$ for $m<\nu<k-m$. By the equality case of the Cauchy-Schwarz inequality, $\left|\dot{z}_{\nu}\right|^{2}$ is a constant which is independent of $\nu$. Since $\left(R_{n}\right)_\nu=e^{\frac{2 \pi i n \nu}{k} }$, we have \begin{equation*} \left|\dot{z}_{\nu}\right|^{2}=\sum_{n=-2 m}^{2 m}\left(\sum_{n_{1}-n_{2}=n} \xi_{n_{1}} \overline {\xi}_{n_{2}} \right) e^{\frac{2 \pi i n \nu}{k}} =\sum_{n=-2m}^{2m} c_{n}\, e^{\frac{2 \pi i n \nu}{k}} =\text { constant } \end{equation*} for all $\nu$, where we have identified $\xi_{k-l}$ with $\xi_{-l}$. By the uniqueness of Fourier series, for $n\ne 0$, we have $c_n=\sum_{n_{1}-n_{2}=n} \xi_{n_{1}} \bar{\xi}_{n_{2}}=0$. By Lemma \ref{lem nonzero} below, at most one of the coefficients, say $\xi_\nu$, can be non-zero. As $\dot R_\nu=(\omega_\nu-1)R_\nu$, where $\omega_\nu=e^{\frac{2\pi i \nu}{k}}$, and $z$ has centroid zero, we conclude that $z=\frac{\xi_\nu}{\omega_\nu-1}R_\nu$.
\end{proof} \begin{lemma}\label{lem nonzero} Let $m\in \mathbb N$ and suppose $ a_{-m}, a_{-m+1}, \cdots, a_{-1}, a_{1}, \cdots, a_m $ are complex numbers such that for all $n\ne 0$, $c_n=\sum_{\substack {p-q=n \\ 0<|p|, |q|\le m}} a_p \overline {a_{q}}=0 $. Then at most one of the $a_p$ is non-zero. \end{lemma} \begin{proof} We prove it by induction on $m$. The case where $m=1$ is easy. We now fix a natural number $m$ and assume the induction hypothesis. As $c_{2m}=a_m\overline {a_{-m}}=0$, either $a_m=0$ or $a_{-m}=0$. By interchanging $a_{l}$ with $a_{-l}$ if necessary, we may assume $a_{-m}=0$. We can assume $a_{m}\ne0$, for otherwise the problem reduces to the $(m-1)$-case, and the result follows from the induction hypothesis. We claim that $a_m$ is the only nonzero coefficient. As $a_{-m}=0$, we have $c_{2m-1}=a_m\overline {a_{-m+1}}=0$. The assumption $a_m\ne0$ then implies $a_{-m+1}=0$. We can go on to consider $c_{2m-2}, c_{2m-3}, \cdots, c_{1}$ one by one to deduce that the remaining coefficients vanish: $a_{-m+2}=a_{-m+3}=\cdots=a_{m-1}=0$. This proves the claim. \end{proof} \begin{remark} Let us examine the inequalities in Theorem \ref{thm discrete higher} for $m=1$ and $2$. \begin{enumerate} \item As explained before, when $m=1$, the inequality becomes \begin{align*} 2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right)\ge\left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}. \end{align*} This is the discrete version of Chakerian's sharpened isoperimetric inequality. \item When $m=2$ and hence $k\ge 4$, the inequality becomes \begin{align*} &8\left(\sin^{2} \left(\frac{2 \pi}{k}\right)-\sin^{2} \left(\frac{\pi}{k}\right)\right) \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right)\\ \le& 4\left(\sin^{2}\left(\frac{2 \pi}{k}\right)-\sin^{2}\left(\frac{\pi}{k}\right)\right) \left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}+\left\| \boldsymbol{\kappa}+4 \sin^{2}\left(\frac{\pi}{k}\right) z \right\|^{2}. \end{align*} This is the discrete analogue of \eqref{KL}. By Proposition \ref{prop stab}, it can also be rearranged so that it gives a measure of the deficit of the discrete Chakerian's isoperimetric inequality: \begin{equation}\label{case m=2} \begin{split} & 2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right)-\left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2} \\ \le & \frac{1}{4 \sin^{2}\left(\frac{2 \pi}{k}\right)}\left[4 \sin^{2}\left(\frac{\pi}{k}\right)\left(\|\dot{z}\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\|z\|^{2}\right)+\left\|\boldsymbol{\kappa}+4 \sin^{2}\left(\frac{\pi}{k}\right) z\right\|^{2}\right]\\ = & \frac{1}{4 \sin^{2}\left(\frac{2 \pi}{k}\right)}\left[4 \sin^{2}\left(\frac{\pi}{k}\right) \left(2 \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right)-\left\|\boldsymbol{t}-2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}\right)\right. \\ &\left. +\left\|\boldsymbol\kappa+4 \sin^{2}\left(\frac{\pi}{k}\right) z\right\|^{2}\right].
\end{split} \end{equation} \item When $m=3$ and hence $k \ge 6$, the inequality becomes \begin{equation*} \begin{split} &4 \cos^{2}\left(\frac{\pi}{k}\right)\left(2 \cos^{2}\left(\frac{5 \pi}{k}\right)-\cos \left(\frac{6 \pi}{k}\right)-\cos \left(\frac{8 \pi}{k}\right)\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right) \\ \ge& 2\left(2 \cos^{2}\left(\frac{5 \pi}{k}\right)-\cos \left(\frac{6 \pi}{k}\right)-\cos \left(\frac{8 \pi}{k}\right)\right)\left\| \boldsymbol{t} -2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2} \\ &+4\left(1+2 \cos \left(\frac{2 \pi}{k}\right)\right) \sin^{2}\left(\frac{2 \pi}{k}\right)\left\|\boldsymbol{\kappa}+4 \sin^{2}\left(\frac{\pi}{k}\right) z\right\|^{2} \\ &-\left\| D\left(\boldsymbol{\kappa} +4 \sin^{2}\left(\frac{\pi}{k}\right) z\right) \right\|^{2}. \end{split} \end{equation*} It should now be clear that this can be rearranged such that it measures the stability of the deficit of the inequality \eqref{case m=2}, due to Proposition \ref{prop stab}: \begin{equation*} \begin{split} & 4\left(\sin^{2}\left(\frac{2 \pi}{k}\right)-\sin^{2}\left(\frac{\pi}{k}\right)\right)\left\|\boldsymbol t -2 i \sin \left(\frac{\pi}{k}\right) e^{i \frac{\pi}{k}} z\right\|^{2}+\left\| \boldsymbol\kappa +4 \sin^{2}\left(\frac{\pi}{k}\right) z\right\|^{2}\\ &- 8\left(\sin^{2}\left(\frac{2 \pi}{k}\right)-\sin^{2}\left(\frac{\pi}{k}\right)\right) \cos^{2}\left(\frac{\pi}{k}\right)\left(S(P)-4 \tan \left(\frac{\pi}{k}\right) F(P)\right) \\ \le& \frac{1}{4 \sin^{2}\left(\frac{3 \pi}{k}\right)}\left[-16\left(1+2 \cos \left(\frac{2 \pi}{ k }\right)\right) \sin^4 \left(\frac\pi k\right)\left(\|\dot{z}\|^{2}-4 \sin^{2}\left(\frac{\pi}{k}\right)\|z\|^{2}\right)\right. \\ &\left. -4 \left(\sin^2\left(\frac{2 \pi}{k}\right)-\sin^2\left(\frac{\pi}{k}\right)\right)\left\| \boldsymbol\kappa +4 \sin^{2}\left(\frac{\pi}{k}\right) z\right\|^{2}+\left\|D\left(\boldsymbol\kappa +4 \sin^{2}\left(\frac{\pi}{k}\right) z\right)\right\|^2\right]. \end{split} \end{equation*} \end{enumerate} \end{remark} \section{The smooth case}\label{sec smooth} We now give the result analogous to Theorem \ref{thm discrete higher} for a smooth curve. In some sense, it is easier because the geometric quantities are more straightforward to compute and the higher order Wirtinger inequalities are somewhat simpler. First we need the smooth version of Proposition \ref{prop s}. \begin{proposition}[\cite{kwong2021higher} Proposition 1]\label{prop higher} Let $m \in \mathbb N$ and $z: \mathbb R \rightarrow \mathbb C$ be a $2 \pi$-periodic function of class $C^{m}$ with zero mean. Then \begin{equation}\label{gen ineq} 0 \le \prod_{j=2}^{m}\left(1-j^{2}\right) \int_{0}^{2 \pi}\left(|\dot{z}|^{2}-|z|^{2}\right) d t+\sum_{l=1}^{m-1} s_{m, l} \int_{0}^{2 \pi}\left|z^{(l+1)}+z^{(l-1)}\right|^{2} d t. \end{equation} Here, the constants $s_{m, 1}, \cdots, s_{m, m-1}$ are the coefficients of the $(m-2)$-th degree polynomial $\mathrm {S}_{m}(t) \in \mathbb Z [t]$, defined by $ \mathrm{S}_{m}(t):=\frac{ \mathrm {P}_{m}(t)-\mathrm {P}_{m}(1)}{t-1}=\sum_{l=1}^{m-1} s_{m, l} t^{l-1}$ and $\mathrm{P}_{m}(t):=\prod_{j=2}^{m}\left(t-j^{2}\right)=\left(t-2^{2}\right) \cdots\left(t-m^{2}\right)$. The equality holds if and only if the function $z$ is of the form $$ z(t)=\sum_{1\le |n|\le m}a_{n} e^{int} $$ for some constants $a_{n} \in \mathbb C$.
\end{proposition} \begin{theorem}\label{smooth thm} Let $m\in \mathbb N$ and let $C$ be a simple closed $C^{m}$ curve in $\mathbb C$ with length $L$ and a unit speed counter-clockwise parametrization $z(s)$ with $s \in[0, L]$. Assume that $C =\partial \Omega$ is the boundary curve of a domain $\Omega \subset \mathbb C$ with area $F$, and let $\boldsymbol{n}$ denote the outward-pointing unit normal on $C$. Assume $C$ has centroid $0$. Then \begin{equation*} \begin{split} 0 \le & \sum_{l=1}^{m-1} s_{m, l}\left(\frac{L}{2 \pi}\right)^{2 l-1} \int_{C}\left|\left(\frac{d}{d s}\right)^{l-1}\left(z+\left(\frac{L}{2 \pi}\right)^{2} \boldsymbol{\kappa}\right)\right|^{2} d s \\ &-\frac{(-1)^{m}}{2}(m-1) !(m+1) !\left(\frac{1}{\pi}\left(L^{2}-4 \pi F\right)-\frac{2 \pi}{L} \int_{C}\left|z-\left(\frac{L}{2 \pi}\right) \boldsymbol{n}\right|^{2} d s\right). \end{split} \end{equation*} The equality holds if and only if $z(t)=be^{it}$ for some $b\in \mathbb C$. \end{theorem} \begin{proof} Using the change of variable $t=\frac{2\pi}{L}s$, we have $$ z(t)+\ddot{z}(t)=z+\left(\frac{L}{2 \pi}\right)^{2} \boldsymbol{\kappa}. $$ Therefore \begin{equation}\label{4} z^{(l+1)}(t)+z^{(l-1)}(t)=\left(\frac{L}{2 \pi}\right)^{l-1} \left(\frac{d}{d s}\right)^{l-1}\left(z+\left(\frac{L}{2 \pi}\right)^{2} \boldsymbol{\kappa}\right). \end{equation} It is easy to see that $$ \frac{L^{2}}{2 \pi}=\int_{0}^{2 \pi}|\dot{z}|^{2} d t $$ and $$ 2 F=\iint_{\Omega} \mathrm{div} \left(z\right)\, dxdy=-\int_{0}^{2 \pi} \mathrm{Im}(z \dot{\bar{z}}) d t. $$ So we can write $$ \frac{1}{\pi}\left(L^{2}-4 \pi F\right)=\int_{0}^{2 \pi}\left(|\dot{z}|^{2}+\mathrm{Im}(z \dot{\bar{z}})\right) d t. $$ We rewrite this integrand as $$ 2\left(|\dot{z}|^{2}+\mathrm{Im}(z \dot{\bar{z}})\right)=|z+i \dot{z}|^{2}+|\dot{z}|^{2}-|z|^{2}. $$ The first term on the right has integral \begin{equation*} \int_{0}^{2\pi}|z+i \dot{z}|^2dt=\frac{2 \pi}{L} \int_{C}\left|z-\left(\frac{L}{2 \pi}\right) \boldsymbol{n}\right|^{2} d s. \end{equation*} Therefore \begin{equation}\label{5} \int_{0}^{2\pi}\left(|\dot z|^{2}-|z|^{2}\right)dt =\frac{1}{\pi}\left(L^{2}-4 \pi F\right)-\frac{2 \pi}{L} \int_C\left|z-\left(\frac{L}{2 \pi}\right) \boldsymbol{n}\right|^{2}ds. \end{equation} Putting \eqref{4}, \eqref{5} into \eqref{gen ineq}, we have \begin{equation*} \begin{split} 0\le& \sum_{l=1}^{m-1} s_{m, l} \int_{0}^{2 \pi}\left|z^{(l+1)}+z^{(l-1)}\right|^{2} d t -\frac{(-1)^{m}}{2}(m-1) !(m+1) ! \int_{0}^{2 \pi}\left(|\dot{z}|^{2}-|z|^{2}\right) d t\\ =& \sum_{l=1}^{m-1} s_{m, l} \left(\frac{L}{2 \pi}\right)^{2l-1} \int_{C}\left|\left(\frac{d}{d s } \right)^{l-1}\left(z+\left(\frac{L}{2\pi}\right)^2\boldsymbol{\kappa}\right)\right|^{2} d s \\ &-\frac{(-1)^{m}}{2}(m-1) !(m+1) ! \left(\frac{1}{\pi}\left(L^{2}-4 \pi F\right)-\frac{2 \pi}{L} \int_{C}\left|z-\left(\frac{L}{2 \pi}\right) \boldsymbol{n}\right|^{2} d s\right). \end{split} \end{equation*} Suppose that the equality holds. By Proposition \ref{prop higher}, we have $$ \dot z(t)=\sum_{0<|n| \le m} a_{n} e^{i n t} $$ for some constants $a_n\in \mathbb C$. Also, since the parametrization has unit speed, $|\dot z(t)|^{2}=\mu$ is a constant. Then $$ \mu=\dot z(t) \overline{\dot z(t)}=\sum_{n=-2m}^{2m}\left(\sum_{p-q=n} a_{p} \overline{a_{q}}\right) e^{i n t}. $$ The uniqueness of Fourier series shows that, for all $0<|n| \le 2m$, the coefficients vanish: $$ \sum_{\substack{p-q=n \\ 1 \le |p|, |q| \le m}} a_{p} \overline{a_{q}}=0. $$ By Lemma \ref{lem nonzero}, at most one of the coefficients $a_n$ is nonzero, i.e. $\dot z(t)=a_n e^{int}$ for some $0<|n| \le m$.
As $z(t)$ has centroid $0$, we have $z(t)= \frac{a_n}{in}e^{int}$. Since $z(t)$ is assumed to be simple and is positively oriented, we deduce that $z(t)=b e^{it}$ for some $b\in \mathbb C$. \end{proof} \section{Higher order Chernoff inequality}\label{chernoff} We now prove a generalization of the Chernoff inequality \cite{chernoff1969area}, which states that for a closed convex curve $\gamma$ on $\mathbb R^{2}$ with area $F$ and width function $w(\theta)$, we have $$ F \le \frac{1}{2} \int_{0}^{\pi / 2} w(\theta) w\left(\theta+\frac{1}{2} \pi\right) d \theta. $$ The equality holds if and only if $\gamma$ is a circle. Ou and Pan \cite{ou2010some} proved the following generalized version of the Chernoff inequality: $$ F \le \frac{1}{k} \int_{0}^{\pi / k} w_{k}(\theta) w_{k}\left(\theta+\frac{1}{k} \pi\right) d \theta $$ where $w_{k}(\theta)=h(\theta)+h\left(\theta+\frac{2 \pi}{k}\right)+\cdots+h\left(\theta+\frac{2(k-1) \pi}{k}\right)$ and $h$ is the support function. The definition of $w_k$ suggests we consider the following operator: let $T_k$ be the transform defined on the space of real-valued functions on $\mathbb S^{1}$, given by \begin{align*} T_k[h](\theta):=\frac{1}{k}\sum_{m=1}^{k} h\left(\theta+\frac{(2m-1)\pi}{k} \right). \end{align*} If $h$ is the support function of a convex curve, then $T_k[h]$ is the support function of the convex curve obtained by averaging $h$ over $k$ directions. Now, let $\gamma$ be a simple closed curve parametrised by the normal angle $\theta$, and $h=h(\theta)$ be the support function of $\gamma$. Let $\displaystyle h=\sum_{n=-\infty}^{\infty}a_ne^{in\theta}$ be the Fourier series of $h$. Then the Fourier series of $\displaystyle T_k[h]$ becomes \begin{equation*} \begin{split} T_k[h](\theta)=\sum_{n=-\infty}^{\infty} a_{n} \frac{1}{k}\sum_{m=1}^{k}e^{ i n\left(\theta+\frac{(2 m-1)\pi}{k} \right)} =&\sum_{n=-\infty}^{\infty} a_{n} \frac{1}{k}\sum_{m=1}^{k}e^{ \frac{in(2 m-1)\pi}{k} }e^{in\theta}\\ =&\sum_{n=-\infty}^{\infty} a_{n} \frac{1}{k} \sum_{m=1}^{k} \cos \frac{n(2 m-1) \pi}{k}e^{in \theta}\\ =&\sum_{n=-\infty}^{\infty} a_{n} \beta_n e^{in \theta}, \end{split} \end{equation*} where $\displaystyle \beta_{n}=\frac{1}{k} \sum_{m=1}^{k} \cos \frac{n(2 m-1) \pi}{k}$; the imaginary parts drop out because the terms of the inner sum pair into complex conjugates. It is easy to see that $\displaystyle \beta_{n} \in\{ -1, 0, 1\} $, $\beta_0=1$ and $\beta_{1}=0$ if $k\ge 2$. Indeed, $\beta_n\ne 0$ if and only if $n$ is a multiple of $k$, in which case $\beta_n=(-1)^{n/k}$. On the other hand, consider the operator $A[h]=h+\ddot h$. The Fourier series of $A[h]$ is given by \begin{align*} A[h](\theta)=\sum_{n=-\infty}^{\infty} a_{n}(1-n^2) e^{i n \theta} =\sum_{n=-\infty}^{\infty} a_{n} \delta_{n} e^{i n \theta}, \end{align*} where $\delta_n=1-n^2$. Obviously, $\beta_n\ge \delta_n$ for all $n$. As shown in Section \ref{sec idea}, this shows that for $k\ge 2$ and $m\in \mathbb N$, we have \begin{align}\label{ineq gen chernoff} \int_{0}^{2\pi} h(\theta)\cdot(T_k-A)^m[h](\theta)d\theta\ge0. \end{align} Since $\beta_n>\delta_n$ for $|n|\ge 2$, it follows that the equality case holds if and only if $h=\sum_{n=-1}^{1}a_ne^{in\theta}$, with $a_{-n}=\overline{a_{n}}$. This, together with the formula $\gamma(\theta)=h(\theta) \boldsymbol n +\dot{h}(\theta) \boldsymbol t =\left(h(\theta)+i\dot{h}(\theta)\right) e^{i\theta}$, which for such $h$ gives $\gamma(\theta)=2\overline{a}_1+a_0e^{i\theta}$, shows that $\gamma$ is a circle. It remains to give a geometric interpretation to the inequality \eqref{ineq gen chernoff}.
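Before doing so, we note that \eqref{ineq gen chernoff} can also be checked numerically. The following is a minimal sketch (ours, not part of the argument) that evaluates the quadratic form on the Fourier side, using the fact established above that $T_k$ and $A$ act diagonally with multipliers $\beta_n$ and $\delta_n=1-n^2$; the coefficient array \texttt{a} is an arbitrary stand-in for the Fourier coefficients of a real function $h$.

\begin{verbatim}
import numpy as np

def beta(n, k):
    # Fourier multiplier of T_k: equals (-1)**(n/k) when k divides n, else 0.
    return (-1.0) ** (n // k) if n % k == 0 else 0.0

def chernoff_form(a, k, m):
    # (1/2pi) * integral of h * (T_k - A)^m [h], where a[n] is the coefficient
    # of e^{i n theta} for n >= 0 and a[-n] = conj(a[n]) is implied (h real).
    total = 0.0
    for n, an in enumerate(a):
        mult = (beta(n, k) - (1 - n ** 2)) ** m   # multiplier of |a_n|^2
        weight = 1.0 if n == 0 else 2.0           # modes n and -n pair up
        total += weight * mult * abs(an) ** 2
    return total

rng = np.random.default_rng(0)
a = rng.normal(size=8) + 1j * rng.normal(size=8)
a[0] = a[0].real                                  # h real => a_0 real
for k in (2, 3, 5):
    for m in (1, 2, 3):
        assert chernoff_form(a, k, m) >= 0.0
\end{verbatim}

Since $\beta_n-\delta_n\ge 0$ for every $n$, each summand above is non-negative, which is why the assertion never fails.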
Recall the definition $w_{k}(\theta)=h(\theta)+h\left(\theta+\frac{2 \pi}{k}\right)+\cdots+h\left(\theta+\frac{2(k-1) \pi}{k}\right)$. When $m=1$, we have (cf. \cite[Eqn. 3-2]{ou2010some}) \begin{equation}\label{ou} \begin{split} \frac{1}{k} \int_{0}^{\frac{\pi}{k}} w_{k}(\theta) w_{k}\left(\theta+\frac{\pi}{k}\right) d \theta =&\frac{1}{2 k} \sum_{m=1}^{k} \int_{0}^{2 \pi} h(\theta) h\left(\theta+\frac{(2 m-1) \pi}{k}\right) d \theta\\ =&\frac{1}{2} \int_{0}^{2 \pi} h(\theta) T_{k}[h](\theta) d \theta. \end{split} \end{equation} On the other hand, the area $F$ enclosed by $\gamma$ is given by (\cite[Eqn 2.4.27]{Groemer1996}) \begin{align*} F=\frac{1}{2} \int_{0}^{2 \pi}\left(h(\theta)^2-\dot h(\theta)^2\right) d \theta =\frac{1}{2} \int_{0}^{2 \pi}h(\theta)\cdot A[h](\theta)d\theta. \end{align*} Therefore when $m=1$, we recover the Ou-Pan generalized Chernoff inequality $$ F \le \frac{1}{k} \int_{0}^{\frac{\pi}{k}} w_{k}(\theta) w_{k}\left(\theta+\frac{\pi}{k} \right) d \theta. $$ Now we assume $m\ge2$. We can parametrise $\gamma$ by the normal angle $\theta$. In fact, it can be explicitly parametrised by \cite[p. 34]{ChouZhu2001} \begin{align*} \gamma(\theta)=h(\theta)\boldsymbol n +\dot h(\theta)\boldsymbol t, \end{align*} where $\boldsymbol n=(\cos \theta, \sin \theta)$ and $\boldsymbol t=(-\sin \theta, \cos \theta)$. The locus of curvature centers $\gamma_{(1)}$ of $\gamma$ is then defined to be $\gamma(\theta)-\rho (\theta) \boldsymbol n $, where $\rho$ is the radius of curvature. Since $\rho=h+\ddot h$, \begin{align*} \gamma_{(1)}(\theta)=\dot{h} \boldsymbol t-\ddot{h} \boldsymbol n. \end{align*} Of course, this curve can fail to be simple and convex, but we can still regard it as a smooth parametrised curve and compute its algebraic area using the formula $\mathrm{Area}=\frac{1}{2}\int xdy-ydx$. Differentiating $\gamma_{(1)}$ gives $\dot \gamma_{(1)}=-(\dot{h}+\dddot{h}) \boldsymbol n$, and hence we can fix $\boldsymbol t$ as a unit normal field of $\gamma_{(1)}$, and w.r.t. this choice of normal, the support function of $\gamma_{(1)}$ is then $\dot h$. Inductively, we can define the locus of curvature centers $\gamma_{(2)}$ of $\gamma_{(1)}$, and so on, and deduce that $\gamma_{(j)}$ has support function $h^{(j)}(\theta)$ w.r.t. the normal $ J^j \boldsymbol n $, where $J$ is the anti-clockwise rotation by $\frac{\pi}{2}$. Therefore the algebraic area of $\gamma_{(j)}$ is given by \begin{align}\label{alg area} F[\gamma_{(j)}]=\frac{1}{2}\int_{0}^{2\pi} ({h^{(j)}(\theta)}^2-{h^{(j+1)}(\theta)}^2)d\theta. \end{align} The left-hand side of \eqref{ineq gen chernoff} can be written as \begin{equation}\label{LHS} \begin{split} &\sum_{j=0}^{m}(-1)^{m-j} \binom{m }{j}\int_{0}^{2\pi}h \cdot A^{m-j} \left(T_{k}\right)^{j}[h]d\theta\\ =&(-1)^{m}\int_{0}^{2\pi}h \cdot A^{m} [h]d\theta+\int_{0}^{2 \pi} h \cdot\left(T_{k}\right)^{m}[h] d \theta+ \sum_{j=1}^{m-1}(-1)^{m-j} \binom{m }{j}\int_{0}^{2\pi}h \cdot A^{m-j} \left(T_{k}\right)^{j}[h]d\theta. \end{split} \end{equation} Using \eqref{alg area}, it is not hard to show by induction that \begin{equation}\label{higher alg area} \begin{split} \int_{0}^{2\pi}h\, A^{m}[h]\, d\theta =&\sum_{r=0}^{m-1}(-1)^{r}\binom{m-1}{r} \int_{0}^{2 \pi}\left(h^{(r)}(\theta)^{2}-h^{(r+1)}(\theta)^{2}\right) d \theta\\ =&2\sum_{r=0}^{m-1}(-1)^{r}\binom{m-1}{r} F\left[\gamma_{(r)}\right]. \end{split} \end{equation} It is also easy to see that $\left(T_k\right)^{l+2}=\left(T_k\right)^{l}$ if $l\ge 1$.
So for $j\ge 1$, the term $\left(T_{k}\right)^{j}[h]$ only depends on $j\pmod 2$. By abuse of notation, we denote by $T_k\gamma$ the convex curve whose support function is $T_k[h]$ and by $T_k^2\gamma$ the convex curve whose support function is $(T_k)^2[h]$. We now consider the term $\int_{0}^{2 \pi} h \cdot\left(T_{k}\right)^{m}[h] d \theta$ in \eqref{LHS}. If $m$ is odd, then \eqref{ou} gives \begin{align*} \int_{0}^{2 \pi} h(\theta)(T_{k})^{m}[h](\theta) d \theta =\int_{0}^{2 \pi} h(\theta) T_{k} [h](\theta) d \theta =\frac{2}{k} \int_{0}^{\frac{\pi}{k}} w_{k}(\theta) w_{k}\left(\theta+\frac{\pi}{k}\right) d \theta. \end{align*} When $m$ is even, \begin{align*} \int_{0}^{2 \pi} h(\theta)(T_{k})^{m}[h](\theta) d \theta =&\int_{0}^{2 \pi} h(\theta)(T_{k})^2[h](\theta) d \theta. \end{align*} Let us consider the Hermitian form on $C(\mathbb S^1, \mathbb C)$ defined by \begin{align*} I_1(\phi, \psi)=\int_{0}^{2\pi} \overline \psi \cdot (T_k)^2[\phi]d\theta =\int_{0}^{2\pi} \overline \psi \cdot\frac{1}{k} \sum_{j=0}^{k-1} \phi\left(\theta+\frac{2 j \pi}{k}\right) d\theta. \end{align*} We claim that $\{e^{in\theta}\}$ forms an orthogonal (but not necessarily orthonormal) Schauder basis for $I_1$. To see this, note that $I_1(e^{in\theta}, e^{im\theta})$ is the integral of \begin{align*} e^{-im\theta}\cdot \frac{1}{k} \sum_{j=0}^{k-1} e^{in\left(\theta+\frac{2 j \pi}{k}\right)} =e^{i(n-m)\theta}\cdot \frac{1}{k} \sum_{j=0}^{k-1} e^{ \frac{2i n j \pi}{k} }. \end{align*} If $\displaystyle k\nmid n$, then $\displaystyle \sum_{j=0}^{k-1} e^{\frac{2 i n j \pi}{k}}=0$. Otherwise, $\displaystyle k\mid n$ and the integrand is just $\displaystyle e^{i(n-m)\theta}$, which has integral $0$ over $[0, 2\pi]$ when $m\ne n$. So for $\displaystyle \phi=\sum_n a_n e^{in\theta}$, $\displaystyle I_1(\phi, \phi)=2\pi\sum_{n: k|n} |a_n|^2$. A similar calculation shows that the Hermitian form $\displaystyle I_2(\phi, \psi)=\int_{0}^{\frac{2\pi}{k}} \overline {\psi} \cdot (T_{k})^2 [\phi] d \theta$ satisfies the property that for $ \displaystyle \phi=\sum_n a_n e^{in\theta}$, $ \displaystyle I_2(\phi, \phi)=\frac{2\pi}{k} \sum_{n: k \mid n} |a_n|^2=\frac{1}{k}I_1(\phi, \phi)$. Therefore, when $m$ is even, $$ \int_{0}^{2 \pi} h(\theta)\left(T_{k}\right)^{m}[h](\theta) d \theta=\int_{0}^{2 \pi} h(\theta)\left(T_{k}\right)^{2}[h](\theta) d \theta = k \int_{0}^{\frac{2\pi}{k}} \left(T_k[h]\right)^2d\theta=\frac{1}{k}\int_{0}^{\frac{2\pi}{k}} w_k(\theta)^2d \theta. $$ Now, consider the last term in \eqref{LHS} \begin{equation*} \begin{split} &\sum_{j=1}^{m-1}(-1)^{m-j}\binom{m}{j} \int_{0}^{2 \pi} h \cdot A^{m-j}\left(T_{k}\right)^{j}[h]\,d\theta\\ =&(-1)^{m} \sum_{1\le j \le m -1\atop 2 \mid j}\binom{m}{j} \int_{0}^{2 \pi} h \cdot A^{m-j} T_{k}^{2}[h]\,d\theta -(-1)^{m} \sum_{1\le j \le m -1\atop 2 \nmid j}\binom{m}{j} \int_{0}^{2 \pi} h \cdot A^{m-j} T_{k}[h]\,d\theta. \end{split} \end{equation*} The term $\displaystyle \int_{0}^{2 \pi} h \cdot A^{m-j} T_{k}[h]d\theta$ has a geometric meaning similar to \eqref{higher alg area}. In fact, the same computation gives \begin{equation*} \begin{split} \int_{0}^{2 \pi} h \cdot A^{m-j} T_{k}[h] d \theta =& \sum_{r=0}^{m-1}(-1)^{r}\binom{m-1-j}{r} \int_{0}^{2 \pi}\left(h^{(r)}\left(T_{k}[h]\right)^{(r)}-h^{(r+1)}\left(T_{k}[h]\right)^{(r+1)}\right) d \theta\\ =& 2\sum_{r=0}^{m-1}(-1)^{r}\binom{m-1-j}{r}F[\gamma_{(r)}, (T_k\gamma)_{(r)}]. \end{split} \end{equation*} Here $F[\cdot, \cdot]$ is the algebraic mixed area (cf. \cite[Eqn 2.4.28]{Groemer1996}).
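Before combining these pieces, it may help to record the simplest nontrivial instance of \eqref{higher alg area} (a worked example added here for clarity; it follows directly from \eqref{alg area}): for $m=2$, $$ \int_{0}^{2\pi} h\, A^{2}[h]\, d\theta=\int_{0}^{2 \pi}\left(h^{2}-\dot{h}^{2}\right) d \theta-\int_{0}^{2 \pi}\left(\dot{h}^{2}-\ddot{h}^{2}\right) d \theta=2 F[\gamma]-2 F\left[\gamma_{(1)}\right], $$ so the quadratic form attached to $A^{2}$ measures the difference between the algebraic areas of $\gamma$ and of its locus of curvature centers.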
Combining the above, we have proved the following result. \begin{theorem}\label{Chernoff} Let $2\le k\in \mathbb N$, let $m\in \mathbb N$ and let $\gamma$ be a smooth closed convex curve on $\mathbb R^2$. When $m$ is odd, \begin{equation*} \begin{split} 0 \le&- \sum_{r=0}^{m-1}(-1)^{r}\binom{m-1}{r} F[\gamma_{(r)}] +\frac{1}{k} \int_{0}^{\frac{\pi}{k}} w_{k}(\theta) w_{k}\left(\theta+\frac{\pi}{k}\right) d \theta\\ &- \sum_{1\le j \le m-1 \atop 2 \mid j}\binom{m}{j} \sum_{r=0}^{m-1}(-1)^{r}\binom{m-1-j}{r} F\left[\gamma_{(r)}, \left(T_{k}^{2} \gamma\right)_{(r)}\right] \\ &+ \sum_{1\le j \le m -1\atop 2 \nmid j}\binom{m}{j} \sum_{r=0}^{m-1}(-1)^{r}\binom{m-1-j}{r} F\left[\gamma_{(r)}, \left(T_{k} \gamma\right)_{(r)}\right]. \end{split} \end{equation*} When $m$ is even, \begin{equation*} \begin{split} 0 \le& \sum_{r=0}^{m-1}\binom{m-1}{r}(-1)^{r} F[\gamma_{(r)}] +\frac{1}{2k} \int_{0}^{\frac{2 \pi}{k}} w_{k}(\theta)^{2} d \theta\\ &+ \sum_{1\le j \le m-1 \atop 2 \mid j}\binom{m}{j} \sum_{r=0}^{m-1}(-1)^{r}\binom{m-1-j}{r} F\left[\gamma_{(r)}, \left(T_{k}^{2} \gamma\right)_{(r)}\right] \\ &- \sum_{1\le j \le m -1\atop 2 \nmid j}\binom{m}{j} \sum_{r=0}^{m-1}(-1)^{r}\binom{m-1-j}{r} F\left[\gamma_{(r)}, \left(T_{k} \gamma\right)_{(r)}\right]. \end{split} \end{equation*} Here $w_{k}(\theta)=\sum_{j=0}^{k-1}h\left(\theta+\frac{2j\pi}{k} \right)$ is the generalized width, $h$ is the support function of $\gamma$, $F$ is the algebraic area or the algebraic mixed area, $(T_k)^j\gamma$ is the curve whose support function is $(T_k)^j[h]$, and $\beta_{(j)}$ is the $j$-th order locus of curvature centers of a curve $\beta$ as defined above. The equality holds if and only if $\gamma$ is a circle. \end{theorem}
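As a sanity check of the $m=1$ case of Theorem \ref{Chernoff} (the Ou-Pan inequality recovered above), the following short numerical sketch, which is ours and not part of the proof, evaluates both sides directly from a support function; the specific $h(\theta)=1+0.05\cos(3\theta)$ is an arbitrary convex example (so that $h+\ddot h>0$), not taken from the references.

\begin{verbatim}
import numpy as np

# Check F <= (1/k) * int_0^{pi/k} w_k(t) w_k(t + pi/k) dt  (the m = 1 case).
h  = lambda t: 1.0 + 0.05 * np.cos(3.0 * t)  # convex: h + h'' = 1 - 0.4 cos(3t) > 0
hp = lambda t: -0.15 * np.sin(3.0 * t)       # h'
t  = np.linspace(0.0, 2.0 * np.pi, 200001)

# Enclosed area from the support function: F = (1/2) int (h^2 - h'^2).
F = 0.5 * np.trapz(h(t) ** 2 - hp(t) ** 2, t)

for k in (2, 3, 4):
    w = lambda s: sum(h(s + 2.0 * j * np.pi / k) for j in range(k))  # w_k
    s = np.linspace(0.0, np.pi / k, 200001)
    rhs = np.trapz(w(s) * w(s + np.pi / k), s) / k
    assert F <= rhs + 1e-9, (k, F, rhs)
\end{verbatim}

For this $h$ one finds $F=0.99\pi$, while the right-hand side equals, e.g., $\pi$ for $k=2$, in agreement with the inequality.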
\section{Introduction} Ground-based astronomical observations at optical and near-infrared wavelengths suffer from wavelength-dependent absorption by molecules in Earth's atmosphere. These ``telluric'' absorption features, primarily due to H$_2$O, CO$_2$, O$_2$, and CH$_4$, may be particularly problematic for measurements where high precision is required, such as exoplanet transit photometry and stellar radial velocity measurements. Prominent bands of H$_2$O and O$_2$ at wavelengths less than 1 $\mu$m render portions of the optical spectrum virtually unusable due to the large optical depths of these lines, while the micro-telluric lines of H$_2$O, with depths less than 1$\%$, are problematic because they are so numerous. Absorption features due to water vapor can systematically bias radial velocity measurements \citep{cunha2014, fischer2016} and impose fundamental limits on differential photometry of cool stars \citep{blake2008,berta2012,baker2017}. For the case of H$_2$O, these problems are compounded by the fact that the quantity of water vapor above an observatory changes with time. The quantity of water vapor is usually reported as Precipitable Water Vapor (PWV), expressed in mm, which is the amount of rain that would result from all of the water vapor precipitating out of the atmosphere. There are several ways to measure PWV in real time. Narrow-band stellar photometry targeting specific wavelength regions containing water vapor lines can be used to determine the depths of the underlying telluric features, and therefore to measure PWV directly. This type of approach is effective at measuring PWV with an accuracy of a few percent \citep{brooks2007,stubbs2007, ting2013, baker2017}. In the 1990s, it was shown that signal timing information from a Global Positioning System (GPS) receiver can be used to measure PWV (e.g. \citealt{bevis1992}). Water vapor concentration affects the index of refraction of the atmosphere at GPS radio frequencies (e.g., 1.2 and 1.6 GHz), and therefore is a major source of variations in the signal travel time between satellites and a receiver on Earth's surface. This technique is widely used in the atmospheric sciences community\footnote{\url{http://www.suominet.ucar.edu/}.} to study the behavior of water vapor in Earth's atmosphere, and an astronomical application of GPS-based PWV measurements is described in \cite{blake2011}. The water vapor content of the atmosphere can also be actively probed using a microwave radiometer, as in \cite{querel2014}. Finally, the depths of telluric water features imprinted on astronomical spectra can be modeled to directly assess the PWV at the time of a given observation (e.g. \citealt{querel2011}). In \cite{blake2011} this technique was directly calibrated against contemporaneous GPS-based PWV measurements at Apache Point Observatory (APO), home to the Sloan Digital Sky Survey (SDSS). Upcoming surveys like the Large Synoptic Survey Telescope \citep{lsst2008} and spectroscopic instruments like NEID \citep{neid2016} will require PWV information in order to extract the most from their observations. While it is clear that PWV can change dramatically between nights and throughout the year, the short-term variability and the variability across the sky at a given time are both less well characterized. \cite{querel2014} found that PWV is very homogeneous across the sky at Paranal in Chile, an exceptionally dry site \citep[median $\sim$2.4 mm;][]{kerber2015}.
This implies that a single measurement of PWV toward zenith could provide sufficient information to help mitigate the impact of water vapor absorption on many types of astronomical measurements. We investigate the temporal and spatial variability of water vapor at a lower altitude site with conditions representative of those found at many observatories around the world. We use about 400,000 high-resolution, near-infrared ($H$-band) spectroscopic observations of telluric standard stars from the Apache Point Observatory Galactic Evolution Experiment (APOGEE; \citealt{majewski2017}) to quantify the temporal and spatial variability of PWV at APO (32\degr~46\arcmin~49\arcsec~N, 105\degr~49\arcmin~13\arcsec~W, 2788 m a.s.l.). APOGEE is a multi-object spectrometer that gathers spectra of up to 300 objects over an area of 3$^\circ$ in diameter simultaneously. The APOGEE observing strategy results in a data set that probes temporal variability of PWV on timescales as short as $\sim$8 minutes (500 s) and spatial variability across both the 3$^\circ$ diameter of an APOGEE field and wider regions of the sky. We find that the median PWV is 3.2 mm at APO. Large, short-timescale PWV variations are uncommon, and the typical variation in PWV at APO is less than 0.1 mm hr$^{-1}$. We find no evidence for strong PWV variations over degree scales and also find that measured PWV depends only weakly on altitude and azimuth for observations taken close in time. In Section 2 we describe the APOGEE data and our fits to the telluric standard spectra. In Section 3 we describe the statistical properties of our PWV measurements. In Section 4 we summarize the main conclusions from this work and discuss how these results might benefit upcoming surveys. \section{Data Collection and Reduction} \subsection{Telluric spectra in APOGEE DR 13} The APOGEE instrument on the Sloan Digital Sky Survey (SDSS) telescope at APO is capable of obtaining high-resolution ($R\approx22,500$) spectra of up to 300 objects simultaneously with fibers plugged into a plate covering an area approximately 3$^\circ$ in diameter. The spectra cover the $H$-band (1.52--1.7 $\mu$m) using a mosaic of three infrared detectors. The primary science goals of the APOGEE survey include large-scale studies of galactic kinematics and chemical composition. At infrared wavelengths, APOGEE is able to obtain high signal-to-noise ratio (SNR) spectra of giant stars in highly extincted regions of the galactic disk to large distances. However, this wavelength range suffers from substantial telluric absorption due to water vapor, CO$_2$, and CH$_4$, and therefore telluric standard stars were observed during every APOGEE observation to allow correction for these features. We analyzed data from APOGEE Data Release 13 (DR13 - \citealt{sdssdr13}), which includes 2349 ``visits'' of 436 distinct fields between 2011 and 2014. In a typical visit of a given field, 35 fibers were placed on telluric standards, mostly hot A stars, spread across the 3$^\circ$ field of view of each plate and spatially distributed as evenly as possible. The typical integration time of a visit is 4000 s (i.e., with eight 500 s exposures). On each observing night, up to eight fields were visited, yielding thousands of spectra tracing spatial and temporal variations of telluric spectral features. The reduction of the APOGEE data is described in \cite{nidever2015}.
Raw data from each 500 s exposure were reduced by the APOGEE Visit Data Reduction (APRED) pipeline\footnote{\url{http://www.sdss.org/dr13/irspec/apred/}}, which generated 300 1D spectra (i.e., one spectrum per fiber). These 1D spectra are dark-subtracted, flat-fielded, and matched with a wavelength solution, flux errors, and a bad-pixel mask. No further correction was done at this stage, and thus the 1D spectra (referred to as ``ap1D'' spectra hereafter) contain all sky emission and absorption features. In the next step, all 500 s exposures from one visit were stacked to form a final 1D spectrum for each fiber. Sky and telluric features were modeled and removed, and flux calibration was performed in this step. The best-fit telluric model (i.e., a normalized spectrum of the telluric features; referred to as an ``apVisit'' telluric spectrum hereafter) derived by the APRED pipeline was attached to each apVisit spectrum. Both ap1D and apVisit data are available through the SDSS Science Archive Server\footnote{\url{https://data.sdss.org/sas/dr13/apogee/}} (SAS). We identified telluric standard observations in the raw data using the $\rm{APOGEE\_TARGET2}$ field in the APOGEE targeting bitmask\footnote{\url{http://www.sdss.org/dr13/irspec/targets/}}. We collected all telluric spectra delivered during the three-year science operations of APOGEE, including 78,524 apVisit telluric spectra and 667,386 ap1D spectra. Using a threshold on the mean (uncalibrated) flux of 1000 ADU per pixel (which roughly corresponds to a SNR of 50) in the ap1D spectra, we selected a subset of ``good'' data that includes 44,240 apVisit and 353,920 ap1D spectra from 2,243 visits over 422 nights. \subsection{Spectral fitting} We focus our analysis on $\rm H_{2}O$ bands between 1.5146 $\micron$ and 1.5240 $\micron$ in the APOGEE spectral window. This wavelength range was chosen to contain only telluric water vapor features, avoiding low-level CO$_2$ and CH$_4$ features. To estimate the amount of water vapor in the atmosphere, we fit each (normalized) ap1D or apVisit telluric spectrum to a telluric spectral template whose optical depth is multiplied by a scale factor (SF) to account for variations in the water vapor optical depth \begin{equation} \ln F(\lambda) = \mathrm{SF}\cdot \ln \left[ T(\lambda)\otimes \mathrm{LSF} \right]. \label{eq:fit} \end{equation} Here, $F(\lambda)$ is the scaled template to be compared with the data, $T(\lambda)$ is the original template, LSF is the line spread function, and $\otimes$ indicates convolution. We used the TAPAS web interface\footnote{\url{http://ether.ipsl.jussieu.fr/tapas/}} to generate telluric absorption models appropriate for APO using an average latitude winter atmospheric model \citep{tapas2014}. The telluric template contains only water vapor. The resolution of the original TAPAS template is $10^{-6}$ $\micron$, so we resampled the template (after convolution) onto the APOGEE wavelength grid before calculating the SF. Note that the APOGEE LSF is wavelength- and fiber-dependent \citep{nidever2015}. Using a few prominent sky emission lines within the chosen range of wavelength, we find that the LSF is extremely stable during most visits. The relative change in the line profile (as measured by the FWHM) among eight consecutive 500 s exposures is of the order of $10^{-2}$, thus having negligible influence on the derived SF compared with other sources of uncertainty in the fit. For this reason, we applied the same LSF model (the one provided with each apVisit spectrum) to all ap1D spectra within a visit.
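The forward model of Equation \ref{eq:fit} can be sketched as follows (a simplified illustration, not the pipeline code): \texttt{wave\_hi} and \texttt{T\_hi} stand for the high-resolution TAPAS water vapor template, \texttt{wave\_apogee} for the APOGEE wavelength grid, and a single Gaussian kernel is used as a stand-in for the true wavelength- and fiber-dependent APOGEE LSF.

\begin{verbatim}
import numpy as np

def gaussian_lsf(width_pix, half_width=10):
    # Normalized Gaussian kernel as a simple stand-in for the APOGEE LSF.
    x = np.arange(-half_width, half_width + 1, dtype=float)
    kernel = np.exp(-0.5 * (x / width_pix) ** 2)
    return kernel / kernel.sum()

def telluric_model(sf, wave_hi, T_hi, lsf_kernel, wave_apogee):
    # ln F(lambda) = SF * ln[ T(lambda) convolved with the LSF ]
    T_conv = np.convolve(T_hi, lsf_kernel, mode="same")  # T (x) LSF
    T_grid = np.interp(wave_apogee, wave_hi, T_conv)     # resample to APOGEE grid
    return np.exp(sf * np.log(T_grid))                   # scale the optical depth
\end{verbatim}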
All ap1D spectra must be properly normalized to have a flat stellar continuum before calculating the SF using Equation \ref{eq:fit}. Finding the continuum is an iterative process based on median-filtering. Strong sky emission lines, telluric lines, and hydrogen lines of the star are removed in the first few iterations. Then, asymmetric sigma-clipping is applied to make the fitted continuum trace the upper envelope of the remaining pixels. We found that the continuum determined in this way (rather than via symmetric sigma-clipping) makes the SFs derived from ap1D spectra most consistent with those computed for apVisit telluric spectra, which have already been normalized in the APRED pipeline. An initial estimate of the SF for each ap1D or apVisit telluric spectrum was obtained by matching the total absorption by telluric lines in the observed spectrum (essentially the total equivalent width) to that of a scaled TAPAS template. Then, a brute force search for the best-fit SF was performed by calculating $\chi^2=\sum\left(\left[\mathrm{data}-F(\lambda)\right]/\sigma\right)^{2}$ over a very fine grid around the initial SF estimate. A parabola was fitted to the resulting $\chi^2$ curve to determine the SF that yielded the minimum $\chi^2$. In the search for the best-fit SF, only pixels within 5 pixels of a telluric line with normalized depth greater than 0.01 in the convolved telluric template (i.e., $F(\lambda)$ in Equation \ref{eq:fit}) were used for calculating $\chi^2$. Bad pixels flagged in $\rm{APOGEE\_PIXMASK}$ were also masked out\footnote{\url{http://www.sdss.org/dr13/algorithms/bitmasks/}}. Even under perfectly stable and homogeneous conditions, the water vapor optical depth is a function of the path length through Earth's atmosphere, which scales as $\sec(z)$, where $z$ is the zenith angle. To account for this effect, the best-fit SF for each spectrum is divided by airmass to create a quantity we refer to as the Reduced Scale Factor (RSF). \subsection{Calibrating RSF with GPS data} It has been demonstrated that a multi-band, geodetic-quality GPS receiver and a high-accuracy barometer can be combined to derive PWV \citep{bevis1992}. In North America, a network of GPS-based PWV monitors is maintained by UCAR (University Corporation for Atmospheric Research), with a station operated by UNAVCO located at APO. The PWV monitor at APO is equipped with a Trimble\textsuperscript{\textregistered} NetRS\textsuperscript{\textregistered} GPS receiver with a Trimble\textsuperscript{\textregistered} Choke Ring Antenna, while a Vaisala WXT510 weather transmitter is used to monitor the ambient temperature and pressure. Using SuomiNet\footnote{\url{http://www.suominet.ucar.edu. APO site ID: P027}}, we downloaded historical data recorded at the APO station. Except for a gap between 2013 January 1 and 2013 November 14, PWV was measured regularly at 30-minute intervals between 2011 and 2014. Figure \ref{fig:PWV_vs_RSF} shows that the RSF we derived is linearly correlated with the GPS data (Pearson correlation coefficient = 0.96, residual RMS = 0.84) \begin{equation} \mathrm{PWV~(mm)} = 9.35\cdot \mathrm{RSF}-0.51. \label{eq:gps} \end{equation} The accuracy of GPS-based zenith PWV measurements has been assessed by comparison to contemporaneous radiometer measurements. The uncertainty is typically 1 mm and is dominated by uncertainty in the temperature profile of Earth's atmosphere \citep{gpserror}. It appears that this level of uncertainty is consistent with the scatter observed in Figure \ref{fig:PWV_vs_RSF}.
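Taken together, the fitting and calibration steps above can be sketched as follows (ours, with hypothetical variable names and illustrative grid choices): \texttt{flux} and \texttt{err} are a normalized spectrum and its errors, \texttt{mask} selects the unflagged pixels near telluric lines deeper than 1\%, and \texttt{model} maps a trial SF to a template spectrum as in the previous sketch.

\begin{verbatim}
import numpy as np

def fit_scale_factor(flux, err, mask, sf_init, model, airmass):
    # Brute-force chi^2 search on a fine grid around the initial SF estimate.
    grid = np.linspace(0.8 * sf_init, 1.2 * sf_init, 201)
    chi2 = np.array([np.sum(((flux[mask] - model(sf)[mask]) / err[mask]) ** 2)
                     for sf in grid])
    # Parabola fitted to the chi^2 curve; its vertex gives the best-fit SF.
    c2, c1, _ = np.polyfit(grid, chi2, 2)
    sf_best = -c1 / (2.0 * c2)
    rsf = sf_best / airmass            # Reduced Scale Factor
    pwv = 9.35 * rsf - 0.51            # GPS calibration relation, in mm
    return sf_best, rsf, pwv
\end{verbatim}

The $\pm20\%$ grid width and the 201 grid points are illustrative assumptions, not the values used by the pipeline; the final line applies the GPS calibration of Equation \ref{eq:gps}.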
\subsection{Uncertainties and biases} Assuming that PWV is always uniform across the field of view of APOGEE, the relative scatter of PWV measured with different fibers can be used as an empirical estimate of the statistical errors on the SF fits. Distributions of the relative scatter of PWV for ap1D and apVisit telluric spectra are shown in Figures \ref{fig:ap1D_error_dist} and \ref{fig:apVisit_error_dist}. 32,961 apVisit and 257,864 ap1D spectra from 1,236 visits on 417 nights were used to generate the two plots (only visits with at least 10 fibers for which all ap1D spectra have SNRs greater than 50 are considered). In both cases, we find that the empirical SF error distribution has larger tails than a Gaussian distribution, but is well fit by a Moffat distribution. For ap1D data, the best-fitting Moffat profile yields a $\chi^2_{\nu}$ of 11.7 with degrees of freedom ($\nu$) of 19. For apVisit data, $\chi^2_{\nu}=22.9$ and $\nu=19$. Considering only the Gaussian core of these distributions, we find that the statistical error in our SF fits is $\pm0.11$ mm for PWV derived from the APOGEE ap1D spectra, or $\pm0.06$ mm for apVisit telluric spectra, both being substantially less than the GPS-based PWV uncertainty ($\sim$1 mm RMS; \citealt{gpserror}). Because the SNR of an apVisit telluric spectrum is generally 3 to 4 times higher than that of the corresponding ap1D spectra, the lower SF fit error for apVisit data is as expected. We also carried out Monte Carlo simulations to estimate the impact of flux errors on the SF uncertainties. We began with a simulated spectrum stacked from 2000 high-SNR (SNR$>$400) ap1D spectra. Gaussian noise (with a varying amplitude to simulate a varying SNR) was added to the spectrum, which was then used as the input for our fitting procedures to derive the SF. For each SNR setting, we ran the simulation 200 times. We found that the statistical uncertainty in PWV measured with the simulated ap1D spectra is $\pm$0.1 mm for SNR = 100 (i.e., approximately the median SNR of the data set), which is in good agreement with the empirical estimate. We find consistent results when comparing PWV estimates from the apVisit telluric spectra to those derived from the ap1D spectra from which the apVisit spectra are formed. Plotted in the left panel of Figure \ref{fig:ap1D_vs_apVisit} is the ratio of the mean PWV derived from eight ap1D spectra of a given fiber and visit to the PWV derived from the corresponding apVisit telluric spectrum. The distribution of this ratio narrowly peaks at unity, with 80\% of measurements falling between 0.9 and 1.1. There are, however, some biases noticeable at very low (i.e., PWV $<$ 1 mm) or very high (i.e., PWV $>$ 10 mm) PWV values where our fits to the ap1D spectra may overestimate or underestimate the PWV relative to the apVisit values. A closer inspection of individual ap1D spectra taken at very low or very high PWV suggests that these biases may be introduced by our continuum fitting process. These instances only represent a small portion (10.4\%) of the entire data set. Nevertheless, since ap1D spectra from the same visit and the same fiber are similar to each other (in terms of their fluxes or SNRs), even though their mean PWV might be over- or under-estimated, relative changes in the PWV between individual exposures within a visit should still be accurate and reliable.
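The Monte Carlo test described earlier in this subsection can be sketched as follows (again ours, with hypothetical names): \texttt{stacked\_flux} stands for the high-SNR stacked spectrum, and \texttt{fit\_scale\_factor} is the fitting routine sketched in the previous section.

\begin{verbatim}
import numpy as np

def pwv_scatter(stacked_flux, mask, model, airmass, snr, n_trials=200):
    # Inject Gaussian noise at a target SNR and measure the PWV scatter.
    rng = np.random.default_rng(1)
    sigma = stacked_flux / snr               # per-pixel noise for this SNR
    pwvs = []
    for _ in range(n_trials):
        noisy = stacked_flux + rng.normal(scale=sigma)
        _, _, pwv = fit_scale_factor(noisy, sigma, mask, 1.0, model, airmass)
        pwvs.append(pwv)
    return np.std(pwvs)   # ~0.1 mm at SNR = 100, per the text above
\end{verbatim}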
Because we will primarily use apVisit data to probe telluric variations between visits or nights, whereas ap1D data will mainly be used to study short-timescale (i.e., within a visit) variations, the small biases corresponding to unusual observing conditions seen in Figure \ref{fig:ap1D_vs_apVisit} affect our subsequent analyses negligibly. We note that the APRED pipeline also derived scale factors for ap1D spectra, and their values (and reduced ap1D spectra) can be found in APOGEE apCframe files. However, we chose to re-fit all 1D spectra instead of using the APRED results because it allowed us to examine the fitting process and performance (e.g., the goodness and robustness of the fit) more carefully. It also makes it more convenient to expand our study to include different telluric models and/or more molecules in the future. We cross-checked the APRED scale factors against ours and found that they are consistent. \section{Analysis} \subsection{Long-term PWV Variations} We can investigate the long-term behavior of PWV at APO by using per-visit estimates that correspond to the mean PWV derived from all telluric spectra in a single epoch, which typically lasted for $\sim$4,000 s. Seasonal variations in PWV are clearly seen in our data (Figure \ref{fig:pwv_seasonal}). The average PWV observed between June and September is a few times higher than that in the winter. PWV shows a steep rise between May and June, followed by a rapid decrease between September and October. We find that the distribution of PWV (Figure \ref{fig:pwv_histogram}) has a long tail, which is almost entirely due to observations conducted between June and September. Empirically, we find that the distribution of PWV is well matched by a (truncated) Lorentz distribution ($\chi^2_{\nu}=59.4$, $\nu=27$) with location parameter 2.8 mm and scale parameter 1.5. While we do not assign physical significance to this parameterization of the PWV distribution, we hope that it might be useful for future efforts to simulate the impact of PWV on astronomical surveys. Taking into account all data, the mean (median) PWV at APO is 3.8 (3.2) mm, with PWV lower than 10 mm for 92\% of epochs. Excluding data taken between June and September, the mean (median) PWV reduces to 2.9 (2.8) mm. \subsection{Short-term PWV Variations} By considering the individual ap1D spectra taken over the course of a single visit, as well as visits to different fields throughout a single night, the APOGEE data enable us to investigate the variation of PWV over a range of timescales from minutes to hours. The PWV variation (quantified by the standard deviation $\sigma_{\rm PWV}$ and the peak-to-valley spread $\Delta$ PWV) derived from consecutive ap1D spectra of the same star in a given visit, which is typically 4000 s in total duration, is plotted in Figure \ref{fig:ap1d_variation}. The distribution of $\sigma_{\rm PWV}$ peaks around 0.1--0.15 mm, consistent with the empirical PWV fit uncertainty shown in Figure \ref{fig:ap1D_error_dist}, though with a long tail. However, our analysis shows that the distribution of $\Delta$ PWV has a longer tail than expected from the empirical error distribution shown in Figure \ref{fig:ap1D_error_dist}, indicating that low-level PWV variations on hour timescales are present, though typically with amplitudes less than our measurement precision.
We attempted to simulate the distribution of $\Delta$ PWV shown in the bottom panel of Figure \ref{fig:ap1d_variation} as the convolution of the empirical ap1D PWV error distribution and a small linear trend in PWV during each visit with slope drawn from a Gaussian distribution. We found that the best fit PWV slope distribution was $N[0, 0.06]$, where the standard deviation of the Gaussian describes the slope of the PWV change in units of mm per 500 s exposure. We conclude that the typical variation in PWV at APO over the course of an hour is small compared to our measurement precision, less than 0.1 mm hr$^{-1}$. However, the tail of the distribution shown in the bottom panel of Figure \ref{fig:ap1d_variation} indicates that $\Delta$ PWV $>$ 1 mm in an hour can occur (in approximately 3.5\% of all visits). More statistics of $\sigma_{\rm PWV}$ and $\Delta$ PWV based on ap1D data are presented in Table \ref{tab:ap1d_statistics}. Similarly, PWV variations on timescales from hours up to a night can be characterized by the apVisit data (Figure \ref{fig:apVisit_variation}). The distributions of $\sigma_{\rm PWV}$ and $\Delta$ PWV (of different epochs in a single night) both have long, slowly decreasing tails. We find that PWV is still generally quite stable on this timescale, with $\sigma_{\rm PWV}<0.3$ mm on 70\% of nights, or $\Delta$ PWV $<$ 1 mm on 65\% of nights. PWV variations greater than 2 mm occur during approximately 7\% of nights. More statistics of $\sigma_{\rm PWV}$ and $\Delta$ PWV based on apVisit data can be found in Table \ref{tab:apVisit_statistics}. On a typical night, up to eight APOGEE fields were visited. Using these fields, which were temporally and spatially separated from each other, we can define a number of field-pairs, each of which tells us how PWV changes over a certain time interval and angular distance. In Figure \ref{fig:temporal_variations}, the 90-percent spread (peak to valley) of PWV between 4971 field-pairs is plotted. We find that large temporal variations in PWV are possible within a night, and that the likelihood of large relative PWV variations between observations increases with the time separation between them. In Figure \ref{fig:spatial_variations}, we investigate the spatial variability by focusing on situations where APOGEE switched between two plates providing PWV measurements separated by up to 70$^\circ$ on the sky, but less than 1.5 hr in time (i.e., slightly longer than one typical visit). Using 1503 qualifying field-pairs, we find that the spatial variation of PWV is very small, typically less than 0.5 mm (90-percent, peak to valley). Using the same set of field-pairs, we plot in Figure \ref{fig:alt_variations} the variability of PWV as a function of the change in elevation, and we find a similar result: the variation of PWV is very small, typically less than 0.5 mm, between consecutive observations at different elevations. Using microwave radiometry, \citet{querel2014} reported spatial variation of PWV across the sky down to 27\fdg5 elevation with a median variation of 0.32 mm (peak to valley) or 0.07 mm (RMS) at Paranal, Chile. Our results suggest that at APO, a lower altitude site with conditions representative of those found at many observatories around the world, the water vapor behaves similarly. \section{Discussion and Conclusions} Using $\sim$400,000 high-resolution, near-infrared ($H$-band) spectroscopic observations of telluric standard stars from the APOGEE survey, we quantify the temporal and spatial variability of PWV at APO.
We convert our measurements of the depths of water vapor absorption lines around 1.52~$\mu$m to PWV using simultaneous measurements from a GPS-based PWV monitor located at APO. Using simultaneous measurements over the 3$^\circ$ APOGEE FOV, we estimate that our typical statistical error on PWV is $\pm0.11$ mm, significantly more precise than the GPS-based estimates (1 mm RMS). Given the nature of the APOGEE survey, we are able to probe PWV variation on a wide range of temporal and spatial scales. We find that variations in PWV on timescales shorter than 1 hour are typically small, below our measurement precision. The GPS-based PWV monitoring station at APO provides measurements at 30-minute cadence, so our measurements probe a shorter-timescale regime at higher precision. We investigate the spatial dependence of PWV by considering APOGEE observations obtained close in time but separated by a large angle on the sky. We find that the angular dependence (as well as the elevation dependence) of PWV is very small, typically less than 0.5 mm (90-percent, peak to valley) even for sight lines separated by up to 70$^\circ$. Using microwave radiometry, \citet{querel2014} found that the spatial dependence of PWV across the sky at a given epoch is surprisingly small at Paranal in Chile, with peak-to-valley variations typically less than 0.5 mm. Our results suggest that at APO, a lower altitude site with conditions representative of those found at many observatories around the world, the water vapor is also spatially homogeneous. Our results are useful for planning future efforts to monitor PWV as part of a wide range of astronomical surveys. For example, \cite{baker2017} calculated that high-precision differential photometry of cool stars for transiting exoplanet studies may require correcting for the impact of differential extinction by Earth's atmosphere. These authors found that if PWV is known well, to better than 1 mm, then photometric corrections can be calculated directly using atmospheric models. Upcoming extreme precision radial velocity surveys will benefit from contemporaneous measurements of water vapor optical depth as an input to models designed to minimize the impact of ``micro-telluric'' lines \citep{cunha2014}. Our analyses indicate that high-cadence PWV measurements made along a single sight line (toward zenith, for example) are likely sufficient for a broad range of astronomical applications. \acknowledgments We thank the anonymous referee for helpful comments that helped to improve this manuscript. This work was performed, in part, by S.H. under contract with the Jet Propulsion Laboratory (JPL) funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute. The GPS data used in the paper are based upon work supported by the National Science Foundation under Grant No. 0313588. {\it Facilities:} \facility{Sloan}. \bibliographystyle{apj}
\section{Cosmology and ``cosmo-vision''} \label{sec:cosmology} An impressive body of experimental and observational facts, models, conjectures and hypotheses gives shape to our current cosmo-vision -- ``Weltanschauung''. This evidence arises from a wide variety of sources including astronomy, physics, chemistry, geology, paleontology, biology, genetics, archeology and history, providing a most exciting and vivid picture of humankind and the Universe's evolution. Most likely, this laborious work of gathering ideas and observations supporting this cosmo-vision, which took various generations of researchers many years and will likely continue for many more in the foreseeable future, is one of the highest achievements of human endeavour. It allows for the understanding of the articulation of a variety of ``origins'', from the origin of the Universe to the emergence of life on Earth and its subsequent evolution, to the first human social formations (see \cite{Bertolami06a} for a discussion). Studies of the Cosmic Microwave Background Radiation (CMBR) and theoretical cosmology allow estimating the age of the Universe as about 13.7 thousand million years (Gys). The CMBR corresponds to the surface of last scattering, when the Universe turned transparent to electromagnetic radiation (and the approximate time of transition from the early dominance of the energy density of radiation to the dominance of matter), about 370 thousand years after the Big Bang. The first galaxies and stars were formed about 10 Gys ago. Radiometric dating allows estimating the age of the Earth and the Solar System as about 4.5 Gys. Evidence arising from dating of stromatolite fossils suggests that life on Earth appeared 3.6 Gys ago. The first macroscopic organisms appeared about 700 million years ago. The tectonic sea-floor spreading which resulted in the Atlantic ocean took place about 100 million years ago. Our primate ancestors first walked on two legs about 3 million years ago. The first human socio-cultural formations appeared 40 thousand years ago, and the first human artistic imagery, impressive cave and rock paintings and rock art found in Europe and Australia, dates back 15 to 45 thousand years. It is quite evident that technological developments have also shaped and conditioned the evolution of humankind, most notably from the XIX century onwards. Most likely, this development will be even more conspicuous in the future. Nevertheless, however special it might be (see contributions by Joel Primack and Nancy Abrams), the impact of the discovery of our modest standing in the Universe is much less apparent in our culture. Still, strong arguments can be put forward that science, and cosmology in particular, do permeate our culture. This can be observed in contemporary art, science fiction, cinema\footnote{The scene of a boy in one of Woody Allen's films refusing to eat because the Universe was expanding is one that remains quite vivid in my memory.}, music and so on. A ``visual poetry'' inspired by a cosmic view can be recognized in many ancient cultures (see the intervention of Chanda Carey). Literature, especially contemporary literature, is receptive to the appeal of cosmology. I will concentrate on some examples I find particularly representative. Pessimism about our cosmic condition and about the emergence of life can be found in the writings of important authors.
In one of his well-known essays, Bertrand Russell (1872 -- 1970) is quite emphatic about the influence of the knowledge of catastrophic cosmic events, and of the death sentence passed on the Universe and on human achievements by the Second Law of Thermodynamics \citep{Russell}: \noindent {\it `` ... all the inspiration, all the noonday brightness of the human genius are destined to extinction in the vast death of the solar system, and the whole temper of Man's achievement must inevitably be buried beneath the debris of a universe.''} Rather against the current anthropic reasoning, which argues in favour of the near inevitability of the emergence of life in the Universe in light of examples like the recent discovery of a profusion of extra-solar planets\footnote{An infinite number of worlds was defended by Giordano Bruno (1548 -- 1600) as the only way for God to exert his omnipotence \citep{Bruno}. For this heretical view he was burnt at the stake at the Market of Flowers in Rome in 1600.}, the Nobel laureate French biologist Jacques Monod (1910 -- 1976) expresses his utter pessimism \citep{Monod}: \noindent {\it ``... The Universe was not pregnant with life nor the biosphere with man. Our number came up in the Monte Carlo game ...} \noindent {\it then man must at last wake out of his millenary dream, and in doing so wake to his total solitude, his fundamental isolation ...} \noindent {\it Man knows at last that he is alone in the unfeeling immensity of the universe, out of which he has emerged only by chance.''} In a contrasting tone, no author, I find, has discussed the influence of the physical world in as optimistic and analytic a way as the Italian author Italo Calvino (1923 -- 1985). The character in question is Mr. Palomar\footnote{Other writings of Calvino on the Universe include ``Le cosmicomiche'' (1965), ``Ti con zero'' (1967) and ``La memoria del mondo e altre storie cosmicomiche'' (1968).}, named after the mountain where the famous Hale telescope in California enabled remarkable discoveries about the Universe from the late 1940s until the 1980s. Mr. Palomar, in a broad sense the alter-ego of the Italian author himself, appeared for the first time in 1975 on the pages of ``Il Corriere della Sera'', and then somewhat regularly until appearing as the central figure of a book simply entitled ``Palomar'' \citep{Calvino}. Mr. Palomar's thoughts and reflections are quite original and illuminating, arising through observing the natural world (physical and biological), as well as when sensing the sociological difficulties of our day-to-day life. Actually, the way he concludes that human relations should necessarily mirror the Universe in order to improve is rather unique. In the chapter ``Palomar sulla spiaggia'' and, in particular, in the subsection ``Lettura di un'onda'' (Palomar on the beach -- The reading of a wave), the character discovers, for instance, that the observation of the waves in the sea not only induces a peaceful and inspirational state of mind, but also holds the key to capturing the complexity of the world, reducing it to its most elementary mechanisms. \noindent {\it `` ... One cannot observe a wave without taking into account the complex aspects (velocity, shape, direction) that conspire to form it. 
The factors are always changing, so that every wave is different from any other, but it is also true that they are equal to each other ...''} In the chapter Palomar in the garden (``Palomar in giardino''), and in the subsection on the love making of the tortoises (``Gli amori delle tartarughe''), the character reflects on the extraordinary forces of biological attraction and the complication they represent for certain species, including ours. In Palomar looks at the sky (``Palomar guarda il cielo''), the character draws his conclusions when observing the moon in the afternoon, the motion of the planets and the shining of the stars (``Luna di pomeriggio'', ``L'occhio e i pianeti'', ``La contemplazione delle stelle''). These conclusions are not always correct from the physics point of view, but are nevertheless original and insightful. Actually, he remarks that the contemplation of the stars requires a great effort, as one should be properly equipped with a telescope, a chart of the constellations, a lamp, etc. He refers to the Milky Way as ``the terrible shining silvered cloud''. His disquiet about the distances separating the objects in the sky is evident, as they are beyond our understanding. Moreover, he claims that the observation of the stars induces unstable and contradictory feelings, as it suggests a too complex relationship between harmony and evolution. And rather remarkably: \noindent {\it `` The uncertainty about the distance of the luminous bodies leads one to trust only the dark! What could be more stable than nothing?''} I think that Calvino would be quite pleased to hear that recent discoveries suggest that the real dynamics of the cosmos is actually ruled by dark entities: dark energy, on the largest scales, and dark matter, on galactic and galaxy-cluster scales. The simplest candidate for dark energy, and for some the most natural, is actually the vacuum energy density, ``nothing'', if one takes the ``void'' to mean the absence of matter that manifests itself in the electromagnetic spectrum or that can be observed through, for instance, neutrino detectors. In the chapter The Universe as a mirror (``L'Universo come specchio''), Palomar calls for a ``cosmological thinking'' in order to improve our lives. His point is that his suffering, when weighing the difficulties in his relationships with others, would ease if his relationship with the Universe were closer. He argues that, despite the infinite combinations, permutations and chains of consequences, all events in the Universe remain, and that this should be the basic underlying feature of human relationships! Palomar goes as far as to quote, as an example, the explosion of a supernova in the Magellanic Cloud as an event that took place long ago, but that is still there, to be seen, admired and discussed. One can see that as an incredible premonition of the SN 1987A event, precisely in the Magellanic Cloud! Let us now turn to another author, the Argentinian Ernesto S\'abato (1911). The expansion of the Universe is one of the central issues discussed in ``Uno y el Universo'' (One and the Universe) \citep{Sabato}. Being a physicist by training, he could not fail to be impressed by the discovery of the expansion of the Universe. 
He was aware of the pioneering work of Einstein, who in 1917 defended the idea of a static Universe, and of his Dutch colleague Willem de Sitter, who in the same year showed that a Universe dominated by the cosmological term introduced by Einstein to keep the Universe static would actually not do the job! He was also aware of the 1922 evolving solution of the Russian engineer Alexander Friedmann and of the subsequent discussion and solutions by the Belgian priest Georges Lema\^\i tre from 1924 onwards\footnote{See \cite{Lemaitre} for a thorough discussion of the historical developments which led to the idea of the ``day without yesterday'', the Big Bang.}. Suspicious of purely theoretical constructions to explain the expansion of the Universe, as suggested for instance by the British astronomer Arthur Eddington \citep{Eddington}, he also expresses reservations about purely empirical evidence. These concerns are ingeniously exemplified through an analogy with ichthyology. The example goes as follows: through the repeated process of collecting fish with a net whose mesh has a spacing of 5 cm, an ichthyologist acquires knowledge that he/she expresses in terms of two laws: \noindent 1) There are no fish smaller than 5 cm long. \noindent 2) All fish have gills. \noindent From a strictly scientific point of view, any natural scientist would argue that fish belong to the physical world, that a fellow ichthyologist is a well-intentioned and competent scientist, and that the net is the cognition apparatus. However, from the point of view of a skeptic, the first law is just a consequence of the net employed, and hence the validity of the second law might be in question. To this criticism, a hard-core ichthyologist would counter-argue that fish that cannot be caught with the available net are beyond ichthyology's knowledge. They belong to metaphysics. Science is built upon observable entities. Of course, on purely epistemological terms one might argue that the first law could be deduced through the examination of the net, without the need for any empirical work; moreover, by the same line of argument, the second law may also fail, as one cannot fish in all waters. However, it is clear that there is a fundamental difference between physics and ichthyology. In the former, it seems possible to acquire knowledge through purely theoretical epistemological methods. Indeed, special and general relativity are above all intellectual constructions. The same can be said about quantum mechanics and, in particular, about its most fundamental characteristic feature, the Uncertainty Principle. As already mentioned, modern cosmology is a brainchild of general relativity, and recent developments such as inflation and the imprint it leaves on the cosmic microwave background radiation were essentially driven by theoretical problems, both in theoretical cosmology (the horizon and flatness problems, and the origin of structure) and in high energy physics (the overabundance of magnetic monopoles). The attempt to unify all interactions of nature in a single and encompassing scheme is a purely theoretical programme. Its most developed proposal, superstring theory/M-theory, does bring about, as any original theoretical construction, a fairly new view of the Universe. Actually, it seems to suggest a multiverse (see \cite{BPol}, \cite{Susskind}, \cite{Bertolami06b}, \cite{Bertolami08} for discussions). 
However, reality always has the final word, and it is quite exciting when surprising or unexpected possibilities emerge from the observations. The recent discovery of the current accelerated expansion of the Universe falls precisely into this category. \section{The Universe as the framework for literature} For many authors, the Universe, with its laws and dynamics, is an active framework for literary expression. Furthermore, attempts to understand how the Universe works are seen by some authors as guidelines for ethics. For instance, in ``O Homem Duplicado'' (2002) (The Duplicated Man), the Nobel laureate Portuguese author Jos\'e Saramago (1922) asserts the significance of the literary work based on the existence of a cosmic equilibrium \citep{Saramago}: \noindent {\it ``... the conventional tradition of the romance is not, after all, just a somewhat wasted descriptive attempt due to the scarcity of imagination, but actually a literary result of the majestic cosmic equilibrium, given that the universe is, since its origins, a system without any organizational intelligence, but one that had enough time to learn with the infinite multiplication of its own experiences, so as to abundantly demonstrate that the performance of life is an infinite machinery of compensation, within which any delay of a minute, an hour, a century is irrelevant.''} The failure to extract a clear-cut moral sense from descriptions of the origin of the cosmos is attributed to the lack of consensus around any particular cosmogony \citep{Saramago}: \noindent {\it `` ... It leads one to think that, as all cosmogonies invented since the birth of the word have failed so miserably, this does not bode well in what concerns their implications for our behaviour.''} It is interesting to speculate whether Saramago's opinion would change if he were introduced to the most recent developments in cosmology and to how observational discoveries can be harmonized in the context of the Big Bang model. This author suspects not significantly. Fernando Pessoa (1888 -- 1935) was the dominant figure of Portuguese literature in the first half of the XX century. Multiple literary personas manifest themselves as ``heter\'onimos'': Fernando Pessoa, \'Alvaro de Campos, Ricardo Reis, Alberto Caeiro, Bernardo Soares, Bar\~ao de Teive, Alexander Search, etc. (19 of them, actually), through fairly distinct styles \citep{Pessoa}. Beyond doubt, a unique example in world literature. His work was only partially published during his lifetime. This rather singular situation has given rise to a great deal of posthumous publications and quite often to the discovery of unknown poems and sometimes even whole manuscripts, even though, most often, not completely finished ones. Rather recently, a new poem by Alberto Caeiro, the naive and symbolic poet, was found. I present (and translate) one transcription of this poem, which can be regarded as an ``ode'' to the Big Bang: \noindent {\it ``I like the sky because I believe it is finite. \noindent How could something that has neither a beginning nor an end have anything to do with me? \noindent I do not believe in infinity, I do not believe in eternity. \noindent I believe that space starts somewhere and ends somewhere. \noindent Beyond and before that there is absolutely nothing. \noindent I believe that time has a beginning and an end. \noindent Before and after that there was no time. \noindent Why should any of this be false? 
It is false to talk about infinities, \noindent As if we knew what they are and could understand them. \noindent No: everything is a finite quantity of things. \noindent All is well defined, all has limits, all is made up of things.''} \section{A cosmic inspired ethics?} \label{sec:conclusions} In ancient cultures, the historical development of a civilization was regarded as a continuation into the human sphere of a cosmogony which took place in the natural world. The fact that the latter occurred through a divine intervention would automatically associate it with a well-defined set of religious and ethical values. Cosmology and religion were once quite intertwined. This is evident in the context of the great religions, and this connection can also be found in many other cultures. Let me illustrate this relationship through an example based on a passage of the cosmology of the Mande peoples \citep{Mande}, an ethnic group of West Africa. Speakers of the Mande languages are found in Gambia, Guinea, Guinea-Bissau, Senegal, Mali, Sierra Leone, Liberia, Burkina Faso, Ivory Coast and the northern half of Ghana: \noindent {\it ``When the Everlasting addressed man, He taught him the law by which all elements of the Cosmos were formed and continue to exist. He made man Guardian and Governor of His universe and charged him with the supervision and maintenance of universal Harmony. That is why man bears a heavy responsibility.''} In my view, the key words in this example are {\it universal Harmony} and {\it responsibility}, and I believe that the emphasis on these two concepts is particularly appealing, as they open the possibility of considering a ``cosmic ethics'' without relating it to a religious view of the world. If so, the question is whether cosmology can be the cornerstone of an ethics of responsibility. From a strictly scientific point of view, the answer is clearly negative. Scientific developments were achieved independently of humanistic and anthropocentric concerns. The scientific facts that describe and allow for understanding the existence and dynamics of the Earth, home of humankind, are a particular limit of a general set of laws that govern the whole Universe. It is therefore somewhat improper to ask about the implications that research on the infinitely large (and equally on the infinitely small) might have, in philosophical and ethical terms, for the future of humankind. Moreover, cosmology does provide, more vociferously than any other subject, a clear perspective on the modest standing of humankind within the picture of the cosmos. Nevertheless, cosmology does provide us with a view of how unique (and this is a rather anthropocentric interpretation) the conditions required to shelter life, and in particular sentient and reflective life, are. Even though it is a firm belief of this author that life is a widespread phenomenon in the Universe, humankind is most likely quite unique within the family of self-conscious species that exist throughout the Universe. We therefore have a responsibility to keep the balance of our world and to ensure its continuity. A responsibility with a time arrow pointing towards the future, but one that is necessarily based on lessons learned from our history, personal and collective. {\bf Acknowledgments~~} \noindent The author is indebted to Ari Belenkiy, Maria da Concei\c c\~ao Bento, Chanda Carey and Jorge P\'aramos for their constructive comments and suggestions. 
This work is partially supported by Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (Portugal) under the project POCI/FIS/56093/2004. \bibliographystyle{unsrt}
\subsection*{Acknowledgement} The presented work is part of the EU research project TERACOMB (Call identifier FP7-ICT-2011-C, Project No. 296500). Additional funding comes from the Swiss National Science Foundation. The funding is gratefully acknowledged. The authors acknowledge the technical support of M.J. S\"uess and D. Kazakov.
\section{Introduction} Ordinary differential equations are a natural tool for modeling many phenomena in applied sciences, with a very abundant literature (see e.g. \cite{Arn78,BR89,CL55}), and are rather well understood in many respects. In a series of recent articles, they have also been shown to correspond to a natural computational model, with a nice computability and complexity theory: See \cite{DBLP:journals/corr/BournezGP16} for a survey. In a recent article \THEPAPIERS, we investigated their discrete counterparts, called discrete ODEs, also known as difference equations. The basic principle is, for a function $\tu f(x)$, to consider its discrete derivative defined as $\Delta \tu f(x)= \tu f(x+1)-\tu f(x)$. We will intentionally also write $\tu f^\prime(x)$ for $\Delta \tu f(x)$ to help to understand statements with respect to their classical continuous counterparts. This associated derivative notion, called \emph{finite differences}, has been widely studied in numerical optimization for function approximation \cite{gelfand1963calcul} and in \emph{discrete calculus} \cite{graham1989concrete,gleich2005finite,izadi2009discrete,urldiscretecalculuslau} for combinatorial analysis. While the underlying computational content of finite differences theory is clear and has been pointed out many times, no fundamental connections with algorithms and complexity had been formally established before \THEPAPIERS, where it was proved that many complexity and computability classes from computation theory can actually be characterized algebraically using discrete ODEs. Even if such results were initially motivated by helping to understand the relationships between analog computations and classical discrete models of computation theory, the relation between the two is currently unclear. In the context of algebraic classes of functions, a classical notation is the following: Call \emph{operation} an operator that takes finitely many functions and returns some new function defined from them. Then $[f_{1}, f_{2}, \dots, f_{k}; op_{1}, op_{2},\dots,op_{\ell}]$ denotes the smallest set of functions containing the functions $f_{1}, f_{2}, \dots, f_{k}$ that is closed under the operations $op_{1}$, $op_{2}$, \dots, $op_{\ell}$. Call \emph{discrete function} a function of type $ f: S_{1} \times \dots \times S_{d} \to S'_{1} \times \dots \times S'_{d'}$, where each $S_{i},S'_{i}$ is either $\N$ or $\Z$. Write $\cp{FPTIME}$ for the class of functions computable in polynomial time. 
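To make the discrete derivative concrete, here is a minimal Python sketch (toy code of our own; the helper names are ours): it checks, for instance, that in the discrete setting the derivative of $x^{2}$ is $2x+1$ rather than $2x$.
\begin{verbatim}
# Discrete derivative (finite difference): delta f(x) = f(x+1) - f(x).
def delta(f):
    return lambda x: f(x + 1) - f(x)

square = lambda x: x * x
d_square = delta(square)

# In the discrete setting, (x^2)' = 2x + 1 (not 2x as in the continuum):
assert all(d_square(x) == 2 * x + 1 for x in range(100))

# Iterating, the second discrete derivative of x^2 is the constant 2:
assert all(delta(d_square)(x) == 2 for x in range(100))
\end{verbatim}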
A main result of \THEPAPIERS{} is the following ($\mathbb{LDL}$ stands for linear derivation on length): \begin{theorem}[\cite{MFCS2019}] \label{th:ptime characterization 2} For discrete functions, we have $$\mathbb{LDL}= \cp{FPTIME}$$ where $\mathbb{LDL} = [\mathbf{0},\mathbf{1},\projection{i}{k}, \length{x}, \plus, \minus, \times, \sign{x} \ ; composition, linear~length~ODE].$ \end{theorem} That is to say, $\mathbb{LDL}$ (and hence $\cp{FPTIME}$ for discrete functions) is the smallest subset of functions that contains the constant functions $\mathbf{0}$ and $\mathbf{1}$, the projections $\projection{i}{k}$, the length function $\length{x}$ (that maps an integer to the length of its binary representation), the addition function $x \plus y$, the subtraction function $x \minus y$, the multiplication function $x\times y$ (that we will also often denote $x\cdot y$) and the sign function $\sign{x}$, and that is closed under composition (when defined) and the linear length-ODE scheme: The linear length-ODE scheme (a formal definition is provided in Definition \ref{def:linear lengt ODE}) basically corresponds to defining functions from linear ODEs with derivation with respect to the length of the argument, that is to say of the form $\dderivl{\tu f(x,\tu y)} = \tu A [\tu f(x,\tu y), x,\tu y] \cdot \tu f(x,\tu y) + \tu B [\tu f(x,\tu y), x,\tu y ]. $ Here, in the above description, we use the notation $\dderivl{\tu f(x,\tu y)}$, which corresponds to the derivation of $\tu f$ along the length function: Given some function $\mathcal{L}:\N^{p+1} \rightarrow \Z$, and in particular in the case where $\mathcal{L}(x,\tu y)=\ell(x)$, \begin{equation}\label{lode} \dderivL{\tu f(x,\tu y)}= \dderiv{\tu f(x,\tu y)}{\mathcal{L}(x,\tu y)} = \tu h(\tu f(x,\tu y),x,\tu y), \end{equation} is a formal synonym for $ \tu f(x+1,\tu y)= \tu f(x,\tu y) + (\mathcal{L}(x+1,\tu y)-\mathcal{L}(x,\tu y)) \cdot \tu h(\tu f(x,\tu y),x,\tu y).$ \begin{remark} This concept, introduced in \THEPAPIERS, is motivated by the fact that the latter expression is similar to the classical formula for classical continuous ODEs: $$\frac{\delta f(x,\tu y )}{\delta x} = \frac{\delta \mathcal{L} (x,\tu y) }{\delta x} \cdot \frac{\delta f(x,\tu y)}{\delta \mathcal{L}(x, \tu y)},$$ and hence this is similar to a change of variable. Consequently, a linear length-ODE is basically a linear ODE over variable $t$, once the change of variable $t=\ell(x)$ is done. \end{remark} In particular, writing as usual $B^{A}$ for functions from $A$ to $B$, we have: \begin{theorem}[\cite{MFCS2019}] \label{th:ptime characterization 2bis} $\mathbb{LDL} \cap \N^{\N}= \cp{FPTIME} \cap \N^{\N}.$ \end{theorem} This provides a characterization of $\cp{FPTIME}$ for discrete functions that does not require specifying an explicit bound in the recursion, in contrast to Cobham's work \cite{Cob65}, nor assigning a specific role or type to variables, in contrast to safe recursion or ramification \cite{bs:impl,Lei-LCC94}. The characterization happens to be very simple, using only natural notions from the world of ODEs. Our purpose in this article is to extend this to more general classes of functions. In particular, it makes sense to try to characterize polynomial time computable functions from the reals to the reals. We consider here computability and complexity over the reals in the most classical sense, that is to say, computable analysis (see e.g. \cite{Wei00}). 
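To see the length-ODE scheme \eqref{lode} at work, here is a small Python sketch (toy code of our own; \texttt{solve\_length\_ode} and the example are ours, not from \THEPAPIERS): since $\ell(x+1)-\ell(x)$ is $1$ exactly when $x+1$ is a power of $2$ and $0$ otherwise, the solution effectively changes value only $\ell(x) \approx \log_2 x$ times.
\begin{verbatim}
# Unrolling f(x+1,y) = f(x,y) + (l(x+1) - l(x)) * h(f(x,y), x, y),
# where l is the binary length (l(0) = 0).
def length(x):
    return x.bit_length()

def solve_length_ode(g, h, x, y):
    f = g(y)                   # initial condition f(0, y) = g(y)
    for t in range(x):
        f = f + (length(t + 1) - length(t)) * h(f, t, y)
    return f

# Example: h(f, t, y) = f doubles f at each length increase,
# so f(x, y) = g(y) * 2^{l(x)}.
assert solve_length_ode(lambda y: 1, lambda f, t, y: f, 100, None) == 2 ** 7
\end{verbatim}
Of course, a polynomial-time algorithm would not unroll all $x$ steps as above, but would jump directly from one length increase to the next; the point of the scheme is precisely that only $\ell(x)$ of the steps are effective.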
Considering that $\N \subset \R$, most of the basic functions and operations in the above characterization (for example, $+$, $-$, \dots) have a clear meaning over the reals. One clear difficulty is that discrete ODEs are about discrete schemata, while we would like to talk about functions over the continuum. We have not succeeded in doing so yet, but we propose here a substantial step in this direction: We provide a characterization of polynomial time computable functions \emph{from the integers to the reals} using discrete linear ODEs: considering linear ODEs is very natural in the context of ODEs. To do so, we naturally move to algebras of functions more general than discrete functions, that is to say, over spaces more general than $\N$ and $\Z$. This introduces some subtleties and difficulties, which we discuss in this article, with our various concepts, definitions and statements. We hence basically consider in this article functions of type $ f: S_{1} \times \dots \times S_{d} \to S_{0}$, where each $S_{i}$ is either $\N$, $\Z$, $\Q$ or $\R$, or possibly vectorial functions whose components are of this type. We write $\Lesfonctionsquoi$ for the class of such functions. Clearly, we can consider $\N \subset \Z \subset \Q \subset \R$, but as functions may have different types of outputs, composition is an issue. We simply admit that composition may not be defined in some cases. In other words, we consider that composition is a partial operator: for example, given $f: \N \to \R$ and $g: \R \to \R$, the composition of $g$ and $f$ is defined as expected, but $f$ cannot be composed with a function such as $h: \N \to \N$. We then consider the class $$\manonclass = [\mathbf{0},\mathbf{1},\projection{i}{k}, \length{x}, \plus, \minus, \times,\signb{x},\frac{x}{2};{composition, linear~length~ODE}]$$ of functions of $\Lesfonctionsquoi$. Here \shortermcu{\begin{itemize} \item} $\ell: \N \to \N$ is the length function, mapping some integer to the length of its binary representation, $\frac{x}{2}: \R \to \R$ is the function that divides by $2$, and all other basic functions are defined exactly as for $\mathbb{LDL}$, but considered here as functions from the reals to the reals. \shortermcu{ \item} $\signb{x}: \R \to \R$ is the continuous piecewise affine function that takes value $0$ for $x<\frac14$ and value $1$ for $x>\frac34$ (and is affine in between): In particular, its restriction to the integers is the function $\sign{x}$ considered in $\mathbb{LDL}$. \shortermcu{ \end{itemize} } We prove the following ($\|.\|$ stands for the sup-norm). \begin{theorem}[Main theorem $1$] \label{th:main:one} A function $\tu f: \N^{d} \to \R^{d'}$ is computable in polynomial time if and only if there exists $\tilde{\tu f}:\N^{d+1} \to \R^{d'} \in \manonclass$ such that for all $\tu m \in \N^{d}$, $n \in \N$, $\|\tilde{\tu f}(\tu m,2^{n}) - \tu f(\tu m) \| \le 2^{-n}.$ \end{theorem} \begin{proof} Assume there exists $\tilde{\tu f}:\N^{d+1} \to \R^{d'} \in \manonclass$ such that for all $\tu m \in \N^{d}$, $n \in \N$, $\|\tilde{\tu f}(\tu m,2^{n}) - \tu f(\tu m) \| \le 2^{-n}.$ From Proposition \ref{prop:mcu:un}, we know that $\tilde{\tu f}$ is computable in polynomial time (in the binary length of its arguments). Then $\tu f(\tu m)$ is computable: indeed, given some integers $\tu m$ and $n$, we can approximate $\tu f(\tu m)$ at precision $2^{-n}$ as follows: Approximate $\tilde{\tu f}(\tu m,2^{n+1})$ at precision $2^{-(n+1)}$ by some rational $q$, and output $q$. 
We will then have $$\|q-\tu f(\tu m)\| \le \|q-\tilde{\tu f}(\tu m,2^{n+1}) \| + \|\tilde{\tu f}(\tu m,2^{n+1})-\tu f(\tu m)\| \le 2^{-(n+1)} + 2^{-(n+1)} \le 2^{-n}.$$ All of this is done in polynomial time in $n$ and the size of $\tu m$, and hence we get from the definitions that $\tu f$ is polynomial time computable. \end{proof} Given that the reverse direction holds in the previous theorem, it is natural to consider the operation that maps $\tilde{\tu f}$ to $\tu f$. Namely, we introduce the operation $\MANONlim$ ($\MANONlim$ stands for Effective Limit): \begin{definition}[Operation $\MANONlim$] Given $\tilde{\tu f}:\N^{d+1} \to \R^{d'} \in \manonclass$ such that for all $\tu m \in \N^{d}$, $n \in \N$, $\|\tilde{\tu f}(\tu m,2^{n}) - \tu f(\tu m) \| \le 2^{-n},$ then $\MANONlim(\tilde{\tu f})$ is the (clearly uniquely defined) corresponding function $\tu f: \N^{d} \to \R^{d'}$. \end{definition} We obtain our main result, which provides a characterization of polynomial time computable functions from the integers to the reals. \begin{theorem}[Main theorem $2$] \label{th:main:two} A function $\tu f: \N^{d} \to \R^{d'}$ is computable in polynomial time if and only if all its components belong to $\manonclasslim$, where $$\manonclasslim= [\mathbf{0},\mathbf{1},\projection{i}{k}, \length{x}, \plus, \minus, \times, \signb{x}, \frac{x}{2}; {composition, linear~length~ODE};\MANONlim].$$ \end{theorem} In particular: \begin{theorem}[Main theorem $2'$]\label{th:main:twop} $\manonclasslim \cap \R^{\N} = \cp{FPTIME} \cap \R^{\N}. $ \end{theorem} \shortermcu{ The rest of the paper is organized as follows:} In Section \ref{sec:discreteode}, we recall \shorter{some basic statements from }the theory of discrete ODEs. In Section \ref{sec:defanalysecalculable}, we recall required concepts from computable analysis. In Section \ref{manonclassdansfptime}, we prove that functions from $\manonclass$ are polynomial time computable. Section \ref{fptimedansmanonclass} proves a kind of reverse implication for functions over words. This is then extended in Section \ref{sec:computablereal} to functions from the integers to the reals, and we obtain a proof of Theorem \ref{th:main:one}. Section \ref{sec:main:two} then proves Theorems \ref{th:main:two} and \ref{th:main:twop}. Section \ref{sec:generalizations} presents some generalizations of these results. Section \ref{sec:conclusion} discusses future work and the difficulties of going to functions of $\R^{\R}$. \paragraph{Related work.} Various computability and complexity classes have been recently characterized using (classical) continuous ODEs: The most up-to-date survey is \cite{DBLP:journals/corr/BournezGP16}. Dealing with discrete ODEs is really different, as most of the constructions heavily rely on some closure properties of continuous ODEs not true for discrete ODEs, in particular because there is no chain rule formula for discrete derivation. The idea of considering discrete ODEs as a model of computation is due to \THEPAPIERS. From a non-ODE-centric point of view, we are characterizing some complexity classes using particular discrete schemata. Recursion schemes constitute a major approach of computability theory and, to some extent, of complexity theory. 
The foundational characterization of $\cp{FPTIME}$ due to Cobham \cite{Cob65}, and then others based on safe recursion \cite{bs:impl} or ramification (\cite{LM93,Lei94}), or for other classes \cite{lm:pspace}, gave birth to the very vivid field of \textit{implicit complexity} at the interplay of logic and theory of programming: See \cite{Clo95,clote2013boolean} for monographs. Our ways of simulating Turing machines are reminiscent of similar constructions used in other contexts such as neural networks \cite{SS95,LivreSiegelmann}. But with respect to all these previous contexts, as far as we know, only a few papers have been devoted to characterizations of complexity, and even computability, classes in the sense of computable analysis. There have been some attempts using continuous ODEs \cite{BCGH07}, or the so-called $\R$-recursive functions \cite{DBLP:journals/corr/BournezGP16}. For discrete schemata, we only know of \cite{brattka1996recursive} and \cite{ng2021recursion}, which focus on computability and not complexity. \section{Some concepts from the theory of discrete ODEs} \label{sec:discreteode} In this section, we recall some concepts and definitions about discrete ODEs, either well-known or established in \THEPAPIERS. \newcommand\polynomial{ \fonction{sg}-polynomial} We need to slightly extend the concept of \polynomial{} expression from \THEPAPIERS{} to allow expressions with $\signb$ instead of $\sign$. \olivier{We may need to be more precise about the form of what is allowed in the next definition for it to work perfectly/as simply as possible} \begin{definition}[Extension of \THEPAPIERS] A \polynomialb{} expression $P(x_1,...,x_h)$ is an expression built on $+,-,\times$ (often denoted $\cdot$) and $\signb{}$ functions over a set of variables $V=\{x_1,...,x_h\}$ and integer constants. The degree $\deg(x,P)$ of a term $x\in V$ in $P$ is defined inductively as follows: \shortermcu{ \begin{itemize} \item} $\deg(x,x)=1$ and for $x'\in V\cup \Z$ such that $x'\neq x$, $\deg(x,x')=0$; \shortermcu{\item} $\deg(x,P+Q)=\max \{\deg(x,P),\deg(x,Q)\}$; \shortermcu{\item} $\deg(x,P\times Q)=\deg(x,P)+\deg(x,Q)$; \shortermcu{\item} $\deg(x,\signb{P})=0$. \shortermcu{ \end{itemize}} A \polynomialb{} expression $P$ is \textit{essentially constant} in $x$ if $\degre{x,P}=0$. \end{definition} Compared to the classical notion of degree in polynomial expressions, all subterms that are within the scope of a sign (that is to say, $\signb{}$) function contribute $0$ to the degree. A vectorial function (resp. a matrix or a vector) is said to be a \polynomialb{} expression if all its coordinates (resp. coefficients) are. It is said to be \textit{essentially constant} if all its coefficients are. \begin{definition}[\THEPAPIERS] \label{def:essentiallylinear} A \polynomialb{} expression $\tu g(\tu f(x, \tu y), x, \tu y)$ is \textit{essentially linear} in $\tu f(x, \tu y)$ if it is of the form $\tu g(\tu f(x, \tu y), x, \tu y) = \tu A [\tu f(x,\tu y), x,\tu y] \cdot \tu f(x,\tu y) + \tu B [\tu f(x,\tu y), x,\tu y ]$, where $\tu A$ and $\tu B$ are \polynomialb{} expressions essentially constant in $\tu f(x, \tu y)$. \end{definition} For example, the expression $P(x,y,z)=x\cdot \signb{(x^2-z)\cdot y} + y^3$ is essentially linear in $x$, essentially constant in $z$ and not linear in $y$. 
\shortermcu{ \item The expression $P(x,2^{\length{y}},z)=\signb{x^2 - z}\cdot z^2 + 2^{\length{y}}$ is essentially constant in $x$, essentially linear in $2^{\length{y}}$ (but not essentially constant) and not essentially linear in $z$. \item } The expression $ z + (1-\signb{x})\cdot (1-\signb{-x})\cdot (y-z) $ is essentially constant in $x$ and linear in $y$ and $z$. \begin{definition}[Linear length ODE \THEPAPIERS]\label{def:linear lengt ODE} Function $\tu f$ is linear $\mathcal{L}$-ODE definable (from $\tu u$, $\tu g$ and $\tu h$) if it corresponds to the solution of \begin{equation} \label{SPLode} \tu f(0,\tu y) = \tu g(\tu y) \quad \text{and} \quad \dderivl{\tu f(x,\tu y)} = \tu u(\tu f(x,\tu y), \tu h(x,\tu y), x,\tu y), \end{equation} \noindent where $\tu u$ is \textit{essentially linear} in $\tu f(x, \tu y)$. \end{definition} \section{Concepts from computable analysis} \label{sec:defanalysecalculable} When we say that a function $f: S_{1} \times \dots \times S_{d} \to \R^{d'}$ is (respectively: polynomial-time) computable, this will always be in the sense of computable analysis. We recall here the basic concepts and definitions, mostly following the book \cite{Ko91}, whose subject is complexity theory in computable analysis. Alternative presentations include \cite{brattka2008tutorial,Wei00}. Actually, as we want to talk about functions in $\Lesfonctionsquoi$, we need to mix complexity issues dealing with integer and real arguments. \shortermcu{ \begin{remark} One difficulty is that \cite{Ko91} does not formalize all the statements for general functions of $\Lesfonctionsquoi$, and actually almost always restricts to functions over compact domains; hence we cannot always refer to statements fully formalized in this book, and this is why we formulate some of the statements and definitions here explicitly. \end{remark} } \shortermcu{ \subsection{On computable analysis: Computability}} A dyadic number $d$ is a rational number with a finite binary expansion, that is to say, $d=m / 2^{n}$ for some integers $m \in \Z$, $n\in \N$, $n \geq 0$. Let $\dyadic$ be the set of all dyadic rational numbers. We denote by $\dyadic_{n}$ the set of all dyadic rationals $d$ with a representation $s$ of precision $\operatorname{prec}(s)=n$; that is, $\dyadic_{n}=\left\{m \cdot 2^{-n} \mid m \in \Z\right\}$. \begin{definition}[\cite{Ko91}] \label{def:cinq} For each real number $x$, a function $\phi: \N \rightarrow \dyadic$ is said to binary converge to $x$ if for all $n \in \N$, $\operatorname{prec}(\phi(n))=n$ and $|\phi(n)-x| \leq 2^{-n}$. Let $C F_{x}$ (Cauchy function) denote the set of all functions binary converging to $x$. \end{definition} Intuitively, a Turing machine $M$ computes a real function $f$ in the following way: (1) the input $x$ to $f$, represented by some $\phi \in C F_{x}$, is given to $M$ as an oracle; (2) the output precision $2^{-n}$ is given in the form of the integer $n$ as the input to $M$; (3) the computation of $M$ then usually takes two steps, though sometimes these two steps may be repeated an indefinite number of times: first, $M$ computes, from the output precision $2^{-n}$, the required input precision $2^{-m}$; then, $M$ queries the oracle to get $\phi(m)$, such that $\|\phi(m)-x\| \leq 2^{-m}$, and computes from $\phi(m)$ an output $d \in \dyadic$ with $\|d-f(x)\| \leq 2^{-n}$. 
More formally: \begin{definition}[\cite{Ko91}] A real function $f: \R \rightarrow \R$ is computable if there is a function-oracle {TM} $M$ such that for each $x \in \R$ and each $\phi \in C F_{x}$, the function $\psi$ computed by $M$ with oracle $\phi$ (i.e., $\left.\psi(n)=M^{\phi}(n)\right)$ is in $C F_{f(x)}$. \shortermcu{We say the function $f$ is computable on interval $[a, b]$ if the above condition holds for all $x \in[a, b]$.} \end{definition} \shortermcu{ \begin{remark} Given some $x \in \R$, such an oracle TM $M$ can determine some integer $X$ such that $x \in [-2^{X},2^{X}]$. \end{remark} } \olivier{Probably superfluous for now: The following concept plays a very important role: \begin{definition} \label{def:above} Let $f:[a, b] \rightarrow \R$ be a continuous function on $[a, b]$. Then, a function $m: \N \rightarrow \N$ is said to be a modulus function of $f$ on $[a, b]$ if for all $n \in \N$ and all $x, y \in[a, b]$, we have $$ |x-y| \leq 2^{-m(n)} \Rightarrow|f(x)-f(y)| \leq 2^{-n} $$ \end{definition} The following is well known (see e.g. \cite{Ko91} for a proof): \begin{theorem} A function $f: \R \rightarrow \R$ is computable iff there exist two recursive functions $m: \N \times \N \rightarrow \N$ and $\psi: \dyadic \times \N \rightarrow \dyadic$ such that \begin{enumerate} \item for all $k, n \in \N$ and all $x, y \in[-k, k],|x-y| \leq 2^{-m(k, n)}$ implies $|f(x)-f(y)| \leq 2^{-n}$, and \item for all $d \in \dyadic$ and all $n \in \N,|\psi(d, n)-f(d)| \leq 2^{-n}$. \end{enumerate} \end{theorem} } \shortermcu{ \subsection{On computable analysis: Complexity} } Assume that $M$ is an oracle machine which computes $f$ on domain $G$. For any oracle $\phi \in C F_{x}$, with $x \in G$, let $T_{M}(\phi, n)$ be the number of steps for $M$ to halt on input $n$ with oracle $\phi$, and $T_{M}^{\prime}(x, n)=\max \left\{T_{M}(\phi, n) \mid \phi \in C F_{x}\right\}$. The time complexity of $f$ is defined as follows. \begin{definition}[\cite{Ko91}] Let $G$ be a bounded closed interval $[a, b]$. Let $f: G \rightarrow \R$ be a computable function. Then, we say that the time complexity of $f$ on $G$ is bounded by a function $t: G \times \N \rightarrow \N$ if there exists an oracle TM $M$ which computes $f$ such that for all $x \in G$ and all $n>0$, $T_{M}^{\prime}(x, n) \leq t(x, n)$. \end{definition} In other words, the idea is to measure the time complexity of a real function based on two parameters: the input real number $x$ and the output precision $2^{-n}$. Sometimes, it is more convenient to simplify the complexity measure to be based on only one parameter, the output precision. For this purpose, we say the uniform time complexity of $f$ on $G$ is bounded by a function $t^{\prime}: \N \rightarrow \N$ if the time complexity of $f$ on $G$ is bounded by a function $t: G \times \N \rightarrow \N$ with the property that for all $x \in G$, $t(x, n) \leq t^{\prime}(n)$. However, if we do so, it is important to realize that if we had taken $G=\R$ in the previous definition, then for unbounded functions $f$ the uniform time complexity would not exist, because the number of moves required to write down the integral part of $f(x)$ grows as $x$ approaches $+\infty$ or $-\infty$. Therefore, the approach of \cite{Ko91} is as follows (the bounds $-2^{X}$ and $2^{X}$ are somewhat arbitrary, but are chosen here because the binary expansion of any $x \in\left(-2^{X}, 2^{X}\right)$ has $X$ bits in the integral part). 
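Before the formal definition, here is a toy Python sketch of this oracle model (illustrative code of our own; the machine, its modulus bound and all names are ours): it computes $x \mapsto x^{2}$ on $[-2^{X},2^{X}]$ by querying a Cauchy function at a precision derived from a modulus of continuity.
\begin{verbatim}
# Toy sketch of the oracle model: phi in CF_x returns a dyadic
# approximation m * 2^-n of x with |phi(n) - x| <= 2^-n.
from fractions import Fraction

def oracle_for(x):                    # one possible phi in CF_x
    return lambda n: Fraction(round(x * 2 ** n), 2 ** n)

def square_machine(phi, n, X=1):
    # On [-2^X, 2^X], |y^2 - z^2| <= 2^{X+2} |y - z| (for |y - z| <= 1),
    # so input precision m = n + X + 2 suffices for output precision n.
    d = phi(n + X + 2)
    return d * d                      # a dyadic within 2^-n of x^2

x = Fraction(1, 3)
assert abs(square_machine(oracle_for(x), 10) - x * x) <= Fraction(1, 2 ** 10)
\end{verbatim}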
\begin{definition}[Adapted from \cite{Ko91}] For functions $f(x)$ whose domain is $\R$, we say that the (non-uniform) time complexity of $f$ is bounded by a function $t^{\prime}: \N^{2} \rightarrow \N$ if the time complexity of $f$ on $\left[-2^{X}, 2^{X}\right]$ is bounded by a function $t: \left[-2^{X}, 2^{X}\right] \times \N \rightarrow \N$ such that $t(x, n) \leq t^{\prime}(X, n)$ for all $x \in\left[-2^{X}, 2^{X}\right]$. \end{definition} \olivier{probably superfluous since we do not discuss space: The space complexity of a real function is defined in a similar way. We say the space complexity of $f: G \rightarrow \R$ is bounded by a function $s: G \times \N \rightarrow \N$ if there is an oracle TM $M$ which computes $f$ such that for any input $n$ and any oracle $\phi \in C F_{x}, M^{\phi}(n)$ uses $\leq s(x, n)$ cells, and the uniform space complexity of $f$ is bounded by $s^{\prime}: \N \rightarrow \N$ if for all $x \in G$ and all $\phi \in C F_{x}, M^{\phi}(n)$ uses $\leq s^{\prime}(n)$ cells. } As we want to talk about general functions in $\Lesfonctionsquoi$, we extend the approach to more general functions. (For conciseness, when $\tu x=(x_{1},\dots,x_{p})$ and $\tu X= (X_{1},\dots, X_{p})$, we write $\tu x \in [-2^{\tu X}, 2^{\tu X}]$ as a shortcut for $x_{1} \in\left[-2^{X_{1}}, 2^{X_{1}}\right]$, \dots, $x_{p} \in\left[-2^{X_{p}}, 2^{X_{p}}\right]$.) \olivier{Not yet convinced this thing is clean.} \begin{definition}[Complexity for real functions: general case] \label{def:bonendroit} Consider a function $f(x_{1},\dots,x_{p},n_{1},\dots,n_{q})$ whose domain is $\R^{p} \times \N^{q}$. We say that the (non-uniform) time complexity of $f$ is bounded by a function $t^{\prime}: \N^{p+q+1} \rightarrow \N$ if the time complexity of $f(\cdot,\dots,\cdot,\ell(n_{1}),\dots,\ell(n_{q}))$ on $\left[-2^{X_{1}}, 2^{X_{1}}\right] \times \dots \times \left[-2^{X_{p}}, 2^{X_{p}}\right] $ is bounded by a function $t(\cdot,\dots,\cdot,\ell(n_{1}),\dots,\ell(n_{q}),\cdot): \N^{p} \times \N \to \N$ such that $ t(\tu x,\ell(n_{1}),\dots,\ell(n_{q}), n) \leq t^{\prime}(\tu X,\ell(n_{1}), \dots,\ell(n_{q}), n)$ whenever $\tu x \in \left[-2^{\tu X}, 2^{\tu X}\right].$ We say that $f$ is polynomial time computable if $t^{\prime}$ can be chosen as a polynomial. We say that a vectorial function is polynomial time computable iff all its components are. \end{definition} \shortermcu{ \begin{remark} There is some important {subtlety}: When considering $f: \N \to \Q$, as $\Q \subset \R$, stating that $f$ is computable may mean two things: computable in the classical sense, i.e. given an integer $y$, one can compute integers $p_{y}$ and $q_{y}$ such that $f(y)=p_{y}/q_{y}$; or computable in the sense of computable analysis: given arbitrary $y$ and some precision $n$, we can provide some rational (or even dyadic) $q_{n}$ such that $|q_{n}-f(y)| \leq 2^{-n}$. As we said, we always consider the latter. \end{remark} } We do so in such a way that this measure of complexity extends the usual complexity for functions over the integers, where the complexity of integers is measured with respect to their lengths, and over the reals, where complexity is measured with respect to their approximation. In particular, in the specific case of a function $f: \N^{d} \to \R^{d'}$, this basically means that there is some polynomial $t': \N^{d+1} \to \N$ such that the time complexity of producing some dyadic approximation of $f(\tu m)$ at precision $2^{-n}$ is bounded by $t'(\ell(m_{1}),\dots,\ell(m_{d}),n)$. \olivier{Do we need this? 
subsection{Some facts from computable analysis} In any case, this refers to $P_{C}[a, b]$, which we have not introduced. Maybe remove. \begin{theorem}[Alternative characterization \cite{Ko91}] A function $f$ is in $P_{C}[a, b]$ iff there exist polynomial functions $m$ and $q$ and a function $\psi:(\dyadic \cap[a, b]) \times \N \rightarrow \dyadic$ such that \begin{enumerate} \item $m$ is a modulus function for $f$ on $[a, b]$, \item for any $d \in \dyadic \cap[a, b]$ and all $n \in \N,|\psi(d, n)-f(d)| \leq 2^{-n}$, and \item $\psi(d, n)$ is computable in time $q(\ell(d)+n)$. \end{enumerate} \end{theorem} } \olivier{New. Is this clear? Is this elegant terminology? Better?} In other words, when we say that a function is polynomial time computable, this is in the length of all its integer arguments, as is the usual convention. However, we sometimes also need to consider polynomial dependency directly in one specific integer argument, say $n_{i}$, and not in its length $\ell(n_{i})$. We say that \emph{the function is polynomial time computable, \unaire{n_{i}}} when this holds (keeping possible other integer arguments $n_{j}$, $j \neq i$, measured by their length). A well-known observation is the following. \begin{theorem} Consider $\tu f$ as in Definition \ref{def:bonendroit} computable in polynomial time. Then $\tu f$ has a polynomial modulus of continuity, that is to say, there is a polynomial function $m_{\tu f}: \N^{p+q+1}\rightarrow \N$ such that for all $\tu x,\tu y$ and all $n>0$, $\|\tu x-\tu y\| \leq 2^{-m_{\tu f}(\tu X,\ell(n_{1}),\dots,\ell(n_{q}), n)}$ implies $\|\tu f(\tu x,n_{1}, \dots,n_{q})-\tu f(\tu y,n_{1}, \dots,n_{q})\| \leq 2^{-n}$, whenever $\tu x,\tu y \in\left[-2^{\tu X}, 2^{\tu X}\right]$. \end{theorem} \olivier{From here on, these are our results} \section{Functions from $\manonclass$ are in $\cp{FPTIME}$} \label{manonclassdansfptime} The following proposition is proved by induction\footnote{Details on the proofs are in Section \ref{sec:proofs} in the appendix.} using standard arguments. The hardest part is to prove that the class of polynomial time computable functions is preserved by the linear length ODE schema: This is Lemma \ref{lem:un}. \begin{proposition} \label{prop:mcu:un} All functions of $\manonclass$ are computable (in the sense of computable analysis) in polynomial time. \end{proposition} \begin{proof} This is proved by induction. The statement is true for the basis functions, by basic arguments from computable analysis. In particular, as $\signb{.}$ is a continuous piecewise affine function with rational coefficients, it is computable in polynomial time by standard arguments. Next, the class of polynomial time computable functions is preserved by composition. The idea of the proof for $COMP(f,g)$ is that, by the induction hypothesis, there exist two Turing machines $M_f$ and $M_g$ computing in polynomial time $f: \RR \rightarrow \RR$ and $g : \RR \rightarrow \RR$. In order to compute $COMP(f,g)(x)$ with precision $2^{-n}$, we just need to compute $g(x)$ with precision $2^{-m(n)}$, where $m(n)$ is the polynomial modulus of continuity of $f$. Then, we compute $f(g(x))$, which, by definition of $M_f$, takes polynomial time in $n$. Thus, since $\mathrm{P}_\RR^{\mathrm{P}_\RR} = \mathrm{P}_\RR$, $COMP(f,g)$ is computable in polynomial time, so the class of polynomial time computable functions is preserved under composition. 
It only remains to prove that the class of polynomial time computable functions is preserved by the linear length ODE schema: This is Lemma \ref{lem:un}. \end{proof} \begin{lemma}\label{lem:un} The class of polynomial time computable functions is preserved by the linear length ODE schema. \end{lemma} \newcommand{\vertiii}[1]{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \newcommand\tnorm[1]{\vertiii{#1}} \olivier{Note to all: is the notation $\tnorm{.}$ poor? That is, is it a bad idea to denote something non-classical by a notation that usually means something else?} We propose to write $\MYVEC{x}$ for $2^{x}-1$ for conciseness. We write $\tnorm{\cdots}$ for the sup norm of the integer parts: given some matrix $\tu A=(A_{i,j})_{1 \le i \le n, 1 \le j \le m}$, $\tnorm{\tu A}=\max_{i,j} \lceil A_{i,j} \rceil $. In particular, given a vector $\tu x$, it can be seen as a matrix with $m=1$, and $\tnorm{\tu x}$ is the sup norm of the integer parts of its components. \begin{proof} Using Lemma \ref{fundob} in the appendix (this lemma is repeated from \THEPAPIERS), when the schema of Definition \ref{def:linear lengt ODE} holds, we can do a change of variable to consider $\tu f(x,\tu y)=\tu F(\ell(x),\tu y)$, with $\tu F$ solution of a discrete ODE of the form $\dderiv{\tu F(t,\tu y)}{t} = {\tu A} ( \tu F(t,\tu y), \tu h(\MYVEC{t},\tu y), \MYVEC{t}, \tu y) \cdot \tu F(t,\tu y) + {\tu B} ( \tu F(t,\tu y), \tu h(\MYVEC{t},\tu y), \MYVEC{t}, \tu y),$ that is to say, of the form \eqref{eq:bcg} below. The statement then follows from the following lemma. \end{proof} \begin{lemma}[Fundamental observation] \label{fundamencoreg} Consider the ODE \begin{equation} \label{eq:bcg} \tu F^\prime(x,\tu y)= {\tu A} ( \tu F(x,\tu y), \tu h(\MYVEC{x},\tu y), \MYVEC{x}, \tu y) \cdot \tu F(x,\tu y) + {\tu B} ( \tu F(x,\tu y), \tu h(\MYVEC{x},\tu y), \MYVEC{x}, \tu y). \end{equation} Assume: 1. The initial condition $\tu G(\tu y) \stackrel{\text{def}}{=} \tu F(0, \tu y)$, as well as $\tu h(\MYVEC{x},\tu y)$, are polynomial time computable \unaire{x}. 2. ${\tu A} ( \tu F(x,\tu y), \tu h (\MYVEC{x},\tu y), \MYVEC{x}, \tu y)$ and ${\tu B} ( \tu F(x,\tu y), \tu h(\MYVEC{x},\tu y), \MYVEC{x}, \tu y)$ are \polynomialb{} expressions essentially constant in $\tu F(x,\tu y)$. Then, there exists a polynomial $p$ such that $\length{\tnorm{\tu F(x,\tu y)}}\leq p(x,\length{\tnorm{\tu y}})$ and $\tu F(x,\tu y)$ is polynomial time computable \unaire{x}. \end{lemma} \begin{proof} The fact that there exists a polynomial $p$ such that $\length{\tnorm{\tu F(x,\tu y)}}\leq p(x,\length{\tnorm{\tu y}})$ follows from the fact that we can write some explicit formula for the solution of \eqref{eq:bcg}: This is Lemma \ref{def:solutionexplicitedeuxvariables} in the appendix, repeated from \THEPAPIERS. Bounding the size of the right-hand side of formula \eqref{eq:rq:fund} then provides the statement. The fact that $\tu F(x,\tu y)$ is polynomial time computable \unaire{x} follows from a reasoning similar to that of the following lemma (the lemma below restricts the form of the recurrence for lack of space, but the more general recurrence \eqref{eq:bcg} would basically not lead to any difficulty): the fact that the modulus of continuity of a linear expression of the form of the right-hand side of \eqref{eq:bcg} is necessarily affine in the precision argument follows from the hypotheses and from the previous paragraph, using the fact that $\signb$ has a linear modulus of continuity. 
\end{proof} \begin{lemma} Suppose that the function $\tu f: \N \times \R^{d} \to \R^{d'}$ is such that for all $x, \tu y$, \olivier{Not general enough in this form, even if it works the same way. Try to get away with this by invoking lack of space. Is that cheating?} $$ \tu f(0,\tu y) =\tu g(\tu y) \quad \text{and} \quad \tu f(x+1,\tu y) = \tu h(\tu f(x,\tu y),x,\tu y) $$ for some functions $\tu g: \R^{d} \to \R^{d'}$ and $\tu h: \R^{d'} \times \R \times \R^{d} \to \R^{d'}$, both computable in polynomial time \unaire{x}. Suppose that the modulus $m_{h}$ of continuity of $\tu h$ is affine in the output precision $n$: For all $\tu f,\tu f' \in [-2^{\tu F}, 2^{\tu F}]$, $\tu y \in [-2^{\tu Y}, 2^{\tu Y}]$, $\|\tu f-\tu f'\| \le 2^{-m_{h}(\tu F,\ell(x),\tu Y,n)}$ implies $\|\tu h(\tu f,x,\tu y)-\tu h(\tu f',x,\tu y)\| \le 2^{-n}$, with $m_{h}(\tu F, \ell(x),\tu Y,n) = \alpha n + p_{h}(\tu F,\ell(x),\tu Y)$ for some $\alpha$. Suppose there exists a polynomial $p$ such that $\length{\tnorm{\tu f(x,\tu y)}}\leq p(x,\length{\tnorm{\tu y}})$. Then $\tu f(x, \tu y)$ is computable in polynomial time \unaire{x}. \end{lemma} \begin{proof} The point is that we can compute $\tu f(l,\tu y)$ by \shortermcu{ \begin{align*} \tu f(l, \tu y) &= \tu h(\tu f(l-1,\tu y), l-1, \tu y) \\ &= \tu h(\tu h(\tu f(l-2,\tu y), l-2, \tu y), l-1, \tu y) \\ &= \dots \\ &= \underbrace{\tu h(\tu h(\dots \tu h}_{l}(\underbrace{\tu f(0,\tu y)}_{\tu g(\tu y)}, 0, \tu y)\dots), l-1, \tu y) \end{align*} Basically, the strategy is to compute } $\tu z_{0}=\tu f(0,\tu y)=\tu g(\tu y)$, then $\tu z_{1}=\tu f(1,\tu y) = \tu h(\tu z_{0}, 0, \tu y)$, then $\tu z_{2}=\tu f(2,\tu y) = \tu h(\tu z_{1}, 1, \tu y)$, then \dots, then $\tu z_{l}=\tu f(l,\tu y) = \tu h(\tu z_{l-1}, l-1, \tu y)$. One needs to do so with sufficient precision, so that the result given for $\tu f(l,\tu y)$ is correct and the whole computation can be done in polynomial time. Given $\tu y$, we can determine $\tu Y$ such that $\tu y \in [-2^{\tu Y},2^{\tu Y}]$. Assume for now that for all $i$, \begin{equation} \label{eq:bienborne} \tu z_{i} \in [-2^{Z_{i}},2^{Z_i}]. \end{equation} \olivier{Top-down version, to help understanding: \begin{itemize} \item It is basically sufficient to determine $z_{l}=\tu f(l,\tu y)=\tu h(z_{l-1},l-1,\tu y)$ with precision $2^{-n}$. (*) \item To get such an approximation (*), it suffices to approximate $z_{l-1}$ with precision $2^{-m_{h}(Z_{l-1},\ell(l-1),Y,n)}$ (**). Then indeed, $z_{l}$ could be computed in a time $poly(Z_{l-1},\ell(l-1), Y, n )$. \item To get such an approximation (**) of $z_{l-1}=\tu f(l-1,\tu y)=\tu h(z_{l-2},l-2,\tu y)$, it suffices to approximate $z_{l-2}$ with precision $2^{-m_{h}(Z_{l-2},\ell(l-2),Y,m_{h}(Z_{l-1},\ell(l-1),Y,n))}$. We have: $$m_{h}(Z_{l-2},\ell(l-2),Y,m_{h}(Z_{l-1},\ell(l-1),Y,n)) = \alpha^{2} n + \alpha p_{h}(Z_{l-1},\ell(l-1),Y) + p_{h}(Z_{l-2},\ell(l-2),Y).$$ Then indeed, $z_{l-1}$ could be computed in a time $poly(Z_{l-2},\ell(l-2), Y, m_{h}(Z_{l-1},\ell(l-1),Y,n) )$. \item and so on, until $z_{0}$. \end{itemize} } \olivier{Bottom-up version} For $i=0,1,\dots, l$, consider $p(i)= \alpha^{l-i} n + \sum_{k=i}^{l-1} \alpha^{k-i} p_{h}(\tu Z_{k},\ell(k),\tu Y).$ Using the fact that $\tu g$ is computable, approximate $\tu z_{0}=\tu g(\tu y)$ with precision $2^{-p(0)}$. This is doable in polynomial time \unaire{p(0)}. 
Then, for $i=0,1,\dots, l-1$, using the approximation of $\tu z_{i}$ with precision $2^{-p(i)}$, compute an approximation of $\tu z_{i+1}$ with precision $2^{-p(i+1)}$: this is feasible since, as $\tu z_{i+1}=\tu f(i+1,\tu y) = \tu h(\tu z_{i},i,\tu y)$, it is sufficient to consider precision $$ \begin{array}{lll} m_{h}(\tu Z_i,\ell(i),\tu Y,p(i+1)) &=&\alpha p(i+1) + p_{h}(\tu Z_{i},\ell(i),\tu Y) \\ &=& \alpha^{l-i} n + \sum_{k=i+1}^{l-1} \alpha^{k-i} p_{h}(\tu Z_{k},\ell(k),\tu Y) + p_{h}(\tu Z_{i},\ell(i),\tu Y) \\ &=& p(i). \end{array}$$ Observing that $p(l)=n$, we get $\tu z_{l}$ with precision $2^{-n}$. All of this is indeed feasible in polynomial time \unaire{l}, under the condition that all the $Z_{i}$ remain of polynomial size, that is to say, that we indeed have \eqref{eq:bienborne}. But this follows from our hypothesis on $\length{\tnorm{\tu f(x,\tu y)}}$. \end{proof} \section{Functions from $\cp{FPTIME}$ are in $\manonclass$} \label{fptimedansmanonclass} This section is devoted to proving a kind of reverse implication of Proposition \ref{prop:mcu:un}: For any polynomial time computable function $\tu f: \N^{d} \to \R^{d'}$, we can construct some function $\tilde{\tu f} \in \manonclass$ that simulates the computation of $\tu f$. This basically requires being able to simulate the computation of a Turing machine using some functions from $\manonclass$. \newcommand\base{4} \newcommand\symboleun{1} \newcommand\symboledeux{3} \newcommand\encodageconfiguration{\gamma_{config}} \newcommand\encodagemot{\gamma_{word}} Consider without loss of generality some Turing machine $$M= (Q, \{0,\symboleun,\symboledeux\}, q_{init}, \delta, F) $$ using the symbols $0,\symboleun,\symboledeux$, where $B=0$ is the blank symbol. The reason for the choice of the symbols $\symboleun$ and $\symboledeux$ will be made clear later. We assume $Q=\{0,1,\dots,|Q|-1\}$. Let $$ \dots l_{-k} l_{-k+1} \dots l_{-1} l_{0} r_0 r_1 \dots r_n \dots$$ denote the content of the tape of the Turing machine $M$. In this representation, the head is in front of symbol $r_{0}$, and $l_i, r_{i} \in \{0,\symboleun,\symboledeux\}$ for all $i$. Such a configuration $C$ can be denoted by $C=(q,l,r)$, where $l,r \in \Sigma^{\omega}$ are (possibly infinite, if we consider that the tape can be seen as a non-finite word, in the case where there is no blank on it) words over the alphabet $\Sigma=\{\symboleun,\symboledeux\}$ and $q \in Q$ denotes the internal state of $M$. The idea is that such a configuration $C$ can also be encoded by some element $\encodageconfiguration(C)=(q, \bar l,\bar r) \in \N \times \R^{2}$, by considering \begin{eqnarray*} \bar r &=& r_0 \base^{-1} + r_1 \base^{-2} + \dots + r_n \base^{-(n+1)} + \dots , \\ \bar l &= & l_{0} \base^{-1} + l_{-1} \base^{-2} + \dots + l_{-k} \base^{-(k+1)} + \dots \end{eqnarray*} In other words, we encode the configuration of the bi-infinite tape Turing machine $M$ by real numbers using their radix-$\base{}$ encoding, but using only the digits $\symboleun$, $\symboledeux$. If we write $\encodagemot: \Sigma^{\omega} \to \R$ for the function that maps a word $w=w_{0} w_{1} w_{2} \dots$ to $\encodagemot(w)= w_0 \base^{-1} + w_1 \base^{-2} + \dots + w_n \base^{-(n+1)} + \dots$, we can also write $\encodageconfiguration(C)=\encodageconfiguration(q,l,r)= (q,\encodagemot(l),\encodagemot(r)).$ \newcommand\Image{\mathcal{I}} Notice that this lives in $Q \times [0,1]^{2}$. 
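As a sanity check, here is a small Python sketch of this radix-$\base{}$ encoding (illustrative code of our own, with exact rational arithmetic):
\begin{verbatim}
# gamma_word(w) = w_0/4 + w_1/4^2 + ... for words over the digits {1, 3}
# (the digit 0 encodes the blank).
from fractions import Fraction

def gamma_word(w):
    return sum(Fraction(d, 4 ** (k + 1)) for k, d in enumerate(w))

r = gamma_word([1, 3, 1])            # the word "131"
assert r == Fraction(29, 64)

# Recovering the scanned symbol r_0 and the tail, as in the proof below:
r0   = int(4 * r)                    # floor(4 r) is the leading digit
tail = 4 * r - r0                    # {4 r} encodes the tail r_1 r_2 ...
assert r0 == 1 and tail == gamma_word([3, 1])
\end{verbatim}
Using only the digits $\symboleun$ and $\symboledeux$ keeps $\base \bar r$ away from the discontinuities of the integer part, which is what makes the continuous replacements $i(\cdot)$ and $\sigma(\cdot)$ of the proof below possible.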
Actually, if we denote the image of $\encodagemot: \Sigma^{\omega} \to \R$ by $\Image$, this even lives in $Q \times \Image^{2}$. \shortermcu{ A key point is to observe that } \begin{lemma} We can construct some function $\bar {Next}$ in $\manonclass$ that simulates one step of $M$, i.e. that computes the $Next$ function sending a configuration $C$ of the Turing machine $M$ to the next one. This function is essentially linear. \end{lemma} \begin{proof} We can write $l = l_0 l^\bullet $ and $r = r_0r^\bullet $, where $l^\bullet$ and $r^\bullet$ correspond to the (possibly infinite) words $l_{-1} l_{-2} \dots$ and $r_{1} r_{2} \dots$ respectively. \vspace{-0.2cm} \begin{center} \begin{tabular}{c c|c|c|c c} \hline ... & $l^\bullet $ & $l_0 $ & $r_0$ & $ r^\bullet$ & ... \\ \hline \multicolumn{1}{c}{} & \multicolumn{2}{@{}c@{}}{$\underbrace{\hspace*{\dimexpr6\tabcolsep+3\arrayrulewidth}\hphantom{012}}_{l}$} & \multicolumn{2}{@{}c@{}}{$\underbrace{\hspace*{\dimexpr6\tabcolsep+3\arrayrulewidth}\hphantom{3}}_{r}$} \end{tabular} \end{center} The function $ {Next}$ is basically of the form \begin{align*} \mathit{Next}(q,l,r) &= \mathit{Next}(q,l^\bullet l_0,r_0r^\bullet) = (q', l', r')\\ &= (q', l^\bullet l_0 x, r^\bullet) ~ \text{whenever} ~ \delta(q,r_{0}) = (q',x, \rightarrow) \\ & ~~~~ (q', l^\bullet, l_0 x r^\bullet) ~ \text{whenever} ~ \delta(q,r_{0}) = (q',x, \leftarrow) \\ &~~~~~ \dots \end{align*} \\[-0.7cm] where the dots stand for a list of similar lines for the various values of $q$ and $r_0$. This rewrites as a similar function $\bar{Next}$ working over the representation of the configurations as reals: \begin{align*} \mathit{\bar{Next}}(q,\bar l, \bar r) &= \mathit{\bar{Next}}(q,\bar{l^\bullet l_0},\bar{r_0r^\bullet}) = (q', \bar{l'}, \bar{r'})\\ &= (q', \bar{l^\bullet l_0 x}, \bar{r^\bullet}) ~ \text{whenever} ~ \delta(q,r_{0}) = (q',x, \rightarrow) \\ & ~~~~ (q', \bar{l^\bullet}, \bar{l_0 x r^\bullet}) ~ \text{whenever} ~ \delta(q,r_{0}) = (q',x, \leftarrow) \\ &~~~~~ \dots \end{align*} \\[-0.7cm] \vspace{-0.5cm} \begin{equation} \begin{array}{l} \label{textaremplacer} \mbox{where $r_{0} = \lfloor \base \bar{r}\rfloor$ and \hfill } \\ \mbox{ $\bullet$ in the first case ``$\rightarrow$'' : $\bar{l'} = \base^{-1} \bar l + \base^{-1} x $ and $\bar{r'} = \bar{r^\bullet} = \{\base \bar r\} $} \\ \mbox{ $\bullet$ in the second case ``$\leftarrow$'' : $\bar{l'} =\bar{ l^\bullet} = \{\base \bar l \} $ and $\bar{r'} = \base^{-2} \bar{r^\bullet} + \base^{-2} x + \base^{-1} \lfloor \base \bar{l}\rfloor $ } \end{array} \end{equation} Here $\{\cdot\}$ stands for the fractional part. The problem with such expressions is that we cannot expect the integer part and the fractional part functions to be in $\manonclass$ (as functions of this class are computable, and hence continuous, unlike the integer and fractional parts). But a key point is that, from our trick of using only the symbols $\symboleun$ and $\symboledeux$, we are sure that in an expression like $\lfloor \base \bar r \rfloor$, either it equals $0$ (the specific case where only blanks remain in $r$), or $\base \bar r$ lies in the interval $[\symboleun,\symboleun+1)$ or in the interval $[\symboledeux,\symboledeux+1)$. That means that we can replace $\{\base \bar r\}$ by $\sigma(\base \bar r)$, where $\sigma$ is some (piecewise affine) function obtained by composing the basic functions of $\manonclass$ in a suitable way. Namely, define $If (b, T, E)$ as a synonym for $ \signb{b} \times T + (1 - \signb{b}) \times E$.
Considering $i(x)= If(x,0,If(x-1,1,3))$ and $\sigma(x)=x-i(x)$, $i(\base \bar r)$ is then the same as $\lfloor \base \bar r \rfloor$, and $\sigma(\base \bar r)$ the same as $\{ \base \bar r \}$, in our context in the above expressions. In other words, we can replace the paragraph \eqref{textaremplacer} above by: \begin{equation*} \begin{array}{l} \mbox{ where $r_{0}= i(\base \bar r)$} \\ \mbox{$\bullet$ in the first case ``$\rightarrow$'' : $\bar{l'} = \base^{-1} \bar l + \base^{-1} x $ and $\bar{r'} = \bar{r^\bullet} = \sigma(\base \bar r) $} \\ \mbox{$\bullet$ in the second case ``$\leftarrow$'' : $\bar{l'} =\bar{ l^\bullet} = \sigma( \base \bar l) $ and $\bar{r'} = \base^{-2} \bar{r^\bullet} + \base^{-2} x + \base^{-1} i(\base \bar{l})$} \end{array} \end{equation*} and get something that still works exactly, but uses only functions from $\manonclass$. Notice that these nested $If$ expressions rewrite to an essentially constant expression. We can then write $$q'= If(q-0, nextq^{0}, If(q-1, nextq^{1}, \cdots, If(q-(|Q|-2), nextq^{|Q|-2},nextq^{|Q|-1})))$$ where, denoting by $v=i(\base \bar r)$ the currently read symbol, $$nextq^{q}= If(v-0,nextq^{q}_{0},If(v-\symboleun,nextq^{q}_{\symboleun},nextq^{q}_{\symboledeux}))$$ and where $nextq^{q}_{v}=q'$ if $\delta(q,v)= (q',x,m)$ for $m \in \{\leftarrow,\rightarrow\}$, for $v \in \{0,\symboleun,\symboledeux\}$. Similarly, we can write $$r'= If(q-0, nextr^{0}, If(q-1, nextr^{1}, \cdots, If(q-(|Q|-2), nextr^{|Q|-2},nextr^{|Q|-1})))$$ where $nextr^{q}= If(v-0,nextr^{q}_{0},If(v-\symboleun,nextr^{q}_{\symboleun},nextr^{q}_{\symboledeux}))$ and where $nextr^{q}_{v}$ is the corresponding expression for $\bar{r'}$ above, according to the value of $\delta(q,v)$. We can clearly write a similar expression for $l'$. These nested $If$ expressions rewrite to some essentially linear expressions. \end{proof} Once we have one step, we can simulate an arbitrary computation of a Turing machine, using some linear length ODE: \begin{proposition} \label{prop:deux} Consider some Turing machine $M$ that computes some function $f: \Sigma^{*} \to \Sigma^{*}$ in some time $T(\ell(\omega))$ on input $\omega$. One can construct some function $\tilde{\tu f}: \N \times \R \to \R$ in $\manonclass$ that does the same, with respect to the previous encoding: $\tilde{\tu f}(2^{T(\ell(\omega))},\encodagemot(\omega))$ provides $\encodagemot(f(\omega))$. \end{proposition} \begin{proof} The idea is to define the function $\bar {Exec}$ that maps $2^{t}$ and (the encoding of) an initial configuration $C$ to (the encoding of) the configuration at time $t$. This can be obtained by a linear length ODE using the previous lemma:
$$ \bar {Exec}(0,C) =C \quad\text{and}\quad \dderivl{\bar {Exec}}(t, C) =\bar{Next}(\bar {Exec}(t,C)). $$ We can then get the value of the computation as $\bar {Exec}(2^{T(\ell(\omega))}, C_{init})$ on input $\omega$, considering $C_{init}=(q_{init},0,\encodagemot(\omega))$. By applying some projection, we get the function $\tilde{\tu f}(x,y)= \projection{3}{3}(\bar {Exec}(x, q_{init},0,y))$ that satisfies the property. \end{proof} \section{Towards functions from integers to the reals} \label{sec:computablereal} The purpose of this section is to prove Theorem \ref{th:main:one}. \shortermcu{ \subsection{Reverse implication} } The reverse implication of Theorem \ref{th:main:one} mostly follows from Proposition \ref{prop:mcu:un} and arguments from computable analysis. For lack of space, details are given in the appendix. For the direct implication of Theorem \ref{th:main:one}, the difficulty is that we know from the previous section how to simulate Turing machines working over $\Image$, while we want functions that work directly over the integers and over the reals. A key point is to be able to convert from integers/reals to representations using only the symbols $\symboleun$ and $\symboledeux$, that is to say, to map integers to $\Image$, and $\Image$ to reals. \begin{lemma}[{From $\Image$ to $\R$}] \label{lem:codage:manon} We can construct some function $\Encode: \N \times [0,1] \to \R$ in $\manonclass$ that maps $\encodagemot(\overline{d})$ with $\overline{d} \in \{1,3\}^*$ to some real $d$. It is surjective over the dyadics, in the sense that for any dyadic $d \in \dyadic$, there is some (easily computable) such $\overline{d}$ with $\Encode(2^{\ell(\overline{d})},\encodagemot(\overline{d}))=d$. \end{lemma} \begin{proof} Consider the following transformation: every digit in the binary expansion of $d$ is encoded by a pair of symbols in the radix $4$ encoding of $\overline{d} \in [0,1]$: digit $0$ (respectively: $1$) is encoded by $11$ (resp. $13$) if before the ``decimal'' point in $d$, and digit $0$ (respectively: $1$) is encoded by $31$ (resp. $33$) if after. For example, for $d=101.1$ in base $2$, $\overline{d}=0.13111333$ in base $4$. The transformation from $\overline{d}$ to $d$ can be done by considering a function $F: [0,1]^{2} \to [0,1]^{2}$ that satisfies $$ F( \overline{r_1}, \overline{l_2}) = \left\{ \begin{array}{ll} (\sigma(16 \overline{r_1}), 2 \overline{l_2} + 0) & \mbox{ whenever } i( 16 \overline{r_1})= 5\\ (\sigma(16 \overline{r_1}), 2 \overline{l_2} + 1) & \mbox{ whenever } i( 16 \overline{r_1})= 7\\ (\sigma(16 \overline{r_1}), (\overline{l_2} + 0)/2) & \mbox{ whenever } i( 16 \overline{r_1})= 13\\ (\sigma(16 \overline{r_1}), (\overline{l_2} + 1)/2) & \mbox{ whenever } i( 16 \overline{r_1})= 15 \end{array}\right. $$ A natural candidate for this is an expression such as $If(i(16 \overline{r_1})-5,(\sigma(16 \overline{r_1}), 2 \overline{l_2} + 0), If(i(16 \overline{r_1})-7,(\sigma(16 \overline{r_1}), 2 \overline{l_2} + 1), If(i(16 \overline{r_1})-13,(\sigma(16 \overline{r_1}), (\overline{l_2} + 0)/2), (\sigma(16 \overline{r_1}),$ $(\overline{l_2} + 1)/2))))$ with $\sigma$ and $i$ constructed as suitable approximations of the fractional and integer parts, as in the previous section. We provide more details and intuition on the proof of Lemma \ref{lem:codage:manon}: to compute $d$, given $\overline{d}$, the intuition is to consider a two-tape Turing machine $(Q, \Sigma, q_{init}, \delta, F)$: the first tape contains the input ($\overline{d}$) and is read-only, the second one is write-only and empty at the beginning.
We just use a different encoding on the second tape than the previous one: for the first tape, we restrict to the digits $0,\symboleun,\symboledeux$, while for the second, we use the binary encoding. Writing down the natural Turing machine that does the transformation, it would basically do the above (in terms of real numbers), if we forget the encoding of the internal state. Here we write $\overline{ab}$ for the integer whose base $\base$ radix expansion is $ab$. This is how the function $F$ above was obtained. Then the previous reasoning applies to the iterations of the function $F$, which provides the encoding function. Concerning the missing details on the choice of the functions $\sigma$ and $i$: from the fact that we have only $\symboleun$ and $\symboledeux$ in $\overline{r}$, the reasoning is valid as soon as $i(16 \overline{r})$ is correct for $\lfloor 16 \overline{r} \rfloor \in \{\overline{11},\overline{13},\overline{31},\overline{33}\}$. So $i(x)=If(x-5,5,If(x-7,7,If(x-13,13,15)))$ works. Then take $\sigma(x)=x-i(x)$. We then just need to apply $F$ $\ell(\overline{d})$ times to $(\encodagemot(\overline{d}),0)$, and then project on the second component, to get a function $\Encode$ that does the job. That is, $\Encode(x,y)= \projection{2}{2}(G(x,y))$ with $$ G(0,y) = (y,0) \quad\text{and}\quad \dderivl{G}(t, y) =F(G(t,y)). $$ \end{proof} \begin{lemma}[From $\N$ to $\Image$] \label{lem:manquant} We can construct some function $\Decode: \N^{d} \to \R$ in $\manonclass$ that maps $n \in \N$ to some (easily computable) encoding of $n$ in $\Image$. \end{lemma} \begin{proof} We discuss only the case $d=1$, for lack of space. Let $div_{2}$ (respectively: $mod_{2}$) denote the integer division (resp. the remainder of the division) by $2$: as these functions are from $\N$ to $\N$, by Theorem \ref{th:ptime characterization 2} from \THEPAPIERS{}, they belong to $\mathbb{LDL}$. Their expressions in $\mathbb{LDL}$, replacing $\sign$ by $\signb$, provide some extensions $\overline{div_{2}}$ and $\overline{mod_{2}}$ in $\manonclass$. We then do something similar as in the previous lemma, but now with the function $$ F( \overline{r_1}, \overline{l_2}) = \left\{ \begin{array}{ll} (\overline{div_{2}}(\overline{r_1}), (\overline{l_2} + \symboleun)/\base) & \mbox{ whenever } \overline{mod_{2}}( \overline{r_1})=0, \\ (\overline{div_{2}}(\overline{r_1}), (\overline{l_2} + \symboledeux)/\base) & \mbox{ whenever } \overline{mod_{2}}( \overline{r_1})=1, \\ \end{array}\right. $$ so that the binary digit $b$ of $n$ is represented by the symbol $2b+1 \in \{\symboleun,\symboledeux\}$. \end{proof} We can now prove the direct direction of Theorem \ref{th:main:one}: assume that $\tu f: \N^{d} \to \R^{d'}$ is computable in polynomial time. That means that each of its components is, so we can consider without loss of generality that $d'=1$. We assume also that $d=1$ (otherwise consider either multi-tape Turing machines, or some suitable alternative encoding in $\Encode$). That means that there is a polynomial time computable function $d: \N^{d+1} \to \{\symboleun,\symboledeux\}^{*}$ such that, on $\tu m,n$, it provides the encoding of some dyadic $\phi(\tu m,n)$ with $\|\phi(\tu m,n)-\tu f(\tu m)\| \le 2^{-n}$ for all $\tu m$. From Proposition \ref{prop:deux}, we can construct $\tilde{d}$ with $\tilde{d}(2^{p(\max(\tu m,n))},\Decode(n,\tu m))=\encodagemot(d(\tu m,n))$ for some polynomial $p$ corresponding to the time required to compute $d$. Both functions $\length{\tu x}=\length{x_1}+ \ldots + \length{x_p}$ and $B(\tu x)=2^{\length{\tu x}\cdot \length{\tu x}}$ are in $\mathbb{LDL}$ (see \THEPAPIERS).
It is easily seen that $\length{\tu x}^{c}\leq B^{(c)}(\length{\tu x})$, where $B^{(c)}$ is the $c$-fold composition of the function $B$. Then $\tilde{\tu f}(\tu m,2^{n})=\Encode(B^{(c)}(\max(\tu m,n)),\tilde{d}( B^{(c)}(\max(\tu m,n)), \Decode(n,\tu m)))$ provides a solution such that $\|\tilde{\tu f}(\tu m,2^{n})-\tu f(\tu m)\| \le 2^{-n}.$ \section{Proving Theorems \ref{th:main:two} and \ref{th:main:twop}} \label{sec:main:two} Clearly, Theorem \ref{th:main:twop} follows from the case $d=1$ and $d'=1$ of Theorem \ref{th:main:two}. Hence, it only remains to prove Theorem \ref{th:main:two}. The direct direction is immediate from Theorem \ref{th:main:one}. For the reverse direction, by induction, the only thing to prove is that the class of functions from the integers to the reals computable in polynomial time is preserved by the operation $\MANONlim$. Take such a function $\tilde{\tu f}$. By definition, given $\tu m$, we can compute $\tilde{\tu f}(\tu m, 2^n)$ with precision $2^{-n}$ in time polynomial in $n$. By definition of the $\MANONlim$ schema, this is an approximation of $\tu f(\tu m)$ within $2^{-n+1}$, and hence $\tu f$ is computable in polynomial time. \section{Generalizations} \label{sec:generalizations} Recall that a function $M : \N \rightarrow \N$ is a modulus of convergence of $g: \N \to \R$, with $g(n)$ converging toward $0$ when $n$ goes to $\infty$, if and only if for all $i>M(n)$, we have $\| g(i) \| \le 2^{-n} $. A function $M :\N \rightarrow \N$ is a uniform modulus of convergence of a sequence $g: \N^{d+1} \to \R$, with $g(\tu m,n)$ converging toward $0$ when $n$ goes to $\infty$, if and only if for all $\tu m$ and all $i>M(n)$, we have $\| g(\tu m,i) \| \le 2^{-n} $. Intuitively, a modulus of convergence gives the speed of convergence of a sequence. \begin{definition}[Operation $\MANONlimd$] Given $\tilde{\tu f}:\N^{d+1} \to \R \in \manonclass$ and $g: \N^{d+1} \to \R$ such that, for all $\tu m \in \N^{d}$ and $n \in \N$, $\|\tilde{\tu f}(\tu m,2^{n}) - \tu f(\tu m) \| \le g(\tu m, n)$, under the conditions that $0 \le g(\tu m, n)$, that $g(\tu m, n)$ is decreasing to $0$, and that $\| g(\tu m,p(n)) \| \le 2^{-n}$ for some polynomial $p(n)$, then $\MANONlimd(\tilde{\tu f},g)$ is the (clearly uniquely defined) corresponding function $\tu f: \N^{d} \to \R$. \end{definition} \begin{theorem} We could replace $\MANONlim$ by $\MANONlimd$ in \shortermcu{the statements of} Theorems \ref{th:main:two} and \ref{th:main:twop}. \end{theorem} This is equivalent to proving the following; observe also from the proof that we can replace, in the above statement, ``$g(\tu m,n)$ going to $0$'' by ``decreasing to $0$'', and the last condition by $\| g(\tu m,p(n)) \| \le 2^{-n}$. \begin{theorem}\label{th:dix} $\tu F: \N^{d} \to \R^{d'}$ is computable in polynomial time iff there exist $\tu f: \N^{d+1} \rightarrow \Q^{d'}$, with $\tu f(\tu m,n)$ computable in polynomial time \unaire{n}, and $g : \N^{d+1} \rightarrow \Q$ such that \begin{itemize} \item $\| \tu f(\tu m,n) - \tu F(\tu m) \| \leq g(\tu m,n) $, \item $0 \le g(\tu m,n)$ and $g(\tu m,n)$ converging to $0$ when $n$ goes to $+\infty$, \item with a uniform polynomial modulus of convergence $p(n)$. \end{itemize} \end{theorem} \begin{proof} $\Rightarrow$ If we assume that $\tu F$ is computable in polynomial time, we take for $\tu f(\tu m,n)$ a dyadic approximation of $\tu F(\tu m)$ at precision $2^{-n}$, we set $g(\tu m, n) = 2^{-n}$, and we take the identity as uniform modulus of convergence. $\Leftarrow$ Given $\tu m$ and $n$, approximate $\tu f(\tu m,p(n+1)) \in \Q$ at precision $2^{-(n+1)}$ by some dyadic rational $q$ and output $q$.
This can be done in polynomial time \unaire{n}. % We then have $$\begin{array}{lll} \| q - \tu F(\tu m) \| & \le & \| q- \tu f(\tu m,p(n+1)) \| + \| \tu f(\tu m,p(n+1)) - \tu F(\tu m) \| \\ & \le & 2^{-(n+1)}+ g(\tu m,p(n+1)) \\ & \le & 2^{-(n+1)}+ 2^{-(n+1)} \le 2^{-n}. \end{array}$$ \end{proof} From the proofs we also get a normal form theorem. In particular: \begin{theorem}[Normal form theorem] Any function $f: \N^{d} \to \R^{d'}$ of $\manonclasslim$ can be obtained from the class $\manonclass$ using only one application of the schema $\MANONlim$ (or $\MANONlimd$). \end{theorem} \section{Conclusion and future work} \label{sec:conclusion} In this article, we characterized the set of polynomial time computable functions from the integers to the reals. As we already said, our aim in future work is to characterize $\cp{FPTIME} \cap \R^{\R} $ and not only $\cp{FPTIME} \cap \R^{\N} $. This is clearly a harder task. In particular, a natural approach would be to consider some function mapping $\R$ to $\Image$. Unfortunately, such a function is necessarily discontinuous, hence non-computable, and hence cannot be in the class. The approach of \emph{mixing} of \cite{BCGH07} might provide a solution, even if the constructions there, based on (classical) continuous ODEs, deeply use some closure properties of these functions that do not hold for discrete ODEs. \newpage \printbibliography \newpage
\section{Introduction} As a high-order extension of the matrix, the tensor is an important data format for multi-dimensional data applications, such as color image and video processing \cite{sobral2017matrix,Korah2007TIP,Liu2013PAMItensor}, hyperspectral data recovery and fusion \cite{yang2020remote,dian2019hyperspectral,deng2019fusion}, personalized web search \cite{Sun2005web,lima2017cellular}, high-order web link analysis \cite{Kolda2005Datamining}, magnetic resonance imaging (MRI) data recovery \cite{MRITV}, and seismic data reconstruction \cite{Kreimer2012HSVDtensor}. Owing to objective restrictions, for example, the imaging conditions during visual data acquisition and the limited transmission bandwidth, the multi-dimensional data in many applications are incomplete or grossly corrupted. This motivates us to perform tensor completion \cite{Liu2013PAMItensor} or tensor robust principal component analysis (RPCA) \cite{yang2020low}, in which how to characterize and utilize the internal structural information of the multi-dimensional data is of crucial importance. For matrix processing, low-rank models can effectively and efficiently handle two-dimensional data of various sources \cite{candes2012exact,candes2011robust}. Generalized from the matrix format, a tensor is able to contain more essential structural information, being a powerful tool for dealing with multi-modal and multi-relational data \cite{song2016sublinear}. Unfortunately, it is not easy to directly extend low-rankness from matrices to tensors. More precisely, there is no exact (or unique) definition of the tensor rank. In the past decades, the most popular rank definitions have been the CANDECOMP/PARAFAC (CP)-rank \cite{acar2011scalable,tichavsky2017numerical} and the Tucker-rank \cite{li2018low,li2017mr} (also denoted as ``$n$-rank'' in \cite{gandy2011tensor}). The CP-rank is based on the CP decomposition; however, computing the CP-rank of a given tensor is NP-hard \cite{hillar2013most}. The Tucker-rank is based on the Tucker decomposition, in which the tensor is unfolded along each mode, unavoidably destroying the intrinsic structures of the tensor. In this paper, we investigate the newly emerged tensor rank definitions, i.e., the tensor multi-rank and the tensor tubal-rank, which are computable and induced by the tensor singular value decomposition (t-SVD). The t-SVD was initially proposed by Braman {\em et al.} \cite{braman2010third} and Kilmer {\em et al.} \cite{kilmer2011factorization}, based on the tensor-tensor product (denoted as t-prod), in which third-order tensors are operated on as a whole, avoiding the loss of information inherent in matricization or flattening of the tensor \cite{kilmer2013third}. Meanwhile, the t-SVD has shown superior performance in capturing the spatial-shifting correlation that is ubiquitous in real-world data \cite{martin2013order,braman2010third,kilmer2011factorization}. Although the t-SVD was initially designed for third-order tensors, it has been extended to higher-order tensors with arbitrary dimensions \cite{martin2013order,zheng2019mixed}. In \cite{kernfeld2015tensor}, Kernfeld {\em et al.} note that the t-prod is based on a convolution-like operation, which can be implemented using the discrete Fourier transform (DFT).
Then, given a third-order tensor $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$, its Fourier transformed (along the third mode) tensor is denoted as $\widehat{\mathcal{X}}\in\mathbb{C}^{n_1\times n_2\times n_3}$, and its tensor multi-rank is a vector whose $i$-th element equals the rank of the $i$-th frontal slice of $\widehat{\mathbf{\mathcal{X}}}$ \cite{zhang2014novel}. The tensor nuclear norm (TNN) of $\mathcal{X}$ is subsequently defined; it equals the sum of the nuclear norms of $\widehat{\mathbf{\mathcal{X}}}$'s frontal slices and is a convex relaxation of the sum of the matrix ranks of all $\widehat{\mathbf{\mathcal{X}}}$'s slices. By minimizing the TNN, Zhang {\em et al.} \cite{zhang2014novel} build the low-rank tensor completion model and provide theoretical performance bounds for third-order tensor recovery from limited sampling. Lu {\em et al.} \cite{lu2016tensor} utilize the TNN\footnote{In \cite{lu2016tensor}, the TNN is defined with a factor $1/n_3$.} for the tensor RPCA. Similar research, which adopts the TNN for multi-dimensional data recovery, can be found in \cite{jiang2017exact,lu2019tensor,hu2017twist}. Beyond the Fourier transform, Kernfeld {\em et al.} find that the t-prod, together with the tensor decomposition scheme, can be defined via any invertible transform, for instance, the discrete cosine transform (DCT). Namely, the t-prod can be implemented by matrix products after the invertible transformation along the third mode. Xu {\em et al.} \cite{xu2018cosine} validate that, when minimizing the DCT-based TNN for the tensor completion problem, the DCT is superior to the DFT in terms of the preservation of the head and tail frontal slices, because of its mirror boundary condition. Corroborative results can be found in \cite{lu2019low,lu2019exact2}, which demonstrate that any invertible linear transform can be applied to induce the TNN for the tensor completion task. Coincidentally, Song {\em et al.} \cite{song2019robust} find that the corresponding transformed tubal-rank can be smaller with an appropriate unitary transform, for instance, the Haar wavelet transform, and they prove that one can recover a low transformed tubal-rank tensor exactly with overwhelming probability, provided that its transformed tubal-rank is sufficiently small and its corrupted entries are reasonably sparse. Tensor data recovery within the t-SVD framework can be viewed as finding a low-rank approximation in the transformed domain. Therefore, if the transformed tensor is of (approximately) lower rank, minimizing the corresponding TNN, namely the TNN defined based on the transformation, would be more effective for the recovery \cite{song2019robust}. In \cite{song2019robust,lu2019low,lu2019exact2}, the authors establish elegant theoretical results based on unitary transforms or invertible linear transforms. However, the requirement of invertibility prevents their results from applying to non-invertible (or semi-invertible) transformations, which could bring in redundancy. We note that redundancy in the transformation is important, as the transformed coefficients can contain information about missing data in the original domain; see for example the work by Cai {\em et al.} \cite{cai10}. In this paper, we suggest using the tight wavelet frame (framelet) as the transformation within the t-SVD framework. Because of the redundancy of the framelet basis, each tube admits a sparse representation.
We expect that the frontal slices of the framelet transformed tensor are close to low-rank matrices, so that the corresponding sum of the matrix ranks of all framelet transformed slices is small. As an example, we illustrate this motivation by using a magnetic resonance image (MRI) of size $142 \times 178 \times 121$, a multispectral image (MSI) of size $512 \times 512 \times 31$, and video data of size $144 \times 176 \times 100$ to demonstrate their rank reduction via the framelet transformation\footnote{The piece-wise cubic B-spline is used to generate the framelet system.} compared with the Fourier transformation. Note that, for real imaging data, each transformed frontal slice is not an exact low-rank matrix, but it is close to one: many singular values of each transformed frontal slice are small. We show in Table \ref{lowerrank} the mean value of the matrix ranks of the transformed frontal slices $\mathcal{X}(:,:,i)$. Here we discard the singular values of each transformed frontal slice when they are smaller than the truncation parameter $\epsilon$, obtaining the truncated rank of each transformed slice. It is clear that the mean value of such truncated matrix ranks using the framelet transformation is lower than that using the Fourier transformation. When the framelet transformed tensor is closer to a low-rank tensor than the Fourier transformed one, it is expected that the resulting tensor completion can perform much better in practice. The framelet based TNN (F-TNN) minimization models are subsequently formulated for low-rank tensor completion (LRTC) and tensor RPCA. The proposed minimization models are convex, and global minimizers can be obtained via the alternating direction method of multipliers (ADMM) \cite{boyd2011distributed} with a theoretical convergence guarantee. We conduct numerical experiments on various types of multi-dimensional imaging data, and the results verify that our framelet based method outperforms the compared methods. \begin{table}[!t]\label{lowerrank} \renewcommand\arraystretch{0.9}\setlength{\tabcolsep}{4pt}\scriptsize\centering \caption{The mean value of the truncated ranks of the transformed frontal slices, using the FFT and the framelet transform, for the MRI, MSI, and video data sets.} \begin{tabular}{ccccccc} \toprule \multirow{3}{*}{Data} &\multirow{3}{*}{Parameter $\epsilon$} &FFT & Framelet & \multirow{3}{*}{Reduction}\\ & &Multi-rank & Multi-rank & \\ & &(mean value) & (mean value) & \\\midrule \multirow{3}{*}{MRI} & 0.02 & 101.0 & 77.8 & 23.3 \\ & 0.01 & 120.1 & 94.1 & 25.9 \\ & 0.005 & 131.9 & 108.9 & 23.0 \\\midrule \multirow{3}{*}{Video} & 0.02 & 106.7 & 74.5 & 32.2 \\ & 0.01 & 122.7 & 92.2 & 30.5 \\ & 0.005 & 132.6 & 108.5 & 24.1 \\\midrule \multirow{3}{*}{MSI} & 0.02 & 83.8 & 46.1 & 37.7 \\ & 0.01 & 132.8 & 77.8 & 55.0 \\ & 0.005 & 218.0 & 136.0 & 82.0 \\ \bottomrule \end{tabular} \end{table} \subsection{Contributions} The main contributions can be summarized as follows. \textbf{(i)} We suggest the framelet transform within the t-SVD framework and propose a tensor completion model, which minimizes the framelet representation of the tensor nuclear norm. \textbf{(ii)} To tackle the non-invertible framelet transform based models, we develop ADMM-based algorithms with guaranteed convergence, and we test our method on various types of multi-dimensional data. The outperformance of our method further corroborates the usage of framelets. The outline of this paper is given as follows.
In Section \ref{Sec:Pre}, some preliminary background on tensors and framelets is given. The main results, including the proposed model and algorithm, are presented in Section \ref{Sec:Model}. Experimental results are reported in Section \ref{Sec:Exp}. Finally, Section \ref{Sec:Con} draws some conclusions. \section{Preliminaries}\label{Sec:Pre} This section provides the basic ingredients used to derive the proposed method. We first give the basic tensor notation and then introduce the t-SVD framework, which has been proposed in \cite{kilmer2013third,kilmer2011factorization,zhang2014novel,lu2016tensor}. We restate them here for the readers' convenience. Next, the basics of framelets are briefly presented. \subsection{Tensor Notations And Definitions} Generally, a third-order tensor is denoted as $\mathbf{\mathcal{X}}\in \mathbb{R}^{n_{1}\times n_2\times n_{3}}$, and $x_{i,j,k}$ is its $(i,j,k)$-th component. We use $\mathcal{X}^{(k)}$ or $\mathcal{X}(:,:,k)$ to denote the $k$-th frontal slice of a third-order tensor $\mathbf{\mathcal{X}}\in\mathbb{R}^{n_1\times n_2\times n_3}$. {\begin{mydef}[tensor mode-3 unfolding and folding \cite{kolda2009tensor}] The mode-$3$ unfolding of a tensor $\mathcal{X}\in\mathbb{R}^{n_1\times n_2\times n_3}$ is denoted as a matrix $\mathbf X_{(3)}\in \mathbb{R}^{n_3\times n_1n_2}$, where the tensor's $(i,j,k)$-th element maps to the matrix's $(k,l)$-th element satisfying {$l=(j-1)n_1+i$}. The mode-3 unfolding operator and its inverse are respectively denoted as ${\tt{unfold}}_3$ and ${\tt{fold}}_3$, and they satisfy $\mathcal{X}={\tt fold}_{3}({\tt unfold}_{3}(\mathcal{X})) = {\tt fold}_{3}(\mathbf X_{(3)})$. \end{mydef}} { \begin{mydef}[mode-3 tensor-matrix product \cite{kolda2009tensor}] The mode-3 tensor-matrix product of a tensor $\mathcal{X} \in \mathbb{R}^{n_1\times n_2\times n_3}$ with a matrix $\mathbf A\in\mathbb{R}^{m\times n_3}$ is denoted by ${\mathcal{X}}\times_3\mathbf A$ and is of size $n_1\times n_2\times m$. Elementwise, we have \begin{equation} (\mathcal{X}\times_3 \mathbf A)_{i,j,k}=\sum_{n=1}^{n_3}x_{i,j,n}\cdot a_{k,n}. \end{equation} The mode-3 tensor-matrix product can also be expressed in terms of the mode-3 unfolding \begin{equation}\nonumber \mathcal{Y}=(\mathcal{X}\times_3 \mathbf A)\quad \Leftrightarrow \quad \mathbf Y_{(3)}=\mathbf A\cdot\text{unfold}_3(\mathcal{X}). \end{equation} \end{mydef}} The one-dimensional DFT of a vector ${\mathbf{x}}\in\mathbb{R}^n$, denoted as $\mathbf{\bar{x}}$, is given by $\mathbf{\bar{x}} = \mathbf F_n\mathbf x \in \mathbb{C}^n$, where $\mathbf F_n\in\mathbb{C}^{n\times n}$ is the DFT matrix. In this paper, we use $\widehat{\mathcal{X}}$ to denote the tensor obtained by performing the one-dimensional DFT along the mode-3 fibers (tubes) of $\mathcal{X}$.
By using the DFT matrix $\mathbf{F}_{n_3}\in\mathbb{C}^{n_3\times n_3}$, we have $$ \widehat{\mathbf{\mathcal{X}}} =\mathcal{X}\times_3\mathbf F_{n_3} = {\tt fold}_3\left( \mathbf{F}_{n_3}{\tt unfold}_3(\mathcal{X})\right)\in \mathbb{C}^{n_{1}\times n_2\times n_{3}}.$$ \begin{mydef}[tensor conjugate transpose \cite{kilmer2013third}] The conjugate transpose of a tensor $\mathbf{\mathcal{A}}\in \mathbb{C}^{n_{2}\times n_1\times n_{3}}$ is the tensor $\mathbf{\mathcal{A}}^\text{\rm H}\in \mathbb{C}^{n_{1}\times n_2\times n_{3}}$ obtained by conjugate transposing each of the frontal slices and then reversing the order of the transposed frontal slices 2 through $n_3$, i.e., $ \left(\mathbf{\mathcal{A}}^\text{\rm H}\right)^{(1)}=\left(\mathbf{\mathcal{A}}^{(1)}\right)^\text{\rm H}$ and $\left(\mathbf{\mathcal{A}}^\text{\rm H}\right)^{(i)}=\left(\mathbf{\mathcal{A}}^{(n_3+2-i)}\right)^\text{\rm H}$ ($i=2,\cdots,n_3$). \end{mydef} \begin{mydef}[t-prod \cite{kilmer2013third}]\label{Def:1} The tensor-tensor-product (t-prod) $\mathbf{\mathcal{C}}=\mathbf{\mathcal{A}}*\mathbf{\mathcal{B}}$ of $\mathbf{\mathcal{A}}\in \mathbb{R}^{n_{1}\times n_2\times n_{3}}$ and $\mathbf{\mathcal{B}}\in \mathbb{R}^{n_{2}\times n_4\times n_{3}}$ is a tensor of size $n_1\times n_4 \times n_3$, where the $(i,j)$-th tube $\mathbf{c}_{ij:}$ is given by \begin{equation} \mathbf{c}_{ij:} = \mathbf{\mathcal{C}}(i,j,:) = \sum_{k=1}^{n_2}\mathbf{\mathcal{A}}(i,k,:)*\mathbf{\mathcal{B}}(k,j,:) \end{equation} where $*$ denotes the circular convolution between two tubes of the same size. \label{def:tprod} \end{mydef} \begin{mydef}[identity tensor \cite{kilmer2013third}]\label{Def:2} The identity tensor $\mathbf{\mathcal{I}}\in \mathbb{R}^{n_{1}\times n_1\times n_{3}}$ is the tensor whose first frontal slice is the $n_1\times n_1$ identity matrix, and whose other frontal slices are all zeros. \end{mydef} \begin{mydef}[orthogonal tensor \cite{kilmer2013third}]\label{Def:3} A tensor $\mathbf{\mathcal{Q}} \in \mathbb{C}^ {n_{1} \times n_1\times n_{3}}$ is orthogonal if it satisfies \begin{equation} \mathbf{\mathcal{Q}}^\text{\rm H}*\mathbf{\mathcal{Q}}=\mathbf{\mathcal{Q}}*\mathbf{\mathcal{Q}}^\text{\rm H}=\mathbf{\mathcal{I}}. \end{equation} \end{mydef} \begin{mydef}[f-diagonal tensor \cite{kilmer2013third}]\label{Def:1} A tensor $\mathbf{\mathcal{A}}$ is called f-diagonal if each frontal slice $\mathbf{\mathcal{A}}^{(i)}$ is a diagonal matrix. \end{mydef} \begin{thm}[t-SVD \cite{kilmer2013third,kilmer2011factorization}] For $\mathbf{\mathcal{A}}\in \mathbb{R}^{n_{1}\times n_2\times n_{3}}$, the t-SVD of $\mathbf{\mathcal{A}}$ is given by \begin{equation} \mathbf{\mathcal{A}}=\mathbf{\mathcal{U}}*\mathbf{\mathcal{S}}*\mathbf{\mathcal{V}}^\text{\rm H} \end{equation} where $\mathbf{\mathcal{U}}\in \mathbb{R}^{n_{1}\times n_1\times n_{3}}$ and $\mathbf{\mathcal{V}}\in \mathbb{R}^{n_{2}\times n_2\times n_{3}}$ are orthogonal tensors, and $\mathbf{\mathcal{S}}\in \mathbb{R}^{n_{1}\times n_2\times n_{3}}$ is an f-diagonal tensor. \end{thm} The t-SVD is illustrated in Figure \ref{tsvd}.
\begin{figure}[hbtp] \centering \includegraphics[width=0.85\linewidth]{figs/tsvd.pdf} \caption{The t-SVD of an $n_1 \times n_2 \times n_3$ tensor.} \label{tsvd} \end{figure} \begin{mydef}[tensor tubal-rank and multi-rank \cite{zhang2014novel}]\label{Def:tubal} The tubal-rank of a tensor $\mathbf{\mathcal{A}}\in\mathbb{R}^{n_1\times n_2\times n_3}$, denoted as $\text{rank}_t(\mathbf{\mathcal{A}})$, is defined to be the number of non-zero singular tubes of $\mathbf{\mathcal{S}}$, where $\mathbf{\mathcal{S}}$ comes from the t-SVD of $\mathbf{\mathcal{A}}$: $\mathbf{\mathcal{A}}=\mathbf{\mathcal{U}}*\mathbf{\mathcal{S}}*\mathbf{\mathcal{V}}^\text{\rm H}$. That is \begin{equation} \text{rank}_t(\mathbf{\mathcal{A}})=\#\{i:\mathbf{\mathcal{S}}(i,:,:)\neq0\}. \end{equation} The tensor multi-rank of $\mathbf{\mathcal{A}}\in\mathbb{R}^{n_1\times n_2\times n_3}$ is a vector, denoted as $\text{rank}_r (\mathcal{A})\in\mathbb{R}^{n_3}$, with the $i$-th element equal to the rank of the $i$-th frontal slice of $\widehat{\mathbf{\mathcal{A}}}$. \end{mydef} \begin{mydef}[block diagonal form \cite{zhang2014novel}]\label{Def:bldg} Let $\overline{\mathbf{\mathcal{A}}}$ denote the block-diagonal matrix of the tensor $\widehat{\mathbf{\mathcal{A}}}$ in the Fourier domain, i.e., \begin{equation} \begin{aligned} \overline{\mathcal{A}}&\triangleq {\tt blockdiag}(\widehat{\mathbf{\mathcal{A}}})\\ &\triangleq \left [ \begin{tabular}{cccc} $\widehat{\mathbf{\mathcal{A}}}^{(1)}$&&&\\ &$\widehat{\mathbf{\mathcal{A}}}^{(2)}$ &&\\ &&$\ddots$ &\\ &&&$\widehat{\mathbf{\mathcal{A}}}^{(n_3)}$ \end{tabular}\right] \in\mathbb{C}^{n_1n_3\times n_2n_3}, \end{aligned} \end{equation} \end{mydef} where $\widehat{\mathcal{A}}^{(k)}=\widehat{\mathcal{A}}(:,:,k)$ is the $k$-th slice of $\widehat{\mathcal{A}}$ for $k = 1,2,\cdots,n_3$. It is not difficult to find that $\overline{\mathcal{{A}}^\text{\rm H}}=\overline{\mathcal{{A}}}^\text{\rm H}$, i.e., the block diagonal form of a tensor's conjugate transpose equals the matrix conjugate transpose of the tensor's block diagonal form. Furthermore, for any tensor $\mathbf{\mathcal{A}}\in \mathbb{R}^{n_{1}\times n_2\times n_{3}}$ and $\mathbf{\mathcal{B}}\in \mathbb{R}^{n_{2}\times n_4\times n_{3}}$, we have \[ \mathbf{\mathcal{A}}*\mathbf{\mathcal{B}}=\mathbf{\mathcal{C}} \Leftrightarrow \overline{\mathcal{A}}\cdot\overline{\mathcal{B}}=\overline{\mathcal{{C}}}, \] where $\cdot$ is the matrix product. \begin{mydef}[tensor-nuclear-norm (TNN) \cite{zhang2014novel}] The tensor nuclear norm of a tensor $\mathbf{\mathcal{A}}\in \mathbb{R}^{n_{1}\times n_2\times n_{3}}$, denoted as $\|\mathbf{\mathcal{A}}\|_{\text{\rm TNN}}$, is defined as \begin{equation} \begin{aligned} \|\mathbf{\mathcal{A}}\|_{\text{TNN}}\triangleq\|\overline{\mathbf{\mathcal{A}}}\|_{*}, \end{aligned} \label{tnn} \end{equation} \end{mydef} where $\|\cdot\|_*$ refers to the matrix nuclear norm. For a matrix $\mathbf X\in\mathbb{C}^{m\times n}$, $\|\mathbf X\|_* = \sum_i^{\min\{m,n\}}\sigma_i$, where $\sigma_i$ is the $i$-th singular value of $\mathbf X$. The TNN can be computed via the summation of the matrix nuclear norms of the Fourier transformed tensor's slices, which are also the blocks of $\overline{\mathbf{\mathcal{A}}}$. That is, $\|\mathbf{\mathcal{A}}\|_{\text{TNN}}=\sum\limits_{i=1}^{n_3}\|\widehat{\mathbf{\mathcal{A}}}^{(i)}\|_*$. We summarize the frequently used notations in Table \ref{notations}.
\begin{table}[htbp]\label{notations} \renewcommand\arraystretch{1.3}\setlength{\tabcolsep}{2pt} \caption{Tensor notations} \begin{tabular}{p{0.27\columnwidth} p{0.70\columnwidth}} \toprule Notation & Explanation \\ \midrule $\mathbf{\mathcal{X}},\mathbf{X},\mathbf{x},x$ & Tensor, matrix, vector, scalar.\\ \multirow{2}{*}{$*$} & The tensor-tensor product or the circular convolution between vectors.\\ \multirow{2}{*}{$\mathcal{X}(:,:,k)$ (or $\mathcal{X}^{(k)}$)} & The $k$-th frontal slice of a third-order tensor $\mathbf{\mathcal{X}}\in\mathbb{R}^{n_1\times n_2\times n_3}$.\\ ${\tt fold}_3$ (${\tt unfold}_3$) & The fold (or unfold) operation along the third mode.\\ $\mathbf{{X}}_{(3)}$ & The mode-3 unfolding of a tensor $\mathbf{\mathcal{X}}$.\\ $\widehat{\mathcal{X}}$ & The Fourier transformed (along the third mode) tensor.\\ \multirow{2}{*}{$\text{rank}_r (\mathcal{A})$} & The multi-rank of a tensor $\mathbf{\mathcal{X}}$; its $i$-th element equals $\text{rank}(\widehat{\mathcal{X}}^{(i)})$.\\ \multirow{2}{*}{$\left\|\mathbf{\mathcal{X}}\right\|_\text{TNN}$} & The tensor nuclear norm of a tensor $\mathbf{\mathcal{X}}$; it equals the sum of the nuclear norms of $\widehat{\mathcal{X}}$'s slices. \\ \bottomrule \end{tabular} \end{table} \subsection{Framelet}\label{Framelet} A tight frame is defined as a countable set $X\subset L_2(\mathbb{R})$ with the property that $\forall f\in L_2(\mathbb{R})$, $f=\sum\limits_{g\in X}\langle f,g\rangle g.$ This is equivalent to requiring that $\forall f\in L_2(\mathbb{R})$, we have \begin{equation*} \Vert{f}\Vert_{L_2(\mathbb{R})}^2=\sum\limits_{g\in X}\vert\langle f,g\rangle\vert^2, \end{equation*} where $\langle \cdot ,\cdot\rangle$ is the inner product in $L_2(\mathbb{R})$, and $\Vert \cdot \Vert_{L_2(\mathbb{R})}=\langle \cdot ,\cdot\rangle^ \frac {1}{2}$. For given $\Psi :=\{\psi_1,\psi_2,\cdots,\psi_r\}\subset L_2(\mathbb{R})$, the affine (or wavelet) system is defined by the collection of the dilations and the shifts of $\Psi$ as $X(\Psi):=\{\psi_{l,j,k} : 1\le l\le r; j,k\in\mathbb{Z}\}$, where $\psi_{l,j,k}:=2^{j/2}\psi_l(2^j\cdot-k)$. When $X(\Psi)$ forms a tight frame of $L_2(\mathbb{R})$, it is called a tight wavelet frame, and $\psi_l,l=1,2,\cdots,r$ are called the (tight) framelets. In the numerical scheme of image processing, the framelet transform (decomposition operator) of a vector $\mathbf v\in \mathbb{R}^{n}$ can be represented by a matrix $\mathbf W \in\mathbb{R}^{wn\times n}$, the framelet transform matrix constructed with $m$ filters and $l$ decomposition levels, where $w=(m-1)l+1$. The processes of generating such matrices have been detailed in many works, such as \cite{cai10,JiangFramelet}; we omit them here for readability. The framelet transform of a discrete signal $\mathbf v\in \mathbb{R}^{n}$ can then be written as $\mathbf u=\mathbf W\mathbf v \in \mathbb{R}^{wn}$. Besides, the unitary extension principle (UEP) \cite{ron1997affine} asserts that $\mathbf W^\top\mathbf W\mathbf v=\mathbf v$, where $\mathbf W^\top $ indicates the inverse framelet transform. However, $\mathbf W\mathbf W^\top\mathbf u\neq\mathbf u$ in general. \vspace{-2mm} \section{Main results}\label{Sec:Model} In this section, we replace the Fourier transform by the framelet transform. The starting point of our idea is that the framelet transform brings in redundancy, and the transformed data is of lower multi-rank.
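To make this concrete, the following Python sketch builds a one-level framelet transform matrix $\mathbf W$ and checks the UEP property $\mathbf W^\top\mathbf W=\mathbf I$ together with the transform along the third mode. The piecewise-linear B-spline framelet filters (three filters), the single decomposition level, and the periodic boundary are illustrative assumptions made here for brevity; the experiments in this paper use the piecewise-cubic system, and the actual construction of $\mathbf W$ is detailed in \cite{cai10,JiangFramelet}.
\begin{verbatim}
import numpy as np

def conv_matrix(h, n, center=1):
    # Circular-convolution matrix S with
    # (S v)[k] = sum_j h[j] * v[(k + j - center) % n].
    S = np.zeros((n, n))
    for k in range(n):
        for j, hj in enumerate(h):
            S[k, (k + j - center) % n] += hj
    return S

# Piecewise-linear B-spline tight framelet:
# one low-pass and two high-pass filters.
h0 = np.array([1.0, 2.0, 1.0]) / 4.0
h1 = np.array([1.0, 0.0, -1.0]) * np.sqrt(2.0) / 4.0
h2 = np.array([-1.0, 2.0, -1.0]) / 4.0

n3 = 16
W = np.vstack([conv_matrix(h, n3) for h in (h0, h1, h2)])  # (3*n3) x n3
assert np.allclose(W.T @ W, np.eye(n3))  # UEP: W^T W = I
# Note: W @ W.T is not the identity -- the transform is redundant.

# X_W = fold3(W @ unfold3(X)): framelet transform along the third mode.
n1, n2 = 4, 5
X = np.random.rand(n1, n2, n3)
unfold3 = lambda T: T.reshape((-1, T.shape[2]), order='F').T
fold3 = lambda M, shape: M.T.reshape(shape, order='F')
X_W = fold3(W @ unfold3(X), (n1, n2, 3 * n3))
assert np.allclose(fold3(W.T @ unfold3(X_W), X.shape), X)
\end{verbatim}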
Then, we build the LRTC model and the tensor RPCA model based on the framelet representation of the tensor nuclear norm, and we propose ADMM-based algorithms to optimize these models. \vspace{-2mm} \subsection{From the DFT to the Framelet Transform} For a third-order tensor $\mathcal{X}\in\mathbb{R}^{n_1\times n_2 \times n_3}$, owing to the circular convolution in Def. \ref{def:tprod}, its t-SVD can be efficiently computed via the DFT. Computing the one-dimensional DFT of a vector of length $n$ by using the DFT matrix costs $O(n^2)$, and the computational cost can be reduced to $O(n\log n)$ by employing the fast Fourier transform (FFT) technique \cite{bergland1969guided}. Using the DFT matrix, for a tensor $\mathbf{\mathcal{X}}\in \mathbb{R}^{n_{1}\times n_2\times n_{3}}$, we can obtain its Fourier transformed tensor as $$ \widehat{\mathbf{\mathcal{X}}} ={\tt fold}_3\left( \mathbf{F}_{n_3}\mathbf{X}_{(3)}\right)\in \mathbb{C}^{n_{1}\times n_2\times n_{3}}, $$ where $\mathbf{X}_{(3)}$ is the mode-3 unfolding of $\mathcal{X}$. Next, we adopt the framelet transform as a substitute for the Fourier transform and give the definition of the framelet representation of the tensor nuclear norm. For simplicity, we denote the tensor after the framelet transform along the third mode as $$\mathcal{X}_{\mathbf W}={\tt fold}_3\left(\mathbf{W}\mathbf{X}_{(3)}\right)\in\mathbb{R}^{n_1\times n_2\times wn_3},$$ where $\mathbf W \in\mathbb{R}^{wn_3\times n_3}$ is the framelet transform matrix constructed with $m$ filters and $l$ levels, and $w=(m-1)l+1$. Considering the UEP property of the framelet transform, we have $\mathcal{X} ={\tt fold}_3(\mathbf{W}^\top{[\mathbf{X}_{\mathbf W}]}_{(3)})$, where ${[\mathbf{X}_{\mathbf W}]}_{(3)}= {\tt unfold}_3\left(\mathcal{X}_\mathbf{W}\right).$ Recalling Def. \ref{Def:tubal}, the tensor multi-rank is defined as the vector of the ranks of the frontal slices in the Fourier transform domain. Therefore, the framelet based multi-rank is defined in the same manner as follows. \begin{figure}[!t] \centering \includegraphics[width=8.1cm,height=5.85cm]{figs/svhDCT.pdf} \caption{The distribution of singular values. Here, the singular values are obtained by conducting the SVD on each frontal slice of the original tensor data or the transformed tensors.} \label{sva1} \end{figure} \begin{mydef}[Framelet based multi-rank] The framelet based multi-rank of a tensor $\mathbf{\mathcal{X}}\in \mathbb{R}^{n_{1}\times n_2\times n_{3}}$ is defined as a vector $\mathbf r_w\in\mathbb{R}^{wn_3}$ with the $i$-th element $\mathbf r_w(i) = \text{rank}(\mathcal{X}_{\mathbf W}(:,:,i))$ for $i = 1,2,\cdots,wn_3$. \end{mydef} Here we have replaced the Fourier transform by the framelet transform and defined the framelet based multi-rank. As mentioned before, the framelet transformed tensor can be of lower (framelet based) multi-rank. To understand this in depth, we give some empirical numerical analyses of the singular values of the frontal slices of the transformed tensors. Here, taking the video data ``news''\footnote{Data available at { http://trace.eas.asu.edu/yuv/}.} as an example, the original video data is denoted as $\mathcal{X}\in\mathbb{R}^{144\times176\times100}$, and its Fourier, DCT, and framelet transformed tensors are denoted as $\widehat{\mathcal{X}}$, $\mathcal{X}_\text{DCT}$\footnote{$\mathcal{X}_\text{DCT}$ is obtained by replacing the DFT with the DCT, similarly to $\mathcal{X}_{\mathbf W}$.}, and $\mathcal{X}_{\mathbf W}$, respectively.
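The slice-wise rank statistics reported above can be reproduced along the following lines (a sketch; the relative thresholding convention and the function name are our assumptions):
\begin{verbatim}
import numpy as np

def mean_truncated_multirank(T, eps):
    # Mean, over the frontal slices of T, of the number of singular
    # values larger than eps relative to each slice's largest one.
    ranks = []
    for k in range(T.shape[2]):
        s = np.linalg.svd(T[:, :, k], compute_uv=False)
        ranks.append(int(np.sum(s > eps * s[0])))
    return float(np.mean(ranks))

# Example: X_hat = np.fft.fft(X, axis=2) for the FFT case,
# and X_W as in the previous sketch for the framelet case.
\end{verbatim}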
In Figure \ref{sva1}, we exhibit the distributions of the singular values of the frontal slices of $\mathcal{X}$, the Fourier transformed tensor $\widehat{\mathcal{X}}$, the DCT transformed tensor $\mathcal{X}_\text{DCT}$, and the framelet transformed tensor $\mathcal{X}_{\mathbf W}$\footnote{The piece-wise cubic B-spline is used to generate the framelet system.}. More precisely, we show the proportion of the singular values of the transformed frontal slices falling in each magnitude interval. It can be found in the figure that a large proportion of the singular values of the framelet transformed data appears in the interval $[0,10^{-2}]$, compared with the original video data, the Fourier transformed tensor $\widehat{\mathcal{X}}$, and the DCT transformed tensor $\mathcal{X}_\text{DCT}$. This phenomenon brings an advantage: the data can be better approximated with lower rank via the framelet representation. In Section \ref{Sec:Exp}, we will illustrate that tensor completion and tensor RPCA benefit from the framelet representation. \subsection{Framelet Based TNN} Using the DFT matrix $\mathbf F_{n_3}$, the tensor nuclear norm in \eqref{tnn} of a tensor $\mathcal{X}\in \mathbb{R}^{n_{1}\times n_2\times n_{3}}$ can be expressed as \begin{equation} \begin{aligned} \|\mathbf{\mathcal{X}}\|_{\text{TNN}}&=\|\overline{\mathbf{\mathcal{X}}}\|_{*}=\sum\limits_{k=1}^{n_3}\|\widehat{\mathbf{\mathcal{X}}}^{(k)}\|_*\\ &= \sum\limits_{k=1}^{n_3}\|\left[{\tt fold}_3\left(\mathbf{F}_{n_3}\mathbf{X}_{(3)}\right)\right](:,:,k)\|_*, \label{TNN} \end{aligned} \end{equation} where $\mathbf{X}_{(3)}$ is the mode-3 unfolding of $\mathcal{X}$. \begin{mydef}[Framelet based TNN (F-TNN)] Similarly, the framelet representation of the tensor nuclear norm is formulated as\end{mydef} \begin{equation} \begin{aligned} \|\mathbf{\mathcal{X}}\|_{\text{F-TNN}} &=\|{\tt{blockdiag}}(\mathcal{X}_{\mathbf W})\|_*= \sum\limits_{k=1}^{wn_3}\|\mathcal{X}_{\mathbf W}(:,:,k)\|_*\\ &=\sum\limits_{k=1}^{wn_3}\|\left[{\tt fold}_3\left( \mathbf W \mathbf X_{(3)}\right)\right](:,:,k)\|_*, \end{aligned}\label{FTNN} \end{equation} {where $\mathbf W \in\mathbb{R}^{wn_3\times n_3}$ is the framelet transform matrix.} It is not difficult to see that the F-TNN is a convex envelope of the $\ell_1$ norm of the framelet based multi-rank. \subsection{Tensor Completion via Minimizing F-TNN} Based on the proposed framelet based TNN, our tensor completion model, which is convex, is formulated as \begin{equation} \begin{aligned} \min\limits_{\mathcal{X}} \quad& \|\mathbf{\mathcal{X}}\|_{\text{F-TNN}}\\ \text{s.t.}\quad& \mathcal{X}_{\Omega}=\mathcal{O}_{\Omega}, \end{aligned} \label{Model_1} \end{equation} where $\mathcal{O}\in\mathbb{R}^{n_1\times n_2 \times n_3}$ is the incomplete observed data, and $\Omega$ is the set of indexes of the observed entries. The constraint $\mathcal{X}_{\Omega}=\mathcal{O}_{\Omega}$ enforces that the entries of $\mathcal{X}$ agree with those of $\mathcal{O}$ on $\Omega$. Next, we give the algorithm for solving our tensor completion model \eqref{Model_1}. Let \begin{equation}\centering \mathcal{I}_\Phi(\mathbf{\mathcal{X}})=\left\{ \begin{aligned} &0,\quad & \mathbf{\mathcal{X}}\in\Phi,\\ &\infty, &\text{otherwise}, \end{aligned} \right. \end{equation} where $\Phi :=\{\mathbf{\mathcal{X}}\in\mathbb{R}^{n_1\times n_2 \times n_3}, \mathbf{\mathcal{X}}_{\Omega}=\mathbf{\mathcal{O}}_{\Omega}\}$.
Thus, problem (\ref{Model_1}) can be rewritten as \begin{equation} \min\limits_{\mathbf{\mathcal{X}}} \quad\mathcal{I}_\Phi(\mathbf{\mathcal{X}})+\sum\limits_{k=1}^{wn_3}\|\mathcal{X}_{\mathbf W}(:,:,k)\|_{*}.\\ \label{A_un} \end{equation} The minimization problem (\ref{A_un}) can be efficiently solved via the ADMM \cite{boyd2011distributed}. After introducing the auxiliary variable $\mathcal{V}\in\mathbb{R}^{n_1\times n_2\times wn_3}$, the problem (\ref{A_un}) can be rewritten as the following constrained problem \begin{equation} \begin{aligned} \min\limits_{\mathbf{\mathcal{X}}} \quad&\mathcal{I}_\Phi(\mathcal{X}) +\sum\limits_{k=1}^{wn_3}\|\mathcal{V}(:,:,k)\|_{*}\\ \text{s.t.}\quad & \mathcal{V} = \mathcal{X}_{\mathbf W}. \end{aligned} \label{A_AU} \end{equation} The augmented Lagrangian function of (\ref{A_AU}) is given by \begin{equation} \begin{aligned} L_\beta(\mathcal{X}, \mathcal{V},\Lambda)=&\mathcal{I}_\Phi(\mathcal{X}) +\sum\limits_{k=1}^{wn_3}\|\mathcal{V}(:,:,k)\|_{*}\\ &+ \frac{\beta}{2}\|\mathcal{X}_{\mathbf{W}}-\mathcal{V}+\frac{\Lambda}{\beta}\|_F^2, \\ \end{aligned} \label{A_AUG} \end{equation} where $\Lambda\in\mathbb{R}^{n_1\times n_2\times wn_3}$ is the Lagrangian multiplier and $\beta$ is the penalty parameter for the violation of the linear constraint. In the scheme of the ADMM, we update each variable alternately. \textbf{$\mathcal{V}$ sub-problem}: The update of $\mathcal{V}$ at the $t$-th iteration is \begin{equation} \begin{aligned} \mathcal{V}^{t+1} = &\arg\min\limits_{\mathcal{V}}\ \sum\limits_{k=1}^{wn_3}\|\mathcal{V}(:,:,k)\|_{*} + \frac{\beta}{2}\|\mathcal{X}_{\mathbf{W}}^t-\mathcal{V}+\frac{\Lambda^t}{\beta}\|_F^2.\\ \end{aligned} \label{V1_1} \end{equation} Problem \eqref{V1_1} can be decomposed into $wn_3$ subproblems, whose closed-form solutions are given by the singular value thresholding (SVT) operator \cite{cai2010singular}. Hence, we update $\mathcal{V}$ as \begin{equation} \begin{aligned} \mathcal{V}^{t+1}(:,:,k) = {\tt SVT}_{\frac{1}{\beta}}\left(\mathcal{X}^{t}_{\mathbf{W}}(:,:,k)+\frac{\Lambda^t(:,:,k)}{\beta}\right), \end{aligned} \label{V1_2} \end{equation} where $k = 1,2,\cdots,wn_3$. The complexity of computing $\mathcal{V}$ at each iteration is $O(wn_1n_2n_3 \min(n_1,n_2))$. \textbf{$\mathcal{X}$ sub-problem}: For convenience, the subproblem of optimizing $L_\beta$ with respect to $\mathcal{X}$ at the $t$-th iteration is written in the matrix format as (recalling that $\mathcal{X}_{\mathbf W} = {\tt fold}_3\left(\mathbf{W}\mathbf{X}_{(3)}\right)$) \begin{equation} \begin{aligned} \mathbf{X}^{t+1} = \arg\min\limits_{\mathbf{X}}\ \mathcal{I}_\Phi(\mathcal{X}) + \frac{\beta}{2}\|\mathbf{WX}-\mathbf{V}^{t+1}_{(3)}+\frac{\Lambda^t_{(3)}}{\beta}\|_F^2, \end{aligned} \label{X_1} \end{equation} where $\mathbf{V}^{t+1}_{(3)} = {\tt unfold}_3(\mathcal{V}^{t+1})$ and $\Lambda^t_{(3)} = {\tt unfold}_3(\Lambda^t)$.
To optimize \eqref{X_1}, we first solve the following equation \begin{equation} \begin{aligned} \mathbf{W^\top W}\mathbf{X}_{(3)} = & \mathbf{W^\top}\left(\mathbf{V}^{t+1}_{(3)}-\frac{\Lambda^t_{(3)}}{\beta}\right).\\ \end{aligned} \label{X_2} \end{equation} Thus, considering that $\mathbf{W^\top W}\mathbf{X}_{(3)} = \mathbf{X}_{(3)}$ (the UEP property of the framelet transform), we have \begin{equation} \begin{aligned} \mathcal{X}^{t+1} = \mathcal{P}_{\Omega^c}\left({\tt fold}_3 (\mathbf{W^\top}(\mathbf{V}^{t+1}_{(3)}-\frac{\Lambda^t_{(3)}}{\beta}))\right)+\mathcal{P}_{\Omega}\left(\mathcal{O}\right), \end{aligned} \label{X_4} \end{equation} where $\mathcal{P}_{\Omega}(\cdot)$ is the projection operator that keeps the entries in $\Omega$ while setting the others to zero, and $\Omega^{c}$ denotes the complementary set of $\Omega$. Meanwhile, we have $\mathcal{X}^{t+1}_{\mathbf W} = {\tt fold}_3(\mathbf{W}\mathbf{X}^{t+1}_{(3)})$. The complexity of computing $\mathcal{X}$ is $O(wn_1n_2n_3^2)$ at each iteration. \textbf{Updating the multiplier}: The multiplier $\Lambda$ is updated by \begin{equation} \begin{aligned} \Lambda^{t+1} & = \Lambda^{t} +\beta \left(\mathcal{X}^{t+1}_{\mathbf W}-\mathcal{V}^{t+1}\right).\\ \end{aligned} \label{M_up} \end{equation} Updating $\Lambda$ costs $O(wn_1n_2n_3)$ at each iteration. Finally, our algorithm is summarized in Algorithm \ref{alg}. The total complexity of Algorithm \ref{alg} at each iteration is $O(wn_1n_2n_3(n_3+\min(n_1,n_2)))$. The objective function of the proposed model \eqref{Model_1} is convex. Our algorithm fits the standard ADMM framework, so its convergence is theoretically guaranteed \cite{boyd2011distributed}. \begin{algorithm}[htp] \renewcommand\arraystretch{1.3} \caption[Caption for LOF]{Tensor completion via minimizing F-TNN} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \Require The observed tensor $\mathcal{O}\in\mathbb{R}^{n_1\times n_2\times n_3}$; Lagrange parameter $\beta$; convergence criterion $\epsilon$; maximum iteration $t_\text{max}$. \renewcommand{\algorithmicrequire}{\textbf{Initialization:}} \Require The framelet transform matrix $\mathbf W$; $\mathcal{V}^{(0)}={\tt fold}_3(\mathbf{W}\mathbf O_{(3)})$; $\mathcal{X}^{(0)} = \mathcal{O}$; $t = 0$. \While {not converged and $t<t_\text{max}$} \State Update $\mathcal{V}^{t+1}$ via Eq. \eqref{V1_2}; \State Update $\mathcal{X}^{t+1}$ via Eq. \eqref{X_4}; \State Update $\Lambda^{t+1}$ via Eq. (\ref{M_up}); \State Check the convergence conditions $\|\mathbf{\mathcal{V}}^{t+1}-\mathbf{\mathcal{V}}^{t}\|_\infty\leq\epsilon$ and $\| \mathbf{\mathcal{X}}^{t+1} -\mathbf{\mathcal{X}}^{t} \|_\infty\leq\epsilon$; \State $t = t+1$. \EndWhile \renewcommand{\algorithmicrequire}{\textbf{Output:}} \Require The reconstructed tensor $\mathcal{X}$. \end{algorithmic} \label{alg} \end{algorithm} \subsection{Tensor Robust Principal Component Analysis} As aforementioned, another typical tensor recovery problem is the tensor RPCA problem, which aims to recover the tensor from grossly corrupted observations.
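Both Algorithm \ref{alg} above and the RPCA algorithm below are driven by two proximal operators: the slice-wise SVT of Eq. \eqref{V1_2} and, for the sparse part, entrywise soft-thresholding. A minimal numpy sketch of these two building blocks (the function names are ours):
\begin{verbatim}
import numpy as np

def svt(M, tau):
    # Singular value thresholding: prox of tau * (nuclear norm) at M.
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def svt_slicewise(T, tau):
    # Apply SVT to every frontal slice, as in the V-update.
    return np.stack([svt(T[:, :, k], tau)
                     for k in range(T.shape[2])], axis=2)

def soft(T, tau):
    # Entrywise soft-thresholding: prox of tau * (l1 norm).
    return np.sign(T) * np.maximum(np.abs(T) - tau, 0.0)
\end{verbatim}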
Adopting the F-TNN to characterize the low-rank part, our tensor RPCA model is formulated as \begin{equation} \begin{aligned} \min\limits_{\mathcal{L},\mathcal{E}} \quad& \|\mathbf{\mathcal{L}}\|_{\text{F-TNN}}+\lambda\|\mathcal{E}\|_1\\ \text{s.t.}\quad& \mathcal{L} +\mathcal{E}=\mathcal{O}, \end{aligned} \label{Model_2} \end{equation} where $\mathcal{O}\in\mathbb{R}^{n_1\times n_2 \times n_3}$ is the observed data, $\mathcal{E}$ indicates the sparse part, $\|\mathcal{E}\|_1=\sum_{ijk}|\mathcal{E}_{i,j,k}|$, and $\lambda$ is a non-negative parameter. For convenience, we introduce an auxiliary variable $\mathcal{V}\in\mathbb{R}^{n_1\times n_2\times wn_3}$ and reformulate \eqref{Model_2} as \begin{equation} \begin{aligned} \min\limits_{\mathcal{L},\mathcal{E},\mathcal{V}} \quad& \sum\limits_{k=1}^{wn_3}\|\mathcal{V}(:,:,k)\|_{*}+\lambda\|\mathcal{E}\|_1\\ \text{s.t.}\quad& \mathcal{L} +\mathcal{E}=\mathcal{O},\quad \mathcal{V}=\mathcal{L}_\mathbf{W}, \end{aligned} \label{Model_2re} \end{equation} where $\mathcal{L}_\mathbf{W}={\tt fold}_3\left(\mathbf{W}\mathbf{L}_{(3)}\right)\in\mathbb{R}^{n_1\times n_2\times wn_3}$ and $\mathbf W \in\mathbb{R}^{wn_3\times n_3}$ is the framelet transform matrix constructed with $m$ filters and $l$ levels ($w=(m-1)l+1$). Similarly, we adopt the ADMM to solve \eqref{Model_2re}. The augmented Lagrangian function of \eqref{Model_2re} is given as \begin{equation} \begin{aligned} L_\beta(\mathcal{L},\mathcal{V},\mathcal{E},\Lambda)\hspace{-.5mm}=\hspace{-.5mm}&\sum\limits_{k=1}^{wn_3}\hspace{-.5mm}\|\mathcal{V}(:,:,k)\|_{*} \hspace{-.5mm}+\hspace{-.5mm}\frac{\beta}{2}\|\mathcal{L}_\mathbf{W}\hspace{-.5mm}-\hspace{-.5mm}\mathcal{V}\hspace{-.5mm}+\hspace{-.5mm}\frac{\Lambda_1}{\beta}\|_F^2\\ &+\lambda\|\mathcal{E}\|_1+ \frac{\beta}{2}\|\mathcal{O}-\mathcal{L}-\mathcal{E}+\frac{\Lambda_2}{\beta}\|_F^2, \end{aligned} \label{A_AUG2} \end{equation} where $\Lambda_1\in\mathbb{R}^{n_1\times n_2\times wn_3}$ and $\Lambda_2\in\mathbb{R}^{n_1\times n_2\times n_3}$ are the Lagrangian multipliers, and $\beta$ is a non-negative penalty parameter. In the scheme of the ADMM, we update each variable alternately as: \begin{equation} \left\{\hspace{-1mm} \begin{aligned} \mathcal{V}^{t+1} \hspace{-1mm}&=\hspace{-.75mm} \arg\min\limits_{\mathcal{V}} \sum\limits_{k=1}^{wn_3}\|\mathcal{V}(:,:,k)\|_{*} + \frac{\beta}{2}\|\mathcal{L}_\mathbf{W}^t-\mathcal{V}+\frac{\Lambda_1^t}{\beta}\|_F^2,\\ \mathcal{L}^{t+1} \hspace{-1mm}&=\hspace{-.75mm} \arg\min\limits_{\mathcal{L}}\frac{\beta}{2}\|\mathcal{L}_\mathbf{W}\hspace{-1mm}-\hspace{-1mm}\mathcal{V}^{t\hspace{-.5mm}+\hspace{-.5mm}1}\hspace{-1mm}+\hspace{-1mm}\frac{\Lambda_1^t}{\beta}\|_F^2 \hspace{-1mm}+\hspace{-1mm}\frac{\beta}{2}\|\mathcal{O}\hspace{-1mm}-\hspace{-1mm}\mathcal{L}\hspace{-1mm}-\hspace{-1mm}\mathcal{E}^t\hspace{-1mm}+\hspace{-1mm}\frac{\Lambda^t_2}{\beta}\|_F^2,\\ \mathcal{E}^{t+1} \hspace{-1mm}&=\hspace{-.75mm} \arg\min\limits_{\mathcal{E}}\lambda\|\mathcal{E}\|_1+ \frac{\beta}{2}\|\mathcal{O}-\mathcal{L}^{t+1}-\mathcal{E}+\frac{\Lambda^t_2}{\beta}\|_F^2,\\ \Lambda_1^{t+1} \hspace{-1mm}&=\hspace{-.75mm} \Lambda_1^{t} +\beta \left(\mathcal{L}^{t+1}_{\mathbf W}-\mathcal{V}^{t+1}\right),\\ \Lambda_2^{t+1} \hspace{-1mm}&=\hspace{-.75mm} \Lambda_2^{t} +\beta \left(\mathcal{O}-\mathcal{L}^{t+1}-\mathcal{E}^{t+1}\right).\\ \end{aligned}\right.
\label{RPCA-update} \end{equation}
Specifically, the $\mathcal{V}$ subproblem in \eqref{RPCA-update} can be solved by
\begin{equation} \mathcal{V}^{t+1}(:,:,k)={\tt SVT}_{\frac{1}{\beta}}\left(\mathcal{L}^{t}_{\mathbf{W}}(:,:,k)+\frac{\Lambda_1^t(:,:,k)}{\beta}\right), \label{RPCA-update-V} \end{equation}
for $k=1,2,\cdots,wn_3$. The complexity of updating $\mathcal{V}$ is $O(wn_1n_2n_3\min(n_1,n_2))$ at each iteration. The $\mathcal{L}$ subproblem is a least squares problem and its solution can be obtained as
\begin{equation} \mathcal{L}^{t+1} \hspace{-1mm}=\hspace{-.75mm} \frac{1}{2}{\tt fold}_3 \left(\mathbf{W}^\top(\mathbf{V}^{t+1}_{(3)}\hspace{-.75mm}-\hspace{-.75mm}\frac{{\Lambda^t_1}_{(3)}}{\beta})\right)\hspace{-.75mm}+\hspace{-.75mm} \frac{1}{2}\left(\mathcal{O}\hspace{-.75mm}-\hspace{-.75mm}\mathcal{E}^t\hspace{-.75mm}+\hspace{-.75mm}\frac{\Lambda^t_2}{\beta}\right). \label{RPCA-update-L} \end{equation}
At each iteration, computing $\mathcal{L}$ costs $O(wn_1n_2n_3^2)$. The $\mathcal{E}$ subproblem can be solved by
\begin{equation} \mathcal{E}^{t+1}= {\tt Soft}_{\frac{\lambda}{\beta}}\left(\mathcal{O}-\mathcal{L}^{t+1}+\frac{\Lambda^t_2}{\beta}\right), \label{RPCA-update-E} \end{equation}
where ${\tt Soft}_\tau (\cdot)$ is the tensor soft-thresholding operator, defined as ${\tt Soft}_\tau(\cdot)={\tt{sign}}(\cdot)\max(|\cdot|-\tau,0)$. Computing $\mathcal{E}$ and updating the multiplier $\Lambda_1$ cost $O(wn_1n_2n_3)$ at each iteration, while updating $\Lambda_2$ costs $O(n_1n_2n_3)$.
\begin{algorithm}[htp] \caption[Caption for LOF]{Tensor RPCA via minimizing F-TNN} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \Require The observed tensor $\mathcal{O}\in\mathbb{R}^{n_1\times n_2\times n_3}$; the Lagrange parameter $\beta$; the parameter $\lambda$; convergence criteria $\epsilon$; maximum iteration $t_\text{max}$. \renewcommand{\algorithmicrequire}{\textbf{Initialization:}} \Require The framelet transform matrix $\mathbf W$; $\mathcal{V}^{(0)}={\tt fold}_3(\mathbf{W}\mathbf O_{(3)})$ and $\mathcal{E}^{(0)} = {\tt zeros}(n_1\times n_2\times n_3)$; $t = 0$. \While {not converged and $t<t_\text{max}$} \State Update $\mathcal{V}^{t+1}$ via Eq. \eqref{RPCA-update-V}; \State Update $\mathcal{L}^{t+1}$ via Eq. \eqref{RPCA-update-L}; \State Update $\mathcal{E}^{t+1}$ via Eq. \eqref{RPCA-update-E}; \State Update $\Lambda_1$ and $\Lambda_2$ via Eq. \eqref{RPCA-update}; \State Check the convergence conditions $\|\mathbf{\mathcal{V}}^{t+1}-\mathbf{\mathcal{V}}^{t}\|_\infty\leq\epsilon$, $\|\mathbf{\mathcal{L}}^{t+1} -\mathbf{\mathcal{L}}^{t} \|_\infty\leq\epsilon$, and $\|\mathbf{\mathcal{E}}^{t+1} -\mathbf{\mathcal{E}}^{t} \|_\infty\leq\epsilon$; \State $t = t+1$. \EndWhile \renewcommand{\algorithmicrequire}{\textbf{Output:}} \Require The low-rank component $\mathcal{L}$ and the sparse component $\mathcal{E}$. \end{algorithmic} \label{alg2} \end{algorithm}
The pseudo-code of our algorithm for tensor RPCA is summarized in Algorithm \ref{alg2}. Each iteration of Algorithm \ref{alg2} costs $O(wn_1n_2n_3(n_3+\min(n_1,n_2)))$. Likewise, Algorithm \ref{alg2} fits the standard ADMM framework and its convergence is theoretically guaranteed \cite{boyd2011distributed}.
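For completeness, the two proximal operators used above admit direct implementations. The following NumPy sketch is illustrative (the function names are ours, not from the reference code): {\tt svt} realizes the singular value thresholding operator ${\tt SVT}_{\tau}$ applied to each frontal slice as in Eq.~\eqref{RPCA-update-V}, and {\tt soft} realizes the soft-thresholding operator ${\tt Soft}_{\tau}$ of Eq.~\eqref{RPCA-update-E}.
\begin{verbatim}
import numpy as np

def svt(M, tau):
    # Singular value thresholding: the proximal map of tau * ||.||_*.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(T, tau):
    # Entry-wise soft thresholding: the proximal map of tau * ||.||_1.
    return np.sign(T) * np.maximum(np.abs(T) - tau, 0.0)

def update_V(L_W, Lam1, beta):
    # Eq. (RPCA-update-V), applied slice by slice along the third mode.
    V = np.empty_like(L_W)
    for k in range(L_W.shape[2]):
        V[:, :, k] = svt(L_W[:, :, k] + Lam1[:, :, k] / beta, 1.0 / beta)
    return V
\end{verbatim}
The slice-wise SVDs in {\tt update\_V} account for the $O(wn_1n_2n_3\min(n_1,n_2))$ term in the per-iteration complexity.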
\section{Numerical experiments}\label{Sec:Exp}
In this section, to illustrate the performance of the proposed method, we exhibit tensor completion results on three typical kinds of third-order data, i.e., the MRI data, the MSI data, and the video data. Three numerical metrics, namely the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM) \cite{ssim}, and the feature similarity index (FSIM) \cite{zhang2011fsim}, are selected to quantitatively measure the reconstructed results. Since the data are third-order tensors, we report the mean values of PSNR, SSIM, and FSIM over all the frontal slices.
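Concretely, the slice-averaged PSNR that we report can be computed as in the following sketch (illustrative; it assumes intensities normalized to a peak value of $1$):
\begin{verbatim}
import numpy as np

def mean_psnr(x, y, peak=1.0):
    # Mean PSNR over the frontal slices of two (n1, n2, n3) tensors.
    vals = []
    for k in range(x.shape[2]):
        mse = np.mean((x[:, :, k] - y[:, :, k]) ** 2)
        vals.append(10.0 * np.log10(peak ** 2 / mse))
    return float(np.mean(vals))
\end{verbatim}
SSIM and FSIM are averaged over the frontal slices in the same way.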
\begin{figure*}[htbp] \centering\scriptsize\setlength{\tabcolsep}{1pt} \renewcommand\arraystretch{0.9} \begin{tabular}{cccccccccc} &Observed&LRMC \cite{candes2009exact}&HaLRTC \cite{Liu2013PAMItensor}&TMac \cite{Xu2013Tmac}&TNN \cite{zhang2017exact}&PSTNN \cite{jiang2017PSTNN}&DCTNN \cite{lu2019low}& F-TNN& Ground truth\\
\rotatebox[origin=l]{90}{\quad\textbf{SR = 0.1}}& \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_Observed_Frame_106.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_Matrix_Frame_106.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_HaLRTC_Frame_106.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_TMac_Frame_106.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_TNN_Frame_106.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_PSTNN_Frame_106.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_DCT-TNN_Frame_106.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_F-TNN_Frame_106.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_GT_Frame_106.png} \\
\rotatebox[origin=l]{90}{\quad\textbf{SR = 0.2}}& \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_Observed_Frame_110.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_Matrix_Frame_110.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_HaLRTC_Frame_110.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_TMac_Frame_110.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_TNN_Frame_110.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_PSTNN_Frame_110.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_DCT-TNN_Frame_110.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_F-TNN_Frame_110.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_GT_Frame_110.png} \\
\rotatebox[origin=l]{90}{\quad\textbf{SR = 0.3}}& \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_Observed_Frame_115.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_Matrix_Frame_115.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_HaLRTC_Frame_115.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_TMac_Frame_115.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_TNN_Frame_115.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_PSTNN_Frame_115.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_DCT-TNN_Frame_115.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_F-TNN_Frame_115.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MRI/MRI1_GT_Frame_115.png} \\
\end{tabular}
\caption{The visual illustration of the results on the {\bf MRI} data by different methods with different sampling rates (SR). From left to right are the frontal slices of the observed incomplete data, the results by different methods, and the ground truth, respectively. From top to bottom are the 106-th, the 110-th, and the 115-th slices, respectively.} \label{MRIframe} \end{figure*}
\textbf{Experimental Settings}: We generate the framelet system via the piece-wise cubic B-spline. If not specified, the framelet decomposition level $l$ is set as 4 ($l=2$ for the MSI data), and the Lagrangian penalty parameter is set as $\beta = 1$ for the tensor completion task and $\beta = 5$ when dealing with the tensor RPCA problems. The maximum iteration $t_\text{max}$ and the convergence tolerance $\epsilon$ are chosen as $(t_\text{max},\epsilon) = (100,10^{-2})$ for the tensor completion and $(t_\text{max},\epsilon) = (200,10^{-3})$ for the tensor RPCA. All the methods are implemented on the platform of Windows 10 and Matlab (R2017a) with an Intel(R) Core(TM) i5-4590 CPU at 3.30GHz and 16 GB RAM.
\begin{figure*}[!t] \centering\scriptsize\setlength{\tabcolsep}{1pt} \renewcommand\arraystretch{0.9} \begin{tabular}{cccccccccc} &Observed&LRMC \cite{candes2009exact}&HaLRTC \cite{Liu2013PAMItensor}&TMac \cite{Xu2013Tmac}&TNN \cite{zhang2017exact}&PSTNN \cite{jiang2017PSTNN}&DCTNN \cite{lu2019low}& F-TNN& Ground truth\\
\rotatebox[origin=l]{90}{\quad ``beads''}& \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_2_Observed.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_2_Matrix.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_2_HaLRTC.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_2_TMac.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_2_TNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_2_PSTNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_2_DCT-TNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_2_F-TNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_2_GT.png} \\
\rotatebox[origin=l]{90}{\quad ``cd''}& \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_3_Observed.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_3_Matrix.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_3_HaLRTC.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_3_TMac.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_3_TNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_3_PSTNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_3_DCT-TNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_3_F-TNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_3_GT.png} \\
\rotatebox[origin=l]{90}{ \quad``clay''}& \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_5_Observed.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_5_Matrix.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_5_HaLRTC.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_5_TMac.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_5_TNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_5_PSTNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_5_DCT-TNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_5_F-TNN.png} & \includegraphics[width=0.105\linewidth]{figs/TC_MSI/MSI_no_5_GT.png} \\
\end{tabular}
\caption{The pseudo-color images (R-1 G-2 B-31) of the completion results on the {\bf MSI} data ``beads'' (top row), ``cd'' (mid row), and ``clay'' (bottom row) by different methods, respectively, with the sampling rate = 0.05.
From left to right are the observed incomplete data, the results by different methods, and the ground truth, respectively. For better visualization, the intensities of the pixels are adjusted.} \label{MSIframe} \end{figure*}
\subsection{Tensor Completion}
We compare our F-TNN based tensor completion method with six methods: a baseline low-rank matrix completion (LRMC) method \cite{candes2009exact}, two Tucker-rank based methods HaLRTC \cite{Liu2013PAMItensor} and TMac \cite{Xu2013Tmac}, a TNN based method \cite{zhang2017exact}, a non-convex method minimizing the partial sum of the TNN (PSTNN) \cite{jiang2017PSTNN}, and the DCT based TNN method (denoted as DCTNN) \cite{lu2019low}. When employing LRMC, the input third-order tensor is unfolded to a matrix along the third dimension.
\subsubsection{\bf MRI Data}
We evaluate the performance of the proposed method and the compared methods on the MRI data\footnote{http://brainweb.bic.mni.mcgill.ca/brainweb/selection\_normal.html.}, which is of size $142\times178\times121$. As shown in Fig. \ref{MRIframe}, this is an MRI of the brain, which consists of abundant textures of the gray matter and the white matter. The sampling rates (SR) are set as 10\%, 20\%, and 30\%. Table \ref{MRI} shows the quantitative assessments of the results recovered by different methods. From Table \ref{MRI}, it can be found that the proposed method reaches the highest indices for all sampling rates. The results by TMac and DCTNN alternately rank second best. The margins between our results and the second-best ones exceed 1.2dB in PSNR, 0.03 in SSIM, and 0.02 in FSIM.
\begin{table}[!t]\renewcommand\arraystretch{1}\setlength{\tabcolsep}{2pt}\scriptsize \renewcommand\arraystretch{1}\centering \caption{Quantitative comparisons of the {\bf MRI} data completion results by LRMC \cite{candes2009exact}, HaLRTC \cite{Liu2013PAMItensor}, TMac \cite{Xu2013Tmac}, TNN \cite{zhang2017exact}, PSTNN \cite{jiang2017PSTNN}, DCTNN \cite{lu2019low} and the proposed method. The \textbf{best} values and the \underline{second best} values are respectively highlighted by bold fonts and underlines.} \begin{tabular}{cccccccccc} \toprule SR &Index &Observed& LRMC & HaLRTC&TMac& TNN& PSTNN& DCTNN& F-TNN\\ \midrule \multirow{3}{*}{10\% } & PSNR & 9.048 & 17.541 & 18.012 & \underline{24.866} & 21.855 & 24.578 & 24.716 & \bf 26.104 \\ & SSIM & 0.047 & 0.317 & 0.388 & 0.658 & 0.524 & 0.628 & \underline{0.659} & \bf 0.759 \\ & FSIM & 0.474 & 0.694 & 0.686 & 0.809 & 0.760 & 0.802 & \underline{0.817} & \bf 0.862 \\ \midrule \multirow{3}{*}{20\% } & PSNR & 9.561 & 22.781 & 23.404 & {28.523} & 27.301 & 28.566 & \underline{28.595} & \bf 30.207 \\ & SSIM & 0.073 & 0.590 & 0.657 & \underline{0.835} & 0.776 & 0.806 & 0.820 & \bf 0.886 \\ & FSIM & 0.523 & 0.813 & 0.823 & \underline{0.896} & 0.871 & 0.885 & 0.892 & \bf 0.925 \\ \midrule \multirow{3}{*}{30\% } & PSNR & 10.141 & 25.730 & 26.896 & 30.771 & 30.897 & 31.382 & \underline{31.547} & \bf 33.142 \\ & SSIM & 0.103 & 0.730 & 0.794 & 0.889 & 0.880 & 0.885 & \underline{0.896} & \bf 0.936 \\ & FSIM & 0.550 & 0.875 & 0.892 & 0.919 & 0.925 & 0.928 & \underline{0.935} & \bf 0.956 \\ \bottomrule \end{tabular} \label{MRI} \end{table}
We illustrate one frontal slice of the results by different methods with different random sampling rates in Fig. \ref{MRIframe}. As shown in the top row of Fig.
\ref{MRIframe}, when the sampling rate is 10\%, the proposed method accurately reconstructs the MRI data, with a clear boundary between the gray matter and the white matter. When the sampling rate is 30\%, all the methods perform well, and the white matter regions recovered by the proposed method and TMac are visually the best.
\subsubsection{MSI Data}
In this subsection, we evaluate the performance of our method and the compared methods on 32 MSIs\footnote{{http://www.cs.columbia.edu/CAVE/databases/multispectral/}.} from the CAVE databases \cite{yasuma2010generalized}. The size of the MSIs is $512\times512\times31$, where the spatial resolution is $512\times512$ and the spectral resolution is 31. The sampling rates (SR) are set as 5\%, 10\%, and 20\%\footnote{For the MSI data, when the sampling rate is higher than 20\%, all the methods achieve very high performances and the results are very close to the ground truths. Therefore, we select the lower sampling rates to exhibit.}. The average quantitative assessments of all the results by different methods are listed in Table \ref{MSI}. We can find that the proposed method achieves the best performance while DCTNN obtains the second-best metrics. When the sampling rate is 20\%, TMac, TNN, PSTNN, DCTNN, and the proposed method all perform well.
\begin{table}[!t] \renewcommand\arraystretch{1}\setlength{\tabcolsep}{2pt}\scriptsize \centering \caption{The average PSNR, SSIM and FSIM of the completion results on 32 {\bf MSIs} by LRMC \cite{candes2009exact}, HaLRTC \cite{Liu2013PAMItensor}, TMac \cite{Xu2013Tmac}, TNN \cite{zhang2017exact}, PSTNN \cite{jiang2017PSTNN}, DCTNN \cite{lu2019low} and the proposed method with different sampling rates. The \textbf{best} values and the \underline{second best} values are respectively highlighted by bold fonts and underlines.} \begin{tabular}{cccccccccc} \toprule SR &Index &Observed& LRMC & HaLRTC&TMac& TNN& PSTNN& DCTNN& F-TNN\\ \midrule \multirow{3}{*}{5\% } & PSNR & 14.718 & 16.687 & 17.831 & 25.633 & 21.863 & 23.073 & \underline{32.068} & \bf 33.536 \\ & SSIM & 0.231 & 0.588 & 0.661 & 0.794 & 0.729 & 0.771 & \underline{0.909} & \bf 0.930 \\ & FSIM & 0.697 & 0.773 & 0.799 & 0.871 & 0.836 & 0.856 & \underline{0.940} & \bf 0.955 \\ \midrule \multirow{3}{*}{10\% } & PSNR & 14.954 & 19.369 & 22.369 & 32.306 & 31.165 & 33.945 & \underline{37.870} & \bf 38.415 \\ & SSIM & 0.277 & 0.679 & 0.789 & 0.917 & 0.906 & 0.945 & \underline{0.974} & \bf 0.977 \\ & FSIM & 0.718 & 0.828 & 0.876 & 0.942 & 0.939 & 0.961 & \underline{0.981} & \bf 0.984 \\ \midrule \multirow{3}{*}{20\% } & PSNR & 15.464 & 24.581 & 33.004 & 38.258 & 40.077 & 41.944 & \underline{42.675} & \bf 43.557 \\ & SSIM & 0.368 & 0.783 & 0.940 & 0.973 & 0.983 & 0.988 & \underline{0.992} & \bf 0.993 \\ & FSIM & 0.740 & 0.892 & 0.963 & 0.979 & 0.987 & 0.991 & \underline{0.994} & \bf 0.995 \\ \bottomrule \end{tabular} \label{MSI} \end{table}
The third dimension of an MSI carries the spectral information, which conveys more faithful characteristics of real scenes \cite{xie2016multispectral}. Therefore, in Fig. \ref{MSIframe}, we illustrate the pseudo-color images (Red-1 Green-2 Blue-31) of the results on the MSI data ``beads'', ``cd'', and ``clay'', with the sampling rate = 0.05. By comparing the colors of the results with those of the ground truth, we can recognize spectral distortion. From the first row of Fig.
\ref{MSIframe}, we can see that, although DCTNN obtains results on ``beads'' that are as clear as those of our F-TNN, the result by DCTNN is spectrally distorted. TMac performs well on ``clay''; however, undesirable artifacts can be found. The superiority of the proposed F-TNN is visually obvious, considering both the reconstruction of the image and the preservation of spectral information.
\subsubsection{Video Data}
\begin{figure*}[!t] \centering\scriptsize\setlength{\tabcolsep}{1pt} \renewcommand\arraystretch{0.9} \begin{tabular}{cccccccccc} &Observed&LRMC \cite{candes2009exact}&HaLRTC \cite{Liu2013PAMItensor}&TMac \cite{Xu2013Tmac}&TNN \cite{zhang2017exact}&PSTNN \cite{jiang2017PSTNN}&DCTNN \cite{lu2019low}& F-TNN& Ground truth\\
\rotatebox[origin=l]{90}{ \textbf{SR = 0.1}}& \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_Observed_Frame_15.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_Matrix_Frame_15.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_HaLRTC_Frame_15.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_TMac_Frame_15.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_TNN_Frame_15.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_PSTNN_Frame_15.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_DCT-TNN_Frame_15.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_F-TNN_Frame_15.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_GT_Frame_15.png} \\
\rotatebox[origin=l]{90}{ \textbf{SR = 0.2}}& \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_Observed_Frame_67.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_Matrix_Frame_67.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_HaLRTC_Frame_67.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_TMac_Frame_67.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_TNN_Frame_67.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_PSTNN_Frame_67.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_DCT-TNN_Frame_67.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_F-TNN_Frame_67.png} & \includegraphics[width=0.105\linewidth]{figs/TC_news/V_news_GT_Frame_67.png} \\
\end{tabular}
\caption{The completion results on the {\bf video} data ``news'' by different methods with different sampling rates. From left to right are the observed incomplete data, the results by different methods, and the ground truth, respectively. From top to bottom are the 15-th frame and the 67-th frame, respectively.} \label{videoframe} \end{figure*}
In this subsection, 9 videos\footnote{http://trace.eas.asu.edu/yuv/.} (named ``foreman'', ``hall'', ``carphone'', ``highway'', ``container'', ``claire'', ``news'', ``coastguard'' and ``suzie'') with the size $144\times176\times100$ are selected as the ground truth third-order data. The contents of these videos are different, consisting of humans, roads, rivers, cars, boats, bridges, walls and so on. The scenarios in some videos (such as ``foreman'', ``coastguard'', ``suzie'', and ``highway'') are more dynamic, while those in others are more static.
\begin{table}[htbp] \renewcommand\arraystretch{0.9}\setlength{\tabcolsep}{2pt}\scriptsize \caption{The average PSNR, SSIM and FSIM of the completion results on 9 {\bf videos} by LRMC \cite{candes2009exact}, HaLRTC \cite{Liu2013PAMItensor}, TMac \cite{Xu2013Tmac}, TNN \cite{zhang2017exact}, PSTNN \cite{jiang2017PSTNN}, DCTNN \cite{lu2019low} and the proposed method with different sampling rates. The \textbf{best} values and the \underline{second best} values are respectively highlighted by bold fonts and underlines.} \centering \begin{tabular}{cccccccccc} \toprule SR &Index &Observed& LRMC & HaLRTC&TMac& TNN& PSTNN& DCTNN& F-TNN\\ \midrule \multirow{3}{*}{10\% } & PSNR & 6.176 & 18.190 & 19.936 & 24.317 & 26.411 & 29.118 & \underline{29.246} & \bf 30.654 \\ & SSIM & 0.018 & 0.417 & 0.567 & 0.688 & 0.758 & 0.809 & \underline{0.819} & \bf 0.880 \\ & FSIM & 0.423 & 0.719 & 0.773 & 0.829 & 0.875 & 0.904 & \underline{0.909} & \bf 0.931 \\ \midrule \multirow{3}{*}{20\% } & PSNR & 6.687 & 29.315 & 30.150 & 30.250 & 31.329 & 32.012 & \underline{32.259} & \bf 33.568 \\ & SSIM & 0.031 & 0.851 & 0.871 & 0.868 & 0.871 & 0.876 & \underline{0.881} & \bf 0.927 \\ & FSIM & 0.413 & 0.928 & 0.927 & 0.921 & 0.934 & 0.937 & \underline{0.940} & \bf 0.957 \\ \midrule \multirow{3}{*}{30\% } & PSNR & 7.266 & 32.080 & 32.977 & 32.189 & 34.050 & 34.056 & \underline{34.434} & \bf 35.820 \\ & SSIM & 0.046 & 0.907 & \underline{0.917} & 0.910 & 0.915 & 0.912 & 0.915 & \bf 0.951 \\ & FSIM & 0.408 & 0.952 & 0.952 & 0.944 & 0.956 & 0.956 & \underline{0.958} & \bf 0.971 \\ \bottomrule \end{tabular} \label{videoa} \end{table}
Table \ref{videoa} lists the average PSNR, SSIM, and FSIM on these 9 videos with different sampling rates. For all sampling rates, our F-TNN obtains the results with the best quantitative metrics. When the sampling rates are 10\% and 20\%, the performances of PSTNN and DCTNN are comparable. DCTNN ranks second when the sampling rate is 30\%. Fig. \ref{videoframe} exhibits frames of the results on the video ``news'' with sampling rates 10\% and 20\%. The video ``news'' is captured by a static camera in a stationary scene, with two dynamic parts: the two newscasters in the front and a playing screen in the back. Thus, the scenario in this video contains both dynamic and static components. Most compared methods can reconstruct the static parts well, while the proposed method obtains the best recovery performance on both the two newscasters (see their faces) and the dynamic screen.
\begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{figs/all_video2.pdf} \caption{The PNSR, SSIM, and FSIM of the results by different methods on all the video data with the sampling rate 10\%.} \label{video_01} \end{figure}
To further illustrate the performance of all the methods on different videos, in Fig. \ref{video_01} we exhibit the PSNR, SSIM, and FSIM on all the videos by all the methods when the sampling rate is 10\%. From Fig. \ref{video_01}, it can be found that TMac is unstable with respect to different videos, while other methods maintain better metrics when the video is more static. Although the scenario in ``highway'' is dynamic along the temporal direction, the contents in this video are not complicated. Therefore, many methods achieve good performances. It can be observed that the proposed method obtains the highest PSNR, SSIM, and FSIM on all the videos. This validates the robustness of our F-TNN.
\subsection{Tensor Robust Principal Component Analysis}
In this section, we test our F-TNN based TRPCA method on two problems, i.e., color image recovery from observations corrupted by salt-and-pepper noise, and background subtraction for surveillance videos. The compared methods consist of a matrix nuclear norm minimization based RPCA method (denoted as MRPCA) \cite{candes2011robust}, a sum of the nuclear norm minimization based tensor RPCA method (denoted as ``SNN'') \cite{goldfarb2014robust}, a TNN based tensor RPCA method \cite{lu2019tensor}, and a DCT transformed TNN based tensor RPCA method \cite{lu2019exact2}. The $\ell_1$ norm is used to characterize the sparse component by all the compared methods. The balance parameter $\lambda$, which weights the $\ell_1$ term, is manually selected for each method to achieve its best performance. We list the settings of $\lambda$ in Table \ref{table-lambda}. When implementing MRPCA, we unfold the observed data $\mathcal{O}$ along the third mode and input $\mathbf O_{(3)}$. For the image recovery, since the framelet transform matrix $\mathbf W$ requires the third dimension of the input data to be no less than 40, we shift the dimensions of the observed image as $\hat{\mathcal{O}}\in\mathbb{R}^{n_2\times n_3\times n_1}$ via the Matlab command ``shiftdim($\cdot$,1)''.
\begin{table}[!t] \renewcommand\arraystretch{1}\setlength{\tabcolsep}{8pt}\scriptsize \renewcommand\arraystretch{1}\centering \caption{The settings of the parameter $\lambda$ for all the methods, given the observation $\mathcal{O}\in\mathbb{R}^{n_1\times n_2\times n_3}$.} \begin{tabular}{ccc} \toprule Method & Image recovery & Background subtraction\\ \midrule MRPCA & $ 1.5/\sqrt{n_1n_2}$ & $ 1/\sqrt{n_1n_2}$ \\ SNN & $ 3/\sqrt{n_1n_2}$ & $ 0.5/\sqrt{\max(n_1,n_2)n_3}$ \\ TNN & $ 3/\sqrt{\max(n_1,n_2)n_3}$ & $ 1/\sqrt{2\max(n_1,n_2)n_3}$ \\ DCTNN & $ 2/\sqrt{\max(n_1,n_2)n_3}$ & $ 4/\sqrt{\max(n_1,n_2)n_3}$ \\ F-TNN & $ 3/\sqrt{\max(n_1,n_3)n_2}$ & $ 3/\sqrt{\max(n_1,n_2)n_3}$ \\ \bottomrule \end{tabular} \label{table-lambda} \end{table}
\subsubsection{Color Image Recovery}
We select 4 images\footnote{The images named ``airplane'', ``fruits'', and ``baboon'' are of the size $512\times512\times3$ and available at {http://sipi.usc.edu/database/database.php}, while the image ``watch'' of the size $1024\times768\times3$ is available at {https://www.sitepoint.com/mastering-image-optimization-in-wordpress/}}, respectively named ``airplane'', ``watch'', ``fruits'', and ``baboon'', as ground truth clean images. Then, salt-and-pepper noise is added to these images, affecting a fraction $\rho$ of the pixels. The parameter $\rho$ varies from 5\% to 10\%.
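For concreteness, the corruption process is the standard salt-and-pepper model. A minimal sketch is given below; it is illustrative and assumes an equal split between salt and pepper and intensities normalized to $[0,1]$.
\begin{verbatim}
import numpy as np

def add_salt_pepper(img, rho, seed=0):
    # Corrupt a fraction rho of the pixels:
    # half are set to 1 (salt), half to 0 (pepper).
    rng = np.random.default_rng(seed)
    out = img.copy()
    corrupted = rng.random(img.shape) < rho
    salt = rng.random(img.shape) < 0.5
    out[corrupted & salt] = 1.0
    out[corrupted & ~salt] = 0.0
    return out
\end{verbatim}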
Table \ref{table-rpca-image} presents the averaged PSNR, SSIM, and FSIM values of the results by different methods for the color image recovery. We can find that the performance of our method is the best for different values of $\rho$. We exhibit the visual results on the images ``airplane'' and ``watch'' in Fig. \ref{figure-rpca-image}. It can be observed that all the tensor-based methods remove the salt-and-pepper noise, while the performance of MRPCA is unsatisfactory. The residual images, which are the absolute values of the difference between the results and the clean images, are magnified by a factor of 2 for better visualization. From the residual images, we can see that our method preserves the structure and details of the color images well.
\begin{table}[!t]\renewcommand\arraystretch{1}\setlength{\tabcolsep}{5pt}\scriptsize \renewcommand\arraystretch{1}\centering \caption{Quantitative comparisons of the image recovery results of MRPCA \cite{candes2011robust}, SNN \cite{goldfarb2014robust}, TNN \cite{lu2019tensor}, DCTNN \cite{lu2019exact2}, and the proposed method. The \textbf{best} values and the \underline{second best} values are respectively highlighted by bold fonts and underlines.} \begin{tabular}{cccccccc} \toprule $\rho$ &Index &Observed& MRPCA & SNN & TNN& DCTNN &F-TNN\\ \midrule \multirow{3}{*}{ 5\% } & PSNR & 18.005 & 21.671 & 30.188 & 29.791 & \underline{31.735} & \bf 33.846 \\ & SSIM & 0.587 & 0.771 & 0.962 & 0.964 & \underline{0.979} & \bf 0.987 \\ & FSIM & 0.833 & 0.896 & 0.973 & 0.970 & \underline{0.982} & \bf 0.990 \\ \midrule \multirow{3}{*}{10\% } & PSNR & 14.987 & 19.245 & 27.932 & 29.140 & \underline{30.807} & \bf 31.937 \\ & SSIM & 0.450 & 0.664 & 0.917 & 0.957 & \underline{0.971} & \bf 0.975 \\ & FSIM & 0.744 & 0.844 & 0.954 & 0.965 & \underline{0.977} & \bf 0.984 \\ \bottomrule \end{tabular} \label{table-rpca-image} \end{table}
\begin{figure}[!t] \centering\tiny\setlength{\tabcolsep}{0.95pt} \renewcommand\arraystretch{0.9} \begin{tabular}{cccccccc} &Observed& {\tiny MRPCA \cite{candes2011robust}} &{\tiny SNN \cite{goldfarb2014robust}}&{\tiny TNN \cite{lu2019tensor}} &{\tiny DCTNN \cite{lu2019exact2}}& F-TNN&Ground truth \\
\multirow{3}{*}{\rotatebox[origin=c]{90}{$\rho=5\%$}}& \includegraphics[width=0.128\linewidth]{figs/image222/airplane_noisy5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_IALM-RPCA5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_HoRPCA5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_TNN5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_DCTNN5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_FTNN15.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_GT.png} \\
& \includegraphics[width=0.128\linewidth]{figs/image222/airplane_noisy5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_IALM-RPCA5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_HoRPCA5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_TNN5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_DCTNN5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_FTNN15_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_GT_rse.png} \\
\multirow{3}{*}{\rotatebox[origin=c]{90}{$\rho=10\%$}}& \includegraphics[width=0.128\linewidth]{figs/image222/airplane_noisy10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_IALM-RPCA10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_HoRPCA10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_TNN10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_DCTNN10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_FTNN110.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_GT.png} \\
& \includegraphics[width=0.128\linewidth]{figs/image222/airplane_noisy10_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_IALM-RPCA10_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_HoRPCA10_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_TNN10_rse.png} &
\includegraphics[width=0.128\linewidth]{figs/image222/airplane_DCTNN10_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_FTNN110_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/airplane_GT_rse.png} \\
&Observed& {\tiny MRPCA \cite{candes2011robust}} &{\tiny SNN \cite{goldfarb2014robust}}&{\tiny TNN \cite{lu2019tensor}} &{\tiny DCTNN \cite{lu2019exact2}}& F-TNN&Ground truth \\
\multirow{3}{*}{\rotatebox[origin=c]{90}{$\rho=5\%$}}& \includegraphics[width=0.128\linewidth]{figs/image222/watch_noisy5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_IALM-RPCA5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_HoRPCA5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_TNN5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_DCTNN5.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_FTNN15.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_GT.png} \\
& \includegraphics[width=0.128\linewidth]{figs/image222/watch_noisy5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_IALM-RPCA5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_HoRPCA5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_TNN5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_DCTNN5_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_FTNN15_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_GT_rse.png} \\
\multirow{3}{*}{\rotatebox[origin=c]{90}{$\rho=10\%$}}& \includegraphics[width=0.128\linewidth]{figs/image222/watch_noisy10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_IALM-RPCA10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_HoRPCA10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_TNN10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_DCTNN10.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_FTNN110.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_GT.png} \\
& \includegraphics[width=0.128\linewidth]{figs/image222/watch_noisy10_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_IALM-RPCA10_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_HoRPCA10_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_TNN10_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_DCTNN10_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_FTNN110_rse.png} & \includegraphics[width=0.128\linewidth]{figs/image222/watch_GT_rse.png} \\
\end{tabular}
\caption{The top four rows are the image recovery results and residual images on the image ``airplane'', and the bottom four rows correspond to the image ``watch''.} \label{figure-rpca-image} \end{figure}
\subsubsection{Background Subtraction}
Four video sequences, respectively named ``Bootstrap1285'', ``Escalator2805'', ``ShoppingMall1535'', and ``hall1368'', are selected from Li's dataset\footnote{Data available at {http://vis-www.cs.umass.edu/~narayana/castanza/I2Rdataset/}}. After transforming the color frames to grayscale ones, each video is of size $130\times160\times40$. Results by all of the methods are displayed in Fig. \ref{fig-rpca-video}.
We can see that our method and MRPCA perform well for the videos ``Bootstrap1285'' and ``ShoppingMall1535'', while some incorrect extractions can be found in the foreground results of the other three methods, e.g., the front desk in ``Bootstrap1285'' and the dot pattern of the ground in ``ShoppingMall1535''. For the videos ``Escalator2805'' and ``hall1368'', all the methods more or less incorrectly extract contents of the background into the foreground. Overall, the foregrounds extracted by our method are the purest.
\begin{figure}[!t] \centering\scriptsize\setlength{\tabcolsep}{1pt} \renewcommand\arraystretch{0.9} \begin{tabular}{cccccc} Observed& MRPCA \cite{candes2011robust} &SNN \cite{goldfarb2014robust}&TNN \cite{lu2019tensor} &DCTNN \cite{lu2019exact2}& F-TNN\\
\includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_Ori_Frame_1286_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_IALM-RPCA_Frame_1286_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_HoRPCA_Frame_1286_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_TNN_Frame_1286_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_DCTNN_Frame_1286_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_FTNN1_Frame_1286_bcak.png} \\
&\includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_IALM-RPCA_Frame_1286_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_HoRPCA_Frame_1286_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_TNN_Frame_1286_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_DCTNN_Frame_1286_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Bootstrap_FTNN1_Frame_1286_fore.png} \\
Observed& MRPCA \cite{candes2011robust} &SNN \cite{goldfarb2014robust}&TNN \cite{lu2019tensor} &DCTNN \cite{lu2019exact2}& F-TNN\\
\includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_Ori_Frame_1408_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_IALM-RPCA_Frame_1408_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_HoRPCA_Frame_1408_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_TNN_Frame_1408_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_DCTNN_Frame_1408_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_FTNN1_Frame_1408_bcak.png} \\
&\includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_IALM-RPCA_Frame_1408_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_HoRPCA_Frame_1408_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_TNN_Frame_1408_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_DCTNN_Frame_1408_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/Escalator_FTNN1_Frame_1408_fore.png} \\
Observed& MRPCA \cite{candes2011robust} &SNN \cite{goldfarb2014robust}&TNN \cite{lu2019tensor} &DCTNN \cite{lu2019exact2}& F-TNN\\
\includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_Ori_Frame_536_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_IALM-RPCA_Frame_536_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_HoRPCA_Frame_536_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_TNN_Frame_536_bcak.png} &
\includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_DCTNN_Frame_536_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_FTNN1_Frame_536_bcak.png} \\
&\includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_IALM-RPCA_Frame_536_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_HoRPCA_Frame_536_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_TNN_Frame_536_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_DCTNN_Frame_536_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/ShoppingMall_resized_FTNN1_Frame_536_fore.png} \\
Observed& MRPCA \cite{candes2011robust} &SNN \cite{goldfarb2014robust}&TNN \cite{lu2019tensor} &DCTNN \cite{lu2019exact2}& F-TNN\\
\includegraphics[width=0.15\linewidth]{figs/Video233/hall_Ori_Frame_1290_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/hall_IALM-RPCA_Frame_1290_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/hall_HoRPCA_Frame_1290_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/hall_TNN_Frame_1290_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/hall_DCTNN_Frame_1290_bcak.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/hall_FTNN1_Frame_1290_bcak.png} \\
&\includegraphics[width=0.15\linewidth]{figs/Video233/hall_IALM-RPCA_Frame_1290_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/hall_HoRPCA_Frame_1290_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/hall_TNN_Frame_1290_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/hall_DCTNN_Frame_1290_fore.png} & \includegraphics[width=0.15\linewidth]{figs/Video233/hall_FTNN1_Frame_1290_fore.png} \\
\end{tabular}
\caption{Background subtraction results by different methods. The left column lists one frame of the observed video. From top to bottom are the separation results, i.e., the background and the foreground, of the videos ``Bootstrap1285'', ``Escalator2805'', ``ShoppingMall1535'', and ``hall1368'', respectively. For better visualization, we add 0.5 to the foreground.} \label{fig-rpca-video} \end{figure}
\subsection{Discussions}
\subsubsection{Framelet Setting}
In this part, taking the completion of the MRI data (SR = $10\%$) as an example, we evaluate the performance of the proposed method with different framelet transform settings. In addition to the piece-wise cubic B-spline (denoted as ``cubic''), we also adopt the Haar wavelet (denoted as ``Haar'') and the piece-wise linear B-spline (denoted as ``linear'') to generate the framelet transform. Meanwhile, we vary the decomposition level from 1 to 5. The quantitative metrics of the results obtained by the proposed method with different framelet settings are reported in Table \ref{MRIpara}. From Table \ref{MRIpara}, we can find that the piece-wise cubic B-spline is the best choice. As the decomposition level rises, the performance of the proposed method first improves and then saturates or slightly degrades. Setting the level as 3 or 4 is a good choice.
\begin{table}[!t] \renewcommand\arraystretch{0.9}\setlength{\tabcolsep}{4pt}\scriptsize\centering \caption{The PSNR, SSIM and FSIM of the recovery results on the MRI data by the proposed method with different framelet settings.
The \textbf{best} values are highlighted by bold fonts.} \begin{tabular}{ccccccc} \toprule Filters &Index & Level = 1 & Level = 2 & Level = 3 & Level = 4 & Level = 5 \\ \midrule \multirow{3}{*}{Haar} & PSNR & 21.176 & 23.327 & 24.183 & 24.366 & 24.372 \\ & SSIM & 0.537 & 0.647 & 0.680 & 0.685 & 0.685 \\ & FSIM & 0.755 & 0.801 & 0.817 & 0.821 & 0.821 \\ \midrule \multirow{3}{*}{Linear} & PSNR & 22.466 & 24.904 & 25.538 & 25.563 & 25.509 \\ & SSIM & 0.611 & 0.717 & 0.738 & 0.738 & 0.735 \\ & FSIM & 0.785 & 0.834 & 0.846 & 0.848 & 0.847 \\ \midrule \multirow{3}{*}{Cubic} & PSNR & 23.726 & 26.077 & \bf 26.287 & 26.104 & 25.970 \\ & SSIM & 0.673 & 0.761 & \bf 0.765 & 0.759 & 0.746 \\ & FSIM & 0.812 & 0.858 & \bf 0.863 & 0.862 & 0.858 \\ \bottomrule \end{tabular} \label{MRIpara} \end{table}
\subsubsection{Convergence Behaviours}
We also take the completion of the MRI data as an example to illustrate the convergence behaviours of our algorithm with respect to different sampling rates and different parameters. In the framework of ADMM, the parameter $\beta$, which is introduced by the augmented Lagrangian function, mainly affects the convergence behaviour of our method. Thus, we test our algorithm with $\beta = 10^{-1},1,10$. We plot $\|\mathbf{\mathcal{V}}^{t+1}-\mathbf{\mathcal{V}}^{t}\|_\infty$ and $\| \mathbf{\mathcal{X}}^{t+1} -\mathbf{\mathcal{X}}^{t} \|_\infty$ at each iteration in Fig. \ref{fig-convergency}. It can be seen that when $\beta = 10^{-1}$ and $1$, our algorithm converges steadily. Although the behaviour of $\| \mathbf{\mathcal{X}}^{t+1} -\mathbf{\mathcal{X}}^{t} \|_\infty$ is less stable when $\beta = 10$, our algorithm still converges rapidly.
\begin{figure}[!t] \centering \includegraphics[width=0.98\linewidth]{figs/convergency2.pdf} \caption{The convergence behaviours of Algorithm \ref{alg}, with respect to different sampling rates and different $\beta$.} \label{fig-convergency} \end{figure}
\section{Conclusions}\label{Sec:Con}
In this paper, we propose to replace the Fourier transform with the framelet transform in the t-SVD framework. Then, we formulate the framelet representation of the tensor multi-rank and the tensor nuclear norm. A low-rank tensor completion model and a tensor robust principal component analysis model are proposed by minimizing the framelet based tensor nuclear norm. We develop ADMM based algorithms to solve these convex models with guaranteed convergence. We compare the performance of the proposed method with state-of-the-art methods via numerical experiments on magnetic resonance imaging data, videos, color images, and multispectral images. Our method outperforms many state-of-the-art methods quantitatively and visually.
{\footnotesize \bibliographystyle{ieeetran}}
\section{Introduction} \label{sec:intro}
Image segmentation with topological correctness is a challenging problem, especially for images with fine-scale structures, e.g., satellite images, neuron images and vessel images. Deep learning methods have delivered strong performance in the image segmentation task~\cite{long2015fully,he2017mask,chen2014semantic,chen2018deeplab,chen2017rethinking}. However, even with satisfactory per-pixel accuracy, most existing methods are still prone to topological errors, e.g., broken connections, holes in 2D membranes, missing connected components, etc. These errors may significantly impact downstream tasks. For example, the road maps reconstructed from satellite images can be used for navigation~\cite{barzohar1996automatic,batra2019improved}. A small number of pixel errors can result in broken connections, causing incorrect navigation routes. See Fig.~\ref{fig:teaser} for an illustration. In neuron reconstruction~\cite{funke2017deep,januszewski2018high,uzunbas2016efficient,ye2019diverse,yang2021topological}, incorrect topology of the neuron membrane will result in erroneous merging or splitting of neurons, and thus errors in the morphology and connectivity analysis of neuronal circuits.
\begin{figure}[ht] \centering \begin{subfigure}{0.32\linewidth} \includegraphics[width=1\textwidth]{figures/ori.pdf} \caption{Image} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=1\textwidth]{figures/teaser_gt.pdf} \caption{GT mask} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[width=1\textwidth]{figures/teaser_pre.pdf} \caption{U-Net pred. mask} \end{subfigure} \caption{An illustration of the importance of topological correctness. If one wants to go to point $B$ from $A$, the shortest path in the GT is the green path illustrated in \textbf{(b)}. However, in the result predicted by U-Net, though only a few pixels are misclassified, the shortest path from $A$ to $B$ is totally different, as illustrated by the green path in \textbf{(c)}. Please zoom in for better viewing.} \label{fig:teaser} \end{figure}
Topological errors usually happen at challenging locations, e.g., weak connections or blurred regions. But not all challenging locations are topologically relevant; for example, pixels near the periphery of the object of interest are generally challenging, but not relevant to topology. To truly suppress topological errors, we need to focus on \emph{topologically critical locations}, i.e., challenging locations that are topologically relevant. Without identifying and targeting these locations, neural networks optimized for standard pixel-wise losses (e.g., the cross-entropy loss or the mean-square-error loss) cannot avoid topological errors, even if we increase the training set size.
Existing works have targeted these topologically critical locations. The closest method to our work is TopoNet~\cite{hu2019topology}, which is based on the theory of persistent homology \cite{edelsbrunner2000topological,edelsbrunner2010computational}. The main idea is to identify topologically critical locations corresponding to critical points of the likelihood map predicted by the neural network. The selected critical points are reweighted in the training loss to force the neural network to memorize them, and thus to avoid topological errors. But there are two main issues with this approach: 1) the method is based on the likelihood map, which can be noisy, with a large number of irrelevant critical points.
This leads to inefficient optimization during training. 2) The computation of persistent homology is cubic in the image size, which makes it too expensive to recompute at every iteration.
In this paper, we propose a novel approach to identify topologically critical locations in a more efficient and accurate manner. These locations are penalized in the proposed \emph{homotopy warping loss} to achieve better topological accuracy. Our method is partially inspired by the warping error previously proposed to evaluate topological accuracy~\cite{jain2010boundary}. Given a binary predicted mask $f_B$ and a ground truth mask $g$, we ``warp'' one towards the other without changing its topology. In the language of topology, to warp mask $f_B$ towards $g$, we find a mask $f_B^{*}$ that is homotopy equivalent to $f_B$ and is as close to $g$ as possible \cite{hatcher2002algebraic}. The difference between the warped mask $f_B^{*}$ and $g$ constitutes the topologically critical locations. We can also warp $g$ towards $f_B$ and find another set of topologically critical locations. See Fig.~\ref{fig:warping-synthetic} and Fig.~\ref{fig:error} for illustrations. These locations directly correspond to the topological differences between the prediction and the ground truth, while tolerating geometric deformations. Our homotopy warping loss targets them to fix topological errors of the model.
The warping of a mask is achieved by iteratively flipping labels at pixels without changing the topology of the mask. These flippable pixels are called \emph{simple points/pixels} in the classic theory of digital topology \cite{kong1989digital}. Note that since this paper focuses on the topology of binary masks, the terms simple point and simple pixel can be used interchangeably. Finding the optimal warping of one mask towards another is challenging due to the huge search space. To this end, we propose a new heuristic method that is computationally efficient. We filter the image domain with the distance transform and flip simple pixels based on their distance from the mask. This algorithm is efficient and delivers high-quality, locally optimal warping results.
Overall, our contributions can be summarized as follows:
\begin{itemize} \item We propose a novel \textit{homotopy warping loss}, which penalizes errors on topologically critical locations. These locations are defined by homotopic warping of the predicted and ground truth masks. The loss can be incorporated into the training of topology-preserving deep segmentation networks. \item By exploiting distance transforms of binary masks, we propose a novel homotopic warping algorithm to identify topologically critical locations in an efficient manner. This is essential for incorporating the homotopy warping loss into the training of deep nets. \end{itemize}
Our loss is a plug-and-play loss function. It can be used to train any segmentation network to achieve better performance in terms of topological accuracy. We conduct experiments on both 2D and 3D benchmarks to demonstrate the efficacy of the proposed method. Our method performs strongly in multiple topology-relevant metrics (e.g., ARI, Warping Error and Betti Error). We also conduct several ablation studies to further demonstrate the efficiency and effectiveness of the technical contributions.
\section{Related works} \label{sec:related}
\subsection{Deep Image Segmentation}
Deep learning methods (CNNs) have achieved satisfactory performance for image segmentation~\cite{long2015fully,chen2014semantic,chen2018deeplab,chen2017rethinking,noh2015learning,ronneberger2015u}. By replacing fully connected layers with convolutional layers, FCN~\cite{long2015fully} transforms classification CNNs (e.g., AlexNet~\cite{krizhevsky2012imagenet}, VGG~\cite{simonyan2014very}, or ResNet~\cite{he2016deep}) into fully convolutional neural networks. In this way, FCN successfully transfers the success of image classification~\cite{krizhevsky2012imagenet,simonyan2014very,szegedy2015going} to dense prediction/image segmentation. Instead of using a Conditional Random Field (CRF) as post-processing, the Deeplab (v1-v2) methods~\cite{chen2014semantic,chen2018deeplab} add another fully connected CRF after the last CNN layer to make use of global information. Moreover, Deeplab v3~\cite{chen2017rethinking} introduces dilated/atrous convolution to increase the receptive field and make better use of context information to achieve better performance.
Besides the methods mentioned above, U-Net~\cite{ronneberger2015u} has also been one of the most popular methods for image segmentation, especially for images with fine structures. The U-Net architecture is based on FCN with two major modifications: 1) similar to an encoder-decoder, U-Net is symmetric, and the output has the same size as the input image, which makes it suitable for dense prediction/image segmentation; and 2) skip connections between the downsampling and upsampling paths. The skip connections of U-Net are able to combine low-level/local information with high-level/global information, resulting in better segmentation performance.
Though obtaining satisfactory per-pixel performance, these methods are still prone to structural/topological errors, as they are usually optimized via pixel-wise loss functions, such as the mean-square-error loss (MSE) and the cross-entropy loss. As illustrated in Fig.~\ref{fig:teaser}, a small number of pixel errors can affect or even break downstream tasks.
\subsection{Topology-Aware Segmentation}
Topology-aware segmentation methods have been proposed to segment with correct structure/topology. By identifying critical points of the predicted likelihood maps, persistent-homology-based losses~\cite{hu2019topology,clough2020topological} penalize topologically critical locations. However, the identified critical points can be very noisy and often are not relevant to the topological errors. Illustrations are included in the Supplementary Material. Moreover, the computation of persistent homology is expensive, making it difficult to evaluate the loss and gradient at every training iteration.
Other methods indirectly preserve topology by enhancing the curvilinear structures. VGG-UNet~\cite{mosinska2018beyond} uses the response of pretrained filters to enhance structures locally. But it does not truly preserve the topology, and cannot generalize to higher dimensional topological structures, such as voids. Several methods extract skeletons of the masks and penalize heavily the pixels on the skeletons. This encourages the prediction to be correct along the skeletons, and thus likely correct in topology. clDice~\cite{shit2021cldice} extracts the skeleton through min/max-pooling operations over the likelihood map. DMT Loss~\cite{hu2021topology} uses the Morse complex of the likelihood map as the skeleton.
However, these skeletons are not necessarily topologically critical, and the penalization on them may not be relevant to topology. We also note that many deep learning techniques have been proposed to ensure that the segmentation output preserves details, and thus preserves topology implicitly~\cite{ronneberger2015u, long2015fully, badrinarayanan2017segnet, ding2019boundary, kervadec2019boundary, karimi2019reducing}. One may also impose topological constraints as postprocessing steps once the predicted likelihood maps are available~\cite{han2003topology,le2008self,sundaramoorthi2007global,segonne2008active,wu2017optimal,gao2013segmenting,vicente2008graph,nowozin2009global,zeng2008topology,chen2011enforcing,andres2011probabilistic,stuhmer2013tree,oswald2014generalized,estrada2014tree}. Compared to end-to-end methods, postprocessing methods usually contain self-defined parameters or hand-crafted features, making them difficult to generalize to different situations.
Instead of relying on the noisy likelihood maps~\cite{hu2019topology,clough2020topological,shit2021cldice,hu2021topology}, we propose to use the warping of binary masks to identify the topologically critical locations. The identified locations are more likely to be relevant to topological errors. Penalizing these locations ensures the training efficiency and segmentation quality of our method. Another difference from previous methods is that our method relies on purely local topological computation (i.e., checking whether a pixel is simple within a local patch), whereas previous methods mostly rely on global topological computation.
\section{Method} \label{sec:method}
By warping the predicted binary mask towards the ground truth mask or vice versa, we can accurately and efficiently identify the topologically critical locations. We then propose a novel homotopy warping loss, which targets these locations to fix topological errors of the model. The overall framework is illustrated in Fig.~\ref{fig:framework}.
This section is organized as follows. We start with the necessary definitions and notations: in Sec.~\ref{topology}, we give a concise description of digital topology and simple points. Next, we analyze different types of warping errors in Sec.~\ref{sec:warp_error}. The proposed warping loss is introduced in Sec.~\ref{warpingloss}. Finally, we explain the proposed new warping algorithm in Sec.~\ref{distance}.
\begin{figure}[th] \centering \includegraphics[width=1\linewidth]{figures/architecture.pdf} \vspace{-.2in} \caption{The illustration of the proposed \textit{homotopy warping loss} $L_{warp}$. The homotopy warping algorithm identifies the topologically critical locations via the binary masks instead of the noisy likelihood maps. The identified topologically critical locations/mask are used to define a new loss which is complementary to standard pixel-wise loss functions. The details of \textit{Homotopy Warping} and the \textit{Topological Critical Mask M} can be found in Sec.~\ref{sec:warp_error} and Sec.~\ref{warpingloss}, respectively.} \vspace{-.15in} \label{fig:framework} \end{figure}
\subsection{Digital Topology and Simple Points} \label{topology}
In this section, we briefly introduce the definition of simple points from classic digital topology~\cite{kong1989digital}. We focus on the 2D setting, whereas all definitions generalize to 3D. Details on 3D images are provided in the Supplementary Material.
\myparagraph{Connectivities of pixels.} To discuss the topology of a 2D binary image, we first define the connectivity between pixels.
See Fig.~\ref{fig:simple} for an illustration. A pixel $p$ has 8 pixels surrounding it. We can either consider the 4 pixels that share an edge with $p$ as $p$'s neighbors (called \emph{4-adjacency}), or consider all 8 pixels as $p$'s neighbors (called \emph{8-adjacency}). For the Jordan closed curve theorem to hold, one has to use one adjacency for foreground (FG) pixels and the other adjacency for background (BG) pixels. In this paper, we use 4-adjacency for FG and 8-adjacency for BG. For 3D binary images, we use 6-adjacency for FG and 26-adjacency for BG. Denote by $N_4(p)$ the set of 4-adjacency neighbors of $p$, and by $N_8(p)$ the set of 8-adjacency neighbors of $p$.
\myparagraph{Simple points.} For a binary image (2D/3D), a pixel/voxel is called a \textit{simple point} if it can be flipped from foreground (FG) to background (BG), or from BG to FG, without changing the topology of the image~\cite{kong1989digital}. The following theorem can be used to determine whether a point is simple:
\begin{theorem}[Simple Point Condition~\cite{kong1989digital}]
Let $p$ be a point in a 2D binary image. Denote by $F$ the set of FG pixels. Assume 4-adjacency for FG and 8-adjacency for BG. $p$ is a simple point if and only if both of the following conditions hold: 1) $p$ is 4-adjacent to just one FG connected component in $N_8(p)$; and 2) $p$ is 8-adjacent to just one BG connected component in $N_8(p)$.
\end{theorem}
See Fig.~\ref{fig:simple} for an illustration of simple and non-simple points in the 2D case. It is easy to check whether a pixel $p$ is simple by inspecting its $3 \times 3$ neighboring patch. The theorem also generalizes to the 3D setting with 6- and 26-adjacencies for FG and BG, respectively.
\begin{figure}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{figures/4_conn.pdf}
\caption{4-adjacent}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{figures/8_conn.pdf}
\caption{8-adjacent}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{figures/simple.pdf}
\caption{Simple point}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{figures/non_simple.pdf}
\caption{Non-simple}
\end{subfigure}
\vspace{-.1in}
\caption{Illustration of 4-/8-adjacency and of simple and non-simple points. \textbf{(a)}: 4-adjacency. \textbf{(b)}: 8-adjacency. \textbf{(c)}: a simple point $p$. White and grey pixels are FG and BG, respectively. Flipping the label of $p$ does not change the topology. \textbf{(d)}: a non-simple point $p$. Flipping $p$ changes the topology.}
\vspace{-.1in}
\label{fig:simple}
\end{figure}
\subsection{Homotopic Warping Error}
\label{sec:warp_error}
In this section, we introduce the homotopic warping of one mask towards another. We warp a mask through a sequence of flips of simple points. Since we only flip simple points, by definition the warped mask has the same topology.\footnote{Note that it is essential to flip these simple points sequentially. The simple/non-simple status of a pixel may change when adjacent pixels are flipped. Therefore, flipping a set of simple points \textit{simultaneously} is not necessarily topology-preserving.} The operation is called a \textit{homotopic warping}. It has been proven that two binary images with the same topology can always be warped into each other by flipping a sequence of simple points~\cite{rosenfeld1998topology}.
\begin{figure}[btp!]
\centering
\setlength{\tabcolsep}{0.5pt}
\begin{tabular}{ccc}
\includegraphics[width=.15\textwidth]{figures/Warping/Warping-Red.pdf} &
\includegraphics[width=.15\textwidth]{figures/Warping/Warping-Red-Final.pdf} &
\includegraphics[width=.15\textwidth]{figures/Warping/SimplePixels-Red.pdf} \\
(a) & (b) & (c) \\
\includegraphics[width=.15\textwidth]{figures/Warping/Warping-White.pdf} &
\includegraphics[width=.15\textwidth]{figures/Warping/Warping-White-Final.pdf} &
\includegraphics[width=.15\textwidth]{figures/Warping/SimplePixels-White.pdf} \\
(d) & (e) & (f)
\end{tabular}
\vspace{-.1in}
\caption{Illustration of homotopic warping between two masks, red and white. If red is the FG of the prediction, this is a false negative topological error. If red is the FG of the ground truth, this is a false positive error. \textbf{(a-c):} warping the red mask towards the white mask. \textbf{(a):} arrows show the warping direction. \textbf{(b):} the final mask after warping. Only a single-pixel-wide gap remains in the middle of the warped red mask. The non-simple/critical pixels are highlighted with red crosses. They correspond to the topological error and will be penalized in the loss. \textbf{(c):} at the beginning of the warping, we highlight (with green crosses) the simple points that can be flipped according to our algorithm. \textbf{(d-f):} warping the white mask towards the red mask. Only a single-pixel-wide connection remains to keep the warped white mask connected. The non-simple/critical pixels are highlighted with red crosses.}
\vspace{-.2in}
\label{fig:warping-synthetic}
\end{figure}

In our algorithm, we take two input masks, the prediction mask and the ground truth mask. We warp one of them (the source mask) into the other (the target mask) in the best way possible, i.e., so that the warped mask has the minimal number of differences from the target mask (formally, the minimal Hamming distance). Once the warping is finished, the pixels at which the warped mask differs from the target mask, called \emph{critical pixels}, form a sparse set of pixels indicative of the topological errors of the prediction mask. We warp in both directions: from the prediction mask to the ground truth mask, and the opposite. The two directions identify different sets of critical pixels for the same topological error.

In Fig.~\ref{fig:warping-synthetic}, we show a synthetic example with red and white masks, as well as warping in both directions. Warping the red mask towards the white mask ((a) and (b)) results in a single-pixel-wide gap. The pixels in the gap (highlighted with red crosses) are critical pixels; flipping any of them would change the topology of the warped red mask. Warping the white mask towards the red mask ((d) and (e)) results in a single-pixel-wide link connecting the warped white mask. All pixels along the link (highlighted with red crosses) are critical; flipping any of them would change the topology of the warped white mask. Here, if the red mask is the prediction mask, this corresponds to a false negative connection, i.e., a connection missed by the prediction. If the red mask is the ground truth mask, this corresponds to a false positive connection. Note that the warping ensures that \emph{only topological errors are represented by the critical pixels}. In the synthetic example (Fig.~\ref{fig:warping-synthetic}), the large area of error in the bottom left corner of the image is completely ignored, as it is not topologically relevant.
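To make the simple-point test of Theorem 1 concrete, the following sketch (our illustration, not the authors' released code; the helpers \texttt{is\_simple} and \texttt{\_components} are hypothetical names) checks the two conditions on a $3 \times 3$ binary \texttt{numpy} patch, using 4-adjacency for FG and 8-adjacency for BG:
\begin{verbatim}
import numpy as np

# Offsets of the 8 neighbours N_8(p) around the centre pixel p.
OFFS = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]
N4 = {(-1,0),(0,-1),(0,1),(1,0)}  # the 4-adjacent offsets

def _components(cells, conn):
    # Connected components among the neighbour offsets `cells`,
    # linking two cells under 4- or 8-adjacency (`conn`).
    cells, comps = set(cells), []
    while cells:
        stack, comp = [cells.pop()], set()
        while stack:
            y, x = stack.pop()
            comp.add((y, x))
            for v, u in list(cells):
                dy, dx = abs(y - v), abs(x - u)
                if (conn == 4 and dy + dx == 1) or \
                   (conn == 8 and max(dy, dx) == 1):
                    cells.remove((v, u))
                    stack.append((v, u))
        comps.append(comp)
    return comps

def is_simple(patch):
    # patch: 3x3 array, 1 = FG, 0 = BG; the centre is the pixel p.
    fg = [o for o in OFFS if patch[1 + o[0], 1 + o[1]] == 1]
    bg = [o for o in OFFS if patch[1 + o[0], 1 + o[1]] == 0]
    # Condition 1: exactly one FG component (4-connectivity) is
    # 4-adjacent to p.
    c1 = sum(bool(comp & N4) for comp in _components(fg, 4)) == 1
    # Condition 2: exactly one BG component (8-connectivity) in N_8(p).
    c2 = len(_components(bg, 8)) == 1
    return c1 and c2
\end{verbatim}
For instance, \texttt{is\_simple(np.array([[0,1,0],[0,1,0],[0,0,0]]))} returns \texttt{True}: the centre is the endpoint of a one-pixel-wide curve and can be flipped without changing the topology.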
In Fig.~\ref{fig:error}, we show a real example from the satellite image dataset, focusing on errors related to 1D topological structures (connections). The figure illustrates both a false negative connection error (highlighted with a red box) and a false positive connection error (highlighted with a green box). If we warp the ground truth mask towards the prediction mask (c), we observe critical pixels forming a link for the false negative connection (d), and a gap for the false positive connection (e). Similarly, we can warp the prediction mask towards the ground truth and obtain different sets of critical pixels for the same topological errors (illustrations are provided in the Supplementary Material).
\begin{figure}[t]
\centering
\begin{subfigure}{0.19\linewidth}
\includegraphics[width=1\textwidth]{figures/gt_illu.pdf}
\caption{GT}
\end{subfigure}
\begin{subfigure}{0.19\linewidth}
\includegraphics[width=1\textwidth]{figures/pre_illu.pdf}
\caption{Prediction}
\end{subfigure}
\begin{subfigure}{0.19\linewidth}
\includegraphics[width=1\textwidth]{figures/gt_warp_illu.pdf}
\caption{Warp GT}
\end{subfigure}
\begin{subfigure}{0.19\linewidth}
\includegraphics[width=1\textwidth]{figures/error_zoom.pdf}
\caption{Zoom-in}
\end{subfigure}
\begin{subfigure}{0.19\linewidth}
\includegraphics[width=1\textwidth]{figures/error_zoom_2.pdf}
\caption{Zoom-in}
\end{subfigure}
\caption{Illustration of warping in a real-world example (satellite image). \textbf{(a)} The GT mask. \textbf{(b)} The prediction mask. The red box highlights a \textit{false negative connection}, and the green box highlights a \textit{false positive connection}. \textbf{(c)} The warped GT mask (using the prediction mask as the target). \textbf{(d)} Zoomed-in view of the red box in \textbf{(c)}. \textbf{(e)} Zoomed-in view of the green box in \textbf{(c)}.}
\vspace{-.2in}
\label{fig:error}
\end{figure}

Note that for 2D images with fine structures, errors in 1D topological structures are the most crucial; they affect the connectivity of the prediction. For 3D images, errors in both 1D and 2D topological structures are important, corresponding to broken connections of tubular structures and holes in membranes. We provide a more comprehensive characterization of the different types of topological structures and errors in the Supplementary Material.

\subsection{Homotopy Warping Loss}
\label{warpingloss}
Next, we formalize the proposed \emph{homotopy warping loss}, which is evaluated on the critical pixels obtained by homotopic warping. As illustrated in the previous section, the warping can go in both directions: from the prediction mask to the ground truth mask, and the opposite.

Formally, we denote by $f$ the predicted likelihood map of a segmentation network, and by $f_B$ the corresponding prediction mask (i.e., $f$ thresholded at 0.5). We denote by $g$ the ground truth mask. First, we warp $g$ towards $f_B$, so that the warped mask $g^\ast$ has the minimal Hamming distance from the target $f_B$:
\vspace{-.1in}
\begin{equation}
\label{eq:warping-g}
g^{*} = \argmin\nolimits_{g^{w} \lhd g} ||f_B-g^{w}||_H
\end{equation}
where $\lhd$ denotes the homotopic warping operation. The pixels at which $g^\ast$ and $f_B$ disagree are the critical pixels and will be penalized in the loss. We record the critical pixels due to the warping of $g$ with a mask $M_g = f_B \oplus g^{*}$, in which $\oplus$ is the \textit{exclusive or} operation. We also warp the prediction mask $f_B$ towards $g$.
\vspace{-.05in}
\begin{equation}
f_B^{*} = \argmin\nolimits_{f_B^w \lhd f_B} ||f_B^{w}-g||_H
\label{eq:warping-fb}
\end{equation}
We use the mask $M_f = g \oplus f_B^{*}$ to record the critical pixels remaining after warping $f_B$. The union of the two critical pixel masks, $M = M_g \cup M_f$, is the complete set of critical pixels corresponding to topological errors. $M$ contains all the locations directly related to topological structures. Note that this is different from the persistent-homology-based method~\cite{hu2019topology}, the DMT-based method~\cite{hu2021topology} and the skeleton-based method~\cite{shit2021cldice}, which extract topological locations/structures from the predicted continuous-valued likelihood maps. Our warping loss locates the topologically critical pixels/structures directly on the binary masks; the detected critical pixel set is sparse and less noisy.

Let $L_{pixel}$ denote a pixel-wise loss function (e.g., cross-entropy). The warping loss $L_{warp}$ is then defined as:
\vspace{-.05in}
\begin{equation}
\label{loss}
L_{warp} = L_{pixel}(f, g) \odot M
\end{equation}
where $\odot$ denotes the Hadamard product. $L_{warp}$ penalizes the topologically critical locations, forcing the neural network to predict better at these locations and thus making it less prone to topological errors. The final loss of our method, $L_{total}$, is given by:
\vspace{-.05in}
\begin{equation}
\label{final_loss}
L_{total} = L_{dice} + \lambda_{warp} L_{warp}
\end{equation}
where $L_{dice}$ denotes the dice loss and the loss weight $\lambda_{warp}$ balances the two loss terms.

\subsection{Distance-Ordered Homotopy Warping}
\label{distance}
Even though checking whether a pixel is simple is easy, finding the optimal warping as in Eq.~\eqref{eq:warping-g} and \eqref{eq:warping-fb} is challenging: there are too many degrees of freedom. At each iteration of the warping, we have to choose a simple point to flip, and it is not obvious which choice leads to a global optimum. In this section, we provide an efficient heuristic algorithm that finds a local optimum of the warping. We explain the algorithm for warping $g$ towards $f_B$; it generalizes to the opposite warping direction naturally.

Recall that the warping algorithm iteratively flips simple points. There are too many choices at each iteration, and it is hard to know which flip leads to the optimal solution, so we need good heuristics for choosing a flippable pixel. Below we explain the main intuitions behind our algorithm.

First, we restrict the warping so that it only sweeps through the area where the two masks disagree. In other words, at each iteration, we restrict the candidate pixels for flipping to pixels that are not only simple but also ones on which $g$ and $f_B$ disagree. In Fig.~\ref{fig:warping-synthetic} (c) and (f), we highlight the candidate pixels for flipping at the beginning of the warping. Notice that not all simple points are selected as candidates; we only choose simple points within the difference set $\text{Diff}(f_B,g)=f_B \oplus g$.

Second, since we want to minimize the difference between the warped and target masks, we propose to flip the pixels within the difference region $\text{Diff}(f_B,g)$ in a particular order. To implement this strategy efficiently, we order all pixels within $\text{Diff}(f_B,g)$ according to their distance from the FG/BG, and flip them in this order; a pixel is skipped if it is not simple.
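The sketch below illustrates this distance-ordered sweep (a simplified illustration of ours, not the released implementation; it reuses the hypothetical \texttt{is\_simple} helper from Sec.~\ref{topology}, assumes 2D binary \texttt{numpy} masks, and skips border pixels for brevity). The ordering it relies on is justified by the lemma in the next paragraphs:
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def warp(g, f_B):
    # Warp mask g towards target f_B by flipping simple points in
    # non-decreasing order of the distance transform D^g.
    g = g.copy()
    d_bg = distance_transform_edt(1 - g)  # BG pixels: distance to FG
    d_fg = distance_transform_edt(g)      # FG pixels: distance to BG
    D = np.where(g == 1, d_fg, d_bg)
    ys, xs = np.nonzero(g != f_B)         # candidate set Diff(f_B, g)
    order = np.argsort(D[ys, xs], kind="stable")
    for y, x in zip(ys[order], xs[order]):
        if 0 < y < g.shape[0] - 1 and 0 < x < g.shape[1] - 1 \
                and is_simple(g[y-1:y+2, x-1:x+2]):
            g[y, x] = 1 - g[y, x]         # flipping preserves topology
    return g  # pixels still differing from the target are critical
\end{verbatim}
With this helper, the critical-pixel masks of Eqs.~(\ref{eq:warping-g}) and (\ref{eq:warping-fb}) would be computed as $M_g = f_B \oplus \texttt{warp}(g, f_B)$ and $M_f = g \oplus \texttt{warp}(f_B, g)$.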
Our algorithm is based on the intuition that a far-away pixel cannot become simple until nearby pixels are flipped first. To see this, we first formalize the \emph{distance transforms} of the masks $f_B$ and $g$, denoted by $D^{f_B}$ and $D^{g}$. For a BG pixel $p$ of $g$, its distance value $D^g(p)$ is the shortest distance from $p$ to any FG pixel of $g$, $D^g(p)=\min_{s\in FG_g} \text{dist}(p,s)$. Similarly, for a FG pixel $q$ of $g$, $D^g(q)=\min_{s\in BG_g} \text{dist}(q,s)$. The definition carries over to $D^{f_B}$. We observe that a pixel cannot be simple unless it has distance 1 from the FG/BG of the warping mask. The proof is straightforward. Formally,
\vspace{-.05in}
\begin{lemma}
Given a 2D binary mask $m$, a pixel $p$ cannot be simple for $m$ if its distance value satisfies $D^m(p) >1$.
\vspace{-.05in}
\label{lemma:distance}
\end{lemma}
\myparagraph{Proof.} Assume the foreground has pixel value 1 and $p$ is a background pixel with index $(i,j)$. Consider the four 4-adjacent neighbors of $p$. Since $D^m(p) > 1$, we have $m(i-1,j) = m(i+1, j) = m(i, j-1) = m(i, j+1) = 0$. In this case, $p$ is not 4-adjacent to any FG connected component, violating condition 1) of Theorem 1. Consequently, pixel $(i,j)$ is not a simple point. The same argument holds for foreground pixels, and the lemma naturally generalizes to the 3D case. \qed

Lemma \ref{lemma:distance} implies that other misclassified locations should be considered only after the pixels at distance 1 have been flipped. This observation motivates our algorithm. To warp $g$ towards $f_B$, we proceed as follows: (1) compute the difference set $\text{Diff}(f_B,g)$ as the candidate set of pixels; (2) sort the candidate pixels in non-decreasing order of the distance transform $D^g$; (3) enumerate the candidate pixels in this order, and for each pixel, check whether it is simple; if so, flip its label.

It is possible for this algorithm to miss some pixels: they are not simple when the algorithm checks them, but may become simple as the algorithm continues (and their neighboring pixels get flipped). One remedy is to recalculate the distance transform after one round of warping and go through the remaining pixels once more. In practice we found this unnecessary, as this scenario is very rare.

\section{Experiments}
\label{sec:experiment}
We conduct extensive experiments to demonstrate the effectiveness of the proposed method. Sec.~\ref{dataset} introduces the datasets used in this paper, including both 2D and 3D datasets. The benchmark methods are described in Sec.~\ref{baseline}; we mainly focus on topology-aware segmentation methods. Sec.~\ref{metric} describes the evaluation metrics used to assess segmentation quality. To demonstrate the ability to achieve better structural/topological performance, besides a standard segmentation metric, the DICE score, we use several topology-aware metrics to evaluate all methods. Several ablation studies are then conducted to further demonstrate the efficiency and effectiveness of the technical contributions (Sec.~\ref{sec:ablation}).

\subsection{Datasets}
\label{dataset}
We conduct experiments on both 2D and 3D publicly available datasets.
The datasets used in this paper are listed as follows:
\begin{enumerate}[topsep=0pt, partopsep=0pt]
\item \textit{RoadTracer}: RoadTracer contains 300 high-resolution satellite images, covering urban areas of forty cities in six different countries~\cite{bastani2018roadtracer}. Following the setting in~\cite{bastani2018roadtracer}, twenty-five cities (180 images) are used as the training set and the remaining fifteen cities (120 images) as the validation set.
\vspace{-.05in}
\item \textit{DeepGlobe}: DeepGlobe contains aerial images of rural areas in Thailand, Indonesia and India~\cite{demir2018deepglobe}. Following the setting in~\cite{batra2019improved}, we use 4696 images as the training set and the remaining 1530 images as the validation set.
\vspace{-.05in}
\item \textit{Massachusetts}: The Massachusetts dataset~\cite{mnih2013machine} contains images of both urban and rural areas. Following the setting in~\cite{hu2019topology}, we conduct a three-fold cross-validation.
\vspace{-.05in}
\item \textit{CREMI}: The CREMI dataset is a 3D neuron dataset\footnote{https://cremi.org/} with a resolution of $4 \times 4 \times 40$ nm. We also conduct a three-fold cross-validation.
\end{enumerate}
\subsection{Baselines}
\label{baseline}
We compare the results of our method with several state-of-the-art methods. The standard/simple U-Net (2D/3D) is used as a strong baseline and as the backbone for the other methods; we mainly focus on topology-aware segmentation methods. The baseline methods are listed as follows:
\begin{enumerate}[topsep=0pt, partopsep=0pt]
\item \textit{U-Net}~\cite{ronneberger2015u,cciccek20163d}: The standard U-Net trained with dice loss. Though many other segmentation methods/backbones have been proposed, U-Net remains one of the most powerful methods for segmenting images with fine structures.
\vspace{-.05in}
\item \textit{VGG-UNet}~\cite{mosinska2018beyond}: VGG-UNet uses the responses of selected filters from a pretrained CNN to construct a new loss function. This is one of the earliest works addressing topologically correct delineation.
\vspace{-.05in}
\item \textit{TopoNet}~\cite{hu2019topology}: TopoNet is a recent work that learns to segment with correct topology based on a novel persistent-homology-based loss function.
\vspace{-.05in}
\item \textit{clDice}~\cite{shit2021cldice}: Another topology-aware method for tubular structure segmentation. The basic idea is to use thinning techniques to extract the skeletons (centerlines) of the likelihood map and the ground truth mask. A new clDice loss is defined on the extracted skeletons, in addition to a traditional pixel-wise loss.
\vspace{-.05in}
\item \textit{DMT}~\cite{hu2021topology}: DMT is a topology-aware deep image segmentation method based on discrete Morse theory. Instead of identifying topologically critical pixels/locations, the DMT loss identifies whole Morse structures, and the new loss is defined on the identified Morse structures.
\end{enumerate}
\subsection{Evaluation Metrics}
\label{metric}
We use both pixel-wise and topology-aware metrics to evaluate the performance of the proposed method. The metrics are listed as follows:
\begin{enumerate}[topsep=0pt, partopsep=0pt]
\item \textit{DICE}: The DICE score (also known as the DICE coefficient or DICE similarity index) is one of the most popular evaluation metrics for image segmentation; it measures the overlap between the predicted and ground truth masks.
\vspace{-.05in}
\item \textit{Adapted Rand Index (ARI)}: ARI is the maximal F-score of the foreground-restricted Rand index~\cite{rand1971objective}, a measure of similarity between two clusterings. The intuition is that the boundaries partition the binary mask into several separate regions, so the predicted and ground truth binary masks can be regarded as two different partitions; ARI measures the similarity between these two partitions.
\vspace{-.05in}
\item \textit{Warping Error}~\cite{jain2010boundary}: The warping error is a metric that measures topological disagreement instead of simple pixel disagreement. After warping the ground truth onto the predicted mask by flipping simple points, the remaining disagreements are topological errors. The warping error is defined as the percentage of these topological errors over the image size.
\vspace{-.05in}
\item \textit{Betti Error}: The Betti error directly measures the topological difference between the predicted segmentation and the ground truth. We randomly sample patches over the predicted segmentation and compute the average absolute difference between their Betti numbers and those of the corresponding ground truth patches.
\end{enumerate}
\subsection{Implementation Details}
For 2D images, we use $(m, n) = (4, 8)$ to check whether a pixel is simple, and $(m, n) = (6, 26)$ for 3D images. For the 2D datasets, the batch size is 16 and the initial learning rate is 0.01. We randomly crop patches of size $512 \times 512$ and feed them into the 2D U-Net. For the 3D case, the batch size is also 16, while the input size is $128 \times 128 \times 16$. We normalize each patch by its mean and standard deviation. We implement the proposed method in the PyTorch framework (version 1.7.1). A simple/standard U-Net (2D or 3D) is used as the baseline and as the backbone. For a fair comparison, the proposed method and the other loss-function-based baselines use the same U-Net backbone. The training strategy is to first train the U-Net with dice loss until convergence, and then add the proposed loss to fine-tune the model obtained in the initial step. All experiments are performed on a Tesla V100-SXM2 GPU (32G memory) and an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz.
\subsection{Qualitative Results}
In Fig.~\ref{fig:Qualitative}, we show qualitative results from different datasets. Compared with the baseline U-Net, our method recovers better structures, such as connections, which are highlighted by red circles. Our final loss is a weighted combination of the dice loss and the warping-loss term $L_{warp}$; when $\lambda_{warp} = 0$, the proposed method degenerates to a standard U-Net. The recovered structures (U-Net and \textit{Warping} columns in Fig.~\ref{fig:Qualitative}) demonstrate that the warping loss helps the deep neural network achieve segmentations with better topology. More qualitative results are provided in the Supplementary Material.
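As a concrete picture of this fine-tuning objective, the sketch below assembles Eq.~(\ref{final_loss}) for a single 2D image. It is our illustration rather than the released training code: it reuses the hypothetical \texttt{warp} helper from Sec.~\ref{distance}, and the normalization of the masked loss over the critical pixels is a choice of ours, since Eq.~(\ref{loss}) leaves the reduction unspecified:
\begin{verbatim}
import torch
import torch.nn.functional as F

def dice_loss(f, g, eps=1e-6):
    # Soft dice loss between likelihood map f and ground truth mask g.
    inter = (f * g).sum()
    return 1 - (2 * inter + eps) / (f.sum() + g.sum() + eps)

def total_loss(f, g, lam_warp=1e-4):
    # f: predicted likelihood map in [0,1]; g: binary ground truth mask.
    f_np = (f > 0.5).float().cpu().numpy()    # prediction mask f_B
    g_np = g.cpu().numpy()
    g_star = warp(g_np, f_np)                 # warp g towards f_B
    fB_star = warp(f_np, g_np)                # warp f_B towards g
    M = (g_star != f_np) | (fB_star != g_np)  # critical-pixel mask M
    M = torch.from_numpy(M).float().to(f.device)
    ce = F.binary_cross_entropy(f, g, reduction="none")
    l_warp = (ce * M).sum() / M.sum().clamp(min=1.0)
    return dice_loss(f, g) + lam_warp * l_warp
\end{verbatim}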
\begin{figure}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/22678915img_crop.png}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/22678915gt_crop.png}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/22678915_unet_modify.pdf}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/22678915our_crop.png}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/748045img_crop.png}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/748045gt_crop.png}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/748045_unet_modify.pdf}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/748045our_crop.png}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/chicago_crop.png}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/chicago_gt.png}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/chicago_unet_modify.pdf}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/chicago_our.png}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/6img_crop.png}
\caption{Original patch}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/6gt_crop.png}
\caption{GT mask}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/modify_cremi.pdf}
\caption{U-Net}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=1\textwidth]{results/6our_crop.png}
\caption{\textit{Warping}}
\end{subfigure}
\vspace{-.1in}
\caption{Qualitative results compared with the standard U-Net. The proposed warping loss helps to correct topological errors (highlighted by red circles). The sampled patches are from four different datasets.}
\label{fig:Qualitative}
\end{figure}
\subsection{Quantitative Results}
Tab.~\ref{roadtracer}, \ref{deepglobe} and \ref{mass} show the quantitative results for the three 2D image datasets: RoadTracer, DeepGlobe and Massachusetts. The best performances are highlighted in bold. The proposed warping loss usually achieves the best performance in both the DICE score and the topological metrics (ARI, Warping Error and Betti Error) among the topology-aware segmentation baselines. Tab.~\ref{cremi} shows the quantitative results for the 3D image dataset, CREMI. The proposed warping loss also outperforms the others in terms of the topological metrics (ARI, Warping Error and Betti Error).
\setlength{\tabcolsep}{5pt}
\begin{table}[ht]
\vspace{-.05in}
\caption{Quantitative results of different methods for RoadTracer.}
\vspace{-.2in}
\label{roadtracer}
\begin{center}
\small
\begin{tabular}{ccccc}
\hline
Method & DICE$\uparrow$ & ARI$\uparrow$ & Warping$\downarrow$ & Betti$\downarrow$\\
\hline\hline
U-Net~\cite{ronneberger2015u} & 0.587 & 0.544 & 10.412 $\times 10^{-3}$ & 1.591\\
\hline
VGG-UNet~\cite{mosinska2018beyond} & 0.576 & 0.536 & 11.231 $\times 10^{-3}$ & 1.607 \\
TopoNet~\cite{hu2019topology} & 0.584 & 0.556 & 10.008 $\times 10^{-3}$ & 1.378\\
clDice~\cite{shit2021cldice} & 0.591 & 0.550 & 9.192 $\times 10^{-3}$ & 1.309\\
DMT~\cite{hu2021topology} & 0.593 & 0.561 & 9.452 $\times 10^{-3}$ & 1.419\\
\hline
\textit{Warping} & \textbf{0.603} & \textbf{0.572} & \textbf{8.853} $\times 10^{-3}$ & \textbf{1.251}\\
\hline
\end{tabular}
\vspace{-.2in}
\end{center}
\end{table}
\setlength{\tabcolsep}{5pt}
\begin{table}[ht]
\caption{Quantitative results of different methods for DeepGlobe.}
\vspace{-.2in}
\label{deepglobe}
\begin{center}
\small
\begin{tabular}{ccccc}
\hline
Method & DICE$\uparrow$ & ARI$\uparrow$ & Warping$\downarrow$ & Betti$\downarrow$\\
\hline\hline
U-Net~\cite{ronneberger2015u} & 0.764 & 0.758 & 3.212 $\times 10^{-3}$ & 0.827\\
\hline
VGG-UNet~\cite{mosinska2018beyond} & 0.742 & 0.748 & 3.371 $\times 10^{-3}$ & 0.867\\
TopoNet~\cite{hu2019topology} & 0.765 & 0.763 & 2.908 $\times 10^{-3}$ & 0.695\\
clDice~\cite{shit2021cldice} & 0.771 & 0.767 & 2.874 $\times 10^{-3}$ & 0.711\\
DMT~\cite{hu2021topology} & 0.769 & 0.772 & 2.751 $\times 10^{-3}$ & 0.609\\
\hline
\textit{Warping} & \textbf{0.780} & \textbf{0.784} & \textbf{2.683} $\times 10^{-3}$ & \textbf{0.569}\\
\hline
\end{tabular}
\vspace{-.2in}
\end{center}
\end{table}
\setlength{\tabcolsep}{5pt}
\begin{table}[ht]
\caption{Quantitative results of different methods for the Massachusetts dataset.}
\vspace{-.2in}
\label{mass}
\begin{center}
\small
\begin{tabular}{ccccc}
\hline
Method & DICE$\uparrow$ & ARI$\uparrow$ & Warping$\downarrow$ & Betti$\downarrow$\\
\hline\hline
U-Net~\cite{ronneberger2015u} & 0.661 & 0.819 & 3.093 $\times 10^{-3}$ & 3.439 \\
\hline
VGG-UNet~\cite{mosinska2018beyond} & 0.667 & 0.846 & 3.185 $\times 10^{-3}$ & 2.781\\
TopoNet~\cite{hu2019topology} & 0.690 & 0.867 & 2.871 $\times 10^{-3}$ & 1.275\\
clDice~\cite{shit2021cldice} & 0.682 & 0.862 & 2.552 $\times 10^{-3}$ & 1.431\\
DMT~\cite{hu2021topology} & 0.706 & \textbf{0.881} & 2.631 $\times 10^{-3}$ & 0.995 \\
\hline
\textit{Warping} & \textbf{0.715} & 0.864 & \textbf{2.440} $\times 10^{-3}$ & \textbf{0.974}\\
\hline
\end{tabular}
\vspace{-.2in}
\end{center}
\end{table}
\begin{table}[ht]
\caption{Quantitative results of different methods for CREMI.}
\vspace{-.2in}
\label{cremi}
\begin{center}
\small
\begin{tabular}{ccccc}
\hline
Method & DICE$\uparrow$ & ARI$\uparrow$ & Warping$\downarrow$ & Betti$\downarrow$\\
\hline\hline
3D UNet~\cite{cciccek20163d} & 0.961 & 0.832 & 11.173 $\times 10^{-3}$ & 2.313\\
\hline
TopoNet~\cite{hu2019topology} & 0.967 & 0.872 & 10.454 $\times 10^{-3}$ & 1.076\\
clDice~\cite{shit2021cldice} & 0.965 & 0.845 & 10.576 $\times 10^{-3}$ & 0.756\\
DMT~\cite{hu2021topology} & \textbf{0.973} & 0.901 & 10.318 $\times 10^{-3}$ & 0.726\\
\hline
\textit{Warping} & 0.967 & \textbf{0.907} & \textbf{9.854} $\times 10^{-3}$ & \textbf{0.711}\\
\hline
\end{tabular}
\vspace{-.2in}
\end{center}
\end{table}
\subsection{Ablation studies}
\label{sec:ablation}
To further explore the technical contributions of the proposed method and provide a rough guideline
of how to choose the hyper-parameters, we conduct several ablation studies. Note that all ablation studies are conducted on the RoadTracer dataset.
\subsubsection{The impact of the loss weight}
As seen in Eq.~\ref{final_loss}, our final loss function is a combination of the dice loss and the proposed warping-loss term $L_{warp}$. The balancing weight $\lambda_{warp}$ controls the influence of the warping-loss term and is a dataset-dependent hyper-parameter. The quantitative results for different choices of $\lambda_{warp}$ are reported in Tab.~\ref{weight}. For the RoadTracer dataset, the optimal value is $1 \times 10^{-4}$. From Tab.~\ref{weight}, we find that the choice of $\lambda_{warp}$ does affect performance: if $\lambda_{warp}$ is too small, the effect of the warping-loss term is negligible, whereas if $\lambda_{warp}$ is too large, the warping-loss term competes with $L_{dice}$ and degrades performance on the remaining, easily classified pixels. Note that within a reasonable range of $\lambda_{warp}$, all choices yield better performance than the baseline (row `0', standard U-Net), demonstrating the effectiveness of the proposed loss term.
\begin{table}[ht]
\vspace{-.1in}
\caption{Ablation study for the loss weight $\lambda_{warp}$.}
\vspace{-.2in}
\label{weight}
\begin{center}
\small
\begin{tabular}{ccccc}
\hline
$\lambda_{warp}$ & DICE$\uparrow$ & ARI$\uparrow$ & Warping$\downarrow$ & Betti$\downarrow$\\
\hline\hline
0 & 0.587 & 0.544 & 10.412 $\times 10^{-3}$ & 1.591\\
$2 \times 10^{-5}$ & \textbf{0.603} & 0.561 & 9.012 $\times 10^{-3}$ & 1.307\\
$5 \times 10^{-5}$ & 0.601 & 0.548 & 9.356 $\times 10^{-3}$ & 1.412 \\
$1 \times 10^{-4}$ & \textbf{0.603} & \textbf{0.572} & \textbf{8.853} $\times 10^{-3}$ & \textbf{1.251}\\
$2 \times 10^{-4}$ & 0.602 & 0.565 & 9.131 $\times 10^{-3}$ & 1.354\\
\hline
\end{tabular}
\vspace{-.3in}
\end{center}
\end{table}
\subsubsection{The choice of loss functions}
The proposed warping loss is defined on the identified topologically critical pixels; consequently, any pixel-wise loss function can be used as $L_{pixel}$ in the warping loss $L_{warp}$. In this section, we investigate how the choice of loss function affects performance. The quantitative results are shown in Tab.~\ref{losschoose}. Compared with the mean-square-error (MSE) loss and the dice loss, the cross-entropy (CE) loss achieves the best performance in terms of the topological metrics. On the other hand, all three choices perform better than the baseline method (row `w/o', standard U-Net), which further demonstrates the contribution of the proposed loss term.
\begin{table}[ht]
\vspace{-.05in}
\caption{Ablation study for the choice of loss function.}
\vspace{-.2in}
\label{losschoose}
\begin{center}
\small
\begin{tabular}{ccccc}
\hline
$L_{pixel}$ & DICE$\uparrow$ & ARI$\uparrow$ & Warping$\downarrow$ & Betti$\downarrow$\\
\hline\hline
w/o & 0.587 & 0.554 & 10.412 $\times 10^{-3}$ & 1.591\\
MSE & 0.598 & 0.556 & 9.853 $\times 10^{-3}$ & 1.429\\
Dice loss & \textbf{0.606} & 0.563 & 9.471 $\times 10^{-3}$ & 1.368\\
CE & 0.603 & \textbf{0.572} & \textbf{8.853} $\times 10^{-3}$ & \textbf{1.251}\\
\hline
\end{tabular}
\vspace{-.3in}
\end{center}
\end{table}
\subsubsection{The efficiency of the proposed loss}
In this section, we investigate the efficiency of the proposed method. Our warping algorithm consists of two parts: the distance transform and the sorting of the distance values.
The complexity of the distance transform is $O(n)$ for a 2D image, where $n = H \times W$ is the image size and $H, W$ are the height and width of the image. The complexity of the sorting is $O(m \log m)$, where $m$ is the number of misclassified pixels. Usually $m \ll n$, so the overall complexity of the warping algorithm is $O(n)$. As a comparison, \cite{hu2019topology} needs $O(n^3)$ time to compute the persistence diagram, and the computational complexities of~\cite{hu2021topology} and~\cite{shit2021cldice} are $O(n \log n)$ and $O(n)$, respectively.

The comparison in terms of complexity and training time is reported in Tab.~\ref{efficiency}. Note that for the proposed method and all the other baselines, we first train a simple/standard U-Net, and then add the additional loss terms to fine-tune the model obtained in the initial step; the reported training time covers only the fine-tuning step. The proposed method takes slightly longer to train than clDice, while achieving the best performance among all methods. As all the methods use the same backbone, their inference times are identical.
\begin{table}[ht]
\caption{Comparison of efficiency.}
\vspace{-.2in}
\label{efficiency}
\begin{center}
\begin{tabular}{ccc}
\hline
Method & Complexity & Training time\\
\hline\hline
TopoNet~\cite{hu2019topology} & $O(n^3)$ & $\approx 12h$\\
clDice~\cite{shit2021cldice} & $O(n)$ & $\approx$ \textbf{3h}\\
DMT~\cite{hu2021topology} & $O(n \log n)$ & $\approx 7h$\\
\textit{Warping} & $O(n)$ & $\approx 4h$\\
\hline
\end{tabular}
\vspace{-.3in}
\end{center}
\end{table}
\section{Conclusion}
In this paper, we propose a novel homotopy warping loss for learning to segment with better structural/topological accuracy. Through the homotopy warping strategy, we identify topologically critical pixels/locations, and the new loss is defined on these identified pixels/locations. Furthermore, we propose a novel strategy, Distance-Ordered Homotopy Warping, to efficiently identify the topologically erroneous locations based on the distance transform. Extensive experiments demonstrate the efficacy of the proposed method.
\clearpage
{\small
\bibliographystyle{ieee_fullname}
\section*{Appendices} \section{List of all symbols} \label{sec:listofsymbols} \subsection{Experiments} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash lhcb} & \mbox{LHCb}\xspace & \texttt{\textbackslash atlas} & \mbox{ATLAS}\xspace & \texttt{\textbackslash cms} & \mbox{CMS}\xspace \\ \texttt{\textbackslash alice} & \mbox{ALICE}\xspace & \texttt{\textbackslash babar} & \mbox{BaBar}\xspace & \texttt{\textbackslash belle} & \mbox{Belle}\xspace \\ \texttt{\textbackslash belletwo} & \mbox{Belle~II}\xspace & \texttt{\textbackslash besiii} & \mbox{BESIII}\xspace & \texttt{\textbackslash cleo} & \mbox{CLEO}\xspace \\ \texttt{\textbackslash cdf} & \mbox{CDF}\xspace & \texttt{\textbackslash dzero} & \mbox{D0}\xspace & \texttt{\textbackslash aleph} & \mbox{ALEPH}\xspace \\ \texttt{\textbackslash delphi} & \mbox{DELPHI}\xspace & \texttt{\textbackslash opal} & \mbox{OPAL}\xspace & \texttt{\textbackslash lthree} & \mbox{L3}\xspace \\ \texttt{\textbackslash sld} & \mbox{SLD}\xspace & \texttt{\textbackslash cern} & \mbox{CERN}\xspace & \texttt{\textbackslash lhc} & \mbox{LHC}\xspace \\ \texttt{\textbackslash lep} & \mbox{LEP}\xspace & \texttt{\textbackslash tevatron} & Tevatron\xspace & \texttt{\textbackslash bfactories} & \mbox{{\ensuremath{\PB}}\xspace Factories}\xspace \\ \texttt{\textbackslash bfactory} & \mbox{{\ensuremath{\PB}}\xspace Factory}\xspace & \texttt{\textbackslash upgradeone} & \mbox{Upgrade~I}\xspace & \texttt{\textbackslash upgradetwo} & \mbox{Upgrade~II}\xspace \\ \end{tabular*} \subsubsection{LHCb sub-detectors and sub-systems} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash velo} & VELO\xspace & \texttt{\textbackslash rich} & RICH\xspace & \texttt{\textbackslash richone} & RICH1\xspace \\ \texttt{\textbackslash richtwo} & RICH2\xspace & \texttt{\textbackslash ttracker} & TT\xspace & \texttt{\textbackslash intr} & IT\xspace \\ \texttt{\textbackslash st} & ST\xspace & \texttt{\textbackslash ot} & OT\xspace & \texttt{\textbackslash herschel} & \mbox{\textsc{HeRSCheL}}\xspace \\ \texttt{\textbackslash spd} & SPD\xspace & \texttt{\textbackslash presh} & PS\xspace & \texttt{\textbackslash ecal} & ECAL\xspace \\ \texttt{\textbackslash hcal} & HCAL\xspace & \texttt{\textbackslash MagUp} & \mbox{\em Mag\kern -0.05em Up}\xspace & \texttt{\textbackslash MagDown} & \mbox{\em MagDown}\xspace \\ \texttt{\textbackslash ode} & ODE\xspace & \texttt{\textbackslash daq} & DAQ\xspace & \texttt{\textbackslash tfc} & TFC\xspace \\ \texttt{\textbackslash ecs} & ECS\xspace & \texttt{\textbackslash lone} & L0\xspace & \texttt{\textbackslash hlt} & HLT\xspace \\ \texttt{\textbackslash hltone} & HLT1\xspace & \texttt{\textbackslash hlttwo} & HLT2\xspace & \\ \end{tabular*} \subsection{Particles} \subsubsection{Leptons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash electron} & {\ensuremath{\Pe}}\xspace & \texttt{\textbackslash en} & \en & \texttt{\textbackslash ep} & {\ensuremath{\Pe^+}}\xspace \\ \texttt{\textbackslash epm} & \epm & \texttt{\textbackslash emp} & \emp & \texttt{\textbackslash epem} & {\ensuremath{\Pe^+\Pe^-}}\xspace \\ \texttt{\textbackslash muon} & 
{\ensuremath{\Pmu}}\xspace & \texttt{\textbackslash mup} & {\ensuremath{\Pmu^+}}\xspace & \texttt{\textbackslash mun} & \mun \\ \texttt{\textbackslash mupm} & \mupm & \texttt{\textbackslash mump} & \mump & \texttt{\textbackslash mumu} & {\ensuremath{\Pmu^+\Pmu^-}}\xspace \\ \texttt{\textbackslash tauon} & {\ensuremath{\Ptau}}\xspace & \texttt{\textbackslash taup} & {\ensuremath{\Ptau^+}}\xspace & \texttt{\textbackslash taum} & {\ensuremath{\Ptau^-}}\xspace \\ \texttt{\textbackslash taupm} & {\ensuremath{\Ptau^\pm}}\xspace & \texttt{\textbackslash taump} & {\ensuremath{\Ptau^\mp}}\xspace & \texttt{\textbackslash tautau} & {\ensuremath{\Ptau^+\Ptau^-}}\xspace \\ \texttt{\textbackslash lepton} & {\ensuremath{\ell}}\xspace & \texttt{\textbackslash ellm} & {\ensuremath{\ell^-}}\xspace & \texttt{\textbackslash ellp} & {\ensuremath{\ell^+}}\xspace \\ \texttt{\textbackslash ellpm} & {\ensuremath{\ell^\pm}}\xspace & \texttt{\textbackslash ellmp} & {\ensuremath{\ell^\mp}}\xspace & \texttt{\textbackslash ellell} & \ensuremath{\ell^+ \ell^-}\xspace \\ \texttt{\textbackslash neu} & {\ensuremath{\Pnu}}\xspace & \texttt{\textbackslash neub} & {\ensuremath{\overline{\Pnu}}}\xspace & \texttt{\textbackslash neue} & {\ensuremath{\neu_e}}\xspace \\ \texttt{\textbackslash neueb} & {\ensuremath{\neub_e}}\xspace & \texttt{\textbackslash neum} & {\ensuremath{\neu_\mu}}\xspace & \texttt{\textbackslash neumb} & {\ensuremath{\neub_\mu}}\xspace \\ \texttt{\textbackslash neut} & {\ensuremath{\neu_\tau}}\xspace & \texttt{\textbackslash neutb} & {\ensuremath{\neub_\tau}}\xspace & \texttt{\textbackslash neul} & {\ensuremath{\neu_\ell}}\xspace \\ \texttt{\textbackslash neulb} & {\ensuremath{\neub_\ell}}\xspace & \\ \end{tabular*} \subsubsection{Gauge bosons and scalars} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash g} & {\ensuremath{\Pgamma}}\xspace & \texttt{\textbackslash H} & {\ensuremath{\PH^0}}\xspace & \texttt{\textbackslash Hp} & {\ensuremath{\PH^+}}\xspace \\ \texttt{\textbackslash Hm} & {\ensuremath{\PH^-}}\xspace & \texttt{\textbackslash Hpm} & {\ensuremath{\PH^\pm}}\xspace & \texttt{\textbackslash W} & {\ensuremath{\PW}}\xspace \\ \texttt{\textbackslash Wp} & {\ensuremath{\PW^+}}\xspace & \texttt{\textbackslash Wm} & {\ensuremath{\PW^-}}\xspace & \texttt{\textbackslash Wpm} & {\ensuremath{\PW^\pm}}\xspace \\ \texttt{\textbackslash Z} & {\ensuremath{\PZ}}\xspace & \\ \end{tabular*} \subsubsection{Quarks} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash quark} & {\ensuremath{\Pq}}\xspace & \texttt{\textbackslash quarkbar} & {\ensuremath{\overline \quark}}\xspace & \texttt{\textbackslash qqbar} & {\ensuremath{\quark\quarkbar}}\xspace \\ \texttt{\textbackslash uquark} & {\ensuremath{\Pu}}\xspace & \texttt{\textbackslash uquarkbar} & {\ensuremath{\overline \uquark}}\xspace & \texttt{\textbackslash uubar} & {\ensuremath{\uquark\uquarkbar}}\xspace \\ \texttt{\textbackslash dquark} & {\ensuremath{\Pd}}\xspace & \texttt{\textbackslash dquarkbar} & {\ensuremath{\overline \dquark}}\xspace & \texttt{\textbackslash ddbar} & {\ensuremath{\dquark\dquarkbar}}\xspace \\ \texttt{\textbackslash squark} & {\ensuremath{\Ps}}\xspace & \texttt{\textbackslash squarkbar} & {\ensuremath{\overline \squark}}\xspace & \texttt{\textbackslash ssbar} & 
{\ensuremath{\squark\squarkbar}}\xspace \\ \texttt{\textbackslash cquark} & {\ensuremath{\Pc}}\xspace & \texttt{\textbackslash cquarkbar} & {\ensuremath{\overline \cquark}}\xspace & \texttt{\textbackslash ccbar} & {\ensuremath{\cquark\cquarkbar}}\xspace \\ \texttt{\textbackslash bquark} & {\ensuremath{\Pb}}\xspace & \texttt{\textbackslash bquarkbar} & {\ensuremath{\overline \bquark}}\xspace & \texttt{\textbackslash bbbar} & {\ensuremath{\bquark\bquarkbar}}\xspace \\ \texttt{\textbackslash tquark} & {\ensuremath{\Pt}}\xspace & \texttt{\textbackslash tquarkbar} & {\ensuremath{\overline \tquark}}\xspace & \texttt{\textbackslash ttbar} & {\ensuremath{\tquark\tquarkbar}}\xspace \\ \end{tabular*} \subsubsection{Light mesons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash hadron} & {\ensuremath{\Ph}}\xspace & \texttt{\textbackslash pion} & {\ensuremath{\Ppi}}\xspace & \texttt{\textbackslash piz} & {\ensuremath{\pion^0}}\xspace \\ \texttt{\textbackslash pip} & {\ensuremath{\pion^+}}\xspace & \texttt{\textbackslash pim} & {\ensuremath{\pion^-}}\xspace & \texttt{\textbackslash pipm} & {\ensuremath{\pion^\pm}}\xspace \\ \texttt{\textbackslash pimp} & {\ensuremath{\pion^\mp}}\xspace & \texttt{\textbackslash rhomeson} & {\ensuremath{\Prho}}\xspace & \texttt{\textbackslash rhoz} & {\ensuremath{\rhomeson^0}}\xspace \\ \texttt{\textbackslash rhop} & {\ensuremath{\rhomeson^+}}\xspace & \texttt{\textbackslash rhom} & {\ensuremath{\rhomeson^-}}\xspace & \texttt{\textbackslash rhopm} & {\ensuremath{\rhomeson^\pm}}\xspace \\ \texttt{\textbackslash rhomp} & {\ensuremath{\rhomeson^\mp}}\xspace & \texttt{\textbackslash kaon} & {\ensuremath{\PK}}\xspace & \texttt{\textbackslash Kbar} & {\ensuremath{\offsetoverline{\PK}}}\xspace \\ \texttt{\textbackslash Kb} & {\ensuremath{\Kbar}}\xspace & \texttt{\textbackslash KorKbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \PK}{}\xspace & \texttt{\textbackslash Kz} & {\ensuremath{\kaon^0}}\xspace \\ \texttt{\textbackslash Kzb} & {\ensuremath{\Kbar{}^0}}\xspace & \texttt{\textbackslash Kp} & {\ensuremath{\kaon^+}}\xspace & \texttt{\textbackslash Km} & {\ensuremath{\kaon^-}}\xspace \\ \texttt{\textbackslash Kpm} & {\ensuremath{\kaon^\pm}}\xspace & \texttt{\textbackslash Kmp} & {\ensuremath{\kaon^\mp}}\xspace & \texttt{\textbackslash KS} & {\ensuremath{\kaon^0_{\mathrm{S}}}}\xspace \\ \texttt{\textbackslash Vzero} & {\ensuremath{V^0}}\xspace & \texttt{\textbackslash KL} & {\ensuremath{\kaon^0_{\mathrm{L}}}}\xspace & \texttt{\textbackslash Kstarz} & {\ensuremath{\kaon^{*0}}}\xspace \\ \texttt{\textbackslash Kstarzb} & {\ensuremath{\Kbar{}^{*0}}}\xspace & \texttt{\textbackslash Kstar} & {\ensuremath{\kaon^*}}\xspace & \texttt{\textbackslash Kstarb} & {\ensuremath{\Kbar{}^*}}\xspace \\ \texttt{\textbackslash Kstarp} & {\ensuremath{\kaon^{*+}}}\xspace & \texttt{\textbackslash Kstarm} & {\ensuremath{\kaon^{*-}}}\xspace & \texttt{\textbackslash Kstarpm} & {\ensuremath{\kaon^{*\pm}}}\xspace \\ \texttt{\textbackslash Kstarmp} & {\ensuremath{\kaon^{*\mp}}}\xspace & \texttt{\textbackslash KorKbarz} & \ensuremath{\KorKbar^0}\xspace & \texttt{\textbackslash etaz} & \ensuremath{\ensuremath{\upeta}\xspace}\xspace \\ \texttt{\textbackslash etapr} & \ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace & \texttt{\textbackslash phiz} & \ensuremath{\Pphi}\xspace & \texttt{\textbackslash omegaz} & \ensuremath{\Pomega}\xspace \\ \end{tabular*} 
\subsubsection{Charmed mesons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Dbar} & {\ensuremath{\offsetoverline{\PD}}}\xspace & \texttt{\textbackslash D} & {\ensuremath{\PD}}\xspace & \texttt{\textbackslash Db} & {\ensuremath{\Dbar}}\xspace \\ \texttt{\textbackslash DorDbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \PD}\xspace & \texttt{\textbackslash Dz} & {\ensuremath{\D^0}}\xspace & \texttt{\textbackslash Dzb} & {\ensuremath{\Dbar{}^0}}\xspace \\ \texttt{\textbackslash Dp} & {\ensuremath{\D^+}}\xspace & \texttt{\textbackslash Dm} & {\ensuremath{\D^-}}\xspace & \texttt{\textbackslash Dpm} & {\ensuremath{\D^\pm}}\xspace \\ \texttt{\textbackslash Dmp} & {\ensuremath{\D^\mp}}\xspace & \texttt{\textbackslash DpDm} & \ensuremath{\Dp {\kern -0.16em \Dm}}\xspace & \texttt{\textbackslash Dstar} & {\ensuremath{\D^*}}\xspace \\ \texttt{\textbackslash Dstarb} & {\ensuremath{\Dbar{}^*}}\xspace & \texttt{\textbackslash Dstarz} & {\ensuremath{\D^{*0}}}\xspace & \texttt{\textbackslash Dstarzb} & {\ensuremath{\Dbar{}^{*0}}}\xspace \\ \texttt{\textbackslash theDstarz} & {\ensuremath{\D^{*}(2007)^{0}}}\xspace & \texttt{\textbackslash theDstarzb} & {\ensuremath{\Dbar^{*}(2007)^{0}}}\xspace & \texttt{\textbackslash Dstarp} & {\ensuremath{\D^{*+}}}\xspace \\ \texttt{\textbackslash Dstarm} & {\ensuremath{\D^{*-}}}\xspace & \texttt{\textbackslash Dstarpm} & {\ensuremath{\D^{*\pm}}}\xspace & \texttt{\textbackslash Dstarmp} & {\ensuremath{\D^{*\mp}}}\xspace \\ \texttt{\textbackslash theDstarp} & {\ensuremath{\D^{*}(2010)^{+}}}\xspace & \texttt{\textbackslash theDstarm} & {\ensuremath{\D^{*}(2010)^{-}}}\xspace & \texttt{\textbackslash theDstarpm} & {\ensuremath{\D^{*}(2010)^{\pm}}}\xspace \\ \texttt{\textbackslash theDstarmp} & {\ensuremath{\D^{*}(2010)^{\mp}}}\xspace & \texttt{\textbackslash Ds} & {\ensuremath{\D^+_\squark}}\xspace & \texttt{\textbackslash Dsp} & {\ensuremath{\D^+_\squark}}\xspace \\ \texttt{\textbackslash Dsm} & {\ensuremath{\D^-_\squark}}\xspace & \texttt{\textbackslash Dspm} & {\ensuremath{\D^{\pm}_\squark}}\xspace & \texttt{\textbackslash Dsmp} & {\ensuremath{\D^{\mp}_\squark}}\xspace \\ \texttt{\textbackslash Dss} & {\ensuremath{\D^{*+}_\squark}}\xspace & \texttt{\textbackslash Dssp} & {\ensuremath{\D^{*+}_\squark}}\xspace & \texttt{\textbackslash Dssm} & {\ensuremath{\D^{*-}_\squark}}\xspace \\ \texttt{\textbackslash Dsspm} & {\ensuremath{\D^{*\pm}_\squark}}\xspace & \texttt{\textbackslash Dssmp} & {\ensuremath{\D^{*\mp}_\squark}}\xspace & \texttt{\textbackslash DporDsp} & {\ensuremath{\D_{(\squark)}^+}}\xspace \\ \texttt{\textbackslash DmorDsm} & {\ensuremath{\D{}_{(\squark)}^-}}\xspace & \texttt{\textbackslash DpmorDspm} & {\ensuremath{\D{}_{(\squark)}^\pm}}\xspace & \\ \end{tabular*} \subsubsection{Beauty mesons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash B} & {\ensuremath{\PB}}\xspace & \texttt{\textbackslash Bbar} & {\ensuremath{\offsetoverline{\PB}}}\xspace & \texttt{\textbackslash Bb} & {\ensuremath{\Bbar}}\xspace \\ \texttt{\textbackslash BorBbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \PB}\xspace & \texttt{\textbackslash Bz} & {\ensuremath{\B^0}}\xspace & \texttt{\textbackslash Bzb} & {\ensuremath{\Bbar{}^0}}\xspace \\ \texttt{\textbackslash Bd} & 
{\ensuremath{\B^0}}\xspace & \texttt{\textbackslash Bdb} & {\ensuremath{\Bbar{}^0}}\xspace & \texttt{\textbackslash BdorBdbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \Bd}\xspace \\ \texttt{\textbackslash Bu} & {\ensuremath{\B^+}}\xspace & \texttt{\textbackslash Bub} & {\ensuremath{\B^-}}\xspace & \texttt{\textbackslash Bp} & {\ensuremath{\Bu}}\xspace \\ \texttt{\textbackslash Bm} & {\ensuremath{\Bub}}\xspace & \texttt{\textbackslash Bpm} & {\ensuremath{\B^\pm}}\xspace & \texttt{\textbackslash Bmp} & {\ensuremath{\B^\mp}}\xspace \\ \texttt{\textbackslash Bs} & {\ensuremath{\B^0_\squark}}\xspace & \texttt{\textbackslash Bsb} & {\ensuremath{\Bbar{}^0_\squark}}\xspace & \texttt{\textbackslash BsorBsbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \Bs}\xspace \\ \texttt{\textbackslash Bc} & {\ensuremath{\B_\cquark^+}}\xspace & \texttt{\textbackslash Bcp} & {\ensuremath{\B_\cquark^+}}\xspace & \texttt{\textbackslash Bcm} & {\ensuremath{\B_\cquark^-}}\xspace \\ \texttt{\textbackslash Bcpm} & {\ensuremath{\B_\cquark^\pm}}\xspace & \texttt{\textbackslash Bds} & {\ensuremath{\B_{(\squark)}^0}}\xspace & \texttt{\textbackslash Bdsb} & {\ensuremath{\Bbar{}_{(\squark)}^0}}\xspace \\ \texttt{\textbackslash BdorBs} & \Bds & \texttt{\textbackslash BdorBsbar} & \Bdsb & \\ \end{tabular*} \subsubsection{Onia} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash jpsi} & {\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi}}}\xspace & \texttt{\textbackslash psitwos} & {\ensuremath{\Ppsi{(2S)}}}\xspace & \texttt{\textbackslash psiprpr} & {\ensuremath{\Ppsi(3770)}}\xspace \\ \texttt{\textbackslash etac} & {\ensuremath{\Peta_\cquark}}\xspace & \texttt{\textbackslash psires} & {\ensuremath{\Ppsi}}\xspace & \texttt{\textbackslash chic} & {\ensuremath{\Pchi_\cquark}}\xspace \\ \texttt{\textbackslash chiczero} & {\ensuremath{\Pchi_{\cquark 0}}}\xspace & \texttt{\textbackslash chicone} & {\ensuremath{\Pchi_{\cquark 1}}}\xspace & \texttt{\textbackslash chictwo} & {\ensuremath{\Pchi_{\cquark 2}}}\xspace \\ \texttt{\textbackslash chicJ} & {\ensuremath{\Pchi_{\cquark J}}}\xspace & \texttt{\textbackslash Upsilonres} & {\ensuremath{\PUpsilon}}\xspace & \texttt{\textbackslash OneS} & {\Y1S} \\ \texttt{\textbackslash TwoS} & {\Y2S} & \texttt{\textbackslash ThreeS} & {\Y3S} & \texttt{\textbackslash FourS} & {\Y4S} \\ \texttt{\textbackslash FiveS} & {\Y5S} & \texttt{\textbackslash chib} & {\ensuremath{\Pchi_{b}}}\xspace & \texttt{\textbackslash chibzero} & {\ensuremath{\Pchi_{\bquark 0}}}\xspace \\ \texttt{\textbackslash chibone} & {\ensuremath{\Pchi_{\bquark 1}}}\xspace & \texttt{\textbackslash chibtwo} & {\ensuremath{\Pchi_{\bquark 2}}}\xspace & \texttt{\textbackslash chibJ} & {\ensuremath{\Pchi_{\bquark J}}}\xspace \\ \texttt{\textbackslash theX} & {\ensuremath{\Pchi_{c1}(3872)}}\xspace & \\ \end{tabular*} \subsubsection{Light Baryons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash proton} & {\ensuremath{\Pp}}\xspace & \texttt{\textbackslash antiproton} & {\ensuremath{\overline \proton}}\xspace & \texttt{\textbackslash neutron} & {\ensuremath{\Pn}}\xspace \\ \texttt{\textbackslash antineutron} & {\ensuremath{\overline \neutron}}\xspace & \texttt{\textbackslash Deltares} & {\ensuremath{\PDelta}}\xspace & \texttt{\textbackslash 
Deltaresbar} & {\ensuremath{\overline \Deltares}}\xspace \\ \texttt{\textbackslash Lz} & {\ensuremath{\PLambda}}\xspace & \texttt{\textbackslash Lbar} & {\ensuremath{\offsetoverline{\PLambda}}}\xspace & \texttt{\textbackslash LorLbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \PLambda}\xspace \\ \texttt{\textbackslash Lambdares} & {\ensuremath{\PLambda}}\xspace & \texttt{\textbackslash Lambdaresbar} & {\ensuremath{\Lbar}}\xspace & \texttt{\textbackslash Sigmares} & {\ensuremath{\PSigma}}\xspace \\ \texttt{\textbackslash Sigmaz} & {\ensuremath{\Sigmares{}^0}}\xspace & \texttt{\textbackslash Sigmap} & {\ensuremath{\Sigmares{}^+}}\xspace & \texttt{\textbackslash Sigmam} & {\ensuremath{\Sigmares{}^-}}\xspace \\ \texttt{\textbackslash Sigmaresbar} & {\ensuremath{\offsetoverline{\Sigmares}}}\xspace & \texttt{\textbackslash Sigmabarz} & {\ensuremath{\Sigmaresbar{}^0}}\xspace & \texttt{\textbackslash Sigmabarp} & {\ensuremath{\Sigmaresbar{}^+}}\xspace \\ \texttt{\textbackslash Sigmabarm} & {\ensuremath{\Sigmaresbar{}^-}}\xspace & \texttt{\textbackslash Xires} & {\ensuremath{\PXi}}\xspace & \texttt{\textbackslash Xiz} & {\ensuremath{\Xires^0}}\xspace \\ \texttt{\textbackslash Xim} & {\ensuremath{\Xires^-}}\xspace & \texttt{\textbackslash Xiresbar} & {\ensuremath{\offsetoverline{\Xires}}}\xspace & \texttt{\textbackslash Xibarz} & {\ensuremath{\Xiresbar^0}}\xspace \\ \texttt{\textbackslash Xibarp} & {\ensuremath{\Xiresbar^+}}\xspace & \texttt{\textbackslash Omegares} & {\ensuremath{\POmega}}\xspace & \texttt{\textbackslash Omegaresbar} & {\ensuremath{\offsetoverline{\POmega}}}\xspace \\ \texttt{\textbackslash Omegam} & {\ensuremath{\Omegares^-}}\xspace & \texttt{\textbackslash Omegabarp} & {\ensuremath{\Omegaresbar^+}}\xspace & \\ \end{tabular*} \subsubsection{Charmed Baryons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Lc} & {\ensuremath{\Lz^+_\cquark}}\xspace & \texttt{\textbackslash Lcbar} & {\ensuremath{\Lbar{}^-_\cquark}}\xspace & \texttt{\textbackslash Xic} & {\ensuremath{\Xires_\cquark}}\xspace \\ \texttt{\textbackslash Xicz} & {\ensuremath{\Xires^0_\cquark}}\xspace & \texttt{\textbackslash Xicp} & {\ensuremath{\Xires^+_\cquark}}\xspace & \texttt{\textbackslash Xicbar} & {\ensuremath{\Xiresbar{}_\cquark}}\xspace \\ \texttt{\textbackslash Xicbarz} & {\ensuremath{\Xiresbar{}_\cquark^0}}\xspace & \texttt{\textbackslash Xicbarm} & {\ensuremath{\Xiresbar{}_\cquark^-}}\xspace & \texttt{\textbackslash Omegac} & {\ensuremath{\Omegares^0_\cquark}}\xspace \\ \texttt{\textbackslash Omegacbar} & {\ensuremath{\Omegaresbar{}_\cquark^0}}\xspace & \texttt{\textbackslash Xicc} & {\ensuremath{\Xires_{\cquark\cquark}}}\xspace & \texttt{\textbackslash Xiccbar} & {\ensuremath{\Xiresbar{}_{\cquark\cquark}}}\xspace \\ \texttt{\textbackslash Xiccp} & {\ensuremath{\Xires^+_{\cquark\cquark}}}\xspace & \texttt{\textbackslash Xiccpp} & {\ensuremath{\Xires^{++}_{\cquark\cquark}}}\xspace & \texttt{\textbackslash Xiccbarm} & {\ensuremath{\Xiresbar{}_{\cquark\cquark}^-}}\xspace \\ \texttt{\textbackslash Xiccbarmm} & {\ensuremath{\Xiresbar{}_{\cquark\cquark}^{--}}}\xspace & \texttt{\textbackslash Omegacc} & {\ensuremath{\Omegares^+_{\cquark\cquark}}}\xspace & \texttt{\textbackslash Omegaccbar} & {\ensuremath{\Omegaresbar{}_{\cquark\cquark}^-}}\xspace \\ \texttt{\textbackslash Omegaccc} & {\ensuremath{\Omegares^{++}_{\cquark\cquark\cquark}}}\xspace & \texttt{\textbackslash 
Omegacccbar} & {\ensuremath{\Omegaresbar{}_{\cquark\cquark\cquark}^{--}}}\xspace & \\ \end{tabular*} \subsubsection{Beauty Baryons} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Lb} & {\ensuremath{\Lz^0_\bquark}}\xspace & \texttt{\textbackslash Lbbar} & {\ensuremath{\Lbar{}^0_\bquark}}\xspace & \texttt{\textbackslash Sigmab} & {\ensuremath{\Sigmares_\bquark}}\xspace \\ \texttt{\textbackslash Sigmabp} & {\ensuremath{\Sigmares_\bquark^+}}\xspace & \texttt{\textbackslash Sigmabz} & {\ensuremath{\Sigmares_\bquark^0}}\xspace & \texttt{\textbackslash Sigmabm} & {\ensuremath{\Sigmares_\bquark^-}}\xspace \\ \texttt{\textbackslash Sigmabpm} & {\ensuremath{\Sigmares_\bquark^\pm}}\xspace & \texttt{\textbackslash Sigmabbar} & {\ensuremath{\Sigmaresbar_\bquark}}\xspace & \texttt{\textbackslash Sigmabbarp} & {\ensuremath{\Sigmaresbar_\bquark^+}}\xspace \\ \texttt{\textbackslash Sigmabbarz} & {\ensuremath{\Sigmaresbar_\bquark^0}}\xspace & \texttt{\textbackslash Sigmabbarm} & {\ensuremath{\Sigmaresbar_\bquark^-}}\xspace & \texttt{\textbackslash Sigmabbarpm} & {\ensuremath{\Sigmaresbar_\bquark^\pm}}\xspace \\ \texttt{\textbackslash Xib} & {\ensuremath{\Xires_\bquark}}\xspace & \texttt{\textbackslash Xibz} & {\ensuremath{\Xires^0_\bquark}}\xspace & \texttt{\textbackslash Xibm} & {\ensuremath{\Xires^-_\bquark}}\xspace \\ \texttt{\textbackslash Xibbar} & {\ensuremath{\Xiresbar{}_\bquark}}\xspace & \texttt{\textbackslash Xibbarz} & {\ensuremath{\Xiresbar{}_\bquark^0}}\xspace & \texttt{\textbackslash Xibbarp} & {\ensuremath{\Xiresbar{}_\bquark^+}}\xspace \\ \texttt{\textbackslash Omegab} & {\ensuremath{\Omegares^-_\bquark}}\xspace & \texttt{\textbackslash Omegabbar} & {\ensuremath{\Omegaresbar{}_\bquark^+}}\xspace & \\ \end{tabular*} \subsection{Physics symbols} \subsubsection{Decays} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash BF} & {\ensuremath{\mathcal{B}}}\xspace & \texttt{\textbackslash BR} & \BF & \texttt{\textbackslash BRvis} & {\ensuremath{\BR_{\mathrm{{vis}}}}} \\ \texttt{\textbackslash ra} & \ensuremath{\rightarrow}\xspace & \texttt{\textbackslash to} & \ensuremath{\rightarrow}\xspace & \\ \end{tabular*} \subsubsection{Lifetimes} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash tauBs} & {\ensuremath{\tau_{{\ensuremath{\B^0_\squark}}\xspace}}}\xspace & \texttt{\textbackslash tauBd} & {\ensuremath{\tau_{{\ensuremath{\B^0}}\xspace}}}\xspace & \texttt{\textbackslash tauBz} & {\ensuremath{\tau_{{\ensuremath{\B^0}}\xspace}}}\xspace \\ \texttt{\textbackslash tauBu} & {\ensuremath{\tau_{{\ensuremath{\Bu}}\xspace}}}\xspace & \texttt{\textbackslash tauDp} & {\ensuremath{\tau_{{\ensuremath{\D^+}}\xspace}}}\xspace & \texttt{\textbackslash tauDz} & {\ensuremath{\tau_{{\ensuremath{\D^0}}\xspace}}}\xspace \\ \texttt{\textbackslash tauL} & {\ensuremath{\tau_{\mathrm{ L}}}}\xspace & \texttt{\textbackslash tauH} & {\ensuremath{\tau_{\mathrm{ H}}}}\xspace & \\ \end{tabular*} \subsubsection{Masses} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash mBd} &
{\ensuremath{m_{{\ensuremath{\B^0}}\xspace}}}\xspace & \texttt{\textbackslash mBp} & {\ensuremath{m_{{\ensuremath{\Bu}}\xspace}}}\xspace & \texttt{\textbackslash mBs} & {\ensuremath{m_{{\ensuremath{\B^0_\squark}}\xspace}}}\xspace \\ \texttt{\textbackslash mBc} & {\ensuremath{m_{{\ensuremath{\B_\cquark^+}}\xspace}}}\xspace & \texttt{\textbackslash mLb} & {\ensuremath{m_{{\ensuremath{\Lz^0_\bquark}}\xspace}}}\xspace & \\ \end{tabular*} \subsubsection{EW theory, groups} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash grpsuthree} & {\ensuremath{\mathrm{SU}(3)}}\xspace & \texttt{\textbackslash grpsutw} & {\ensuremath{\mathrm{SU}(2)}}\xspace & \texttt{\textbackslash grpuone} & {\ensuremath{\mathrm{U}(1)}}\xspace \\ \texttt{\textbackslash ssqtw} & {\ensuremath{\sin^{2}\!\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash csqtw} & {\ensuremath{\cos^{2}\!\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash stw} & {\ensuremath{\sin\theta_{\mathrm{W}}}}\xspace \\ \texttt{\textbackslash ctw} & {\ensuremath{\cos\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash ssqtwef} & {\ensuremath{{\sin}^{2}\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash csqtwef} & {\ensuremath{{\cos}^{2}\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace \\ \texttt{\textbackslash stwef} & {\ensuremath{\sin\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash ctwef} & {\ensuremath{\cos\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash gv} & {\ensuremath{g_{\mbox{\tiny V}}}}\xspace \\ \texttt{\textbackslash ga} & {\ensuremath{g_{\mbox{\tiny A}}}}\xspace & \texttt{\textbackslash order} & {\ensuremath{\mathcal{O}}}\xspace & \texttt{\textbackslash ordalph} & {\ensuremath{\mathcal{O}(\alpha)}}\xspace \\ \texttt{\textbackslash ordalsq} & {\ensuremath{\mathcal{O}(\alpha^{2})}}\xspace & \texttt{\textbackslash ordalcb} & {\ensuremath{\mathcal{O}(\alpha^{3})}}\xspace & \\ \end{tabular*} \subsubsection{QCD parameters} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash as} & {\ensuremath{\alpha_s}}\xspace & \texttt{\textbackslash MSb} & {\ensuremath{\overline{\mathrm{MS}}}}\xspace & \texttt{\textbackslash lqcd} & {\ensuremath{\Lambda_{\mathrm{QCD}}}}\xspace \\ \texttt{\textbackslash qsq} & {\ensuremath{q^2}}\xspace & \\ \end{tabular*} \subsubsection{CKM, \boldmath {\ensuremath{C\!P}}\xspace violation} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash eps} & {\ensuremath{\varepsilon}}\xspace & \texttt{\textbackslash epsK} & {\ensuremath{\varepsilon_K}}\xspace & \texttt{\textbackslash epsB} & {\ensuremath{\varepsilon_B}}\xspace \\ \texttt{\textbackslash epsp} & {\ensuremath{\varepsilon^\prime_K}}\xspace & \texttt{\textbackslash CP} & {\ensuremath{C\!P}}\xspace & \texttt{\textbackslash CPT} & {\ensuremath{C\!PT}}\xspace \\ \texttt{\textbackslash T} & {\ensuremath{T}}\xspace & \texttt{\textbackslash rhobar} & {\ensuremath{\overline \rho}}\xspace & \texttt{\textbackslash etabar} & {\ensuremath{\overline \eta}}\xspace \\ \texttt{\textbackslash Vud} & {\ensuremath{V_{\uquark\dquark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vcd} & 
{\ensuremath{V_{\cquark\dquark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vtd} & {\ensuremath{V_{\tquark\dquark}^{\phantom{\ast}}}}\xspace \\ \texttt{\textbackslash Vus} & {\ensuremath{V_{\uquark\squark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vcs} & {\ensuremath{V_{\cquark\squark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vts} & {\ensuremath{V_{\tquark\squark}^{\phantom{\ast}}}}\xspace \\ \texttt{\textbackslash Vub} & {\ensuremath{V_{\uquark\bquark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vcb} & {\ensuremath{V_{\cquark\bquark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vtb} & {\ensuremath{V_{\tquark\bquark}^{\phantom{\ast}}}}\xspace \\ \texttt{\textbackslash Vuds} & {\ensuremath{V_{\uquark\dquark}^\ast}}\xspace & \texttt{\textbackslash Vcds} & {\ensuremath{V_{\cquark\dquark}^\ast}}\xspace & \texttt{\textbackslash Vtds} & {\ensuremath{V_{\tquark\dquark}^\ast}}\xspace \\ \texttt{\textbackslash Vuss} & {\ensuremath{V_{\uquark\squark}^\ast}}\xspace & \texttt{\textbackslash Vcss} & {\ensuremath{V_{\cquark\squark}^\ast}}\xspace & \texttt{\textbackslash Vtss} & {\ensuremath{V_{\tquark\squark}^\ast}}\xspace \\ \texttt{\textbackslash Vubs} & {\ensuremath{V_{\uquark\bquark}^\ast}}\xspace & \texttt{\textbackslash Vcbs} & {\ensuremath{V_{\cquark\bquark}^\ast}}\xspace & \texttt{\textbackslash Vtbs} & {\ensuremath{V_{\tquark\bquark}^\ast}}\xspace \\ \end{tabular*} \subsubsection{Oscillations} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash dm} & {\ensuremath{\Delta m}}\xspace & \texttt{\textbackslash dms} & {\ensuremath{\Delta m_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash dmd} & {\ensuremath{\Delta m_{{\ensuremath{\Pd}}\xspace}}}\xspace \\ \texttt{\textbackslash DG} & {\ensuremath{\Delta\Gamma}}\xspace & \texttt{\textbackslash DGs} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash DGd} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Pd}}\xspace}}}\xspace \\ \texttt{\textbackslash Gs} & {\ensuremath{\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash Gd} & {\ensuremath{\Gamma_{{\ensuremath{\Pd}}\xspace}}}\xspace & \texttt{\textbackslash MBq} & {\ensuremath{M_{{\ensuremath{\PB}}\xspace_{\ensuremath{\Pq}}\xspace}}}\xspace \\ \texttt{\textbackslash DGq} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Pq}}\xspace}}}\xspace & \texttt{\textbackslash Gq} & {\ensuremath{\Gamma_{{\ensuremath{\Pq}}\xspace}}}\xspace & \texttt{\textbackslash dmq} & {\ensuremath{\Delta m_{{\ensuremath{\Pq}}\xspace}}}\xspace \\ \texttt{\textbackslash GL} & {\ensuremath{\Gamma_{\mathrm{ L}}}}\xspace & \texttt{\textbackslash GH} & {\ensuremath{\Gamma_{\mathrm{ H}}}}\xspace & \texttt{\textbackslash DGsGs} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Ps}}\xspace}/\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace \\ \texttt{\textbackslash Delm} & {\mbox{$\Delta m $}}\xspace & \texttt{\textbackslash ACP} & {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace & \texttt{\textbackslash Adir} & {\ensuremath{{\mathcal{A}}^{\mathrm{ dir}}}}\xspace \\ \texttt{\textbackslash Amix} & {\ensuremath{{\mathcal{A}}^{\mathrm{ mix}}}}\xspace & \texttt{\textbackslash ADelta} & {\ensuremath{{\mathcal{A}}^\Delta}}\xspace & \texttt{\textbackslash phid} & {\ensuremath{\phi_{{\ensuremath{\Pd}}\xspace}}}\xspace \\ \texttt{\textbackslash sinphid} & {\ensuremath{\sin\!\phid}}\xspace & \texttt{\textbackslash 
phis} & {\ensuremath{\phi_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash betas} & {\ensuremath{\beta_{{\ensuremath{\Ps}}\xspace}}}\xspace \\ \texttt{\textbackslash sbetas} & {\ensuremath{\sigma(\beta_{{\ensuremath{\Ps}}\xspace})}}\xspace & \texttt{\textbackslash stbetas} & {\ensuremath{\sigma(2\beta_{{\ensuremath{\Ps}}\xspace})}}\xspace & \texttt{\textbackslash stphis} & {\ensuremath{\sigma(\phi_{{\ensuremath{\Ps}}\xspace})}}\xspace \\ \texttt{\textbackslash sinphis} & {\ensuremath{\sin\!\phis}}\xspace & \\ \end{tabular*} \subsubsection{Tagging} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash edet} & {\ensuremath{\varepsilon_{\mathrm{ det}}}}\xspace & \texttt{\textbackslash erec} & {\ensuremath{\varepsilon_{\mathrm{ rec/det}}}}\xspace & \texttt{\textbackslash esel} & {\ensuremath{\varepsilon_{\mathrm{ sel/rec}}}}\xspace \\ \texttt{\textbackslash etrg} & {\ensuremath{\varepsilon_{\mathrm{ trg/sel}}}}\xspace & \texttt{\textbackslash etot} & {\ensuremath{\varepsilon_{\mathrm{ tot}}}}\xspace & \texttt{\textbackslash mistag} & \ensuremath{\omega}\xspace \\ \texttt{\textbackslash wcomb} & \ensuremath{\omega^{\mathrm{comb}}}\xspace & \texttt{\textbackslash etag} & {\ensuremath{\varepsilon_{\mathrm{tag}}}}\xspace & \texttt{\textbackslash etagcomb} & {\ensuremath{\varepsilon_{\mathrm{tag}}^{\mathrm{comb}}}}\xspace \\ \texttt{\textbackslash effeff} & \ensuremath{\varepsilon_{\mathrm{eff}}}\xspace & \texttt{\textbackslash effeffcomb} & \ensuremath{\varepsilon_{\mathrm{eff}}^{\mathrm{comb}}}\xspace & \texttt{\textbackslash efftag} & {\ensuremath{\etag(1-2\omega)^2}}\xspace \\ \texttt{\textbackslash effD} & {\ensuremath{\etag D^2}}\xspace & \texttt{\textbackslash etagprompt} & {\ensuremath{\varepsilon_{\mathrm{ tag}}^{\mathrm{Pr}}}}\xspace & \texttt{\textbackslash etagLL} & {\ensuremath{\varepsilon_{\mathrm{ tag}}^{\mathrm{LL}}}}\xspace \\ \end{tabular*} \subsubsection{Key decay channels} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash BdToKstmm} & \decay{\Bd}{\Kstarz\mup\mun} & \texttt{\textbackslash BdbToKstmm} & \decay{\Bdb}{\Kstarzb\mup\mun} & \texttt{\textbackslash BsToJPsiPhi} & \decay{\Bs}{\jpsi\phi} \\ \texttt{\textbackslash BdToJPsiKst} & \decay{\Bd}{\jpsi\Kstarz} & \texttt{\textbackslash BdbToJPsiKst} & \decay{\Bdb}{\jpsi\Kstarzb} & \texttt{\textbackslash BsPhiGam} & \decay{\Bs}{\phi \g} \\ \texttt{\textbackslash BdKstGam} & \decay{\Bd}{\Kstarz \g} & \texttt{\textbackslash BTohh} & \decay{\B}{\Ph^+ \Ph'^-} & \texttt{\textbackslash BdTopipi} & \decay{\Bd}{\pip\pim} \\ \texttt{\textbackslash BdToKpi} & \decay{\Bd}{\Kp\pim} & \texttt{\textbackslash BsToKK} & \decay{\Bs}{\Kp\Km} & \texttt{\textbackslash BsTopiK} & \decay{\Bs}{\pip\Km} \\ \texttt{\textbackslash Cpipi} & \ensuremath{C_{\pip\pim}}\xspace & \texttt{\textbackslash Spipi} & \ensuremath{S_{\pip\pim}}\xspace & \texttt{\textbackslash CKK} & \ensuremath{C_{\Kp\Km}}\xspace \\ \texttt{\textbackslash SKK} & \ensuremath{S_{\Kp\Km}}\xspace & \texttt{\textbackslash ADGKK} & \ensuremath{A^{\DG}_{\Kp\Km}}\xspace & \\ \end{tabular*} \subsubsection{Rare decays} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash 
BdKstee} & \decay{\Bd}{\Kstarz\epem} & \texttt{\textbackslash BdbKstee} & \decay{\Bdb}{\Kstarzb\epem} & \texttt{\textbackslash bsll} & \decay{\bquark}{\squark \ell^+ \ell^-} \\ \texttt{\textbackslash AFB} & \ensuremath{A_{\mathrm{FB}}}\xspace & \texttt{\textbackslash FL} & \ensuremath{F_{\mathrm{L}}}\xspace & \texttt{\textbackslash AT\#1 \textbackslash AT2} & \AT2 \\ \texttt{\textbackslash btosgam} & \decay{\bquark}{\squark \g} & \texttt{\textbackslash btodgam} & \decay{\bquark}{\dquark \g} & \texttt{\textbackslash Bsmm} & \decay{\Bs}{\mup\mun} \\ \texttt{\textbackslash Bdmm} & \decay{\Bd}{\mup\mun} & \texttt{\textbackslash Bsee} & \decay{\Bs}{\epem} & \texttt{\textbackslash Bdee} & \decay{\Bd}{\epem} \\ \texttt{\textbackslash ctl} & \ensuremath{\cos{\theta_\ell}}\xspace & \texttt{\textbackslash ctk} & \ensuremath{\cos{\theta_K}}\xspace & \\ \end{tabular*} \subsubsection{Wilson coefficients and operators} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash C\#1 \textbackslash C9} & \C9 & \texttt{\textbackslash Cp\#1 \textbackslash Cp7} & \Cp7 & \texttt{\textbackslash Ceff\#1 \textbackslash Ceff9 } & \Ceff9 \\ \texttt{\textbackslash Cpeff\#1 \textbackslash Cpeff7} & \Cpeff7 & \texttt{\textbackslash Ope\#1 \textbackslash Ope2} & \Ope2 & \texttt{\textbackslash Opep\#1 \textbackslash Opep7} & \Opep7 \\ \end{tabular*} \subsubsection{Charm} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash xprime} & \ensuremath{x^{\prime}}\xspace & \texttt{\textbackslash yprime} & \ensuremath{y^{\prime}}\xspace & \texttt{\textbackslash ycp} & \ensuremath{y_{\CP}}\xspace \\ \texttt{\textbackslash agamma} & \ensuremath{A_{\Gamma}}\xspace & \texttt{\textbackslash dkpicf} & \decay{\Dz}{\Km\pip} & \\ \end{tabular*} \subsubsection{QM} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash bra[1] \textbackslash bra\{a\}} & \bra{a} & \texttt{\textbackslash ket[1] \textbackslash ket\{b\}} & \ket{b} & \texttt{\textbackslash braket[2] \textbackslash braket\{a\}\{b\}} & \braket{a}{b} \\ \end{tabular*} \subsection{Units (these macros add a small space in front)} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash unit[1] \textbackslash unit\{kg\} } & \unit{kg} & \\ \end{tabular*} \subsubsection{Energy and momentum } \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash tev} & \aunit{Te\kern -0.1em V}\xspace & \texttt{\textbackslash gev} & \aunit{Ge\kern -0.1em V}\xspace & \texttt{\textbackslash mev} & \aunit{Me\kern -0.1em V}\xspace \\ \texttt{\textbackslash kev} & \aunit{ke\kern -0.1em V}\xspace & \texttt{\textbackslash ev} & \aunit{e\kern -0.1em V}\xspace & \texttt{\textbackslash gevgev} & \gevgev \\ \texttt{\textbackslash mevc} & \ensuremath{\aunit{Me\kern -0.1em V\!/}c}\xspace & \texttt{\textbackslash gevc} & \ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace & \texttt{\textbackslash mevcc} & \ensuremath{\aunit{Me\kern -0.1em 
V\!/}c^2}\xspace \\ \texttt{\textbackslash gevcc} & \ensuremath{\aunit{Ge\kern -0.1em V\!/}c^2}\xspace & \texttt{\textbackslash gevgevcc} & \gevgevcc & \texttt{\textbackslash gevgevcccc} & \gevgevcccc \\ \end{tabular*} \subsubsection{Distance and area (these macros add a small space)} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash km} & \aunit{km}\xspace & \texttt{\textbackslash m} & \aunit{m}\xspace & \texttt{\textbackslash ma} & \ensuremath{\aunit{m}^2}\xspace \\ \texttt{\textbackslash cm} & \aunit{cm}\xspace & \texttt{\textbackslash cma} & \ensuremath{\aunit{cm}^2}\xspace & \texttt{\textbackslash mm} & \aunit{mm}\xspace \\ \texttt{\textbackslash mma} & \ensuremath{\aunit{mm}^2}\xspace & \texttt{\textbackslash mum} & \ensuremath{\,\upmu\nospaceunit{m}}\xspace & \texttt{\textbackslash muma} & \ensuremath{\,\upmu\nospaceunit{m}^2}\xspace \\ \texttt{\textbackslash nm} & \aunit{nm}\xspace & \texttt{\textbackslash fm} & \aunit{fm}\xspace & \texttt{\textbackslash barn} & \aunit{b}\xspace \\ \texttt{\textbackslash mbarn} & \aunit{mb}\xspace & \texttt{\textbackslash mub} & \ensuremath{\,\upmu\nospaceunit{b}}\xspace & \texttt{\textbackslash nb} & \aunit{nb}\xspace \\ \texttt{\textbackslash invnb} & \ensuremath{\nb^{-1}}\xspace & \texttt{\textbackslash pb} & \aunit{pb}\xspace & \texttt{\textbackslash invpb} & \ensuremath{\pb^{-1}}\xspace \\ \texttt{\textbackslash fb} & \ensuremath{\aunit{fb}}\xspace & \texttt{\textbackslash invfb} & \ensuremath{\fb^{-1}}\xspace & \texttt{\textbackslash ab} & \ensuremath{\aunit{ab}}\xspace \\ \texttt{\textbackslash invab} & \ensuremath{\ab^{-1}}\xspace & \\ \end{tabular*} \subsubsection{Time } \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash sec} & \ensuremath{\aunit{s}}\xspace & \texttt{\textbackslash ms} & \ensuremath{\aunit{ms}}\xspace & \texttt{\textbackslash mus} & \ensuremath{\,\upmu\nospaceunit{s}}\xspace \\ \texttt{\textbackslash ns} & \ensuremath{\aunit{ns}}\xspace & \texttt{\textbackslash ps} & \ensuremath{\aunit{ps}}\xspace & \texttt{\textbackslash fs} & \aunit{fs} \\ \texttt{\textbackslash mhz} & \ensuremath{\aunit{MHz}}\xspace & \texttt{\textbackslash khz} & \ensuremath{\aunit{kHz}}\xspace & \texttt{\textbackslash hz} & \ensuremath{\aunit{Hz}}\xspace \\ \texttt{\textbackslash invps} & \ensuremath{\ps^{-1}}\xspace & \texttt{\textbackslash invns} & \ensuremath{\ns^{-1}}\xspace & \texttt{\textbackslash yr} & \aunit{yr}\xspace \\ \texttt{\textbackslash hr} & \aunit{hr}\xspace & \\ \end{tabular*} \subsubsection{Temperature} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash degc} & \ensuremath{^\circ}{\text{C}}\xspace & \texttt{\textbackslash degk} & \aunit{K}\xspace & \\ \end{tabular*} \subsubsection{Material lengths, radiation} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Xrad} & \ensuremath{X_0}\xspace & \texttt{\textbackslash NIL} & \ensuremath{\lambda_{\rm int}}\xspace & \texttt{\textbackslash mip} & MIP\xspace \\ \texttt{\textbackslash neutroneq} & \ensuremath{n_\nospaceunit{eq}}\xspace 
& \texttt{\textbackslash neqcmcm} & \ensuremath{\neutroneq/\nospaceunit{cm}^2}\xspace & \texttt{\textbackslash kRad} & \aunit{kRad}\xspace \\ \texttt{\textbackslash MRad} & \aunit{MRad}\xspace & \texttt{\textbackslash ci} & \aunit{Ci}\xspace & \texttt{\textbackslash mci} & \aunit{mCi}\xspace \\ \end{tabular*} \subsubsection{Uncertainties} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash sx} & \sx & \texttt{\textbackslash sy} & \sy & \texttt{\textbackslash sz} & \sz \\ \texttt{\textbackslash stat} & \aunit{(stat)}\xspace & \texttt{\textbackslash syst} & \aunit{(syst)}\xspace & \texttt{\textbackslash lumi} & \aunit{(lumi)}\xspace \\ \end{tabular*} \subsubsection{Maths} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash order} & {\ensuremath{\mathcal{O}}}\xspace & \texttt{\textbackslash chisq} & \ensuremath{\chi^2}\xspace & \texttt{\textbackslash chisqndf} & \ensuremath{\chi^2/\mathrm{ndf}}\xspace \\ \texttt{\textbackslash chisqip} & \ensuremath{\chi^2_{\text{IP}}}\xspace & \texttt{\textbackslash chisqvs} & \ensuremath{\chi^2_{\text{VS}}}\xspace & \texttt{\textbackslash chisqvtx} & \ensuremath{\chi^2_{\text{vtx}}}\xspace \\ \texttt{\textbackslash chisqvtxndf} & \ensuremath{\chi^2_{\text{vtx}}/\mathrm{ndf}}\xspace & \texttt{\textbackslash deriv} & \ensuremath{\mathrm{d}} & \texttt{\textbackslash gsim} & \gsim \\ \texttt{\textbackslash lsim} & \lsim & \texttt{\textbackslash mean[1] \textbackslash mean\{x\}} & \mean{x} & \texttt{\textbackslash abs[1] \textbackslash abs\{x\}} & \abs{x} \\ \texttt{\textbackslash Real} & \ensuremath{\mathcal{R}e}\xspace & \texttt{\textbackslash Imag} & \ensuremath{\mathcal{I}m}\xspace & \texttt{\textbackslash PDF} & PDF\xspace \\ \texttt{\textbackslash sPlot} & \mbox{\em sPlot}\xspace & \texttt{\textbackslash sFit} & \mbox{\em sFit}\xspace & \\ \end{tabular*} \subsection{Kinematics} \subsubsection{Energy, Momenta} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash Ebeam} & \ensuremath{E_{\mbox{\tiny BEAM}}}\xspace & \texttt{\textbackslash sqs} & \ensuremath{\protect\sqrt{s}}\xspace & \texttt{\textbackslash sqsnn} & \ensuremath{\protect\sqrt{s_{\scriptscriptstyle\text{NN}}}}\xspace \\ \texttt{\textbackslash pt} & \ensuremath{p_{\mathrm{T}}}\xspace & \texttt{\textbackslash ptsq} & \ensuremath{p_{\mathrm{T}}^2}\xspace & \texttt{\textbackslash ptot} & \ensuremath{p}\xspace \\ \texttt{\textbackslash et} & \ensuremath{E_{\mathrm{T}}}\xspace & \texttt{\textbackslash mt} & \ensuremath{M_{\mathrm{T}}}\xspace & \texttt{\textbackslash dpp} & \ensuremath{\Delta p/p}\xspace \\ \texttt{\textbackslash msq} & \ensuremath{m^2}\xspace & \texttt{\textbackslash dedx} & \ensuremath{\mathrm{d}\hspace{-0.1em}E/\mathrm{d}x}\xspace & \\ \end{tabular*} \subsubsection{PID} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash dllkpi} & \ensuremath{\mathrm{DLL}_{\kaon\pion}}\xspace & \texttt{\textbackslash dllppi} & \ensuremath{\mathrm{DLL}_{\proton\pion}}\xspace & \texttt{\textbackslash dllepi} & 
\ensuremath{\mathrm{DLL}_{\electron\pion}}\xspace \\ \texttt{\textbackslash dllmupi} & \ensuremath{\mathrm{DLL}_{\muon\pi}}\xspace & \\ \end{tabular*} \subsubsection{Geometry} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash degrees} & \ensuremath{^{\circ}}\xspace & \texttt{\textbackslash murad} & \ensuremath{\,\upmu\nospaceunit{rad}}\xspace & \texttt{\textbackslash mrad} & \aunit{mrad}\xspace \\ \texttt{\textbackslash rad} & \aunit{rad}\xspace & \\ \end{tabular*} \subsubsection{Accelerator} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash betastar} & \ensuremath{\beta^*} & \texttt{\textbackslash lum} & \lum & \texttt{\textbackslash intlum[1] \textbackslash intlum\{2 \,\ensuremath{\fb^{-1}}\xspace\}} & \intlum{2 \,\ensuremath{\fb^{-1}}\xspace} \\ \end{tabular*} \subsection{Software} \subsubsection{Programs} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash bcvegpy} & \mbox{\textsc{Bcvegpy}}\xspace & \texttt{\textbackslash boole} & \mbox{\textsc{Boole}}\xspace & \texttt{\textbackslash brunel} & \mbox{\textsc{Brunel}}\xspace \\ \texttt{\textbackslash davinci} & \mbox{\textsc{DaVinci}}\xspace & \texttt{\textbackslash dirac} & \mbox{\textsc{Dirac}}\xspace & \texttt{\textbackslash evtgen} & \mbox{\textsc{EvtGen}}\xspace \\ \texttt{\textbackslash fewz} & \mbox{\textsc{Fewz}}\xspace & \texttt{\textbackslash fluka} & \mbox{\textsc{Fluka}}\xspace & \texttt{\textbackslash ganga} & \mbox{\textsc{Ganga}}\xspace \\ \texttt{\textbackslash gaudi} & \mbox{\textsc{Gaudi}}\xspace & \texttt{\textbackslash gauss} & \mbox{\textsc{Gauss}}\xspace & \texttt{\textbackslash geant} & \mbox{\textsc{Geant4}}\xspace \\ \texttt{\textbackslash hepmc} & \mbox{\textsc{HepMC}}\xspace & \texttt{\textbackslash herwig} & \mbox{\textsc{Herwig}}\xspace & \texttt{\textbackslash moore} & \mbox{\textsc{Moore}}\xspace \\ \texttt{\textbackslash neurobayes} & \mbox{\textsc{NeuroBayes}}\xspace & \texttt{\textbackslash photos} & \mbox{\textsc{Photos}}\xspace & \texttt{\textbackslash powheg} & \mbox{\textsc{Powheg}}\xspace \\ \texttt{\textbackslash pythia} & \mbox{\textsc{Pythia}}\xspace & \texttt{\textbackslash resbos} & \mbox{\textsc{ResBos}}\xspace & \texttt{\textbackslash roofit} & \mbox{\textsc{RooFit}}\xspace \\ \texttt{\textbackslash root} & \mbox{\textsc{Root}}\xspace & \texttt{\textbackslash spice} & \mbox{\textsc{Spice}}\xspace & \texttt{\textbackslash urania} & \mbox{\textsc{Urania}}\xspace \\ \end{tabular*} \subsubsection{Languages} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash cpp} & \mbox{\textsc{C\raisebox{0.1em}{{\footnotesize{++}}}}}\xspace & \texttt{\textbackslash ruby} & \mbox{\textsc{Ruby}}\xspace & \texttt{\textbackslash fortran} & \mbox{\textsc{Fortran}}\xspace \\ \texttt{\textbackslash svn} & \mbox{\textsc{svn}}\xspace & \texttt{\textbackslash git} & \mbox{\textsc{git}}\xspace & \texttt{\textbackslash latex} & \mbox{\LaTeX}\xspace \\ \end{tabular*} \subsubsection{Data processing} 
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash kbit} & \aunit{kbit}\xspace & \texttt{\textbackslash kbps} & \aunit{kbit/s}\xspace & \texttt{\textbackslash kbytes} & \aunit{kB}\xspace \\ \texttt{\textbackslash kbyps} & \aunit{kB/s}\xspace & \texttt{\textbackslash mbit} & \aunit{Mbit}\xspace & \texttt{\textbackslash mbps} & \aunit{Mbit/s}\xspace \\ \texttt{\textbackslash mbytes} & \aunit{MB}\xspace & \texttt{\textbackslash mbyps} & \aunit{MB/s}\xspace & \texttt{\textbackslash gbit} & \aunit{Gbit}\xspace \\ \texttt{\textbackslash gbps} & \aunit{Gbit/s}\xspace & \texttt{\textbackslash gbytes} & \aunit{GB}\xspace & \texttt{\textbackslash gbyps} & \aunit{GB/s}\xspace \\ \texttt{\textbackslash tbit} & \aunit{Tbit}\xspace & \texttt{\textbackslash tbps} & \aunit{Tbit/s}\xspace & \texttt{\textbackslash tbytes} & \aunit{TB}\xspace \\ \texttt{\textbackslash tbyps} & \aunit{TB/s}\xspace & \texttt{\textbackslash dst} & \ensuremath{D^{\ast}}\xspace & \\ \end{tabular*} \subsection{Detector related} \subsubsection{Detector technologies} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash nonn} & \ensuremath{\mathrm{{ \mathit{n^+}} \mbox{-} on\mbox{-}{ \mathit{n}}}}\xspace & \texttt{\textbackslash ponn} & \ensuremath{\mathrm{{ \mathit{p^+}} \mbox{-} on\mbox{-}{ \mathit{n}}}}\xspace & \texttt{\textbackslash nonp} & \ensuremath{\mathrm{{ \mathit{n^+}} \mbox{-} on\mbox{-}{ \mathit{p}}}}\xspace \\ \texttt{\textbackslash cvd} & CVD\xspace & \texttt{\textbackslash mwpc} & MWPC\xspace & \texttt{\textbackslash gem} & GEM\xspace \\ \end{tabular*} \subsubsection{Detector components, electronics} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash tell1} & TELL1\xspace & \texttt{\textbackslash ukl1} & UKL1\xspace & \texttt{\textbackslash beetle} & Beetle\xspace \\ \texttt{\textbackslash otis} & OTIS\xspace & \texttt{\textbackslash croc} & CROC\xspace & \texttt{\textbackslash carioca} & CARIOCA\xspace \\ \texttt{\textbackslash dialog} & DIALOG\xspace & \texttt{\textbackslash sync} & SYNC\xspace & \texttt{\textbackslash cardiac} & CARDIAC\xspace \\ \texttt{\textbackslash gol} & GOL\xspace & \texttt{\textbackslash vcsel} & VCSEL\xspace & \texttt{\textbackslash ttc} & TTC\xspace \\ \texttt{\textbackslash ttcrx} & TTCrx\xspace & \texttt{\textbackslash hpd} & HPD\xspace & \texttt{\textbackslash pmt} & PMT\xspace \\ \texttt{\textbackslash specs} & SPECS\xspace & \texttt{\textbackslash elmb} & ELMB\xspace & \texttt{\textbackslash fpga} & FPGA\xspace \\ \texttt{\textbackslash plc} & PLC\xspace & \texttt{\textbackslash rasnik} & RASNIK\xspace & \texttt{\textbackslash elmb} & ELMB\xspace \\ \texttt{\textbackslash can} & CAN\xspace & \texttt{\textbackslash lvds} & LVDS\xspace & \texttt{\textbackslash ntc} & NTC\xspace \\ \texttt{\textbackslash adc} & ADC\xspace & \texttt{\textbackslash led} & LED\xspace & \texttt{\textbackslash ccd} & CCD\xspace \\ \texttt{\textbackslash hv} & HV\xspace & \texttt{\textbackslash lv} & LV\xspace & \texttt{\textbackslash pvss} & PVSS\xspace \\ \texttt{\textbackslash cmos} & CMOS\xspace & \texttt{\textbackslash fifo} & FIFO\xspace & \texttt{\textbackslash ccpc} & CCPC\xspace \\ 
\end{tabular*} \subsubsection{Chemical symbols} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash cfourften} & \ensuremath{\mathrm{ C_4 F_{10}}}\xspace & \texttt{\textbackslash cffour} & \ensuremath{\mathrm{ CF_4}}\xspace & \texttt{\textbackslash cotwo} & \cotwo \\ \texttt{\textbackslash csixffouteen} & \csixffouteen & \texttt{\textbackslash mgftwo} & \mgftwo & \texttt{\textbackslash siotwo} & \siotwo \\ \end{tabular*} \subsection{Special Text } \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash eg} & \mbox{\itshape e.g.}\xspace & \texttt{\textbackslash ie} & \mbox{\itshape i.e.}\xspace & \texttt{\textbackslash etal} & \mbox{\itshape et al.}\xspace \\ \texttt{\textbackslash etc} & \mbox{\itshape etc.}\xspace & \texttt{\textbackslash cf} & \mbox{\itshape cf.}\xspace & \texttt{\textbackslash ffp} & \mbox{\itshape ff.}\xspace \\ \texttt{\textbackslash vs} & \mbox{\itshape vs.}\xspace & \\ \end{tabular*} \subsubsection{Helpful to align numbers in tables} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l} \texttt{\textbackslash phz} & \phantom{0} & \\ \end{tabular*} \section*{Supplemental Material } \label{sec:supplemental} This section contains the additional figures mentioned in the main text. Figures~\ref{fig:signalplot_run1} and \ref{fig:signalplot_run2} present the same data shown in Figure 1 in the main body, but for all the BDT intervals. Figures~\ref{fig:signalplot_uncut_run1} and \ref{fig:signalplot_uncut_run2} show the same distributions and subdivisions as the previous plots, but without restricting each variable to the signal region of the other, \mbox{\itshape i.e.}\xspace they contain the full data sample used for the signal search. Figure~\ref{fig:2dplot} shows the data in the two-dimensional plane of the \ensuremath{m(\mumu)}\xspace and \ensuremath{\Delta m}\xspace variables, as well as the signal regions for each variable. Figure~\ref{fig:bdtcalib} displays the result of the calibration of the BDT output described in the text. Figure~\ref{fig:pidcalib} shows the test of the particle identification variable mentioned in the main body. Finally, Figure~\ref{fig:cls} shows the value of the $\rm{CL_s}$ estimator used to compute the upper limit on the \dmumu branching fraction, as a function of the branching fraction itself. \begin{figure}[!bp] \includegraphics[width = 0.5\textwidth]{figs/Fig3a.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig3b.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig3c.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig3d.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig3e.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig3f.pdf} \caption{Distributions of (left) \ensuremath{m(\mumu)}\xspace and (right) \ensuremath{\Delta m}\xspace for the \dmumu candidates for Run~1\xspace data in, top to bottom, the three BDT intervals. The result of the fit to the data is superimposed on each distribution. Each distribution is shown for candidates in the signal region of the other variable; see text for details. Untagged and tagged decays are included in a single component for signal and \dpipi background.
}\label{fig:signalplot_run1} \end{figure} \begin{figure}[btp] \includegraphics[width = 0.5\textwidth]{figs/Fig4a.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig4b.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig4c.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig4d.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig4e.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig4f.pdf} \caption{Distributions of (left) \ensuremath{m(\mumu)}\xspace and (right) \ensuremath{\Delta m}\xspace for the \dmumu candidates for Run~2\xspace data in, top to bottom, the three BDT intervals. The result of the fit to the data is superimposed on each distribution. Each distribution is shown for candidates in the signal region of the other variable; see text for details. Untagged and tagged decays are included in a single component for signal and \dpipi background. }\label{fig:signalplot_run2} \end{figure} \begin{figure}[bp] \includegraphics[width = 0.5\textwidth]{figs/Fig5a.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig5b.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig5c.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig5d.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig5e.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig5f.pdf} \caption{Distributions of (left) \ensuremath{m(\mumu)}\xspace and (right) \ensuremath{\Delta m}\xspace for the \dmumu candidates for Run~1 data in, top to bottom, the three BDT intervals. The result of the fit to the data is superimposed on each distribution. Untagged and tagged decays are included in a single component for signal and \dpipi background. Unlike the corresponding figures in the main body of the Letter, here all events are shown. }\label{fig:signalplot_uncut_run1} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{figs/Fig6a.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig6b.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig6c.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig6d.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig6e.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig6f.pdf} \caption{Distributions of (left) \ensuremath{m(\mumu)}\xspace and (right) \ensuremath{\Delta m}\xspace for the \dmumu candidates for Run~2 data in, top to bottom, the three BDT intervals. The result of the fit to the data is superimposed on each distribution. Untagged and tagged decays are included in a single component for signal and \dpipi background. Unlike the corresponding figures in the main body of the Letter, here all events are shown. }\label{fig:signalplot_uncut_run2} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{figs/Fig7a.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig7b.pdf} \caption{Two-dimensional distribution of \ensuremath{m(\Dz)}\xspace versus \ensuremath{\Delta m}\xspace for the most sensitive BDT interval in the (left) Run~1 and (right) Run~2 data. Red lines represent the signal region of each variable as defined in the text. }\label{fig:2dplot} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{figs/Fig8a.pdf} \includegraphics[width = 0.5\textwidth]{figs/Fig8b.pdf} \caption{Distribution of the BDT output. The \dpipi candidates in data and simulation are shown; they are used to weight the distribution of the simulated \dmumu decays used for the search.
}\label{fig:bdtcalib} \end{figure} \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{figs/Fig9.pdf} \caption{Efficiency for two pions to pass a requirement on the \texttt{ProbNNmu} variable (after additional requirements on its identification as a muon) in Run~2, as measured from \dpipipi and \dspipipi decays in data and \dpipi decays in simulation. Same-sign pions are used in data to avoid contamination from hadronic resonances decaying to two real muons.}\label{fig:pidcalib} \end{figure} \begin{figure} \begin{center} \includegraphics[width = 0.7\textwidth]{figs/Fig10.pdf} \end{center} \caption{Value of $\rm{CL_s}$ as a function of the \dmumu branching fraction. } \label{fig:cls} \end{figure}
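To make explicit how the upper limit is obtained from the curve in Figure~\ref{fig:cls}, the following minimal Python sketch interpolates a scanned $\rm{CL_s}$ curve and returns the branching fraction at which it crosses a chosen confidence level. The function and array names are illustrative placeholders only; no values from the analysis are reproduced here.

\begin{verbatim}
import numpy as np

def cls_upper_limit(bf_grid, cls_values, alpha):
    """Branching fraction at which the CLs curve crosses alpha.

    bf_grid    : scanned branching-fraction hypotheses, increasing
    cls_values : CLs at each hypothesis, decreasing along bf_grid
    alpha      : 1 - CL, e.g. 0.1 for a 90% CL or 0.05 for a 95% CL limit
    """
    # np.interp requires increasing x values, so the monotonically
    # decreasing CLs curve is reversed before interpolating.
    return float(np.interp(alpha, cls_values[::-1], bf_grid[::-1]))
\end{verbatim}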
\section{I. Phonon calculations}\label{sec:ph} To calculate thermal lattice disorder, we use the ``force constant approach'' \cite{Kresse:epl95s,vanGelderen:prb03s}. In this method, a supercell of the material making up the scattering region is constructed, the central atom is displaced by a small amount ${\bm \delta}$ and the forces induced on all of the atoms in the supercell by this displacement are calculated. As long as the displacement is sufficiently small, the forces are linear in ${\bm \delta}$, allowing them to be differentiated numerically to form the second derivatives of the energy, i.e., the force constant matrix. By constructing Bloch sums of the force constant matrix for arbitrary wave vector $\mathbf q$, we obtain the dynamical matrix. Because of the strong screening in metals, the force field induced by displacing a central atom is short ranged. Using elements of the force constant matrix calculated with a $5\times5\times5$ supercell yields well converged phonon dispersion relations. In practice, we use the \textsc{quantum espresso} density functional theory code \cite{Giannozzi:jpcm09s} based on plane waves and pseudopotentials in combination with the Perdew-Burke-Ernzerhof form of the generalized gradient approximation to the exchange-correlation energy \cite{Perdew:prl96s}. The experimental lattice constants were used for all metals. The calculated phonon dispersions for Cu, Pd and Pt are shown in Fig.~\ref{fig:s1}. The measured phonon spectra \cite{Schober:81s} are included for comparison and good agreement between experiment and calculation is found in all three cases. The Cu and Pd phonons were calculated without spin-orbit coupling (SOC). For Pt, we calculated the phonons with (red solid lines) and without (blue dashed lines) SOC. Because the interatomic forces are mainly determined by the Coulomb interaction between electrons and nuclei \cite{Baroni:rmp01s} and the main features of the electronic energy bands are not changed by including SOC, the two sets of results are seen in the figure to lie on top of one another. \begin{figure}[b] \includegraphics[width=1\columnwidth]{figs1} \caption{Calculated phonon spectra for Cu, Pd and Pt along high-symmetry directions in the fcc Brillouin zone. The experimental data (black dots) are shown for comparison \cite{Schober:81s}. The calculated phonon dispersions of Pt with (solid lines) and without SOC (dashed lines) are nearly the same, indicating that SOC has very little effect on the phonon modes.} \label{fig:s1} \end{figure} We checked that density functional perturbation theory \cite{Baroni:prl87s} yields the same phonon modes as the force constant approach. The very good agreement between the calculated and measured phonon dispersions seen in Fig.~\ref{fig:s1} indicates that the harmonic approximation upon which both theoretical methods are based captures the most important physics. Should it be necessary to generate correlated lattice disorder including anharmonic effects as input to a transport calculation, first-principles molecular dynamics calculations could be used. \section{II. Populating phonons and magnons} Having obtained the phonon energies $\omega_{s\mathbf q}$ and polarization vectors $\bm{\varepsilon}_{s\mathbf q}$ by diagonalizing the dynamical matrix, we are able to populate the phonons in a supercell to generate a configuration of lattice disorder for a chosen temperature. Here $s$ denotes a particular normal mode.
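To make the construction explicit, the following minimal Python sketch performs the Bloch sum of the calculated real-space force constants for a monatomic lattice, diagonalizes the resulting dynamical matrix, and superposes the thermally populated modes as in Eq.~(\ref{eq:displace}) below. All names, array shapes and the convention $\hbar = k_B = 1$ are illustrative assumptions; this is not the production code used for the calculations reported here, and the zero-point term in the mode energy is included for definiteness.

\begin{verbatim}
import numpy as np

def phonon_modes(q, R, C, mass):
    """Frequencies and polarization vectors at wave vector q.

    R : equilibrium positions R_l of the supercell atoms, shape (N, 3)
    C : force constants coupling atom l to the displaced central atom,
        shape (N, 3, 3), taken from the supercell calculation
    """
    # Bloch sum of the short-ranged force constants -> dynamical matrix
    D = np.einsum('l,lab->ab', np.exp(1j * R @ q), C) / mass
    w2, eps = np.linalg.eigh(D)       # D is Hermitian
    return np.sqrt(np.abs(w2)), eps   # omega_{sq} and eps_{sq}

def thermal_snapshot(T, qpoints, R, C, mass, rng):
    """Frozen displacements u_l(T) at t = 0, cf. Eq. (1), hbar = kB = 1."""
    u = np.zeros((len(R), 3), dtype=complex)
    for q in qpoints:
        w, eps = phonon_modes(q, R, C, mass)
        for s in range(3):
            if w[s] < 1e-8:               # skip zero-frequency modes
                continue
            n = 1.0 / np.expm1(w[s] / T)  # Bose-Einstein occupation
            # amplitude from  omega^2 A^2 / 2 = (n + 1/2) omega
            A = np.sqrt((2.0 * n + 1.0) / w[s])
            phase = rng.uniform(0.0, 2.0 * np.pi)
            u += A * np.exp(1j * (R @ q + phase))[:, None] * eps[:, s]
    return u.real / np.sqrt(len(qpoints) * mass)
\end{verbatim}

Different random-number seeds generate the different disorder configurations over which the transport properties are subsequently averaged.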
At a finite temperature $T$, the vibration of an atom $l$ about its equilibrium position $\mathbf R_l$ can be described by a linear superposition of all occupied normal modes, \begin{eqnarray} \!\!\!\!\!\!\!\!\!\! \mathbf u_l(T,t) = \frac{1}{\sqrt{N_qM_l}}\sum_{s\mathbf q}\bm{\varepsilon}_{s\mathbf q}A_{s\mathbf q}(T)e^{i(\mathbf q\cdot\mathbf R_l-\omega_{s\mathbf q}t+\phi_{s\mathbf q})},\label{eq:displace} \end{eqnarray} where $N_q$ is the number of wave vectors $\mathbf q$ compatible with the lateral supercell used for the scattering region and $M_l$ is the mass of atom $l$. $\phi_{s\mathbf q}$ is a random phase for normal mode $s\mathbf q$; varying it allows us to generate different configurations of thermal disorder. In our frozen thermal disorder picture, we can set the time $t$ to be zero without loss of generality. The vibrational amplitude $A_{s\mathbf q}(T)$ is determined by occupying the phonon mode at the temperature $T$ according to quantum statistics. Specifically, the quantity $\omega^2_{s\mathbf q}A^2_{s\mathbf q}/2$ should equal the total energy contributed by the $s\mathbf q$ phonon mode. Magnon modes for bulk Fe are calculated using the ``frozen magnon method'' introduced by Halilov {\it et al.} \cite{Halilov:epl97s,*Halilov:prb98bs} and populated as a function of temperature to generate snapshots of correlated spin disorder by analogy with the phonon case. At a temperature far below the Curie temperature, the occupation of a magnon mode $\mathbf q$ results in a small polar angle $\theta_{\mathbf q}$ of local magnetic moments with respect to the global quantization axis, i.e., \begin{equation} \frac{M_s}{2g\mu_B}\langle\theta^2_{\mathbf q}\rangle=\frac{n_{\mathbf q}(T)}{N_q}.\label{eq:theta} \end{equation} Here $M_s$ is the saturation magnetization, $g$ is the Land{\'e} $g$ factor for the electron, taken to be 2, $\mu_B$ is the Bohr magneton, $\langle \cdot \rangle$ denotes thermal averaging and $N_q$ is the total number of magnon modes. The temperature-dependent occupation of the magnon mode $n_{\mathbf q}(T)$ follows Bose-Einstein statistics. The final polar angle of the magnetic moment on every atom results from the linear superposition of $\theta_{\mathbf q}$ for all contributing magnon modes. \section{III. Numerical details} The Kohn-Sham potentials in the atomic spheres approximation (ASA) are calculated self-consistently without SOC using the tight-binding linear muffin-tin orbital (TB-LMTO) method \cite{Andersen:prl84s,*Andersen:prb86s}. Experimental lattice constants are used throughout. For the slab of collinear Ni$_{80}$Fe$_{20}$ binary alloy sandwiched between Cu leads, ASA potentials for Ni and Fe are calculated without SOC using the coherent potential approximation \cite{Soven:pr67s,Turek:97s} combined with a surface Green's function method \cite{Turek:97s} which is also implemented with TB-LMTOs. In the surface Green's function calculations, the two-dimensional Brillouin zone corresponding to an fcc (111) 1$\times$1 interface unit cell is sampled with a 120$\times$120 grid of k points. SOC makes a negligible contribution to the self-consistent Kohn-Sham potentials and is taken into account adequately in the transport calculation using a Pauli Hamiltonian approach \cite{Daalderop:prb90as}.
Such a perturbative treatment has been successfully applied in first-principles calculations of Rashba splitting \cite{Bihlmayer:ss06s}, Dzyaloshinskii-Moriya interaction \cite{Bode:nat07s}, and in our own calculations of magnetocrystalline anisotropy \cite{Daalderop:prb90as}, resistivity and magnetization dissipation \cite{Starikov:prl10s,Liu:prl14s,Yuan:prl14s}. For the same reason, the magnon dispersion is calculated without SOC because it is essentially determined by the exchange interactions; the magnetic anisotropy energy of Fe and Ni$_{80}$Fe$_{20}$ is tiny and can be safely neglected. \begin{figure}[b] \includegraphics[width=1\columnwidth]{figs2} \caption{Area resistance of Pt calculated as a function of the length of the diffusive Pt (black dots) using a $5\times 5$ lateral supercell. The disordered region of length $L$ is connected to two semi-infinite perfectly crystalline Pt leads and was constructed by populating phonon modes using $T=300$~K. The red bars show the average values and the standard deviation from averaging over more than five random configurations at every length. The solid blue line is the linear least-squares fit. The empty green diamonds are resistances calculated by integrating the configuration-averaged transmission over the energy window defined by $-\partial f / \partial \varepsilon$ where $f$ is the Fermi-Dirac distribution function with $T=300$~K. The error bars for the diamonds are smaller than the symbol size and hence not shown. } \label{fig:s2} \end{figure} The scattering matrix is determined using a ``wave-function matching'' scheme \cite{Ando:prb91s} also implemented with TB-LMTOs \cite{Xia:prb06s}. For magnetic materials at a finite temperature, the spin-dependent potentials are rotated in spin space \cite{Wang:prb08s} so that the local quantization axis of every atomic sphere conforms to the required spin disorder. The matrix elements of the Pauli Hamiltonian are evaluated using the local quantization axis. We performed numerical tests with lateral supercell sizes up to 10$\times$10 and found that good convergence could be achieved using 5$\times$5 and 4$\times$4 supercells for transport along fcc [111] and bcc [001] directions, respectively. The two-dimensional Brillouin zones of the 5$\times$5 supercell for fcc (4$\times$4 for bcc) metals are sampled with 32$\times$32 (28$\times$28) k points, which are equivalent to 160$\times$160 (112$\times$112) k points in the corresponding 1$\times$1 Brillouin zone. As a typical example, we plot in Fig.~\ref{fig:s2} the calculated area resistance of Pt as a function of the length of the disordered region. The empty green diamonds show the resistance obtained by configuration averaging the transmission as a function of energy and then integrating over the energy window defined by the derivative of the Fermi-Dirac distribution function with $T=300$~K, rather than evaluating the transmission only at the Fermi energy ($T=0$~K, red bars). The Fermi smearing has little effect, partly because the lattice disorder already smooths the density of states near the Fermi level so that the conductance varies slowly over an energy range of a few $k_B T$. For this reason, the scattering matrix was only evaluated at the Fermi level in the remainder of this work. \begin{figure}[b] \includegraphics[width=1\columnwidth]{figs3} \caption{Resistivity of Fe calculated with lattice and/or spin disorder.
$\rho_{\rm ph}$ (up-pointing, green triangles) is calculated with lattice disorder only, obtained by populating phonon modes, while (a) $\rho_{\rm mg}$ (left-pointing, violet triangles) is calculated with spin disorder only, obtained by populating magnon modes. $\rho_{\rm ph+mg}$ (solid blue squares), obtained with both lattice and spin disorder simultaneously, is seen to be greater than the sum $\rho_{\rm ph}+\rho_{\rm mg}$ (empty magenta squares). (b) Instead of calculating spin disorder by populating the magnon spectra, $\rho_{\rm dm}$ (right-pointing, orange triangles) is calculated with spin disorder described using uncorrelated disordered moments. $\rho_{\rm ph+dm}$ (red solid circles) is calculated with lattice disorder described in terms of phonons and spin disorder described in terms of uncorrelated disordered moments. $\rho_{\rm ph+dm}$ is greater than the sum $\rho_{\rm ph}+\rho_{\rm dm}$ (empty black circles). } \label{fig:s3} \end{figure} Fig.~\ref{fig:s2} exhibits Ohmic behavior, i.e., the resistance is proportional to the length of the disordered region, and a resistivity of $\rho=10.0\pm0.3~\mu\Omega$~cm is extracted by a linear least-squares fit. The extracted resistivity does not depend on the properties of the leads since the Sharvin resistance and other lead contributions only enter the intercept of the linear fit. The computing time scales linearly with the length of the scattering region and quadratically with the size of the lateral supercell. Calculating a single configuration of the longest scattering region shown in Fig.~\ref{fig:s2} requires about one hour on a supercomputer node with 32 cores and 256~GB of memory; the calculation parallelizes perfectly over the two-dimensional 32$\times$32 k-point summation. \section{IV. Deviation from Matthiessen's rule} For ferromagnetic Fe, we can examine deviations from Matthiessen's rule by comparing the sum of the partial resistivities arising from lattice and spin disorder separately to the total resistivity obtained with both types of disorder present simultaneously. $\rho_{\rm ph}$ and $\rho_{\rm mg}$ in Fig.~\ref{fig:s3}(a) are the resistivities calculated with only phonons and magnons populated in the scattering region, respectively. The sum $\rho_{\rm ph}+\rho_{\rm mg}$ is smaller than the resistivity $\rho_{\rm ph+ mg}$ calculated with both phonons and magnons present simultaneously. The same conclusion can be drawn for the case using uncorrelated spin disorder depicted in Fig.~\ref{fig:s3}(b). Specifically, both $(\rho_{\rm ph}+\rho_{\rm mg})/\rho_{\rm ph+ mg}$ and $(\rho_{\rm ph}+\rho_{\rm dm})/\rho_{\rm ph+ dm}$ are about 0.9, in agreement with a very recent calculation \cite{Glasbrenner:prb14s}.
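For reference, the extraction of a resistivity from a calculated length-dependent area resistance, used for Fig.~\ref{fig:s2} above and for every partial resistivity entering the Matthiessen comparison, can be summarized by the following Python sketch. The argument names are placeholders and no data values from this work are embedded.

\begin{verbatim}
import numpy as np

def extract_resistivity(lengths, area_resistances):
    """Slope of the Ohmic fit R*A = rho * L + R0, with its uncertainty.

    The intercept R0 absorbs the Sharvin resistance and all other lead
    contributions, so the extracted rho is independent of the leads.
    """
    (rho, R0), cov = np.polyfit(lengths, area_resistances, 1, cov=True)
    return rho, np.sqrt(cov[0, 0])

# Deviation from Matthiessen's rule, schematically:
#   rho_ph,  _ = extract_resistivity(L, RA_phonons_only)
#   rho_mg,  _ = extract_resistivity(L, RA_magnons_only)
#   rho_tot, _ = extract_resistivity(L, RA_phonons_and_magnons)
#   (rho_ph + rho_mg) / rho_tot   # about 0.9 for Fe in this work
\end{verbatim}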
\section{Introduction} Recovering a high-dimensional sparse signal by acquiring it through a linear measurement process returning fewer observations than its dimension is a problem often encountered in the digital signal processing literature. The field of research associated with such problems is known as \textit{compressed sensing} or \textit{compressive sensing} (CS) \cite{donoho2006compressed}.\\ We define the support of a vector $\bsy{x} \in \mathbb{R}^n$ as $\text{supp} (\bsy{x}) := \lbrace j \in \lbrack n \rbrack : x_j \neq 0\rbrace$ where $\lbrack n \rbrack$ denotes the set $\lbrace 1, 2, \dots, n\rbrace$ and $x_j$ denotes the $j$th entry of $\bsy{x}$. A vector is said to be $s$-sparse whenever its support has cardinality at most $s$. \vspace*{-3mm} \subsection{Signal model} In this paper, we focus on a framework involving \begin{enumerate} \item $K$ sparse signals $\bsy{x}_k \in \mathbb{R}^n$ to be recovered ($1 \leq k \leq K$), \item a common linear measurement process described by the matrix $\bsy{\Phi} \in \mathbb{R}^{m \times n}$, \item $K$ measurement vectors $\bsy{y}_k \in \mathbb{R}^m$ gathering the observations of each sparse signal when acquired through $\bsy{\Phi}$: $\bsy{y}_k = \bsy{\Phi} \bsy{x}_k$. \end{enumerate} To simplify the signal model, we introduce Equation (\ref{eq:sigModel}) to summarize the $K$ equations $\bsy{y}_k = \bsy{\Phi} \bsy{x}_k$ into a single one: \begin{equation}\label{eq:sigModel} \bsy{Y} = \bsy{\Phi} \bsy{X} \end{equation} where $\bsy{Y} = \big(\bsy{y}_1, \dots, \bsy{y}_K \big) \in \mathbb{R}^{m \times K}$ and $\bsy{X} = \big(\bsy{x}_1, \dots, \bsy{x}_K \big) \in \mathbb{R}^{n \times K}$. Using this formulation, the support of $\bsy{X}$, denoted by $\text{supp} (\bsy{X})$, is equal to the joint support $S := \cup_{k \in \lbrack K \rbrack} \text{supp} (\bsy{x}_k)$.\\ When a model involves one measurement vector, it is referred to as a single measurement vector (SMV) model, while models incorporating $K > 1$ measurement vectors are multiple measurement vector (MMV) models \cite{eldar2009robust}.\\ The columns of $\bsy{\Phi}$ are often referred to as the \textit{atoms}. This terminology being typically associated with dictionaries, it is worth emphasizing that the problem of recovering an $s$-sparse vector $\bsy{x}$ on the basis of the measurement vector $\bsy{y} = \bsy{\Phi} \bsy{x}$ is equivalent to finding $s$ columns (or atoms) of the (dictionary) matrix $\bsy{\Phi}$ that fully express $\bsy{y}$ through an appropriate linear combination. The notion of atom will thus be used in the rest of this paper as it simplifies the mathematical discussions that follow.\\ We now introduce additional notions that are used afterwards. For $0 < p < \infty$ and $\bsy{x} \in \mathbb{R}^n$, we define the norms $\| \bsy{x} \|_p := (\sum_{j = 1}^n | x_j |^p)^{1/p}$ and $\| \bsy{x} \|_{\infty} := \max_{j \in \lbrack n \rbrack} |x_j|$. In this paper, every vector is to be understood as a column vector. Also, for $S \subseteq \lbrack n \rbrack$, the quantity $\bsy{x}_S$ denotes the vector formed by the entries of $\bsy{x}$ indexed by $S$. Similarly, for a matrix $\bsy{\Phi} \in \mathbb{R}^{m \times n}$, we define $\bsy{\Phi}_S$ as the matrix formed by the columns of $\bsy{\Phi}$ indexed by $S$. The Moore-Penrose pseudoinverse of any matrix $\bsy{\Phi}$ is denoted by $\bsy{\Phi}^{+}$ and its transpose is given by $\bsy{\Phi}^\mathrm{T}$.
Finally, the inner product of two vectors $\bsy{x}$ and $\bsy{y}$ is written as $\langle \bsy{x} , \bsy{y} \rangle$ and is equal to $\bsy{x}^{\mathrm{T}} \bsy{y}$. \subsection{Simultaneous orthogonal matching pursuit} Several algorithms exhibiting varying computational complexities have been investigated to address the problem above. For the SMV case, the greedy algorithm called orthogonal matching pursuit (OMP) \cite{pati1993orthogonal, davis1997adaptive} is a classical choice because its complexity is lower than that of other algorithms such as $\ell_1$-minimization \cite{donoho2003optimally}.\\ If the $K$ sparse signals $\bsy{x}_k$ possess similar supports, \textit{i.e.}, their joint support $S := \cup_{k \in \lbrack K \rbrack} \text{supp} (\bsy{x}_k)$ possesses a cardinality that is comparable to those of the individual supports $\text{supp} (\bsy{x}_k)$, then it is interesting to perform a joint estimation of their supports \cite{gribonval2008atoms, determe2015simultaneous}. The simultaneous orthogonal matching pursuit (SOMP) algorithm \cite{tropp2006algorithms}, which is described in Algorithm~\ref{alg:SOMP}, is an extension of OMP to the MMV case and performs a joint support recovery. \begin{figure}[!h] \textsc{Algorithm \refstepcounter{algoCounter}\label{alg:SOMP}\arabic{algoCounter}}:\\ Simultaneous orthogonal matching pursuit (SOMP)\\ \vspace{-2mm} \begin{boxedalgorithmic} \small \REQUIRE $\bsy{Y} \in \mathbb{R}^{m \times K}$, $\bsy{\Phi} \in \mathbb{R}^{m \times n}$, $s \geq 1$ \STATE Initialization: $\bsy{R}^{(0)} \leftarrow \bsy{Y}$ and $S_0 \leftarrow \emptyset$ \STATE $t \leftarrow 0$ \WHILE{$t < s$} \STATE Determine the atom of $\bsy{\Phi}$ to be included in the support: \\ $j_t \leftarrow \mathrm{argmax}_{j \in \lbrack n \rbrack} ( \| (\bsy{R}^{(t)})^{\mathrm{T}} \bsy{\phi}_j \|_1 )$ \STATE Update the support: $S_{t+1} \leftarrow S_{t} \cup \left\lbrace j_t \right\rbrace$ \STATE Projection of each measurement vector onto $\mathrm{span}(\boldsymbol{\Phi}_{S_{t+1}})$: \\$\bsy{Y}^{(t+1)} \leftarrow \boldsymbol{\Phi}_{S_{t+1}} \boldsymbol{\Phi}_{S_{t+1}}^{+} \bsy{Y}$ \STATE Projection of each measurement vector onto $\mathrm{span}(\boldsymbol{\Phi}_{S_{t+1}})^{\perp}$: \\ $\bsy{R}^{(t+1)} \leftarrow \bsy{Y} - \bsy{Y}^{(t+1)}$ \STATE $t \leftarrow t + 1$ \ENDWHILE \RETURN $S_s$ \COMMENT{Support at last step} \end{boxedalgorithmic} \end{figure} As shown in Algorithm \ref{alg:SOMP}, at each iteration $t$, SOMP adds to the estimated support the index $j_t$ of the atom $\bsy{\phi}_{j_t}$ maximizing the metric $\| (\bsy{R}^{(t)})^{\mathrm{T}} \bsy{\phi}_j \|_1 = \sum_{k=1}^{K} | \langle \bsy{\phi}_{j}, \bsy{r}^{(t)}_k \rangle |$ (steps $4$ and $5$) where $\bsy{r}_k^{(t)}$ denotes the $k$th column of the residual matrix $\bsy{R}^{(t)}$. Each measurement vector $\bsy{y}_k$ is then projected onto the orthogonal complement of $\mathrm{span}(\boldsymbol{\Phi}_{S_{t+1}})$, denoted by $\mathrm{span}(\boldsymbol{\Phi}_{S_{t+1}})^{\perp}$, during steps $6$ and $7$. The algorithm terminates when the prescribed number of iterations $s$ has been reached. It is worth noticing that an atom cannot be picked twice since, once it has been chosen, the projection onto $\mathrm{span}(\boldsymbol{\Phi}_{S_{t+1}})^{\perp}$ ensures that $\langle \bsy{\phi}_j, \bsy{r}_k^{(t+1)}\rangle = 0$ for all $j \in S_{t+1}$ and all $k \in \lbrack K \rbrack$.
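The following Python sketch is a direct, unoptimized transcription of Algorithm~\ref{alg:SOMP}: the orthogonal projection is recomputed from a pseudoinverse at every iteration, whereas an efficient implementation would instead update a QR factorization of $\boldsymbol{\Phi}_{S_t}$. The function name is illustrative only.

\begin{verbatim}
import numpy as np

def somp(Y, Phi, s):
    """Return the support estimated by SOMP after s iterations.

    Y   : measurement matrix, shape (m, K)
    Phi : dictionary matrix, shape (m, n)
    """
    R = Y.copy()        # residual matrix R^(0)
    S = []              # estimated support
    for _ in range(s):
        # steps 4-5: atom maximizing || (R^(t))^T phi_j ||_1
        j = int(np.argmax(np.abs(Phi.T @ R).sum(axis=1)))
        S.append(j)
        # steps 6-7: residual of the projection onto span(Phi_S)
        PhiS = Phi[:, S]
        R = Y - PhiS @ (np.linalg.pinv(PhiS) @ Y)
    return sorted(S)
\end{verbatim}

\subsection{Definitions}\label{subsec:definitions} We define the concepts needed to state the results of Section \ref{sec:contrib}.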
\subsection{Definitions}\label{subsec:definitions} We define the concepts needed to state the results of Section \ref{sec:contrib}. First of all, the matrix $\bsy{\Phi}$ is said to satisfy the restricted isometry property (RIP) \cite{candes2006stable} of order $s$ with restricted isometry constant (RIC) $\delta_s$ (of order $s$) whenever \begin{equation}\label{eq:defRIPRIC} (1 - \delta_s) \| \bsy{u} \|_2^2 \leq \| \bsy{\Phi} \bsy{u} \|_2^2 \leq (1 + \delta_s) \| \bsy{u} \|_2^2 \end{equation} holds for all $s$-sparse vectors $\bsy{u}$. Thus, the RIP ensures that the linear operator $\bsy{\Phi}$ approximately preserves the $\ell_2$-norm of $s$-sparse signals, to an extent quantified by the RIC $\delta_s$. Furthermore, if $\bsy{u}$ is supported on $S$, the quantity $\| \bsy{\Phi} \bsy{u} \|_2^2$ is equal to $\| \bsy{\Phi}_S \bsy{u}_S \|_2^2 = \bsy{u}_S^{\mathrm{T}} (\bsy{\Phi}_S^{\mathrm{T}} \bsy{\Phi}_S) \bsy{u}_S$. The RIP therefore ensures that $ 1 - \delta_s \leq \lambda_{\mathrm{min}} ( \bsy{\Phi}_S^{\mathrm{T}} \bsy{\Phi}_S ) \leq \lambda_{\mathrm{max}} ( \bsy{\Phi}_S^{\mathrm{T}} \bsy{\Phi}_S) \leq 1 + \delta_s$ for all supports $S$ of cardinality at most $s$, where $\lambda_{\mathrm{min}}$ and $\lambda_{\mathrm{max}}$ denote the minimal and maximal eigenvalues, respectively. Also, it is easy to show that $\delta_s \leq \delta_{s+1}$. \\ The $(\alpha$, $\alpha')$-restricted orthogonality constant (ROC) \cite{cai2010shifting} is defined as the smallest real number $\theta_{\alpha, \alpha'}$ for which \begin{equation}\label{eq:defROC} | \langle \bsy{\Phi} \bsy{c}, \bsy{\Phi} \bsy{c}' \rangle | \leq \theta_{\alpha, \alpha'} \|\bsy{c}\|_2 \|\bsy{c}'\|_2 \end{equation} holds for every $\bsy{c}$, $\bsy{c}' \in \mathbb{R}^{n}$ exhibiting disjoint supports of cardinality $\alpha$ and $\alpha'$, respectively. Thus, the ROC quantifies to what extent vectors with disjoint supports remain approximately orthogonal after being mapped by $\bsy{\Phi}$.\\ The ROC and the RIP are linked by the inequality \cite[Lemma 2.1]{candes2008restricted} $\theta_{\alpha, \alpha'} \leq \delta_{\alpha + \alpha'}$, which indicates that the RIC can play a role similar to that of the ROC, albeit in a less sharp manner. Another similar inequality has been obtained in \cite[Section 2.3]{wang2012near} and is given by $\theta_{1, \alpha'} \leq \sqrt{\alpha'/(\alpha'-1)} \delta_{\alpha'}$ whenever $\alpha' \geq 2$. Another upper bound on $\theta_{1, \alpha'}$ has recently been obtained in \cite[Lemma II.3]{yang2013coherence}, where the so-called $2$-coherence of the dictionary matrix, denoted by $\nu_{\alpha'}$, is used. The authors have shown \cite[Lemma II.2]{yang2013coherence} that $\nu_{\alpha'} \leq \delta_{\alpha'+1}$, so that the inequality $\theta_{1, \alpha'} \leq \nu_{\alpha'}$ is sharper than $\theta_{1, \alpha'} \leq \delta_{1 + \alpha'}$. The developments presented hereafter use the RIC-based inequality so that only the RIC intervenes in the final results. However, expressing our results using $\nu_{\alpha'}$ instead of $\delta_{1 + \alpha'}$ is straightforward. A brute-force numerical illustration of both constants is given below. \\
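As an illustration, the following sketch computes $\delta_s$ and $\theta_{1,s}$ of a small matrix directly from their definitions. This brute-force enumeration is exponential in $n$ and is only meant to make the constants tangible, not to be used at scale; the helper names are ours.
\begin{verbatim}
# Illustrative brute-force computation of the RIC delta_s and the
# ROC theta_{1,s} of a small matrix (exponential in n).
import itertools
import numpy as np

def ric(Phi, s):
    n = Phi.shape[1]
    worst = 0.0
    # Scanning |S| = s suffices since delta_s is non-decreasing in s.
    for S in itertools.combinations(range(n), s):
        G = Phi[:, list(S)].T @ Phi[:, list(S)]
        eigs = np.linalg.eigvalsh(G)
        worst = max(worst, 1.0 - eigs[0], eigs[-1] - 1.0)
    return worst

def roc_1s(Phi, s):
    n = Phi.shape[1]
    # theta_{1,s} equals the largest l2-norm of Phi_S^T phi_j over
    # all |S| = s and j outside S (optimal c' for fixed c = e_j).
    return max(np.linalg.norm(Phi[:, list(S)].T @ Phi[:, j])
               for S in itertools.combinations(range(n), s)
               for j in range(n) if j not in S)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 10)) / np.sqrt(30)
print(ric(Phi, 3), roc_1s(Phi, 3))  # sanity: theta_{1,3} <= delta_4
\end{verbatim}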
Finally, it is worth defining the $\ell_{\infty}$-induced norm for matrices as $\| \bsy{\Phi} \|_{\infty \rightarrow \infty} := \sup_{\| \bsy{\phi} \|_{\infty} = 1} \| \bsy{\Phi} \bsy{\phi} \|_{\infty}$ (where $\bsy{\Phi} \in \mathbb{R}^{m \times n}$), which can be computed as $\| \bsy{\Phi} \|_{\infty \rightarrow \infty} = \max_{i \in \lbrack m \rbrack} \sum_{j=1}^n | \phi_{i,j} |$ \cite[Lemma A.5]{foucart2013mathematical}. This quantity is interesting as it allows one to write, for $A \subseteq \lbrack n \rbrack$, $\max_{j \in A} ( \| (\bsy{R}^{(t)})^{\mathrm{T}} \bsy{\phi}_j \|_1 ) = \| \bsy{\Phi}_A^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}$, which is reminiscent of the decision metric of SOMP. Some authors choose to write the $\ell_{\infty}$-induced norm of $\bsy{\Phi}$ as $\| \bsy{\Phi} \|_{\infty}$ but, to avoid confusion, we prefer to emphasize the distinction between the $\ell_{\infty}$-norms for vectors and matrices as both coexist in Section \ref{sec:proofs}. \section{Contribution and related work}\label{sec:contrib} The main contribution of this paper is to extend a recent exact recovery criterion (ERC) for OMP to its MMV counterpart, \textit{i.e.}, SOMP. An ERC is a sufficient condition ensuring that the algorithm commits no mistake. The cornerstone of the results presented in this paper is given by Lemma~\ref{lem:RIPROCLow}. \begin{lemSA}[A RIP and ROC-based lower bound on the maximal residual projection] \label{lem:RIPROCLow} Let $\bsy{X} \in \mathbb{R}^{n \times K}$ possess the support $S$. Let $\bsy{\Phi} \in \mathbb{R}^{m \times n}$ admit the RIC $\delta_{|S|} < 1$ and the $(1, |S|)$-ROC $\theta_{1, |S|} < 1$. Furthermore, $\bsy{P}^{(t)} = \bsy{\Phi}_{S_t} \bsy{\Phi}_{S_t}^+$ denotes the orthogonal projector onto $\mathrm{span}(\bsy{\Phi}_{S_t})$ where $S_t \subseteq S$, \textit{i.e.}, only correct atoms have been included in the estimated support before iteration $t$. Let $\bsy{R}^{(t)}$ be equal to $(\bsy{I} - \bsy{P}^{(t)}) \bsy{Y} = (\bsy{I} - \bsy{P}^{(t)}) \bsy{\Phi} \bsy{X}$. Then, \begin{equation} \dfrac{\| \bsy{\Phi}_S^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}}{\| \bsy{\Phi}_{\overline{S}}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}} \geq \dfrac{1-\delta_{|S|}}{\theta_{1, |S|} \sqrt{|S|}} \end{equation} where $\overline{S}$ is the relative complement of $S$ with respect to $\lbrack n \rbrack$. \end{lemSA} Lemma~\ref{lem:RIPROCLow} establishes a lower bound on the ratio of the SOMP metric obtained for the correct atoms to that obtained for the incorrect ones. In that sense, and as will be clarified in Theorem \ref{thm:RIPROCERC}, it straightforwardly provides an ERC guaranteeing that SOMP commits no error when picking atoms. We now propose a corollary of Lemma~\ref{lem:RIPROCLow} that only relies on the RIC. \begin{lemSA}[RIP lower bounds on the maximal residual projection] \label{lem:RIPLowBounds} Let $\bsy{X} \in \mathbb{R}^{n \times K}$ possess the support $S$. Let $\bsy{\Phi} \in \mathbb{R}^{m \times n}$ admit the RIC $\delta_{|S|} < 1$. Furthermore, $\bsy{P}^{(t)} = \bsy{\Phi}_{S_t} \bsy{\Phi}_{S_t}^+$ denotes the orthogonal projector onto $\mathrm{span}(\bsy{\Phi}_{S_t})$ where $S_t \subseteq S$, \textit{i.e.}, only correct atoms have been included in the estimated support before iteration $t$. Let $\bsy{R}^{(t)}$ be equal to $(\bsy{I} - \bsy{P}^{(t)}) \bsy{Y} = (\bsy{I} - \bsy{P}^{(t)}) \bsy{\Phi} \bsy{X}$.
Then, both inequalities below hold: \begin{equation}\label{eq:RIPLowBound1} \dfrac{\| \bsy{\Phi}_S^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}}{\| \bsy{\Phi}_{\overline{S}}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}} \geq \dfrac{1-\delta_{|S|+1}}{\delta_{|S| +1} \sqrt{|S|}} \end{equation} \begin{equation}\label{eq:RIPLowBound2} \dfrac{\| \bsy{\Phi}_S^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}}{\| \bsy{\Phi}_{\overline{S}}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}} \geq \dfrac{(1 - \delta_{|S|}) \sqrt{|S|-1}}{\delta_{|S|} |S|} \end{equation} where $\overline{S}$ is the relative complement of $S$ with respect to $\lbrack n \rbrack$. \end{lemSA} Compared to former works that directly derived the ERC \cite{wang2012improved, dan2014robustness, mo2012remark, wang2012recovery, wang2012near}, we believe that Lemma~\ref{lem:RIPROCLow} and Lemma~\ref{lem:RIPLowBounds} are interesting as they quantify the robustness of the decisions made at each iteration of SOMP in the noiseless case. Such quantities can then be used to produce theoretical analyses of greedy algorithms in a noisy setting (see \cite{cai2011orthogonal, dan2014robustness, determe2015simultaneous}). Other similar works analysing OMP in a noisy environment include \cite{shen2015sparse} and \cite{yang2013coherence}. The analysis presented in \cite{dan2014robustness} actually uses a result fundamentally identical to Lemma \ref{lem:RIPROCLow} for $K = 1$ to conduct a theoretical analysis of OMP inspired by \cite{cai2011orthogonal}. We now state the three ERC derived from Lemma~\ref{lem:RIPROCLow} and Lemma~\ref{lem:RIPLowBounds}. \begin{thm}[Several RIP and ROC-based ERC for SOMP] \label{thm:RIPROCERC} Let $\bsy{X} \in \mathbb{R}^{n \times K}$ possess the support $S$. Let $\bsy{\Phi} \in \mathbb{R}^{m \times n}$ admit the RIC $\delta_{|S|} < 1$ and the $(1, |S|)$-ROC $\theta_{1, |S|} < 1$. Then, SOMP commits no error and identifies the full support of $\bsy{X}$ at the end of iteration $|S| - 1$ whenever at least one of the three conditions below holds: \begin{equation}\tag{ERC1}\label{eq:ERC1} \dfrac{1-\delta_{|S|}}{\theta_{1, |S|} \sqrt{|S|}} > 1 \end{equation} \begin{equation}\tag{ERC2}\label{eq:ERC2} \delta_{|S|+1} < \dfrac{1}{\sqrt{|S|} + 1} \end{equation} \begin{equation}\tag{ERC3}\label{eq:ERC3} (\text{for } |S| \geq 2) \;\; \delta_{|S|} < \dfrac{\sqrt{|S| - 1}}{\sqrt{|S| - 1} + |S|}. \end{equation} \end{thm} As demonstrated in Section~\ref{sec:proofMainThm}, Theorem~\ref{thm:RIPROCERC} is a straightforward consequence of Lemma~\ref{lem:RIPROCLow} and Lemma~\ref{lem:RIPLowBounds}. The authors of \cite{wang2012improved} and \cite{dan2014robustness} independently obtained (\ref{eq:ERC1}) for OMP. To the best of the authors' knowledge, the second ERC was first obtained simultaneously in \cite{mo2012remark} and \cite{wang2012recovery} while (\ref{eq:ERC3}) was initially published in \cite{wang2012near}, both ERC being derived for OMP. A numerical check of the three conditions is sketched below.\\
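For concreteness, the following sketch evaluates the three sufficient conditions of Theorem~\ref{thm:RIPROCERC} for given values of the RIC and ROC; the helper name \texttt{erc\_checks} and the numerical values are hypothetical illustrations of ours.
\begin{verbatim}
# Illustrative check of the three ERC stated above for hypothetical
# constants delta_{|S|}, delta_{|S|+1} and theta_{1,|S|}.
import math

def erc_checks(s, delta_s, delta_s1, theta_1_s):
    erc1 = (1 - delta_s) / (theta_1_s * math.sqrt(s)) > 1
    erc2 = delta_s1 < 1 / (math.sqrt(s) + 1)
    erc3 = s >= 2 and delta_s < math.sqrt(s - 1) / (math.sqrt(s - 1) + s)
    return erc1, erc2, erc3

# |S| = 4, delta_4 = 0.15, delta_5 = 0.2, theta_{1,4} = 0.3:
print(erc_checks(4, 0.15, 0.2, 0.3))  # (True, True, True)
\end{verbatim}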
Regarding older works, it is also worth pointing out that the ERC $\delta_{|S|+1} < 1/((1 + \sqrt{2}) \sqrt{|S|})$, first obtained in \cite[Theorem 5.2]{liu2012orthogonal} for OMP, has been shown to remain valid for SOMP in \cite[Corollary 1]{ding2012robustness}. In the same work, the authors also proved that the older ERC $\delta_{|S|+1} < 1/(3 \sqrt{|S|})$, initially derived in \cite[Theorem 3.1]{davenport2010analysis} for OMP, remains correct for SOMP, as $\delta_{|S|+1} < 1/(3 \sqrt{|S|})$ implies $\delta_{|S|+1} < 1/((1 + \sqrt{2}) \sqrt{|S|})$. Very recently, (\ref{eq:ERC2}) was extended to SOMP in \cite[Remark 1]{xu2015perturbation}. However, the extension to SOMP of both (\ref{eq:ERC1}) and (\ref{eq:ERC3}) is a novel result. In \cite{mo2015sharp}, the author derived the ERC $\delta_{|S|+1} < 1/\sqrt{|S|+1}$, which is sharper than (\ref{eq:ERC2}). Combining the ideas developed in \cite{mo2015sharp} and in our paper could possibly extend this ERC to SOMP. \\ Finally, we would like to point out that, if any of the considered ERC holds, running $K$ independent executions of OMP instead of a single instance of SOMP would enable one to retrieve the individual supports $\mathrm{supp}(\bsy{x}_k)$ ($1 \leq k \leq K$) and, by extension, the joint support $S$. While this may seem to undermine the interest of this work, the following observations suggest otherwise: \begin{enumerate} \item If one of the considered ERC guarantees that each of the $K$ instances of OMP returns the correct support of each sparse vector $\bsy{x}_k$, then SOMP is also guaranteed to return the correct joint support, so that there is no penalty in switching from OMP to SOMP, except perhaps that SOMP returns a joint support instead of possibly smaller (yet correct) supports for each $\bsy{x}_k$. \item Lemma \ref{lem:RIPROCLow} and Lemma \ref{lem:RIPLowBounds} should be thought of as the central results of this paper as they quantify the robustness of the support recovery in the noiseless case, the resulting ERC being merely direct consequences of the aforementioned lemmas. As mentioned previously, these lemmas can be used to produce theoretical analyses of SOMP in noisy scenarios, which is not possible with the ERC alone. \end{enumerate} \section{Sharpness of the bounds} In \cite{dan2013sharp}, it is shown that (\ref{eq:ERC1}) is sharp for OMP in the sense that it is possible to construct a measurement matrix $\bsy{\Phi}_{\mathrm{bad}}$ satisfying $(1-\delta_{|S|})/(\theta_{1, |S|} \sqrt{|S|}) = 1$ for which there exists an $|S|$-sparse signal $\bsy{x}_{\mathrm{bad}}$ that OMP fails to recover. The sharpness property immediately extends to SOMP by noticing that if OMP fails to recover $\bsy{x}_{\mathrm{bad}}$ on the basis of the measurement vector $\bsy{y}_{\mathrm{bad}} = \bsy{\Phi}_{\mathrm{bad}} \bsy{x}_{\mathrm{bad}}$, then SOMP also fails with $\bsy{Y}_{\mathrm{bad}} = \bsy{\Phi}_{\mathrm{bad}} \bsy{X}_{\mathrm{bad}}$ where $\bsy{X}_{\mathrm{bad}} = \big( \bsy{x}_{\mathrm{bad}}, \dots, \bsy{x}_{\mathrm{bad}} \big)$, as both algorithms make the same decisions in this case.\\ Regarding (\ref{eq:ERC2}) and (\ref{eq:ERC3}), it has been shown in \cite{mo2015sharp} that there exist a signal $\bsy{x}_{\mathrm{bad}}$ of support $S$ and a matrix $\bsy{\Phi}_{\mathrm{bad}}$ satisfying $\delta_{|S|+1} = 1 / \sqrt{|S|+1}$ for which OMP fails to recover the support of $\bsy{x}_{\mathrm{bad}}$ on the basis of $\bsy{y}_{\mathrm{bad}} = \bsy{\Phi}_{\mathrm{bad}} \bsy{x}_{\mathrm{bad}}$. Note that earlier works (see \cite{mo2012remark} and \cite{wang2012recovery}) proved that the statement above holds with $\bsy{\Phi}_{\mathrm{bad}}$ satisfying $\delta_{|S|+1} = 1 / \sqrt{|S|}$.
Using an approach identical to that of the previous paragraph, one shows that this statement remains true for SOMP with $\bsy{Y}_{\mathrm{bad}} = \bsy{\Phi}_{\mathrm{bad}} \bsy{X}_{\mathrm{bad}}$ and $\bsy{X}_{\mathrm{bad}} = \big( \bsy{x}_{\mathrm{bad}}, \dots, \bsy{x}_{\mathrm{bad}} \big)$. This shows that (\ref{eq:ERC2}) is near-optimal as, for $|S| \rightarrow \infty$, it boils down to the condition $\delta_{|S|+1} < 1 / \sqrt{|S|+1}$. It can be shown that (\ref{eq:ERC3}) is also near-optimal but the discussion is more involved as $\delta_{|S|}$ intervenes instead of $\delta_{|S|+1}$. In \cite[Section 3]{wang2012near}, it is shown that $\delta_{|S|+1} < 1/(|S| + 3 - \sqrt{2})$ implies (\ref{eq:ERC3}), thereby indicating that (\ref{eq:ERC3}) is also at least near-optimal. \section{Proofs}\label{sec:proofs} \subsection{Proof of Lemma \ref{lem:RIPROCLow}} The proof presented in this section is analogous to the ones proposed in \cite{wang2012improved, wang2012near, dan2014robustness}, the only difference being the additional quantities needed to deal with the MMV model. \\ The proof is decomposed into three steps: \begin{enumerate} \item Derive an upper bound on $\| \bsy{\Phi}_{\overline{S}}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}$ expressed as $\theta_{1, |S|} \| \bsy{z}^{(t)} \|_2$ where $\bsy{z}^{(t)}$ is to be specified in the detailed development. \item Derive a lower bound on $\| \bsy{\Phi}_{S}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}$ expressed as $(1/\sqrt{|S|}) (1 - \delta_{|S|}) \| \bsy{z}^{(t)} \|_2$ where $\bsy{z}^{(t)}$ is identical for steps 1) and 2). \item Compute the ratio of the lower bound to the upper bound and observe that the desired result is obtained thanks to the cancellation of the quantity $\| \bsy{z}^{(t)} \|_2$. \end{enumerate} Let us first tackle the quantity $ \| \bsy{\Phi}_{\overline{S}}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty} = \max_{j \in \overline{S}} ( \sum_{k=1}^{K} | \langle \bsy{r}_{k}^{(t)}, \bsy{\phi}_j \rangle | )$ and define $j^*(t) := \argmax_{j \in \overline{S}} ( \sum_{k=1}^{K} | \langle \bsy{r}_{k}^{(t)}, \bsy{\phi}_j \rangle | )$. Then, if $c_k^{(t)} := \mathrm{sign} ( \langle \bsy{r}_k^{(t)}, \bsy{\phi}_{j^*(t)} \rangle )$, we have \begin{align*} \| \bsy{\Phi}_{\overline{S}}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty} & = \max_{j \in \overline{S}} \left( \sum_{k=1}^{K} | \langle \bsy{r}_{k}^{(t)}, \bsy{\phi}_j \rangle | \right) \\ & = \left| \sum_{k=1}^{K} c_k^{(t)} \langle \bsy{r}_{k}^{(t)}, \bsy{\phi}_{j^*(t)} \rangle \right| \\ & = \left| \left\langle \sum_{k=1}^{K} c_k^{(t)} \bsy{r}_{k}^{(t)}, \bsy{\phi}_{j^*(t)} \right\rangle \right|. \end{align*} Since $S_t \subseteq S$, $\bsy{r}_k^{(t)} = (\bsy{I} - \bsy{P}^{(t)}) \bsy{y}_k$ belongs to $\mathrm{span} (\bsy{\Phi}_S)$ and can thus be expressed as a linear combination of the atoms whose indexes belong to $S$ by means of $\bsy{r}_k^{(t)} = \bsy{\Phi}_S \bsy{a}_k^{(t)}$ where $\bsy{a}_k^{(t)} \in \mathbb{R}^{|S|}$ contains the coefficients of the linear combination of interest. It is also worth defining the extension $\bsy{\tilde{a}}_k^{(t)}$ of $\bsy{a}_k^{(t)}$ to $\mathbb{R}^{n}$ by ensuring that $\text{supp} (\bsy{\tilde{a}}_k^{(t)}) \subseteq S$ and $( \bsy{\tilde{a}}_k^{(t)} )_S = \bsy{a}_k^{(t)}$. Another relation of interest is $\bsy{\phi}_{j^*(t)} = \bsy{\Phi} \bsy{e}_{j^*(t)}$ where $\bsy{e}_{j^*(t)}$ denotes the $j^*(t)$th vector of the canonical basis of $\mathbb{R}^n$.
Hence, successively using the relations of the previous paragraph and the definition of the ROC yields \begin{align*} \| \bsy{\Phi}_{\overline{S}}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty} & = \left| \left\langle \bsy{\Phi} \sum_{k=1}^{K} c_k^{(t)} \bsy{\tilde{a}}_{k}^{(t)}, \bsy{\Phi} \bsy{e}_{j^*(t)} \right\rangle \right| \\ & \leq \theta_{1, |S|} \left\| \sum_{k=1}^{K} c_k^{(t)} \bsy{\tilde{a}}_{k}^{(t)} \right\|_2 \end{align*} since $\| \bsy{e}_{j^*(t)} \|_2$ is equal to $1$. It is worth explicitly pointing out that the ROC definition is applicable in this case because the supports of $\bsy{e}_{j^*(t)}$ and $\sum_{k=1}^{K} c_k^{(t)} \bsy{\tilde{a}}_{k}^{(t)}$ are disjoint, as $j^*(t) \in \overline{S}$ and $\text{supp} (\bsy{\tilde{a}}_{k}^{(t)}) \subseteq S$ for $1 \leq k \leq K$.\\ The first step of the proof is now completed and the last problem to be dealt with is deriving a lower bound on $\| \bsy{\Phi}_{S}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}$. For any $d_k^{(t)} \in \lbrace -1, 1\rbrace$, we have $|\langle \bsy{r}_k^{(t)}, \bsy{\phi}_j \rangle| = |d_k^{(t)} \langle \bsy{r}_k^{(t)}, \bsy{\phi}_j \rangle| = | \langle d_k^{(t)} \bsy{r}_k^{(t)}, \bsy{\phi}_j \rangle|$. In particular, this remains true for the choice $d_k^{(t)} = c_k^{(t)}$. Thus, by using the equation above and the triangle inequality, one obtains \begin{align*} \| \bsy{\Phi}_{S}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty} & = \max_{j \in S} \left( \sum_{k=1}^{K} | \langle \bsy{r}_{k}^{(t)}, \bsy{\phi}_j \rangle | \right) \\ & = \max_{j \in S} \left( \sum_{k=1}^{K} | \langle c_k^{(t)} \bsy{r}_{k}^{(t)}, \bsy{\phi}_j \rangle | \right) \\ & \geq \max_{j \in S} \left| \left\langle \sum_{k=1}^{K} c_k^{(t)} \bsy{r}_{k}^{(t)}, \bsy{\phi}_j \right\rangle \right| \\ & = \left\| \bsy{\Phi}_S^{\mathrm{T}} \left( \sum_{k=1}^{K} c_k^{(t)} \bsy{r}_{k}^{(t)} \right) \right\|_{\infty} \\ & \geq \dfrac{1}{\sqrt{|S|}} \left\| \bsy{\Phi}_S^{\mathrm{T}} \left( \sum_{k=1}^{K} c_k^{(t)} \bsy{r}_{k}^{(t)} \right) \right\|_{2} \end{align*} where $\bsy{\Phi}_S^{\mathrm{T}} ( \sum_{k=1}^{K} c_k^{(t)} \bsy{r}_{k}^{(t)} ) \in \mathbb{R}^{|S|}$. Also, we have previously obtained $\bsy{r}_k^{(t)} = \bsy{\Phi}_S \bsy{a}_k^{(t)}$. The lower bound on $\| \bsy{\Phi}_{S}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}$ is thus finally obtained by successively using the two previous relations and the inequality $1 - \delta_{|S|} \leq \lambda_{\mathrm{min}} ( \bsy{\Phi}_S^{\mathrm{T}} \bsy{\Phi}_S )$ resulting from the RIP (see Section \ref{subsec:definitions}) in the following manner: \begin{align*} \| \bsy{\Phi}_{S}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty} & \geq \dfrac{1}{\sqrt{|S|}} \left\| \bsy{\Phi}_S^{\mathrm{T}} \bsy{\Phi}_S \left( \sum_{k=1}^{K} c_k^{(t)} \bsy{a}_{k}^{(t)} \right) \right\|_2 \\ & \geq \dfrac{1 - \delta_{|S|}}{\sqrt{|S|}} \left\| \sum_{k=1}^{K} c_k^{(t)} \bsy{a}_{k}^{(t)} \right\|_2 \end{align*} where $\| \sum_{k=1}^{K} c_k^{(t)} \bsy{a}_{k}^{(t)} \|_2 = \| \sum_{k=1}^{K} c_k^{(t)} \bsy{\tilde{a}}_{k}^{(t)} \|_2$. The final result is established by taking the ratio of the lower bound on $\| \bsy{\Phi}_{S}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}$ to the upper bound on $\| \bsy{\Phi}_{\overline{S}}^{\mathrm{T}} \bsy{R}^{(t)} \|_{\infty \rightarrow \infty}$.
\qed \subsection{Proof of Lemma \ref{lem:RIPLowBounds}} The proof consists in deriving lower bounds on the ratio $(1-\delta_{|S|})/(\theta_{1, |S|} \sqrt{|S|})$ intervening in Lemma~\ref{lem:RIPROCLow}. For the first bound, it is sufficient to use the inequalities $\delta_{|S|} \leq \delta_{|S|+1}$ and $\theta_{1, |S|} \leq \delta_{|S|+1}$ \cite[Lemma 2.1]{candes2008restricted} for the numerator and the denominator, respectively. The second bound is obtained by using the inequality $\theta_{1, |S|} \leq \sqrt{|S|/(|S|-1)} \delta_{|S|}$ on the denominator for $|S| \geq 2$ \cite[Section 2.3]{wang2012near}. \qed \subsection{Proof of Theorem \ref{thm:RIPROCERC}}\label{sec:proofMainThm} Let us first address the proof of (\ref{eq:ERC1}). At iteration $0$, we have $\bsy{R}^{(0)} = \bsy{Y}$ and Lemma~\ref{lem:RIPROCLow} shows that a sufficient condition for SOMP to pick a correct atom is $(1-\delta_{|S|})/(\theta_{1, |S|} \sqrt{|S|}) > 1$, as it means that the highest metric is necessarily attained by one of the correct atoms. Thus, at iteration $1$, the condition $S_1 \subseteq S$ is verified and Lemma~\ref{lem:RIPROCLow} shows, once again, that a correct decision will be made. Repeating the same argument proves the theorem by induction. The remaining ERC are obtained in an identical manner by using the two bounds provided by Lemma~\ref{lem:RIPLowBounds} instead of that of Lemma~\ref{lem:RIPROCLow}. \qed \section*{Acknowledgments} The authors would like to thank the Belgian ``Fonds de la recherche scientifique'' for having funded this research. \newpage \nocite{*} \bibliographystyle{abbrv}
\part*{Appendices} \counterwithin{figure}{section} \counterwithin{table}{section} \counterwithin{equation}{section} \section{Additional Results} \subsection{Additional Results from the Paper} \begin{figure}[H] \centering \includegraphics[width=\linewidth]{figures/online_ft_curve.pdf} \vspace{-0.4cm} \caption{Learning curves for \textbf{online fine-tuning on unseen game variants}. The dotted horizontal line shows the performance of a single-game DQN agent trained for 50M frames (16x more data than our methods). See Figure~\ref{fig:online_ft} for visualization of the variants.} \label{fig:lr_curves_online_ft} \end{figure} \begin{figure}[h] \vspace{-0.5cm} \centering \includegraphics[width=0.6\linewidth]{figures/full_data_results.pdf} \vspace{-0.25cm} \caption{\footnotesize{\textbf{Offline scaled conservative Q-learning vs other prior methods} with near-optimal data. Scaled QL outperforms the best DT model, attaining an IQM human-normalized score of \textbf{114.1\%} and a median human-normalized score of \textbf{98.9\%} compared to 111.8\% and 78.2\% for DT, respectively.}} \label{fig:full_data_results} \vspace{-0.2cm} \end{figure} \subsection{{{Results for Scaling Discrete-BCQ}}} \label{app:discrete_bcq} {To implement discrete BCQ, we followed the official implementation from \citet{fujimoto2019benchmarking}. We first trained a model of the behavior policy, $\widehat{\pi}_\beta(\mathbf{a}|\mathbf{s})$, with an architecture identical to that of the Q-function, via the negative log-likelihood. Then, following \citet{fujimoto2019benchmarking}, we updated the Bellman backup to only perform the maximization over actions that attain a high likelihood under the probabilities learned by the behavior policy, as shown below:} \begin{align*} {y(\mathbf{s}, \mathbf{a}) := r(\mathbf{s}, \mathbf{a}) + \gamma \max_{\mathbf{a}': \widehat{\pi}_\beta(\mathbf{a}'|\mathbf{s}') \geq \tau \cdot \max_{\mathbf{a}''} \widehat{\pi}_\beta(\mathbf{a}''|\mathbf{s}')} \bar{Q}(\mathbf{s}', \mathbf{a}')}, \end{align*} {where $\tau$ is a hyperparameter. To tune the value of $\tau$, we ran a preliminary sweep over $\tau=\{0.05, 0.1, 0.3\}$. When using C51 in our setup, we had to use a smaller CQL $\alpha$ of 0.05 (instead of 0.1 for the MSE setting from \citet{kumar2021dr3}), possibly because the discrete representation of Q-values used by C51 is less prone to overestimation. Therefore, in the case of discrete-BCQ, we chose to perform an initial sweep over $\tau$ values that were smaller than or equal to (i.e., no more conservative than) the value of $\tau=0.3$ used in \citet{fujimoto2019benchmarking}.} {Since BCQ requires an additional policy network, it imposes a substantial memory overhead; as such, we performed a sweep for the initial 20 iterations to pick the best $\tau$. We found that in these initial experiments, $\tau=0.05$ performed significantly worse, but $\tau=0.1$ and $\tau=0.3$ performed similarly. So, we utilized $\tau=0.3$ for reporting these results. A minimal sketch of this filtered backup is given below.}
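To illustrate the backup above, here is a minimal NumPy sketch of the discrete-BCQ target computation; the function name and array layout are our own and this is not the official implementation from \citet{fujimoto2019benchmarking}.
\begin{verbatim}
# Illustrative sketch of the discrete-BCQ target: the max is restricted
# to actions whose behavior-policy probability is at least tau times
# that of the most likely action at the next state.
import numpy as np

def bcq_target(r, gamma, q_next, pi_beta_next, tau, done):
    """r: reward; q_next: (A,) target Q at s'; pi_beta_next: (A,)."""
    allowed = pi_beta_next >= tau * pi_beta_next.max()
    best = np.max(np.where(allowed, q_next, -np.inf))
    return r + gamma * (1.0 - done) * best
\end{verbatim}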
\begin{figure} \centering \includegraphics[width=0.5\linewidth]{appendix_figures/bcq_vs_cql.pdf} \caption{\footnotesize{{\textbf{Performance of scaling CQL and BCQ in terms of IQM human-normalized score.} We perform this comparison on the six-game setting for 100 epochs (note that these results are after 2x longer training than other ablations in Table~\ref{tab:ablation_dr3}). Observe that for discrete-BCQ the performance improves from ResNet 34 to ResNet 50, indicating that it does scale favorably as network capacity increases.}}} \label{tab:ablation_bcq} \vspace{-0.3cm} \end{figure} {We ran these scaling experiments with ResNet 34 and ResNet 50 in the six-game setting and report human-normalized IQM performance after 100 epochs = 6.25M gradient steps in Figure~\ref{tab:ablation_bcq}. We also present the results for CQL alongside for comparison. Observe that we find favorable scaling trends for BCQ: average performance over all games increases as the network size increases, indicating that other offline RL algorithms such as BCQ can scale as we increase network capacity.} \vspace{-0.2cm} \subsection{{{Ablation for Backbone Architecture}}} \label{app:backbone_ablation} \vspace{-0.2cm} {In this section, we present results ablating the choice of the backbone architecture. For this ablation, we ablate the choice of the spatial embedding while keeping group normalization fixed in both cases. We perform this study in the 40-game setting. Observe that using the learned spatial embeddings results in better performance, improving in 27 out of 40 games compared to not using the learned embeddings.} \begin{table}[h] \centering \centering \vspace{-0.4cm} \caption{\footnotesize{{\textbf{Ablations for the backbone architecture in the 40-game setting} with ResNet 101. Observe that learned spatial embeddings lead to around 80\% improvement in performance.}}} \label{tab:ablation_backbone_40_game} \vspace{0.25cm} \begin{tabular}{l@{}cc@{}} \toprule & \textbf{Scaled QL without backbone} & \textbf{Scaled QL w/ backbone} \\ \midrule \textbf{Median human-normalized score} & 54.9\% & \textbf{98.9}\% \\ \textbf{IQM human-normalized score} & 68.9\% & \textbf{114.1}\% \\ \midrule \textbf{Num. games with better performance} & 13 / 40 & \textbf{27 / 40} \\ \bottomrule \vspace{-0.15in} \end{tabular} \end{table} {Regarding the choice of group normalization vs batch normalization, note that we have been operating in a setting where the batch size per device / core is only 4. In particular, we use Cloud TPU v3 accelerators with 64 / 128 cores, and batch sizes larger than 4 do not fit in memory, especially for larger-capacity ResNets. This means that if we utilized batch normalization, we would be computing batch statistics over only 4 elements, which is known to be unstable even for standard computer vision tasks; see, for example, Figure 1 in \citet{wu2018group}.} \vspace{-0.2cm} \subsection{{{Results for Scaled QL Without Pessimism}}} \label{app:no_pessimism} \vspace{-0.2cm} {In Table~\ref{tab:ablation_no_pessimism}, we present the results of running scaled Q-learning with no conservatism, i.e., by setting the value of $\alpha$ in Equation~\ref{eqn:cql_training} to 0.0, in the six-game setting. We utilize the entire DQN-replay dataset~\citep{agarwal2019optimistic} for each of these six games that would be present in the full 40-game dataset, to preserve the per-game dataset diversity.} {Observe that while the agent does still learn without conservatism, the performance of scaled QL without conservatism is notably worse than standard scaled QL.
Interestingly, on \textsc{Asterix}, the performance without pessimism is better than with pessimism, whereas the use of pessimism in \textsc{SpaceInvaders} and \textsc{Seaquest} leads to at least a 2x improvement in performance.} \begin{table}[h] \centering \centering \vspace{-0.4cm} \caption{\footnotesize{{\textbf{Performance of scaled QL with and without conservatism in terms of IQM human-normalized score} in the six-game setting for 100 epochs (2x longer training compared to other ablations in Table~\ref{tab:ablation_dr3}) performed with a ResNet 50. Observe that utilizing conservatism via CQL is beneficial. We also present per-game raw scores in this table. Observe that while in one game no pessimism with such data can outperform CQL, we do find that, overall, conservatism performs better.}}} \label{tab:ablation_no_pessimism} \vspace{0.35cm} \begin{tabular}{l@{}cc@{}} \toprule & \textbf{Scaled QL without CQL} & \textbf{Scaled QL w/ CQL} \\ \midrule \textsc{Asterix} & 38000 & 35200 \\ \textsc{Breakout} & 322 & 410 \\ \textsc{Pong} & 12.6 & 19.8 \\ \textsc{Qbert} & 13800 & 15500 \\ \textsc{Seaquest} & 1378 & 3694 \\ \textsc{SpaceInvaders} & 1675 & 3819 \\ \midrule \textbf{IQM human-normalized score} & 188.3\% & \textbf{223.4\%} \\ \bottomrule \vspace{-0.15in} \end{tabular} \end{table} {We also present some results without pessimism in the complete 40-game setting in Table~\ref{tab:ablation_no_pessimism_40_game}. Unlike the smaller six-game setting, we find a much larger difference between no pessimism (without CQL) and utilizing pessimism via CQL. In particular, we find that in 6 games, not using pessimism leads to slightly better performance, but this strategy hurts in all other games, giving rise to an agent that performs worse than random in many of these 34 games. This indicates that pessimism is especially desirable as the diversity of tasks increases.} \begin{table}[h] \centering \centering \vspace{-0.3cm} \caption{\footnotesize{{\textbf{Scaled QL with and without conservatism in terms of IQM human-normalized score in the 40-game setting} with ResNet 101. Observe that utilizing conservatism via CQL is still beneficial.}}} \label{tab:ablation_no_pessimism_40_game} \vspace{0.25cm} \resizebox{0.85\linewidth}{!}{\begin{tabular}{lcc} \toprule & \textbf{Scaled QL without CQL} & \textbf{Scaled QL w/ CQL} \\ \midrule \textbf{Median human-normalized score} & 11.1\% & 98.9\% \\ \textbf{IQM human-normalized score} & 13.5\% & 114.1\% \\ \midrule \textbf{Num. games with better performance} & 6 / 40 & \textbf{34 / 40} \\ \bottomrule \vspace{-0.2in} \end{tabular}} \end{table} \section{Implementation Details and Hyper-parameters} In this section, we describe the implementation details of our approach, including the network architectures, feature normalization, and our training and evaluation protocols. \subsection{Network Architecture} \label{sec:arch} In our primary experiments, we consider variants of ResNet architectures for scaled Q-learning.
The vision backbone in these architectures mimics the corresponding ResNet architectures from \citet{resnet}; however, we utilize group normalization~\citep{wu2018group} (with a group size of 4) instead of batch normalization, and instead of applying global mean pooling to aggregate the outputs of the ResNet, we utilize learned spatial embeddings~\citep{anonymous2021ptr}, which learn a matrix that point-wise multiplies the output feature map of the ResNet. The output volume is then flattened to be passed as input to the feed-forward part of the network. The feed-forward part of the network begins with a layer of size 2048, followed by layer normalization. After this, we apply 3 feed-forward layers with hidden dimension 1024 and ReLU activations to obtain the representation of the image observation. Then, we apply feature normalization to the representation, via a normalization layer which divides the representation of a given observation by its $\ell_2$ norm. Note that we do pass gradients through this normalization term. This representation is then passed into separate heads that predict the Q-values. The total number of heads is equal to the number of games we train on. Each head consists of a linear layer that maps the 1024-dimensional normalized representation to a vector of $K$ elements, where $K = |\mathcal{A}|$ (i.e., the size of the action space) for the standard real-valued parameterization of Q-values, and $K = |\mathcal{A}| \times 51$ for C51. The network does not apply any output activation in either case, and the Q-values are treated as logits for C51. \subsection{Details of C51} \label{sec:c51_details} For the main results in the paper, we utilize C51. The main hyperparameter in C51 is the support set of the Q-values. Unlike \citet{bellemare2017distributional}, which utilizes a support set of $[-10, 10]$, we utilize a support set of $[-20, 20]$ to allow for the flexibility of CQL: applying the CQL regularizer can underestimate or overestimate Q-values, and this additional flexibility aids such scenarios. We still utilize only 51 atoms in our support set, and the average dataset Q-value in our training runs is generally much smaller, around $\sim 8$--$9$. \subsection{Training and Evaluation Protocols and Hyperparameters} We utilize the initial 20\% (sub-optimal) and 100\% (near-optimal) datasets from \citet{agarwal2019optimistic} for our experiments. These datasets are generated from runs of standard online DQN on stochastic-dynamics Atari environments that utilize sticky actions, \emph{i.e.},\ \ there is a 25\% chance at every time step that the environment will execute the agent's previous action again, instead of the new action commanded. The majority of the training details are identical to a typical run of offline RL on single-game Atari. We discuss the key differences below. We trained our ResNet 101 network for $10M$ gradient steps with a batch size of 512. The agent has not fully converged by this point, and performance is still improving gradually. When training on multiple games, we utilize a stratified batch sampling scheme with a total batch size of $512$. To obtain the batch at any given training iteration, we first sample 128 game indices from the set of all games (40 games in our experiments) with replacement, and then sample $4$ transitions from each game.
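The following sketch illustrates this stratified sampling scheme; the \texttt{buffers} API is hypothetical and stands in for per-game replay buffers, so this is not the code used in our experiments.
\begin{verbatim}
# Illustrative sketch of the stratified batch sampling described above:
# draw 128 game indices with replacement, then 4 transitions per drawn
# game, for a total batch of 512. `buffers[g].sample(k)` is a
# hypothetical per-game replay-buffer API.
import numpy as np

def sample_stratified_batch(buffers, rng, games_per_batch=128,
                            transitions_per_game=4):
    game_ids = rng.integers(len(buffers), size=games_per_batch)
    return [transition
            for g in game_ids
            for transition in buffers[g].sample(transitions_per_game)]
\end{verbatim}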
This scheme does not necessarily produce an equal number of transitions from each game in a training batch, but it does make sure that all games are seen in expectation throughout training. Since we utilize a batch size 16 times larger than the standard batch size of 32 on Atari, we scale up the learning rate from $5e-05$ to $0.0002$, but keep the target network update period fixed to the same value of 1 target update per $2000$ gradient steps as with single-task Atari. We also utilize $n$-step returns with $n=3$ by default, for both our MSE and C51 runs. \begin{table*}[t] \small \caption{\textbf{Hyperparameters used by multi-game training.} Here we report the key hyperparameters used by the multi-game training. The differences from the standard single-game training setup are highlighted in red.} \vspace{0.05cm} \centering \begin{tabular}{lrr} \toprule Hyperparameter & \multicolumn{2}{r}{Setting (for both variations)} \\ \midrule Eval Sticky actions && \textcolor{red}{No} \\ Grey-scaling && True \\ Observation down-sampling && (84, 84) \\ Frames stacked && 4 \\ Frame skip~(Action repetitions) && 4 \\ Reward clipping && [-1, 1] \\ Terminal condition && Game Over \\ Max frames per episode && 108K \\ Discount factor && 0.99 \\ Mini-batch size && \textcolor{red}{512} \\ Target network update period & \multicolumn{2}{r}{every 2000 updates} \\ Training environment steps per iteration && 62.5k \\ Update period every && 1 environment step \\ Evaluation $\epsilon$ && 0.001 \\ Evaluation steps per iteration && 125K \\ Learning rate && \textcolor{red}{0.0002} \\ n-step returns ($n$) && 3 \\ CQL regularizer weight $\alpha$ && 0.1 for MSE, 0.05 for C51\\ \bottomrule \end{tabular} \label{table:hyperparams_atari} \end{table*} \textbf{Evaluation Protocol.} Even though we train on Atari datasets with sticky actions, we evaluate on Atari environments that do not enable sticky actions, following the protocol from \citet{lee2022multi}. This allows us to be comparable to this prior work in all of our comparisons, without needing to re-train their model, which would have been too computationally expensive. Following standard protocols on Atari, we evaluate a noised version of the policy with an epsilon-greedy scheme, with $\varepsilon_\text{eval} = 0.001$. Following the protocol in \citet{castro2018dopamine}, we compute the average return over 125K evaluation steps. \subsection{Fine-Tuning Protocol} \label{sec:finetuning} \textbf{For offline fine-tuning}, we fine-tuned the parameters of the pre-trained policy on the new domain using a batch size of 32 and hyperparameters otherwise identical to those used during pre-training. We utilized $\alpha=0.05$ for fine-tuning, but with the default learning rate of $5e-05$ (since the batch size was the default 32). We attempted to use other CQL $\alpha$ values $\{0.07, 0.02, 0.1\}$ for fine-tuning but found that retaining the value of $\alpha = 0.05$ from pre-training worked the best. We report the performance of the algorithm at the end of 300k gradient steps. \textbf{For online fine-tuning}, we use the C51 algorithm~\citep{bellemare2017distributional}, with $n$-step$=3$ and all other hyperparameters from the C51 implementation in the Dopamine library~\citep{castro2018dopamine}. We swept over two learning rates, $\{1e-05, 5e-05\}$, for all methods and picked the best learning rate per game.
For the MAE implementation, we used the Scenic library~\citep{dehghani2021scenic} with the typical configuration used for ImageNet pretraining, except using $84\times84\times4$ sized Atari observations instead of images of size $224 \times 224 \times 3$. We train the MAE for 2 epochs on the entire multi-task offline Atari dataset and observe that the reconstruction loss plateaus at a low value. \subsection{{Details of Multi-Task Impala DQN}} \label{sec:online_mt_dqn} {The ``MT Impala DQN'' comparison in Figures~\ref{fig:suboptimal_offline} \& \ref{fig:main_results} is a multi-task implementation of online DQN, evaluated at 5x as many gradient steps as the size of the sub-optimal dataset. This comparison is taken directly from \citet{lee2022multi}. Briefly, this baseline runs C51 in conjunction with n-step returns with $n=4$, with an IMPALA architecture that uses three blocks with 64, 128, and 128 channels. This baseline was trained with a batch size of 128 and an update period of 256.} \section{Raw Training Scores for different Models} \begin{table*}[h] \caption{Raw scores on 40 training Atari games in the sub-optimal multi-task Atari dataset (51\% human-normalized IQM). Scaled QL uses the ResNet-101 architecture.}\label{tab:sub_opt} \vspace{0.2cm} \centering \resizebox{0.99\textwidth}{!}{\begin{tabular}{lrrrrrr} \toprule Game & DT (200M) & DT (40M) & Scaled QL (80M) & BC (80M) & MT Impala-DQN* & Human \\ \midrule Amidar & 72.9 & 82.2 & 33.1 & 14.5 & 629.8 & 1719.5 \\ Assault & 392.9 & 124.7 & 1380.8 & 1060.0 & 1338.7 & 742.0 \\ Asterix & 1518.8 & 2256.2 & 9967.3 & 745.3 & 2949.1 & 8503.3 \\ Atlantis & 10525.0 & 13125.0 & 485200.0 & 2494.1 & 976030.4 & 29028.1 \\ BankHeist & 13.1 & 15.6 & 18.6 & 87.6 & 1069.6 & 753.1 \\ BattleZone & 3750.0 & 7687.5 & 8500.0 & 1550.0 & 26235.2 & 37187.5 \\ BeamRider & 1535.8 & 1397.5 & 5856.5 & 327.2 & 1524.8 & 16926.5 \\ Boxing & 71.4 & 74.2 & 95.2 & 95.4 & 68.3 & 12.1 \\ Breakout & 38.8 & 38.2 & 351.1 & 274.7 & 32.6 & 30.5 \\ Carnival & 993.8 & 791.2 & 199.3 & 792.7 & 2021.2 & 3800.0 \\ Centipede & 2645.4 & 3026.9 & 2711.4 & 2260.8 & 4848.0 & 12017.0 \\ ChopperCommand & 1006.2 & 1093.8 & 752.2 & 336.7 & 951.4 & 7387.8 \\ CrazyClimber & 85487.5 & 86050.0 & 122933.3 & 121394.4 & 146362.5 & 35829.4 \\ DemonAttack & 2269.7 & 1049.4 & 14229.8 & 765.8 & 446.8 & 1971.0 \\ DoubleDunk & -14.5 & -20.2 & -12.4 & -13.6 & -156.2 & -16.4 \\ Enduro & 336.5 & 266.2 & 2297.6 & 638.7 & 896.3 & 860.5 \\ FishingDerby & 15.9 & 16.8 & 13.7 & -88.1 & -152.3 & -38.7 \\ Freeway & 16.2 & 20.5 & 24.4 & 0.1 & 30.6 & 29.6 \\ Frostbite & 1014.4 & 776.2 & 2324.5 & 234.8 & 2748.4 & 4334.7 \\ Gopher & 1137.5 & 1251.2 & 1041.0 & 231.5 & 3205.6 & 2412.5 \\ Gravitar & 237.5 & 193.8 & 260.3 & 248.8 & 492.5 & 3351.4 \\ Hero & 6741.2 & 6295.3 & 4011.9 & 7485.8 & 26568.8 & 30826.4 \\ IceHockey & -8.8 & -11.1 & -3.7 & -10.8 & -10.4 & 0.9 \\ Jamesbond & 378.1 & 312.5 & 58.7 & 7.1 & 264.6 & 302.8 \\ Kangaroo & 1975.0 & 2687.5 & 5796.6 & 307.1 & 7997.1 & 3035.0 \\ Krull & 6913.8 & 4377.5 & 9333.7 & 9585.3 & 8221.4 & 2665.5 \\ KungFuMaster & 17575.0 & 14743.8 & 24320.0 & 15778.6 & 29383.1 & 22736.3 \\ NameThisGame & 4396.9 & 4502.5 & 6759.6 & 2756.8 & 6548.8 & 8049.0 \\ Phoenix & 3560.0 & 2813.8 & 12770.0 & 762.9 & 3932.5 & 7242.6 \\ Pooyan & 1053.8 & 1394.7 & 1264.5 & 718.7 & 4000.0 & 4000.0 \\ Qbert & 8371.9 & 5917.2 & 14877.9 & 5759.6 & 4226.5 & 13455.0 \\ Riverraid & 6191.9 & 4265.6 & 9602.7 & 6657.2 & 7306.6 & 17118.0 \\ Robotank & 14.9 & 12.8 & 17.4 & 5.7 & 9.2 & 11.9 \\
Seaquest & 781.9 & 512.5 & 1021.8 & 113.9 & 1415.2 & 42054.7 \\ TimePilot & 2512.5 & 2700.0 & 767.3 & 3841.1 & -883.1 & 5229.2 \\ UpNDown & 5288.8 & 5456.2 & 35541.3 & 8395.2 & 8167.6 & 11693.2 \\ VideoPinball & 1277.4 & 1953.1 & 40.0 & 2650.3 & 85351.0 & 17667.9 \\ WizardOfWor & 237.5 & 881.2 & 107.0 & 495.3 & 975.9 & 4756.5 \\ YarsRevenge & 11867.4 & 10436.8 & 11482.4 & 17755.5 & 18889.5 & 54576.9 \\ Zaxxon & 287.5 & 337.5 & 1.4 & 0.0 & -0.1 & 9173.3 \\ \bottomrule \end{tabular}} \end{table*} \begin{table*}[t] \caption{Raw scores on 40 training Atari games in the near-optimal multi-task Atari dataset. Scaled QL uses the ResNet 101 architecture.} \vspace{0.2cm} \centering \resizebox{0.99\textwidth}{!}{\begin{tabular}{l@{}rrrrrr} \toprule Game & DT (200 M) & DT (40M) & BC (200M) & MT Impala-DQN* & Scaled QL (80M) & Human \\ \midrule Amidar & 101.5 & 1703.8 & 101.0 & 629.8 & 21.0 & 1719.5 \\ Assault & 2385.9 & 1772.2 & 1872.1 & 1338.7 & 3809.6 & 742.0 \\ Asterix & 14706.3 & 4575.0 & 5162.5 & 2949.1 & 34278.9 & 8503.3 \\ Atlantis & 3105342.3 & 304931.2 & 4237.5 & 976030.4 & 881980.0 & 29028.1 \\ BankHeist & 5.0 & 40.0 & 63.1 & 1069.6 & 33.9 & 753.1 \\ BattleZone & 17687.5 & 17250.0 & 9250.0 & 26235.2 & 8812.5 & 37187.5 \\ BeamRider & 8560.5 & 3225.5 & 4948.4 & 1524.8 & 10301.0 & 16926.5 \\ Boxing & 95.1 & 92.1 & 90.9 & 68.3 & 99.5 & 12.1 \\ Breakout & 290.6 & 160.0 & 185.6 & 32.6 & 415.0 & 30.5 \\ Carnival & 2213.8 & 3786.9 & 2986.9 & 2021.2 & 926.1 & 3800.0 \\ Centipede & 2463.0 & 2867.5 & 2262.8 & 4848.0 & 3168.2 & 12017.0 \\ ChopperCommand & 4268.8 & 3337.5 & 1800.0 & 951.4 & 832.2 & 7387.8 \\ CrazyClimber & 126018.8 & 113425.0 & 123350.0 & 146362.5 & 140500.0 & 35829.4 \\ DemonAttack & 23768.4 & 3629.4 & 7870.6 & 446.8 & 56318.3 & 1971.0 \\ DoubleDunk & -10.6 & -12.5 & -1.5 & -156.2 & -13.1 & -16.4 \\ Enduro & 1092.6 & 770.8 & 793.2 & 896.3 & 2345.8 & 860.5 \\ FishingDerby & 11.8 & 19.2 & 5.6 & -152.3 & 23.8 & -38.7 \\ Freeway & 30.4 & 32.8 & 29.8 & 30.6 & 31.9 & 29.6 \\ Frostbite & 2435.6 & 934.4 & 782.5 & 2748.4 & 3566.4 & 4334.7 \\ Gopher & 9935.0 & 3827.5 & 3496.2 & 3205.6 & 3776.9 & 2412.5 \\ Gravitar & 59.4 & 75.0 & 12.5 & 492.5 & 262.3 & 3351.4 \\ Hero & 20408.8 & 19667.2 & 13850.0 & 26568.8 & 20470.6 & 30826.4 \\ IceHockey & -10.1 & -5.2 & -8.3 & -10.4 & -1.5 & 0.9 \\ Jamesbond & 700.0 & 712.5 & 431.2 & 264.6 & 483.6 & 302.8 \\ Kangaroo & 12700.0 & 11581.2 & 12143.8 & 7997.1 & 2738.6 & 3035.0 \\ Krull & 8685.6 & 8295.6 & 8058.8 & 8221.4 & 10176.9 & 2665.5 \\ KungFuMaster & 15562.5 & 16387.5 & 4362.5 & 29383.1 & 25808.3 & 22736.3 \\ NameThisGame & 9056.9 & 7777.5 & 7241.9 & 6548.8 & 11647.0 & 8049.0 \\ Phoenix & 5295.6 & 4744.4 & 4326.9 & 3932.5 & 5264.0 & 7242.6 \\ Pooyan & 2859.1 & 1191.9 & 1677.2 & 4000.0 & 2020.1 & 4000.0 \\ Qbert & 13734.4 & 12534.4 & 11276.6 & 4226.5 & 15946.0 & 13455.0 \\ Riverraid & 14755.6 & 11330.6 & 9816.2 & 7306.6 & 18494.8 & 17118.0 \\ Robotank & 63.2 & 50.9 & 44.6 & 9.2 & 53.2 & 11.9 \\ Seaquest & 5173.8 & 3112.5 & 1175.6 & 1415.2 & 414.1 & 42054.7 \\ TimePilot & 2743.8 & 3487.5 & 1312.5 & -883.1 & 4220.5 & 5229.2 \\ UpNDown & 16291.3 & 9306.9 & 10454.4 & 8167.6 & 55512.9 & 11693.2 \\ VideoPinball & 1007.7 & 9671.4 & 1140.8 & 85351.0 & 285.7 & 17667.9 \\ WizardOfWor & 187.5 & 687.5 & 443.8 & 975.9 & 301.6 & 4756.5 \\ YarsRevenge & 28897.9 & 25306.3 & 20738.9 & 18889.5 & 24393.9 & 54576.9 \\ Zaxxon & 275.0 & 4637.5 & 50.0 & -0.1 & 2.1 & 9173.3 \\ \bottomrule \end{tabular}} \end{table*} \section{Discussion} \vspace{-0.2cm} This work shows, for the 
first time (to the best of our knowledge), that offline Q-learning can scale to high-capacity models trained on large, diverse datasets. As we hoped, by scaling up model capacity, we unlocked trends analogous to those observed in vision and NLP. We found that scaled Q-learning trains policies that exceed the average dataset performance and prior methods, especially when the dataset does not already contain expert trajectories. Furthermore, by training a large-capacity model on a diverse set of tasks, we show that Q-learning alone is sufficient to recover general-purpose representations that enable rapid learning of novel tasks. Although we detailed an approach that is sufficient to scale Q-learning, it is by no means optimal. The scale of the experiments limited the number of alternatives we could explore, and we expect that future work will greatly improve performance. Given the strong performance of transformers, we suspect that offline Q-learning with a transformer architecture is a promising future direction. For example, contrary to DT~\citep{lee2022multi}, we did not use data augmentation in our experiments, which we believe can provide significant benefits. While we made a preliminary attempt to perform online fine-tuning on an entirely new game~($\textsc{SpaceInvaders}$), we found that this did not work well for any of the pretrained representations~(see Figure~\ref{fig:lr_curves_online_ft}). Addressing this is an important direction for future work. We speculate that this challenge is related to designing methods for learning better exploration from offline data, which is not required for offline fine-tuning. Another important avenue for future work is to scale offline Q-learning to other RL domains such as robotic navigation, manipulation, locomotion, education, etc. This would require building large-scale tasks, and we believe that scaled QL would provide a good starting point for scaling in these domains. Finally, in line with \citet{agarwal2022beyond}, we will release our pretrained models, which we hope will enable subsequent methods to build upon them. \section*{Author Contributions} \vspace{-0.2cm} AK conceived and led the project, developed scaled QL, and decided and ran most of the experiments. RA discussed the experiment design and project direction, helped set up and debug the training pipeline, and took the lead on setting up and running the MAE baseline and the online fine-tuning experiments. XG helped with design choices for some experiments. GT advised the project and ran baseline DT experiments. SL advised the project and provided valuable suggestions. AK, RA, GT, and SL all contributed to writing and editing the paper. \vspace{-0.3cm} \section*{Acknowledgements} \vspace{-0.3cm} We thank several members of the Google Brain team for their help, support and feedback on this paper. We thank Dale Schuurmans, Dibya Ghosh, Ross Goroshin, Marc Bellemare and Aleksandra Faust for informative discussions. We thank Sherry Yang, Ofir Nachum, and Kuang-Huei Lee for help with the multi-game decision transformer codebase; Anurag Arnab for help with the Scenic ViT codebase. We thank Zoubin Ghahramani and Douglas Eck for leadership support.
\section{Experimental Evaluation} \label{sec:exps} \vspace{-0.2cm} \begin{figure}[t] \centering \begin{minipage}{0.65\linewidth} \includegraphics[width=\linewidth]{figures/percent_improvement_over_DT.pdf} \end{minipage} \hfill \begin{minipage}{0.34\linewidth} \vspace{-0.2cm} \includegraphics[width=\linewidth]{figures/pp_profile_ql_dt.pdf} \end{minipage} \vspace{-0.4cm} \caption{\textbf{Comparing Scaled QL to DT} on all training games on the sub-optimal dataset.} \label{fig:percent_improvement} \vspace{-0.5cm} \end{figure} In our experiments, we study how our approach, scaled Q-learning, can simultaneously learn from sub-optimal and optimal data collected from 40 different Atari games. We compare the resulting multi-task policies to behavior cloning~(BC) with the same architecture as scaled QL, and to the prior state-of-the-art method based on decision transformers (DT)~\citep{chen2021decision}, which utilizes return-conditioned supervised learning with large transformers~\citep{lee2022multi} and has been previously proposed for addressing this task. We also study the efficacy of the multi-task initialization produced by scaled Q-learning in facilitating rapid transfer to new games via both offline and online fine-tuning, in comparison to state-of-the-art self-supervised representation learning methods and other prior approaches. Our goal is to answer the following questions: \textbf{(1)} How do our proposed design decisions impact performance scaling with high-capacity models? \textbf{(2)} Can scaled QL more effectively leverage higher model capacity compared to na\"ive instantiations of Q-learning? \textbf{(3)} Do the representations learned by scaled QL transfer to new games? We will answer these questions in detail through multiple experiments in the coming sections, but we first summarize our main results below. \textbf{Main empirical findings.} Our main results are summarized in Figures~\ref{fig:suboptimal_offline} and \ref{fig:main_results}. These figures show the performance of scaled QL, multi-game decision transformers~\citep{lee2022multi} (marked as ``DT''), a prior method based on supervised learning via return conditioning, and standard behavioral cloning baselines (marked as ``BC'') in the two settings discussed previously, where we must learn from: (i) near-optimal data, and (ii) sub-optimal data obtained from the initial 20\% segment of the replay buffer~(see Section~\ref{sec:prelims} for the problem setup). See Figure~\ref{fig:percent_improvement} for a direct comparison between scaled QL and DT. \begin{wrapfigure}{r}{0.5\linewidth} \vspace{-0.5cm} \centering \includegraphics[width=0.95\linewidth]{combnined_data_results_iqm.pdf} \vspace{-0.25cm} \caption{\footnotesize{\textbf{Offline scaled conservative Q-learning vs other prior methods} with near-optimal data and sub-optimal data. Scaled QL outperforms the best DT model, attaining an IQM human-normalized score of \textbf{114.1\%} on the near-optimal data and \textbf{77.8\%} on the sub-optimal data, compared to 111.8\% and 30.6\% for DT, respectively.}} \label{fig:main_results} \vspace{-0.5cm} \end{wrapfigure} In the more challenging sub-optimal data setting, scaled QL attains a performance of \textbf{77.8\%} IQM human-normalized score, although trajectories in the sub-optimal training dataset only attain 51\% IQM human-normalized score.
Scaled QL also outperforms the prior DT approach by \textbf{2.5 times} on this dataset, even though the DT model has more than twice as many parameters as scaled QL and uses data augmentation. In the second setting with near-optimal data, where the training dataset already contains expert trajectories, scaled QL with 80M parameters still outperforms the DT approach with 200M parameters, although the gap in performance is small (3\% in IQM performance, and 20\% in median performance). Overall, these results show that scaled QL is an effective approach for learning from large multi-task datasets, for a variety of data compositions, including sub-optimal datasets, where we must stitch useful segments of suboptimal trajectories to perform well, and near-optimal datasets, where we should attempt to mimic the best behavior in the offline dataset. To the best of our knowledge, these results represent the largest performance improvement over the average performance in the offline dataset on such a challenging problem. We will now present experiments that show that offline Q-learning scales and generalizes. \begin{figure}[h] \centering \vspace{-0.2cm} \includegraphics[width=0.85\linewidth]{figures/scaling_plot_params_with_dt.pdf} \vspace{-0.2cm} \caption{\footnotesize{\textbf{Scaling trends for offline Q-learning.} Observe that while the performance of scaled QL instantiated with IMPALA architectures~\citep{espeholt2018impala} degrades as we increase model size, the performance of scaled QL utilizing the ResNets described in Section~\ref{sec:method} continues to increase as model capacity increases. This is true for both an MSE-style TD error as well as for the categorical TD error used by C51 (which performs better on an absolute scale). The CQL + IMPALA performance numbers are from~\citep{lee2022multi}.} } \label{fig:scaling} \vspace{-0.2cm} \end{figure} \vspace{-0.05cm} \subsection{Does Offline Q-Learning Scale Favorably?} \vspace{-0.15cm} One of the primary goals of this paper was to understand whether scaled Q-learning is able to leverage the benefit of higher-capacity architectures. Recently, \citet{lee2022multi} found that the performance of CQL with the IMPALA architecture does not improve with larger model sizes and may even degrade with larger model sizes. To verify whether scaled Q-learning can address this limitation, we compare our value-based offline RL approach across a variety of model families: \textbf{(a)} IMPALA family~\citep{espeholt2018impala}: three IMPALA models with varying widths ($4, 8, 16$) whose performance numbers are taken directly from \citet{lee2022multi} (and were consistent with our preliminary experiments), \textbf{(b)} ResNet 34, 50, 101 and 152 from the ResNet family, modified to include group normalization and learned spatial embeddings. These architectures include both small and large networks, spanning a wide range from 1M to 100M parameters. As a point of reference, we use the scaling trends of the multi-game decision transformer and BC transformer approaches from \citet{lee2022multi}. Observe in Figure~\ref{fig:scaling} that the performance of scaled Q-learning improves as the underlying Q-function model size grows. Even though the standard mean-squared error formulation of TD error results in worse absolute performance than C51 (blue vs orange), for both of these versions, the performance of scaled Q-learning increases as the models become larger.
This result indicates that value-based offline RL methods can scale favorably and give rise to better results, but doing so requires carefully picking the model family. This also explains the findings of \citet{lee2022multi}: while this prior work observed that CQL with IMPALA scaled poorly as model size increases, it also observed that the performance of return-conditioned RL instantiated with IMPALA architectures degraded with larger model sizes. Combined with the results in Figure~\ref{fig:scaling} above, this suggests that the poor scaling properties of offline RL can largely be attributed to the choice of IMPALA architectures, which may not work well in general even for supervised learning methods (like return-conditioned BC). \vspace{-0.2cm} \subsection{Can Offline RL Learn Useful Initializations that Enable Fine-Tuning?} \label{sec:ft_off_on} \vspace{-0.2cm} Next, we study how multi-task training on multiple games via scaled QL can learn general-purpose representations that enable \emph{rapid} fine-tuning to new games. We study this question in two scenarios: fine-tuning to a new game via offline RL with a small amount of held-out data (1\% uniformly subsampled datasets from DQN-Replay~\citep{agarwal2019optimistic}), and fine-tuning to a new game mode via sample-efficient online RL initialized from our multi-game offline Q-function. For fine-tuning, we transfer the weights from the visual encoder and reinitialize the downstream feed-forward component (Figure~\ref{fig:overview}). For both of these scenarios, we utilize a ResNet 101 Q-function trained via the methodology in Section~\ref{sec:method}, using C51 and feature normalization. \begin{figure}[t] \centering \includegraphics[width=0.99\linewidth]{figures/offline_ft.pdf} \vspace{-0.2cm} \caption{\footnotesize{\textbf{Offline fine-tuning} performance on unseen games trained with 1\% of held-out game's data, measured in terms of DQN-normalized score, following \citep{lee2022multi}. On average, pre-training with scaled QL outperforms other methods by \textbf{82\%}. Furthermore, scaled QL improves over scaled QL (scratch) by 45\%, indicating that the representations learned by scaled QL during multi-game pre-training are useful for transfer. Self-supervised representation learning (CPC, MAE) alone does not attain good fine-tuning performance.}} \label{fig:offline_ft} \vspace{-0.3cm} \end{figure} \textbf{Scenario 1 (Offline fine-tuning)}: First, we present the results for fine-tuning in an offline setting: following the protocol from \citet{lee2022multi}, we use the pre-trained representations to rapidly learn a policy for a novel game using limited offline data (1\% of the experience of an online DQN run). In Figure~\ref{fig:offline_ft}, we present our results for offline fine-tuning on 5 games from \citet{lee2022multi}, \textsc{Alien, MsPacman, Space Invaders, StarGunner} and \textsc{Pong}, alongside the prior approach based on decision transformers (``DT (pre-trained)''), and fine-tuning using pre-trained representations learned from state-of-the-art self-supervised representation learning methods such as contrastive predictive coding (CPC)~\citep{oord2018representation} and masked autoencoders (MAE)~\citep{he2111masked}. For CPC performance, we use the baseline reported in \citet{lee2022multi}. MAE is a more recent self-supervised approach that we find generally outperformed CPC in this comparison.
For MAE, we first pretrained a vision transformer~(ViT-Base)~\citep{dosovitskiy2020image} encoder with 80M parameters via a reconstruction loss on observations from the multi-game Atari dataset and froze the encoder weights, as done in prior work~\citep{xiao2022masked}. Then, with this frozen visual encoder, we used the same feed-forward architecture, Q-function parameterization, and training objective (CQL with C51) as scaled QL to finetune the MAE network. We also compare to baseline methods that do not utilize any multi-game pre-training (DT (scratch) and Scaled QL (scratch)). \textbf{Results.} Observe in Figure~\ref{fig:offline_ft} that multi-game pre-training via scaled QL leads to the best fine-tuning performance and improves over prior methods, including decision transformers trained from scratch. Importantly, we observe \emph{positive transfer} to new games via scaled QL. Prior works~\citep{badia2020agent57} running multi-game Atari (primarily in the online setting) have generally observed negative transfer across Atari games. We show for the first time that pre-trained representations from Q-learning enable positive transfer to novel games that significantly outperforms return-conditioned supervised learning methods and dedicated representation learning approaches. \textbf{Scenario 2 (Online fine-tuning)}: Next, we study the efficacy of the learned representations in enabling online fine-tuning. While deep RL agents on ALE are typically trained on default game modes~(referred to as $m0d0$), we utilize new \emph{variants} of the ALE games designed to be challenging for humans~\citep{machado18sticky,farebrother2018generalization} for online fine-tuning. We investigate whether multi-task training on the 40 default game variants can enable fast online adaptation to these never-before-seen variants. In contrast to offline fine-tuning (Scenario 1), this setting tests whether scaled QL can also provide a good initialization for online data collection and learning, for closely related but different tasks. Following \citet{farebrother2018generalization}, we use the same \emph{variants} investigated in this prior work: $\textsc{Breakout}$, $\textsc{Hero}$, and $\textsc{Freeway}$, which we visualize in Figure~\ref{fig:online_ft}~(left). To disentangle the performance gains from multi-game pre-training and the choice of Q-function architecture, we compare to a baseline approach (``scaled QL (scratch)'') that utilizes an identical Q-function architecture as pre-trained scaled QL, but starts from a random initialization. As before, we also evaluate fine-tuning performance using the representations obtained via masked auto-encoder pre-training~\citep{he2111masked,xiao2022masked}. We also compare to single-game DQN performance attained after training for 50M steps, $16\times$ more transitions than what is allowed for scaled QL, as reported by \citet{farebrother2018generalization}. \textbf{Results}. Observe in Figure~\ref{fig:online_ft} that fine-tuning from the multi-task initialization learned by scaled QL significantly outperforms training from scratch as well as the single-game DQN run trained with \textbf{16x} more data. Fine-tuning with the frozen representations learned by MAE performs poorly, which we hypothesize is due to differences in game dynamics and subtle changes in observations, which must be accurately accounted for in order to learn optimal behavior~\citep{dean2022don}.
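The weight-transfer protocol used in both scenarios (transfer the visual encoder, reinitialize the feed-forward head) can be sketched as follows. This is a minimal PyTorch illustration with a stand-in convolutional encoder, not our actual ResNet/C51 architecture; the \texttt{QNetwork} class and layer sizes are hypothetical:
\begin{verbatim}
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Schematic Q-network: visual encoder + feed-forward head.
    Stand-in for the actual ResNet/C51 architecture of Section 2."""
    def __init__(self, num_actions=18):
        super().__init__()
        self.encoder = nn.Sequential(           # 84x84x4 Atari frames
            nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten())
        self.head = nn.Sequential(              # reinitialized per game
            nn.Linear(64 * 9 * 9, 512), nn.ReLU(),
            nn.Linear(512, num_actions))

    def forward(self, obs):
        return self.head(self.encoder(obs))

pretrained = QNetwork()   # imagine loading the multi-game checkpoint here
finetuned = QNetwork()    # fresh network for the new game

# Transfer only the encoder; the head keeps its random initialization.
finetuned.encoder.load_state_dict(pretrained.encoder.state_dict())

# Optionally freeze the encoder (cf. Figure 1, frozen vs fine-tuned):
for p in finetuned.encoder.parameters():
    p.requires_grad = False
\end{verbatim}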
Our results confirm that offline Q-learning can both effectively benefit from higher-capacity models and learn multi-task initializations that enable sample-efficient transfer to new games. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/legend_online_ft.pdf} \vspace{-0.1cm} \includegraphics[width=0.39\linewidth]{figures/atari_modes_3games.pdf} \includegraphics[width=0.54\linewidth]{figures/online_ft_3_games.pdf} \vspace{-0.2cm} \caption{\footnotesize{\textbf{Online fine-tuning} results on unseen game \emph{variants}. \textbf{Left}. The top row shows default variants and the bottom row shows unseen variants evaluated for transfer: Freeway's mode 1 adds buses, more vehicles, and increases velocity; Hero's mode 1 starts the agent at level 5; Breakout's mode 12 hides all bricks unless the ball has recently collided with a brick. \textbf{Right}. We fine-tune all methods except single-game DQN for 3M online frames (as we wish to test fast online adaptation). Error bars show minimum and maximum scores across 2 runs while the bar shows their average. Observe that scaled QL significantly outperforms learning from scratch and single-game DQN with 50M online frames. Furthermore, scaled QL also outperforms RL fine-tuning on representations learned using masked auto-encoders. See Figure~\ref{fig:lr_curves_online_ft} for learning curves.}} \label{fig:online_ft} \vspace{-0.3cm} \end{figure} \vspace{-0.25cm} \subsection{Ablation Studies} \vspace{-0.25cm} Finally, in this section we perform controlled ablation studies to understand how crucial the design decisions introduced in Section~\ref{sec:method} are for the success of scaled Q-learning. In particular, we will attempt to understand the benefits of using C51 and feature normalization. \textbf{MSE vs C51:} We ran scaled Q-learning with identical network architectures (ResNet 50 and ResNet 101) using the conventional squared error formulation of TD error, and compare it to C51, which our main results utilize. Observe in Table~\ref{tab:ablation_mse} that C51 leads to much better performance for both ResNet 50 and ResNet 101 models. The boost in performance is the largest for ResNet 101, where C51 improves by over \textbf{39\%} as measured by median human-normalized score. This observation is surprising since prior work~\citep{agarwal2021deep} has shown that C51 performs on par with standard DQN with an Adam optimizer, which all of our results use. One hypothesis is that TD gradients under the MSE loss depend on the scale of the reward function, and hence some games would contribute more strongly to the gradient. This is despite the fact that our implementation of MSE TD-error already attempts to correct for this issue by applying the unitary scaling technique from \citep{kurin2022defense} to standardize reward scales across games. That said, we still observe that C51 performs significantly better. \begin{table}[t] \centering \vspace{-0.3cm} \caption{\footnotesize{\textbf{Performance of Scaled QL with the standard mean-squared TD-error and C51} in the offline 40-game setting aggregated by the median human-normalized score.
Observe that for both ResNet 50 and ResNet 101, utilizing C51 leads to a drastic improvement in performance.}} \label{tab:ablation_mse} \vspace{0.1cm} \resizebox{0.6\linewidth}{!}{\begin{tabular}{lcc} \toprule & \textbf{Scaled QL (ResNet 50)} & \textbf{Scaled QL (ResNet 101)} \\ \midrule \textbf{with MSE} & 41.1\% & 59.5\% \\ \midrule \textbf{with C51} & 53.5\% (+12.4\%) & 98.9\% (+39.4\%) \\ \bottomrule \vspace{-0.25in} \end{tabular}} \end{table} \textbf{Importance of feature normalization:} We ran small-scale experiments with and without feature normalization (Section~\ref{sec:method}). In these experiments, we consider a multi-game setting with only 6 games: \textsc{Asterix}, \textsc{Breakout}, \textsc{Pong}, \textsc{SpaceInvaders}, \textsc{Seaquest}, and we train with the initial 20\% data for each game. We report the aggregated median human-normalized score across the 6 games in Table~\ref{tab:ablation_dr3} for three different network architectures (ResNet 34, ResNet 50 and ResNet 101). Observe that the addition of feature normalization significantly improves performance for all the models. Motivated by this initial empirical finding, we used feature normalization in all of our main experiments. \textbf{To summarize}, these ablation studies validate the efficacy of the two key design decisions introduced in this paper. However, there are several avenues for future investigation: 1) it is unclear if C51 works better because of the distributional formulation or the categorical representation, and experiments with other distributional formulations could answer this question; 2) we did not extensively try alternate feature normalization schemes, which may improve results. \begin{table}[ht] \centering \vspace{-0.2cm} \caption{\footnotesize{\textbf{Performance of Scaled QL with and without feature normalization in the 6 game setting} reported in terms of the median human-normalized score. Observe that with models of all sizes, the addition of feature normalization improves performance.}} \label{tab:ablation_dr3} \vspace{0.1cm} \resizebox{\linewidth}{!}{\begin{tabular}{lccc} \toprule & \textbf{Scaled QL (ResNet 34)} & \textbf{Scaled QL (ResNet 50)} & \textbf{Scaled QL (ResNet 101)} \\ \midrule \textbf{without feature normalization} & 50.9\% & 73.9\% & 80.4\% \\ \midrule \textbf{with feature normalization} & 78.0\% (+28.9\%) & 83.5\% (+9.6\%) & 98.0\% (+17.6\%) \\ \bottomrule \vspace{-0.2in} \end{tabular} } \end{table} \textbf{Additional ablations:} We also conducted ablation studies for the choice of the backbone architecture (spatial learned embeddings) in Appendix~\ref{app:backbone_ablation}, and observed that utilizing spatial embeddings is better. We also evaluated the performance of scaled QL without conservatism to test the importance of utilizing pessimism in our setting with diverse data in Appendix~\ref{app:no_pessimism}, and observe that pessimism is crucial for attaining good performance on average. We also provide some scaling studies for another offline RL method (discrete BCQ) in Appendix~\ref{app:discrete_bcq}. \section{Introduction} \vspace{-0.2cm} High-capacity neural networks trained on large, diverse datasets have led to remarkable models that can solve numerous tasks, rapidly adapt to new tasks, and produce general-purpose representations in NLP and vision~\citep{brown2020language,he2111masked}.
The promise of offline RL is to leverage these advances to produce policies with broad generalization, emergent capabilities, and performance that exceeds the capabilities demonstrated in the training dataset. Thus far, the only offline RL approaches that demonstrate broadly generalizing policies and transferable representations are heavily based on supervised learning~\citep{reed2022generalist,lee2022multi}. However, these approaches are likely to perform poorly when the dataset does not contain expert trajectories~\citep{kumar2021should}. Offline Q-learning performs well across dataset compositions in a variety of simulated~\citep{gulcehre2020rl, fu2020d4rl} and real-world domains~\citep{chebotar2021actionable, soarespulserl}; however, these results are largely centered around small-scale, single-task problems where broad generalization and learning general-purpose representations is not expected. \emph{Scaling these methods up to high-capacity models on large, diverse datasets is the critical challenge.} Prior works hint at the difficulties: on small-scale, single-task deep RL benchmarks, scaling model capacity can lead to instabilities or degrade performance~\citep{van2018deep,sinha2020d2rl,ota2021training}, explaining why decade-old tiny 3-layer CNN architectures~\citep{mnih2013playing} are still prevalent. Moreover, works that have scaled architectures to millions of parameters~\citep{espeholt2018impala,teh2017distral, vinyals2019grandmaster,schrittwieser2021online} typically focus on \emph{online} learning and employ many sophisticated techniques to stabilize learning, such as supervised auxiliary losses, distillation, and pre-training. Thus, it is unclear whether offline Q-learning can be scaled to high-capacity models trained on a large, diverse dataset. In this paper, we demonstrate that with careful design decisions, \emph{offline Q-learning can scale} to high-capacity models trained on large, diverse datasets from many tasks, leading to policies that not only generalize broadly, but also learn representations that effectively transfer to new downstream tasks and exceed the performance in the training dataset. Crucially, we make three modifications motivated by prior work in deep learning and offline RL. First, we find that a modified ResNet architecture~\citep{resnet} substantially outperforms typical deep RL architectures and follows a power-law relationship between model capacity and performance, unlike common alternatives. Second, a discretized representation of the return distribution with a distributional cross-entropy loss~\citep{bellemare2017distributional} substantially improves performance compared to standard Q-learning, which utilizes the mean-squared error. Finally, feature normalization on the intermediate feature representations stabilizes training and prevents feature co-adaptation~\citep{kumar2021dr3}. To systematically evaluate the impact of these changes on scaling and generalization, we train a single policy to play 40 Atari games~\citep{bellemare2013arcade, agarwal2019optimistic}, similarly to \citet{lee2022multi}, and evaluate performance when the training dataset contains expert trajectories \emph{and} when the data is sub-optimal. This problem is especially challenging because of the diversity of games with their own unique dynamics, rewards, visuals, and agent embodiments. Furthermore, the sub-optimal data setting requires the learning algorithm to ``stitch together'' useful segments of sub-optimal trajectories to perform well.
To investigate generalization of learned representations, we evaluate offline fine-tuning to \emph{never-before-seen} games and fast online adaptation on new \emph{variants} of training games~(Section~\ref{sec:ft_off_on}). With our modifications, \begin{itemize} \item Offline Q-learning learns policies that attain more than 100\% human-level performance on most of these games, about \textbf{2x} better than prior supervised learning~(SL) approaches for learning from sub-optimal offline data (51\% human-level performance). \item Akin to scaling laws in SL~\citep{kaplan2020scaling}, offline Q-learning performance scales favorably with model capacity~(Figure~\ref{fig:scaling}). \item Representations learned by offline Q-learning give rise to more than 80\% better performance when fine-tuning on new games compared to representations learned by state-of-the-art return-conditioned supervised~\citep{lee2022multi} and self-supervised methods~\citep{he2111masked,oord2018representation}. \end{itemize} By scaling Q-learning, we realize the promise of offline RL: learning policies that broadly generalize and exceed the capabilities demonstrated in the training dataset. We hope that this work encourages large-scale offline RL applications, especially in domains with large sub-optimal datasets. \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{figures/fig_overview.pdf} \vspace{-0.3cm} \caption{\footnotesize{An overview of the training and evaluation setup. Models are trained offline with potentially sub-optimal data. We adapt CQL to the multi-task setup via a multi-headed architecture. The pre-trained visual encoder is reused in fine-tuning (the weights are either frozen or fine-tuned), whereas the downstream fully-connected layers are reinitialized and trained.}} \label{fig:overview} \vspace{-0.5cm} \end{figure} \vspace{-0.1cm} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figures/training_fig_legend_20p.pdf} \vspace{-0.1cm} \includegraphics[width=0.52\linewidth]{figures/suboptimal_data_results.pdf} \hfill \includegraphics[width=0.47\linewidth]{figures/perf_profile_20p.pdf} \vspace{-0.5cm} \caption{\footnotesize{Offline multi-task performance on 40 games with sub-optimal data. \textbf{Left}. Scaled QL significantly outperforms the previous state-of-the-art method, DT, attaining about a \textbf{2.5x} performance improvement in normalized IQM score. To contextualize the absolute numbers, we include online multi-task Impala DQN~\citep{espeholt2018impala} trained on 5x as much data. \textbf{Right}. Performance profiles~\citep{agarwal2021deep} showing the distribution of normalized scores across all 40 training games (higher is better). Scaled QL stochastically dominates other offline RL algorithms and achieves superhuman performance in 40\% of the games. ``Behavior policy'' corresponds to the score of the dataset trajectories. {Online MT DQN (5X), taken directly from \citet{lee2022multi}, corresponds to running multi-task online RL for 5x more data with IMPALA (details in Appendix~\ref{sec:online_mt_dqn}).}}} \label{fig:suboptimal_offline} \vspace{-0.3cm} \end{figure} \section{Our Approach for Scaling Offline RL} \label{sec:method} \vspace{-0.3cm} In this section, we describe the critical modifications required to make CQL effective in learning highly expressive policies from large, heterogeneous datasets.
\begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figures/network_figure.pdf} \vspace{-0.3cm} \caption{\footnotesize{\textbf{An overview of the network architecture.} The key design decisions are: (1) the use of ResNet models with learned spatial embeddings and group normalization, (2) use of a distributional representation of return values and cross-entropy TD loss for training (i.e., C51~\citep{bellemare2017distributional}), and (3) feature normalization to stabilize training.}} \label{fig:architecture} \vspace{-0.45cm} \end{figure} \textbf{Parameterization of Q-values and TD error.} In the single game setting, both mean-squared TD error and distributional TD error perform comparably online~\citep{agarwal2021deep} and offline~\citep{kumar2020conservative,kumar2021dr3}. In contrast, we observed, perhaps surprisingly, that mean-squared TD error does not scale well, and performs much worse than using a \textcolor{brown}{\textbf{categorical distributional representation of return values}}~\citep{bellemare2017distributional} when we train on many Atari games. We hypothesize that this is because even with reward clipping, Q-values for different games often span different ranges, and training a single network with shared parameters to accurately predict all of them presents challenges pertaining to gradient interference across different games~\citep{hessel2019popart, yu2020gradient}. While prior works have proposed adaptive normalization schemes~\citep{hessel2019popart,kurin2022defense}, preliminary experiments with these approaches did not close the gap. \textbf{Q-function architecture.} Since large neural networks have been crucial for scaling to large, diverse datasets in NLP and vision~\citep[e.g.,][]{tan2019efficientnet, brown2020language, kaplan2020scaling}, we explore using bigger architectures for scaling offline Q-learning. We use standard feature extractor backbones from vision, namely, the Impala-CNN architectures~\citep{espeholt2018impala} that are fairly standard in deep RL and ResNet $34$, $50$ and $101$ models from the ResNet family~\citep{resnet}. We make modifications to these networks following recommendations from prior work~\citep{anonymous2021ptr}: we utilize group normalization instead of batch normalization in ResNets, and utilize point-wise multiplication with a learned spatial embedding when converting the output feature map of the vision backbone into a flattened vector which is to be fed into the feed-forward part of the Q-function. To handle the multi-task setting, we use a multi-headed architecture where the Q-network outputs values for each game separately. The architecture uses a shared encoder and feedforward layers with separate linear projection layers for each game (Figure~\ref{fig:architecture}). The training objective (Eq.~\ref{eqn:cql_training}) is computed using the Q-values for the game that the transition originates from. In principle, explicitly injecting the task-identifier may be unnecessary and its impact could be investigated in future work. \textcolor{brown}{\textbf{Feature Normalization via DR3~\citep{kumar2021dr3}.}} While the previous modifications lead to significant improvements over na\"ive CQL, our preliminary experiments on a subset of games did not attain good performance. In the single-task setting, \citet{kumar2021dr3} propose a regularizer that stabilizes training and allows the network to better use its capacity; however, it introduces an additional hyperparameter to tune.
Motivated by this approach, we regularize the magnitude of the learned features of the observation by introducing a ``normalization'' layer in the Q-network. This layer forces the learned features to have an $\ell_2$ norm of 1 by construction, and we found that this speeds up learning, resulting in better performance. We present an ablation study analyzing this choice in Table~\ref{tab:ablation_dr3}. We found this sufficient to achieve strong performance; however, we leave exploring alternative feature normalization schemes to future work. \begin{tcolorbox}[colback=blue!6!white,colframe=black,boxsep=0pt,top=3pt,bottom=5pt] \textbf{To summarize,} the primary modifications that enable us to scale CQL are: \textbf{(1)} use of large ResNets with learned spatial embeddings and group normalization, \textbf{(2)} use of a distributional representation of return values and cross-entropy loss for training (i.e., C51~\citep{bellemare2017distributional}), and \textbf{(3)} feature normalization at intermediate layers to prevent feature co-adaptation, motivated by \citet{kumar2021dr3}. For brevity, we call our approach \textbf{Scaled Q-learning}. \end{tcolorbox} \section{Preliminaries and Problem Setup} \label{sec:prelims} \vspace{-0.2cm} We consider sequential decision-making problems~\citep{SuttonBook} where on each timestep, an agent observes a state $\mathbf{s}$, produces an action $\mathbf{a}$, and receives a reward $r$. The goal of a learning algorithm is to maximize the sum of discounted rewards. Our approach is based on conservative Q-learning (CQL)~\citep{kumar2020conservative}, an offline Q-learning algorithm. CQL uses a sum of two loss functions to combat value overestimation on unseen actions: \textbf{(i)} standard TD-error that enforces Bellman consistency, and \textbf{(ii)} a regularizer that minimizes the Q-values for unseen actions at a given state, while maximizing the Q-value at the dataset action to counteract excessive underestimation. Denoting $Q_\theta(\mathbf{s}, \mathbf{a})$ as the learned Q-function, the training objective for CQL is given by: \begin{align} \label{eqn:cql_training} \min_{\theta}~ \alpha\!\left( \mathbb{E}_{\mathbf{s} \sim \mathcal{D}} \left[\log \left(\sum_{\mathbf{a}'} \exp(Q_\theta(\mathbf{s}, \mathbf{a}')) \right) \right]\! -\! \mathbb{E}_{\mathbf{s}, \mathbf{a} \sim \mathcal{D}}\left[Q_\theta(\mathbf{s}, \mathbf{a})\right] \right) + \mathsf{TDError}(\theta; \mathcal{D}), \end{align} where $\alpha$ is the regularizer weight, which we fix to $\alpha=0.05$ based on preliminary experiments unless noted otherwise. \citet{kumar2020conservative} utilized a distributional $\mathsf{TDError}(\theta; \mathcal{D})$ from C51~\citep{bellemare2017distributional}, whereas \citet{kumar2021dr3} showed that similar results could be attained with the standard mean-squared TD-error. \citet{lee2022multi} used the distributional formulation of CQL and found that it underperforms alternatives and that performance does not improve with model capacity. In general, there is no consensus on which formulation of TD-error should be utilized in Equation~\ref{eqn:cql_training}, and we will study this choice in our scaling experiments. \textbf{Problem setup.} Our goal is to learn a single policy that is effective at multiple Atari games and can be fine-tuned to new games. For training, we utilize the set of 40 Atari games used by \citet{lee2022multi}, and for each game, we utilize the experience collected in the DQN-Replay dataset~\citep{agarwal2019optimistic} as our offline dataset (a schematic sketch of the training objective in Eq.~\ref{eqn:cql_training} is given below).
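To make Eq.~(\ref{eqn:cql_training}) concrete, the following is a minimal PyTorch sketch of the CQL objective with the simpler mean-squared TD error (our main results instead use the C51 cross-entropy TD error), together with the $\ell_2$ feature normalization layer described in Section~\ref{sec:method}; the network and batch format are schematic stand-ins:
\begin{verbatim}
import torch
import torch.nn.functional as F

def l2_normalize(feats, eps=1e-6):
    """Feature normalization layer: constrain intermediate
    features to unit l2 norm, as described in Section 2."""
    return feats / (feats.norm(dim=-1, keepdim=True) + eps)

def cql_loss(q_net, target_net, batch, alpha=0.05, gamma=0.99):
    """Eq. (2) with a mean-squared TD error (schematic)."""
    # a: LongTensor of dataset actions; done: float termination flags
    s, a, r, s_next, done = batch
    q = q_net(s)                            # (batch, num_actions)
    q_data = q.gather(1, a[:, None]).squeeze(1)
    # CQL regularizer: push down logsumexp(Q), push up Q at data action.
    regularizer = torch.logsumexp(q, dim=-1) - q_data
    # Standard TD error toward a frozen target network.
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=-1).values
        target = r + gamma * (1.0 - done) * q_next
    td_error = F.mse_loss(q_data, target)
    return alpha * regularizer.mean() + td_error
\end{verbatim}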
We consider two different dataset compositions: \vspace{-0.1cm} \begin{enumerate} \item \textbf{Sub-optimal} dataset consisting of the initial 20\% of the trajectories (10M transitions) from DQN-Replay for each game, containing 400 million transitions overall with an average human-normalized interquartile-mean~(IQM)~\citep{agarwal2021deep} score of 51\%. Since this dataset does not contain optimal trajectories, we do not expect methods that simply copy behaviors in this dataset to perform well. On the other hand, we would expect methods that can combine useful segments of sub-optimal trajectories to perform well. \item \textbf{Near-optimal} dataset, used by \citet{lee2022multi}, consisting of all the experience~(50M transitions) encountered during training of a DQN agent including human-level trajectories, containing 2 billion transitions with an average human-normalized IQM score of 93.5\%. \end{enumerate} \vspace{-0.1cm} \textbf{Evaluation}. We evaluate our method in a variety of settings as we discuss in our experiments in Section~\ref{sec:exps}. Due to the excessive computational requirements of running huge models, we are only able to run our main experiments with one seed. Prior work~\citep{lee2022multi} that also studied offline multi-game Atari evaluated models with only one seed. That said, to ensure that our evaluations are reliable, for reporting performance, we follow the recommendations of \citet{agarwal2021deep}. Specifically, we report interquartile mean~(IQM) normalized scores, i.e., the average score across the middle 50\% of the games, as well as performance profiles for qualitative summarization. \section{Related Work} \vspace{-0.2cm} Prior works have sought to train a single generalist policy to play multiple Atari games simultaneously from environment interactions, either using off-policy RL with online data collection~\citep{espeholt2018impala, hessel2019multi, song2019v}, or policy distillation~\citep{teh2017distral, rusu2015policy} from single-task policies. While our work also focuses on learning such a generalist multi-task policy, it investigates whether we can do so by scaling offline Q-learning on suboptimal offline data, analogous to how supervised learning can be scaled to large, diverse datasets. Furthermore, prior attempts to apply transfer learning using RL-learned policies in ALE~\citep{rusu2015policy, parisotto2015actor, mittel2019visual} are restricted to a dozen games that tend to be similar and generally require an ``expert'', instead of learning how to play all games concurrently. Closely related to our work, recent works train Transformers~\citep{vaswani2017attention} on purely offline data for learning such a generalist policy using supervised learning~(SL) approaches, namely, behavioral cloning~\citep{reed2022generalist} or return-conditioned behavioral cloning~\citep{lee2022multi}. While these works focus on large datasets containing expert or near-human performance trajectories, our work focuses on the regime where we only have access to highly diverse but sub-optimal datasets. We find that these SL approaches perform poorly with such datasets, while offline Q-learning is able to substantially extrapolate beyond the dataset performance~(Figure~\ref{fig:suboptimal_offline}). Even with near-optimal data, we observe that scaling up offline Q-learning outperforms SL approaches with 200 million parameters using as few as half the number of network parameters~(Figure~\ref{fig:scaling}).
There has been a recent surge of offline RL algorithms that focus on mitigating distribution shift in single-task settings~\citep{fujimoto2018off,kumar2019stabilizing,liu2020provably,wu2019behavior,fujimoto2021minimalist, siegel2020keep,peng2019advantage,nair2020accelerating, LiuSAB19, SwaminathanJ15, nachum2019algaedice, kumar2020conservative,kostrikov2021offline,kidambi2020morel,yu2020mopo,yu2021combo}. Complementary to these efforts, our work investigates scaling offline RL on the more diverse and challenging multi-task Atari setting with data from 40 different games~\citep{agarwal2019optimistic, lee2022multi}. To do so, we use CQL~\citep{kumar2020conservative}, due to its simplicity as well as its efficacy on offline RL datasets with high-dimensional observations.
\section{Introduction} The properties of a variety of astrophysical objects are essentially determined by the high-density equation of state (EOS) of hydrogen \cite{G_05,F_09}. Hydrogen is fully ionized at these high densities independent of the temperature. In this metallic phase, the electrons can be of arbitrary degeneracy including the highly degenerate limit ($T \!=\! 0$) and the ions are strongly coupled. The thermodynamics is thus strongly influenced by the well-pronounced short-range order in the proton subsystem although the major contributions stem from the electron gas. Metallic hydrogen occurs in the interior of giant gas planets like Jupiter, Saturn and similar extrasolar planets \cite{G_05,miljup,netjup}. The temperatures encountered along the isentrope of giant planets are on the order of a few electron volts. At densities comparable to solids and above, a $T \!=\! 0$ description of the electrons is possible for colder planets, whereas temperature related corrections might be needed for hotter planets \cite{b_97}. Higher temperatures and densities are found in white dwarf stars and the crust of neutron stars \cite{DLFB_07,sj_99}. As most elements are fully ionized under these conditions, matter behaves hydrogen-like and the EOS is again determined by a combination of degenerate electrons and strongly coupled ions. Beyond astrophysical applications, the EOS of dense hydrogen is also required to model inertial confinement fusion experiments as the compression path of the fuel runs through the parameter space considered here \cite{lindl,hu}. Although at much higher densities, the fully compressed DT-pellet has similar properties when considering electron degeneracy and ionic coupling strength. Most astrophysical objects exhibit an isentrope that covers several phases. For example, the internal structure of giant gas planets is determined by phase transitions such as the molecular to metallic or atomic to metallic transition in dense fluid hydrogen (sometimes termed the plasma phase transition \cite{ppt1,ppt2}). After many investigations that left the nature of this transition unclear \cite{weir,fortov,beule,scandolo,delany,tamblyn,VTMB_07,Holst,wpmd}, recent first principle simulations showed that a first order phase transition with a surprisingly small volume change is indeed likely \cite{morales,lohore}. However, present {\em ab initio} simulations like DFT-MD, path integral Monte Carlo (PIMC), or coupled electron-ion Monte Carlo (CEIMC) cover a limited parameter space only. For a consistent description of the EOS, one has to require that i) different techniques agree in overlapping regions and ii) the simulation data merge with well-founded theories in limiting cases. The second point has been achieved only in the low density region where PIMC simulations match density and fugacity expansions of the EOS perfectly \cite{greeneos}. In this paper, we focus on the high-density limit of the equation of state of fluid hydrogen. As quantum Monte Carlo schemes are not available for very high densities, we rely on DFT-MD simulations here. However, first principle simulations like DFT-MD have so far been unable to provide results that converge into the high density, i.e., the Gell-Mann \& Brueckner $T \!=\! 0$ limit. We resolve this issue by carefully performed DFT-MD simulations for higher densities and by employing an analytic approach that extends the $T \!=\! 0$ limiting law to parameters with finite temperatures.
We can then demonstrate agreement in overlapping regions of density, so that the goal of a combined EOS of hydrogen, that is solely based on methods in the physical picture and covers the entire density range for temperatures of a few electron volts, is reached. The analytic EOS theory we apply for high densities is a two-fluid model based on a perturbation expansion with respect to the electron-ion interaction. It keeps all contributions from correlations in the ion subsystem and is applicable for arbitrary degeneracy of the electrons. Thus, our two-fluid model is valid for fully ionized plasmas in which the electron-ion interactions are weak compared to the thermal energies and/or the ion-ion interactions. After an introduction of the model in the next section, we present results and comparisons with data from first principle quantum simulations. In particular, we show which steps are necessary to reach agreement between our two-fluid model and the simulations and also give limits for the applicability of both. \section{Analytical EOS Approach} We consider fully ionized plasmas consisting of protons and electrons. To characterize the interaction strength between the particles, we define the classical coupling parameter \begin{equation} \Gamma = \frac{e^2}{d \, k_B T} \qquad \mbox{with} \qquad d = \left(\frac{4\pi}{3} n \right)^{-1/3} \,. \end{equation} In the quantum regime, the classical kinetic energy scale has to be replaced by its quantum analog: $k_B T \to \frac{2}{3}\langle K_a\rangle$. The mean kinetic energy $\langle K_a\rangle$ can be calculated via a Fermi integral \cite{bluebook} which recovers both the classical as well as the fully degenerate (Fermi energy) limits. Note that {\em classically} all coupling parameters are identical for hydrogen while the electron-electron and the electron-ion coupling are strongly reduced in plasmas with highly degenerate electrons as $\langle K_e \rangle \!\gg\! k_B T$ holds here. In the quantum limit, the electron coupling parameter $\Gamma_e$ becomes smaller than unity due to quantum degeneracy. Here, the Brueckner parameter $r_s=d/a_B$ is commonly used to describe the coupling strength. \subsection{Two-Fluid Model} For the parameters considered here, the electron-proton interactions are weak ($\Gamma_{ei} \!\ll\! 1$) while the proton-proton coupling is usually strong ($\Gamma_{ii} \!\ge\! 1$). The weak interaction between electrons and protons allows us to apply a Born-Oppenheimer approximation and treat the electrons in linear response to the fields created by the ions. As a result, one can rewrite the full two-component problem as a one-component system for the ions which interact via effective potentials. More precisely, this will be called the two-fluid model where the fully correlated electron and ion fluids interact only weakly with each other. Of course, this procedure eliminates the ability to describe bound states, which is however unnecessary in the high density limit. Applying this two-fluid model, the pressure is given by \cite{Hansen, Ashcroft} \begin{eqnarray} \frac{\beta p}{n_i}&=&1+\frac{\beta}{V}\frac{d F(n_i)}{d n_i} \label{yeos}\\ &-&\frac{1}{2}\beta n_i\int d{\bf r}\; \left[g_{ii}(r)-1\right] \left(\frac{r}{3}\frac{\partial}{\partial r}- n_i\frac{\partial}{\partial n_i} \right)v_{ii}^{\rm eff}(r)\,,\nonumber \end{eqnarray} with \begin{equation} F=F_{eg}+ \frac{N_i}{2}\int \frac{d{\bf k}}{(2\pi)^3} \, v^2_{ei}(k)\, \chi_{ee}(k)\;.
\label{vnullint} \end{equation} A similar model was also used by, e.g., Chabrier \& Potekhin \cite{Chabrier1, Chabrier2}. The first term in Eq.~(\ref{yeos}) represents the ideal contribution from the classical ions. In the second term, the density derivative of the free energy $F$ produces the contribution of the correlated electron gas via the free energy of the isolated electron subsystem $F_{eg}$. The integral over the electron density response function $\chi_{ee}$, to be taken in random phase approximation (RPA), is an electron-ion cross term that arises from the linear response treatment of the electrons in the two-fluid description. The third term in Eq.~(\ref{yeos}) accounts for ionic correlations described by the pair distribution $g_{ii}$. The polarizability of the electron gas is taken into account here via an effective ion-ion potential $v_{ii}^{\rm eff}$. This potential must also be applied when calculating the ionic pair distribution. The density derivative arises as the effective ion-ion potential is density dependent via the screening function. The internal energy can be calculated similarly via \begin{eqnarray} \label{yeos_U} \frac{\beta U}{V}&=&\frac{3}{2} n_i+\frac{\beta}{V} U_0(n_i)\label{yueos}\\ &+&\frac{1}{2}\beta n_i^2\int d{\bf r}\; \left[g_{ii}(r)-1\right] \left[ v_{ii}^{\rm eff}(r)-T\frac{\partial}{\partial T}v_{ii}^{\rm eff}(r)\right].\nonumber \end{eqnarray} Again, the first term is the ideal ion contribution, and $U_0$ denotes the internal energy of the electron subsystem. It may be approximated by the free energy $F$~(\ref{vnullint}) for highly degenerate systems near $T \!=\! 0$; otherwise it is given by $U_0=F-T\partial F/\partial T$. Ionic correlations are accounted for by the integral term. The additional temperature derivative is due to the fact that the effective ion-ion potential is also temperature dependent. \subsection{Properties of the Ion Subsystem} The effective ion-ion interaction consistent with the model above is given by \begin{eqnarray} v_{ii}^{\rm eff}(k)&=&v_{ii}(k)+[v_{ei}(k)]^2\chi_{ee}(k)\,,\nonumber\\ &=&\frac{4\pi e^2}{k^2} \varepsilon^{-1}_e(k)\,. \label{viieff} \end{eqnarray} Here, the electron part of the static dielectric function in RPA, $\varepsilon_e^{-1} \!=\! 1 \!+\! v_{ee}\chi_{ee}$, was introduced; $v_{ee}=4\pi e^2/k^2$ is the Coulomb potential between electrons \cite{bluebook}. The bare Coulomb interaction between the protons $v_{ii}$ is thus linearly screened by the electrons. Simpler approximations for the effective potential can be obtained for small $k$, where the Debye or Yukawa potential, $v_{ii}^{\rm eff}(k) \!=\! 4\pi e^2 /(k^2 + \kappa_e^2)$ with $\kappa_e^2=(4e^2m_e/\pi\hbar^3)\int_0^{\infty} dp f_e(p)$, follows. The derivation of the two-fluid description (\ref{yeos}) and (\ref{yeos_U}) clearly shows that the ionic pair distribution $g_{ii}$ must be obtained from a one-component description of the ions where the forces are given by the effective interaction $v_{ii}^{\rm eff}$. Possible methods to determine $g_{ii}$ are classical Monte Carlo and molecular dynamics simulations or techniques based on integral equations like the hypernetted chain equations (HNC) \cite{at,hnc1,hnc2,wuensch,schwarz,wuensch2}. \subsection{Contributions of the Electron Gas} To describe the electron gas, we employ the quantum statistical method of thermodynamic Green's functions \cite{greenbook,bluebook}. Its advantage is the ability to describe systems with arbitrary temperatures including the correct $T \!=\!
0$ physics, the transition to Boltzmann statistics, and the correct high temperature (Debye-H\"uckel) law. Using this technique, a perturbation expansion in the interaction strength can be established \cite{greeneos,bluebook}. Including terms up to the second order, one obtains \begin{eqnarray} p_{ee}(T_e,\mu_e)&=&p_{e}^{id}(T_e,\mu_e)+p_{e}^{HF}(T_e,\mu_e)\nonumber\\ &&+p_{ee}^{MW}(T_e,\mu_e)+ p_{e}^{e^4n}(T_e,\mu_e)\,.\label{uecorr} \end{eqnarray} The terms are the ideal gas law, the Hartree-Fock (HF) quantum exchange term, the direct Montroll-Ward (MW) term, and quantum exchange contributions of the second order ($e^4n$), respectively. As this perturbation expansion exists in the grand canonical ensemble, the density $n_e$ is related to the chemical potential $\mu_e$ via \begin{equation} n_e(T_e,\mu_e)=\frac{\partial p_{ee}}{\partial \mu_e}\label{chempot}\,. \end{equation} An inversion within the {\em golden rule} is performed in order to obtain the pressure as a function of density \cite{greeneos}. This means that correlation contributions to the free energy as a function of density are taken to be equal to the negative excess pressure as a function of the chemical potential. The ideal pressure is given by \begin{equation} p_{e}^{id}(T_e,\mu_e) = \frac{2k_BT_e}{\Lambda_e^3}\mbox{I}_{3/2}(\mu_e/k_BT_e) \,. \end{equation} Here, $\Lambda_e \!=\! \sqrt{2\pi \hbar^2/m_ek_BT_e}$ is the thermal de~Broglie wavelength and $I_{3/2}$ is the Fermi integral of order $3/2$ \cite{bluebook}. First order exchange contributions are contained in the HF term \begin{equation} p_e^{HF}(T_e,\mu_e)=\frac{(2\sigma_e+1)e^2}{\Lambda_e^4} \int\limits_{-\infty}^{\mu_e/k_BT_e}d \alpha\,\mbox{I}_{-1/2}^2(\alpha)\,, \end{equation} which is given by an integral over a Fermi integral of the order $-1/2$. The Montroll-Ward term can be computed by a double integral over the dielectric function of the electron gas $\varepsilon_e$ \begin{eqnarray} \!\!\! p_e^{MW}(T_e,\mu_e)&=&\frac{-1}{4\pi^3}\int\limits_0^{\infty} \! dp\,p^2\, {\cal P}\!\! \int\limits_{\pm 0}^{\infty}d\omega\,\coth\left(\frac{\omega}{2k_BT_e}\right) \nonumber\\ &\times&\!\!\!\left[\arctan\frac{\mbox{Im}\varepsilon_e(p,\omega)}{\mbox{Re}\varepsilon_e(p,\omega)} -\mbox{Im}\varepsilon_e(p,\omega)\right]\,. \end{eqnarray} It is consistent with the expansion (\ref{uecorr}) to use the RPA dielectric function $\varepsilon_e$. The normal $e^4$ exchange term, accounting for exchange effects of second order, can be written as an integral over Fermi functions, $f_p \!=\! [\exp(\beta p^2/2m_e \!-\! \beta\mu_e) \!+\! 1]^{-1}$, and Pauli-blocking factors, defined by $\bar{f}_p \!=\! [1 \!-\! f_p]$, \begin{eqnarray} p_{e}^{e^4n}(T_e,\mu_e)&=&m_e\! \int\frac{d{\bf p}d{\bf q}_1d{\bf q}_2}{(2\pi)^9} v_{ee}(p)v_{ee}({\bf p}+{\bf q}_1+{\bf q}_2)\nonumber\\ &\times& \frac{f_{q_1}f_{q_2}\bar{f}_{{\bf q}_1+{\bf p}}\bar{f}_{{\bf q}_2+{\bf p}}-f_{{\bf q}_1+{\bf p}}f_{{\bf q}_2+{\bf p}}\bar{f}_{q_1}\bar{f}_{q_2}} { q_1^2+q_2^2-({\bf p}+{\bf q}_1)^2-({\bf p}+{\bf q}_2)^2}\,.\nonumber\\ && \end{eqnarray} Here, $v_{ee}$ is the bare electron-electron Coulomb potential. The expansion (\ref{uecorr}) accounts for direct correlations and dynamic screening, incorporates collective oscillations (plasmons) as well as quantum diffraction and exchange in the electron subsystem. This expression is valid for weakly coupled electrons of arbitrary degeneracy and includes in particular the low and high temperature limiting cases of Debye-H\"uckel and Gell-Mann \& Brueckner, respectively \cite{greeneos}.
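Both the ideal pressure and the inverse electron screening length $\kappa_e$ introduced above reduce to one-dimensional Fermi-type integrals that are easy to evaluate numerically. The following is a minimal Python sketch in Hartree atomic units ($e \!=\! \hbar \!=\! m_e \!=\! 1$); it assumes the normalization $I_\nu(\eta) \!=\! \Gamma(\nu+1)^{-1}\int_0^\infty dx\, x^\nu/[\exp(x-\eta)+1]$, for which $p_e^{id} \to n_e k_B T_e$ holds in the classical limit, and the example values of $T$ and $\eta$ are arbitrary:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import expit, gamma

def fermi_integral(nu, eta):
    # I_nu(eta) = Gamma(nu+1)^(-1) int_0^inf x^nu/(exp(x-eta)+1) dx;
    # expit(eta - x) = 1/(exp(x-eta)+1) avoids overflow.
    val, _ = quad(lambda x: x**nu * expit(eta - x), 0.0, np.inf)
    return val / gamma(nu + 1.0)

def electron_density(T, eta):
    lam = np.sqrt(2.0 * np.pi / T)          # thermal wavelength (a.u.)
    return 2.0 / lam**3 * fermi_integral(0.5, eta)

def ideal_pressure(T, eta):
    # p_e^id = (2 k_B T / Lambda^3) I_{3/2}(mu/k_B T)
    lam = np.sqrt(2.0 * np.pi / T)
    return 2.0 * T / lam**3 * fermi_integral(1.5, eta)

def kappa_e(T, eta):
    # kappa_e^2 = (4 e^2 m_e / pi hbar^3) int_0^inf dp f_e(p)
    val, _ = quad(lambda p: expit(eta - p**2 / (2.0 * T)), 0.0, np.inf)
    return np.sqrt(4.0 / np.pi * val)

T = 0.5                                     # roughly 1.6e5 K
for eta in (-5.0, 0.0, 5.0):                # classical -> degenerate
    n = electron_density(T, eta)
    print(f"eta={eta:+5.1f}  p/(n kT)={ideal_pressure(T, eta)/(n*T):.3f}  "
          f"kappa/kappa_Debye={kappa_e(T, eta)/np.sqrt(4*np.pi*n/T):.3f}")
\end{verbatim}
For $\eta \to -\infty$ both printed ratios approach unity (classical ideal gas and Debye screening), while for degenerate electrons the pressure exceeds $n_e k_B T_e$ and the screening crosses over to the Thomas-Fermi behavior.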
Within the same quantum approach, protons can be incorporated and an EOS for hydrogen can be calculated \cite{greeneos}. The advantage of an EOS of hydrogen fully based on Green's functions is its capability to describe quantum effects in the proton subsystem correctly. However, a hydrogen EOS based on the expansion (\ref{uecorr}) is restricted to weak coupling in the proton subsystem as well, a limitation avoided within the two-fluid approach presented here. \subsection{Limiting Behavior and Applicability} The condition of weak electron-ion coupling as used to derive Eqs.~(\ref{yeos}) and (\ref{yueos}) is fulfilled not only in the high density limit of highly degenerate electrons and strongly coupled ions, but also in the high temperature and low density limits. Here, all interactions are weak and of the same order and the EOS is given by the Debye-H\"uckel law \cite{bluebook} \begin{equation} \beta p=\beta p^{id}+\beta p^{DH} \!=\! \sum_a n_a - \frac{\kappa^3}{24\pi} \,. \label{dh} \end{equation} The first term is the ideal classical gas contribution, the second term is the Debye-H\"uckel correction determined by the total inverse screening length $\kappa^2 \!=\! \sum_a 4\pi e_a^2 n_a \beta$. The sums in Eq.~(\ref{dh}) and in the definition of $\kappa$ run over all species $a=\{e,i\}$. This limiting law is not fully reached by our two-fluid model. However, the deviation is tiny, $p^{DH}/p^{2-fluid} \!=\! 16\sqrt{2}/23 \approx 0.98$ (similar for the internal energy and other thermodynamic functions). The slight disagreement can be traced back to the neglect of the influence of the ions on the electrons. Note that this result can only be obtained if the contributions of the electron gas are evaluated for finite temperatures and not just in the ground state. Our quantum treatment via thermodynamic Green's functions also ensures that the electron contribution reaches the correct high density $T=0$ limit. Moreover, the electron contribution is by far the largest term in the thermodynamic functions for high densities. Thus, the two-fluid model also recovers the high density $T=0$ limit (with an error given by the ratio of electron to ion mass). Consequently, the two-fluid model constitutes a valid approximation to the EOS of fully ionized and weakly coupled hydrogen plasmas with arbitrarily degenerate electrons. Both conditions are fulfilled for temperatures above $T \!=\! 2.5 \!\times\! 10^5\,$K, where bound states do not occur and the coupling is sufficiently weak for the entire density range. \subsection{Results of the Two-Fluid Model} \begin{figure}[t] \includegraphics[width=0.48\textwidth,clip=true]{fig_01_color.eps} \caption{(Color online) Lower panel: pressure of hydrogen as predicted by the two-fluid model (\ref{yeos}) normalized by the ideal contribution. The high temperature (Debye-H\"uckel - DH) and high density ($T \!=\! 0$) limits are given for comparison. Upper panel: ratio of the two-fluid model and the quantum statistical perturbation theory using the Montroll-Ward approximation for hydrogen \cite{greeneos,bluebook}. } \label{pict_hd} \end{figure} Figure \ref{pict_hd} gives a general overview of the high-density EOS of hydrogen as calculated within the two-fluid model. Due to the normalization by the ideal pressure, the lines all approach unity for very high densities where the pressure is also independent of the temperature. At intermediate densities, the correlations yield a reduction compared to the ideal pressure.
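For orientation, the limiting law (\ref{dh}) against which Fig.~\ref{pict_hd} is benchmarked is straightforward to evaluate. The following minimal sketch (Hartree atomic units, hydrogen with $n_e=n_i$; the chosen density and temperature are arbitrary low-density values) illustrates the size of the Debye-H\"uckel correction relative to the ideal pressure:
\begin{verbatim}
import numpy as np

def debye_hueckel_pressure(n_e, T):
    # beta*p = sum_a n_a - kappa^3/(24*pi),
    # kappa^2 = 4*pi*beta * sum_a e_a^2 n_a   (hydrogen: n_i = n_e)
    n_total = 2.0 * n_e
    kappa = np.sqrt(4.0 * np.pi * n_total / T)
    beta_p = n_total - kappa**3 / (24.0 * np.pi)
    return beta_p * T

T = 10.0          # ~3.2e6 K in Hartree units
n_e = 1e-4        # electrons per cubic Bohr
p_id = 2.0 * n_e * T
print(f"p/p_id = {debye_hueckel_pressure(n_e, T)/p_id:.4f}")
\end{verbatim}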
The correlations at intermediate densities can result from the occurrence of bound states or from interactions between free particles. Bound states can be excluded for the two highest temperatures in Fig.~\ref{pict_hd}. Still, the improved description of ion-ion correlations within the two-fluid model yields a 10\% correction compared to a description of the hydrogen EOS based entirely on Green's function theory. The two-fluid model of Potekhin \& Chabrier gives almost identical results to our approach in the parameter range where such a description can be expected to be valid, i.e., for metallic hydrogen and the high temperature low density case \cite{potekhin}. This is a nontrivial finding as the electron theories used in this paper and by Potekhin \& Chabrier are quite different. In the case of the Green's function EOS \cite{greeneos,bluebook}, the correct high-density limit is an intrinsic feature related to the quantum treatment of both electrons and ions. In the two-fluid model, only the electrons are treated fully quantum statistically. Still, the two-fluid system shows a similar behavior as the high density limit is dominated by the electron contribution. This fact holds although the ion-ion correlations strongly increase with density since the electron terms grow significantly faster. In addition, this behavior also minimizes the importance of the partition function of the strongly correlated ion fluid. For instance, small differences between Monte Carlo and HNC treatments of the ions at $\Gamma_{ii}=130$ give a deviation of $2\%$ in the ion system, which changes the result for the total equation of state by only $0.16\%$. The approach to the exact high temperature limit (here given by the Montroll-Ward approximation of the Green's function theory) is demonstrated in the upper panel of Fig.~\ref{pict_hd}. The two-fluid model can reproduce the exact law within $1\%$ for $T \!=\! 2 \!\times\! 10^6\,$K. The essential ingredient to achieve this result is the finite temperature description of the electrons. Moreover, it shows that the neglected influence of the ions on the electrons is very weak. \section{DFT-MD Simulations} DFT-MD simulations are a well-suited technique for the description of metallic hydrogen as it is encountered, for example, in the inner regions of giant gas planets. So far, there has been no overlap between an EOS calculated by DFT-MD and the correct $T=0$ high-density models. To close this gap and to allow for a direct comparison of first principle DFT-MD simulations to our two-fluid model, we carried out DFT-MD calculations for very high densities which require special adjustments. We used the DFT-MD programs VASP, CPMD, and abinit \cite{vasp1,vasp2,vasp3,cpmd,abinit1,abinit2}. The number of electrons and protons in the simulations was $N \!=\! 128 \ldots 432$. The temperature of the protons was controlled by a Nos\'e-Hoover thermostat \cite{nose}. The time step in the MD simulations was chosen to be $\Delta t \!=\! 8\,\mbox{a.u.} \!=\! 0.194\,$fs. Every run covers at least $2$\,ps after an initial equilibration. The exchange-correlation functional was of Perdew-Burke-Ernzerhof type \cite{pbe}. In the VASP runs, the electron-ion pseudopotential was the standard (hard) projector augmented wave (PAW) potential as provided with the package \cite{paw1,paw2} and used here with a plane wave cutoff of $35$\,Ha. For densities higher than $n \!=\! 2.2 \!\times\! 10^{24}\,$cm$^{-3}$ ($r_s \!\le\! 0.9$), this was found to be too soft.
Accordingly, a new, harder GGA norm-conserving local pseudopotential was generated using the optimized pseudopotential method as included in the Opium package \cite{opium}. This new potential was then used in the CPMD code. It has a cutoff radius of $r_c \!=\! 0.5\,$a$_B$ ($q_c=15$, 6 Bessel functions) and requires a plane wave cutoff of $100\,$Ha to yield converged results \footnote{A very hard pseudopotential is also used in Ref. \cite{rec}, although with a capped energy-cutoff.}. MD runs were usually performed using only $\Gamma$-point sampling of the Brillouin zone. Corrections due to k-point sampling can become important in the metallic region of hydrogen. Here, contributions of at most 2\% were determined at randomly chosen snapshots where the pressure was reevaluated using $2 \times 2 \times 2$ and $4 \times 4 \times 4$ Monkhorst-Pack grids of k-points in abinit \cite{MP}. No significant effects due to finite electron temperatures (Fermi smearing) were found at the high densities needed to achieve overlap with the analytical theory. \begin{figure}[t] \includegraphics[width=0.48\textwidth,clip=true]{fig_02_color.eps} \caption{(Color Online) Proton-proton pair distributions in hydrogen at a temperature of $T \!=\! 10^4\,$K for three different densities. Results of VASP (black dashed), CPMD (blue dashed, long spaces) and HNC using a linearly screened Yukawa potential (red solid) are compared. The black vertical lines indicate half the box length for the VASP runs. The lines for $r_s \!=\! 0.5$ and $r_s \!=\! 0.7$ have been shifted along the y-axis to improve readability. } \label{pict_gr} \end{figure} As a first comparison between properties of our two-fluid model and {\em ab initio} DFT-MD simulations, Fig.~\ref{pict_gr} shows ion-ion pair distributions. The results using VASP and CPMD deviate due to differences in the number of particles considered and the type of pseudopotential used. In the present study, all VASP runs have been performed with 128 electrons, whereas 432 electrons were used in the CPMD simulations. In addition, the CPMD runs use a much harder electron-ion pseudopotential with a core radius of $r_c \!=\! 0.5\,$a$_B$, whereas $r_c \!=\! 0.8\,$a$_B$ is used in the VASP runs. For the densities with $r_s \!=\! 0.9$ and $r_s \!=\! 0.7$, these differences are insignificant. However, both the harder pseudopotential and the increase in the number of particles / box size are important for the highest density with $r_s \!=\! 0.5$. Here, VASP predicts a first correlation peak that is too high, which can be traced back to the too large core radius of the pseudopotential. Furthermore, a box with 128 particles is too small to allow for the computation of the structure at larger distances, as seen in the tail of the pair distribution. The pair distribution functions obtained from solutions of the hypernetted chain equations (HNC) are consistent with the approximations made to derive the two-fluid model. The HNC solver uses a linearly screened ion-ion potential and keeps the nonlinear correlations within the ion subsystem only. The comparison with the DFT-MD results shows that this is an appropriate model for the high densities shown (note that better agreement is obtained with the CPMD data for $r_s \!=\! 0.5$). Here, the electron-ion coupling strength is sufficiently low to justify the linear response formalism. For the lowest density with $r_s \!=\!
0.9$, there are some deviations in the first slope and in the strength of the correlations, which result from the linear screening approximation used. The comparison with the DFT-MD data demonstrates that the model of linear screening is beyond its limit of applicability for this density. \section{Results for the EOS} We have already established that our two-fluid model merges with the exact $T=0$ limiting law at the high density side of the EOS. In the following, we will show that this model and improved DFT-MD simulations give results in agreement with each other over a range of densities and temperatures. As DFT-MD and further quantum Monte Carlo simulations already meet the exact low density limit (Debye-H\"uckel or density/fugacity expansions), the entire density range can now be described by theories in the physical picture. \begin{figure}[t] \includegraphics[width=0.48\textwidth,clip=true]{fig_03_color.eps} \caption{(Color Online) Ratio of pressure to ideal pressure in hydrogen at $T \!=\! 10^4\,$K versus density. Data shown are obtained using VASP~\cite{VTMB_07}, VASP~\cite{Holst} up to $r_s=0.8$, CEIMC \cite{CEIMC}, the Saumon \& Chabrier EOS \cite{sc1,sc2}, and SESAME \cite{sesame}. The two-fluid model uses an effective ion-ion potential in RPA. } \label{pict_1e4} \end{figure} Figure \ref{pict_1e4} and Table~\ref{table1} include a comparison of EOS data obtained by the different simulation techniques. CEIMC results were taken from recent work \cite{CEIMC}. Unlike earlier CEIMC results \cite{CEIMC_old}, they give almost identical values for the pressure to those obtained from DFT-MD. The data points span almost an order of magnitude in density and cover the region of metallic hydrogen which is most difficult to describe. This excellent agreement gives great confidence in both first principle simulations. As both data sets have a similar slope, we can expect that either both or neither merge with the high-density description of the two-fluid model. \begin{table}[t] \caption{Comparison of the hydrogen EOS as obtained by VASP \cite{VTMB_07,Holst}, CEIMC \cite{CEIMC}, CPMD (this work, using an improved electron-proton pseudopotential) and the two-fluid model (last two columns) using two effective proton-proton potentials as indicated.} \label{table1} \begin{tabular}{|c|c|c|c|c|c|c|} \hline &3500\,K&\multicolumn{5}{c|}{p/p$^{id}$}\\ $r_s$ &p$^{id}$[GPa] & VASP & CPMD & CEIMC & ~RPA~ & Hulth\'en \\ \hline 1.20&2.125e3& 0.321 & - & - &0.272&0.303\\ 1.15&2.624e3& - & - & 0.341 &0.302&0.329\\ 1.10&3.272e3& 0.369 & - & 0.368 &0.332&0.359\\ 1.05&4.122e3& - & - & 0.393 &0.362&0.388\\ 1.00&5.253e3& 0.419 & - & - &0.395&0.418\\ 0.90&8.870e3& 0.472 & - & - &0.451&0.472\\ 0.70&3.101e4& - & 0.584 & - &0.575&0.588\\ 0.50&1.659e5& - & 0.702 & - &0.696&0.71\\\hline\hline &$10^4\,$K&\multicolumn{5}{c|}{p/p$^{id}$}\\ $r_s$ &p$^{id}$[GPa] & VASP & CPMD & CEIMC & ~RPA~ & Hulth\'en\\ \hline 1.20&2.212e3& 0.364 & - & - &0.326&0.363\\ 1.15&2.721e3& - & - & 0.385 &0.348&0.384\\ 1.10&3.380e3& 0.405 & - & 0.403 &0.377&0.404\\ 1.05&4.247e3& - & - & 0.427 &0.405&0.425\\ 1.00&5.400e3& 0.448 & 0.453 & - &0.431&0.447\\ 0.90&9.060e3& 0.497 & 0.497 & - &0.486&0.495\\ 0.80&1.623e4& 0.547 & - & - &0.538&0.547\\ 0.70&3.141e4& 0.602 & 0.597 & - &0.589&0.602\\ 0.50&1.671e5& 0.749 & 0.708 & - &0.700&0.713\\\hline\hline &$2 \!\times\!
10^4\,$K&\multicolumn{5}{c|}{p/p$^{id}$}\\ $r_s$ &p$^{id}$[GPa] & VASP & CPMD & CEIMC & ~RPA~ & Hulth\'en \\ \hline 1.20&2.357e3& 0.412 & - & - &0.384&0.403\\ 1.10&3.568e3& 0.445 & - & - &0.423&0.440\\ 1.00&5.641e3& 0.481 & - & - &0.466&0.481\\ 0.90&9.396e3& 0.522 & - & - &0.510&0.523\\ 0.80&1.666e4& 0.571 & - & - &0.557&0.570\\ 0.70&3.208e4& 0.619 & 0.614 & - &0.607&0.619\\ 0.50&1.687e5& 0.761 & 0.717 & - &0.710&0.722\\\hline \end{tabular} \end{table} Figure \ref{pict_1e4} also shows a comparison of data obtained by {\em ab initio} simulations and results of the two-fluid model. On the low density side where CEIMC data exist, one finds considerable deviations. These differences are due to the decoupling of electrons and ions in the two-fluid EOS, which is not applicable here. To test the merging of the simulation data into the two-fluid description, we have to consider higher densities where the electron-ion coupling is weaker. For this task, we turn to DFT-MD simulations. In the case of VASP, this is not straightforward as one is not free in the choice of the electron-ion pseudopotential. In particular, for densities higher than the one corresponding to $r_s \!=\! 0.7$, the mean particle separation falls below the core radius of the pseudopotential. As we have already shown for the ion-ion pair distribution (see Fig.~\ref{pict_gr}), this causes VASP to yield unreliable results for $r_s \!\le\! 0.7$. As a result, the pressure obtained from VASP first approaches the results of the two-fluid model and then starts to deviate again for high densities. The behavior of the VASP data demonstrates the need for a harder pseudopotential with a smaller core radius. Moreover, finite size effects become important for higher densities with larger correlation lengths. Both issues have been resolved in CPMD runs using a new pseudopotential with a core radius of $r_c \!=\! 0.5\,$a$_B$ and $N \!=\! 432$ electrons and protons. The results show a smooth merging with the two-fluid model at high densities. Figure \ref{pict_1e4} also shows results from EOS models often used in planetary modelling, namely the SESAME tables and the EOS model constructed by Saumon \& Chabrier \cite{sc1}. The latter perfectly merges into our two-fluid model, but deviates from the quantum simulations at lower densities. The SESAME data, on the other hand, disagree with both {\em ab initio} simulations and the two-fluid model over the whole density range considered. \section{Extension and Discussion} \subsection{Nonlinear Electron-Ion Interactions} \begin{figure}[t] \includegraphics[width=0.48\textwidth,clip=true]{fig_04_color.eps} \caption{(Color Online) Pressure of dense hydrogen at $T \!=\! 3500\,$K normalized by the ideal pressure. The symbols mark simulation data similar to Fig.~\ref{pict_1e4} (CEIMC data are interpolated for this temperature). The two curves show results of the two-fluid model with different ion-ion potentials: the linearly screened Yukawa potential (solid, red line) and the nonlinear Hulth\'en potential (red, dash-dotted line). } \label{pict_3500} \end{figure} A limitation of the two-fluid model arises from the use of first order perturbation theory to describe the response of the electrons to the fields created by the ions. One can however argue that the good agreement of the two-fluid model and the fully nonlinear simulations is a result of the fact that quadratic response terms cancel to a large extent \cite{louis}.
An improved treatment including the fully nonlinear response (see, e.g., Refs.~\cite{cenni,richardson,gravel}) is beyond the scope of this paper. Here, we estimate nonlinear effects in the electron-ion interaction by applying the nonlinear Hulth\'en potential \cite{hulthen} \begin{equation} v_{ii}^{\rm H}(r)=\frac{e^2\kappa_e}{e^{\kappa_e r}-1} \label{viih} \end{equation} as an ad hoc model for the effective ion-ion interaction. Interestingly, if this nonlinear potential is used, the results of the two-fluid model agree rather well with the data from the quantum simulations for $3500\,$K in Fig.~\ref{pict_3500}. In the density range $0.7 \!<\! r_s \!\le\! 1.1$, the agreement is much better than for the case of the linearly screened Yukawa potential. Indeed, Potekhin \& Chabrier include local field corrections in the screening for the effective electron-proton interaction \cite{potekhin}. For the parameters of Fig.~\ref{pict_3500}, our curve using the Hulth\'en potential and the curve according to their model are indistinguishable. However, there is no effect due to nonlinear electron-ion interactions for temperatures above $5000\,$K and densities larger than the one at $r_s=0.9$. \subsection{Quantum Effects in the Ion Subsystem} For the highest densities considered, quantum effects may also become important for the ions. To estimate quantum effects on the protons, we first consider a potential that accounts for quantum diffraction effects of order $e^2$ \cite{klimkra} \begin{eqnarray} v_{ii}^{\rm KK}(r)\!&=&\!\frac{Z^2e^2\sqrt{\pi}}{2\lambda_i\kappa_e r} \left\{\exp\left(-\kappa_e r+\frac{\lambda_i^2\kappa_e^2}{4} \right)\right.\label{viiq}\\ &\times&\left[\Phi\left(\frac{r}{\lambda_i}-\frac{\lambda_i\kappa_e}{2}\right) +2\Phi\left(\frac{\lambda_i\kappa_e}{2}\right)-1\right] \nonumber\\ \!&+&\!\left.\exp\!\left(\kappa_e r+\frac{\lambda_i^2\kappa_e^2}{4}\right) \left[1-\Phi\left(\frac{r}{\lambda_i}+\frac{\lambda_i\kappa_e}{2} \right)\right]\right\}.\nonumber \end{eqnarray} Here, $\kappa_e$ is the inverse screening length of the electrons, $\lambda_i \!=\! \hbar/\sqrt{2m_{ii}k_BT}$ is the thermal wavelength of the ions with the reduced ion mass $m_{ii} \!=\! m_i/2$, and $\Phi(x)$ denotes the error function \cite{math}. For large distances, the quantum potential (\ref{viiq}) approaches the screened effective ion-ion potential (\ref{viieff}). At the origin, it has the finite value \begin{equation} \lim_{r\to 0} v_{ii}^{\rm KK}=\frac{Z^2e^2\sqrt{\pi}}{\lambda_i} \exp\left(\frac{\lambda_i^2\kappa_e^2}{2}\right) \left[1-\Phi\left(\frac{\lambda_i\kappa_e}{2}\right)\right] \,, \end{equation} which reflects quantum diffraction effects. Moreover, quantum exchange effects can be important for high densities. These can be included by adding the following exchange potential \cite{CPP_PNP} \begin{eqnarray} \label{viix} v_{ii}^{\rm ex}(r)&=&\frac{1}{2} \exp\left(-r^2/\lambda_i^2\right) \\ &\times& \left[k_BT-\frac{Z^2e^2\pi}{4r} \int\limits_0^1\frac{d\alpha}{\alpha} \, \Phi\left(\frac{r\alpha}{\lambda_i\sqrt{1-\alpha}}\right) \right] \,. \nonumber \end{eqnarray} The first term accounts for ideal exchange in an averaged way whereas the second term gives the $e^2$ contribution. The quantum potentials (\ref{viiq}) and (\ref{viix}) are derived from Slater sums, and are exact in the sense of perturbation theory. They give the correct quantum thermodynamic functions in the weakly coupled and weakly degenerate limit, i.e., for systems with $\Gamma_{ii} \!\ll\! 1$ and $n\Lambda_i^3 \!=\! n(2\pi\hbar^2/m_ik_BT)^{3/2} \!\ll\!
1$. We use these potentials here to estimate for which densities quantum effects start to influence the ion properties. It should be emphasized that the quantum potentials (\ref{viiq}) and (\ref{viix}) are used {\em only} in the classical method employed to determine the pair distribution function $g_{ii}$ (here, in the HNC solver). The thermodynamic expressions (\ref{yeos}) and (\ref{yueos}) are valid in the quantum as well as in the classical case. Hence, the effective ion-ion potential (\ref{viieff}) must be used without any quantum corrections in these formulas. \begin{figure}[t] \includegraphics[width=0.48\textwidth,clip=true]{fig_05_color.eps} \caption{(Color Online) Ion-ion pair distributions for $T \!=\! 10^4\,$K and three densities calculated by HNC using the screened potential (\ref{viieff}) (solid lines, labelled Y) and the screened quantum potential with exchange (\ref{viiq}) plus (\ref{viix}) (dash-dotted lines, KK+ex). The ion degeneracy is $n\Lambda_i^3 \!=\! 0.26$, 0.52, 1.05 for the three densities, respectively. The ion-ion coupling is $\Gamma_{ii} \!=\! 99(96)$, 125(119), 157(144), respectively, where the quantum coupling strength is given in brackets. The upper panel gives the ratio of the pair distributions with and without quantum effects.} \label{pict_grkkx} \end{figure} Figure \ref{pict_grkkx} shows ion pair distributions obtained using the usual screened potential (\ref{viieff}) and the quantum pseudo\-potential. Deviations become obvious for plasmas with $n\Lambda_i^3 \!>\! 0.6$. It can be observed that the quantum effects weaken the short range order in the proton system. It is, however, remarkable that these effects only set in at such high values of degeneracy. The reason for this behavior is found in the strong Coulomb forces at high densities that create a large correlation hole. Consequently, the protons are too far apart to experience short range diffraction and quantum exchange effects. The emerging quantum effects in the ion structure yield only small changes in the ion contribution to the thermodynamic functions. We determine a reduction of less than 1\% in the correlation contribution to the ion pressure for plasmas with $n\Lambda_i^3 \!\le\! 1$. At higher densities, the deviations from the classical result can be large, but their exact calculation requires PIMC methods. The only quantum effect outside our control is the zero-point oscillation of the ions. In strongly coupled liquids, caging of the ions occurs and, for the time the cage is stable, the ions perform oscillations as in a solid \cite{donko}. PIMC calculations in the solid and fluid phases of a Yukawa system did indeed find contributions due to the zero-point motion, but for very high densities of $n \!=\! 10^{27}\,$cm$^{-3}$ \cite{graham}. Moreover, the overall change in energy due to quantum oscillations is below 1\% for the parameters considered. Furthermore, DFT-MD simulations, which do not include quantum oscillations of the ions, and CEIMC results, which include them, agree well in the considered parameter range, which again indicates that these effects are small. \subsection{Hydrogen EOS for Arbitrary Densities} The two-fluid model, based on the quantum statistical Green's function approach for the electron contributions and classical HNC methods for the ion properties, is well applicable in the density range that is neither covered by present first-principles simulations nor by the $T=0$ limit.
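As an illustration of the classical ingredient of this scheme, the following minimal Python sketch tabulates the effective ion-ion potentials discussed above in the dimensionless form in which they would be handed to an HNC solver. It is only a sketch under stated assumptions: the standard Yukawa form is taken for the linearly screened potential (\ref{viieff}), the reduced units (lengths in $\kappa_e^{-1}$, energies in $e^2\kappa_e$) and the parameter values are illustrative choices, and $\Phi$ is implemented as the error function, as in the text.
\begin{verbatim}
# Sketch: effective ion-ion pair potentials for the HNC solver.
# Units: r in 1/kappa_e, energies in e^2*kappa_e (illustrative choice).
import numpy as np
from scipy.special import erf

def v_yukawa(r, Z=1.0):
    # Linearly screened potential; Yukawa form assumed for Eq. (viieff).
    return Z**2 * np.exp(-r) / r

def v_hulthen(r):
    # Nonlinear Hulthen potential, Eq. (viih).
    return 1.0 / np.expm1(r)

def v_kk(r, lam, Z=1.0):
    # Klimontovich-Kraeft quantum diffraction potential, Eq. (viiq);
    # lam = lambda_i * kappa_e (thermal wavelength over screening length).
    pref = Z**2 * np.sqrt(np.pi) / (2.0 * lam * r)
    t1 = np.exp(-r + lam**2/4.0) * (erf(r/lam - lam/2.0)
                                    + 2.0*erf(lam/2.0) - 1.0)
    t2 = np.exp(r + lam**2/4.0) * (1.0 - erf(r/lam + lam/2.0))
    return pref * (t1 + t2)

r = np.linspace(0.05, 5.0, 100)
# The quantum potential approaches the Yukawa form at large r.
print(v_yukawa(r)[-1], v_hulthen(r)[-1], v_kk(r, lam=0.3)[-1])
\end{verbatim}
A quick consistency check of such a table is the large-distance behaviour: the potential (\ref{viiq}) must approach the screened Coulomb form, while at the origin it saturates at the finite value quoted above.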
After examining the high-density part of the EOS in detail, it is worth noting that theories and simulations in the physical picture can be used to cover the entire parameter range {\em and} show agreement in overlapping regions of their applicability. \begin{figure}[t] \includegraphics[width=0.48\textwidth,clip=true]{fig_06_color.eps} \caption{(Color Online) Pressure over ideal pressure of a hydrogen plasma with $T \!=\! 10^4\,$K for a wide density range covering the ideal classical plasma state, the atomic gas, the molecular gas, and the metallic fluid (from left to right). The lines show the fugacity expansion from Ref.~\cite{bluebook}, the two-fluid model (this work) using an effective ion-ion potential in RPA, and the $T \!=\! 0$ limit. The symbols mark simulation data from RPIMC \cite{PIMC}, CEIMC \cite{CEIMC}, and DFT-MD from this work and Refs.~\cite{VTMB_07,Holst}.} \label{pict_1e4all} \end{figure} Figure \ref{pict_1e4all} demonstrates this fact for an isotherm that covers the low density ideal plasma, the atomic fluid, the molecular fluid, and the metallic fluid up to the border of the fully degenerate electron-proton system. Although each phase requires well-suited theories or simulations, one finds a smooth EOS that is often based on several methods and that does not involve any interpolation. Such a combined EOS can serve as an excellent basis for the modelling of gas planets and other compact objects dominated by hydrogen. \section{Conclusions} We have applied a two-fluid model to investigate the hydrogen EOS at high densities. Our approach combines the advantages of a quantum theory based on thermodynamic Green's functions for the electrons with a classical description of the structure in the ion fluid. Thus, it is able to account for electron degeneracy, finite temperatures, and strong forces between the ions. Its only limitation is the requirement of weak electron-ion interactions, which naturally excludes bound states. The two-fluid model agrees very well with a number of exact limiting laws: i) at low densities it almost exactly merges with the Debye-H\"uckel law; ii) at high densities it practically coincides with the $T \!=\! 0$ limit as long as quantum effects on the proton subsystem are negligible; and iii) for high plasma temperatures it agrees with a perturbation expansion in terms of the interaction strength over the entire density range. In the high density region, we find excellent agreement of the two-fluid model with first-principles simulations (DFT-MD). Our DFT-MD simulations also agree well with recent CEIMC results. At lower densities ($r_s \!>\! 0.7$), the requirement of weak electron-ion coupling is not fulfilled and one finds increasing deviations between the results of the two-fluid model and the quantum simulations. For densities with $r_s \!\leq\! 0.7$, the two-fluid model is, however, a reliable and computationally cheap alternative to full-scale quantum simulations. From the agreement in pressure and ion structure between the simulations and the two-fluid model, we can conclude that the exchange-correlation functional used in DFT-MD and our Green's function approach give the same electronic contributions for the relevant parameters. The two-fluid model also bridges the parameter space where neither DFT-MD simulations nor the $T \!=\! 0$ law can be applied. This success means that hydrogen can confidently be described in large parts of the phase diagram by techniques using the physical picture.
For temperatures of a few electron volts relevant for astrophysics and inertial fusion, this includes the entire density range. Although the two-fluid model was applied here only to hydrogen, it is, with small modifications, also applicable to any fully ionized system with higher ion charge states or very stable inner-shell configurations. The essential test is whether the electron-ion interaction can be considered to be small. Accordingly, systems like the fluid region in white dwarfs can be confidently described by the two-fluid model presented here as well. \section*{Acknowledgement} The authors are grateful to B.~Holst, W.~Lorenzen, and R.~Redmer for valuable remarks and comparisons with their DFT-MD results. We also thank M.~Schlanges for stimulating discussions. Financial support from the UK's Engineering and Physical Sciences Research Council is gratefully acknowledged.
\section{Introduction} \label{sec1} A large amount of work has been devoted in the last few decades to the study of granular matter (materials composed of many mesoscopic particles that collide inelastically). When the granular material is externally excited (rapid flow conditions), the behavior of the solid particles is dominated by the particle collisions, and kinetic theory tools can be used to describe granular flows. Thus, from the point of view of fundamental kinetic theory, the study of granular gases is interesting because it involves the generalization of classical kinetic equations (such as the Boltzmann, Enskog or Boltzmann-Lorentz equations, for instance) to dissipative dynamics. On the other hand, the fact that collisions are inelastic gives rise to a continuous decay of the total kinetic energy and so one has to inject energy into the system to keep it under rapid flow conditions. When the injected energy compensates for the collisional loss of energy, the system reaches a \emph{non-equilibrium} steady state. In this context, granular matter can be considered as a good prototype of a system that is inherently in a non-equilibrium state. In real experiments, the granular gas is driven through the boundaries, for example by vibrating its walls \cite{YHCMW02}, or alternatively by bulk driving, as in air-fluidized beds \cite{AD06,SGS05}. The same effect can be achieved by means of the action of an \emph{external} driving force that heats the system homogeneously. This way of supplying energy is quite usual in computer simulations \cite{Puglisi,Zippelius}, and this type of external force is usually called a ``thermostat'' \cite{E90}. Although thermostats have been widely used in the past to study granular dynamics, their effects on the properties of the system are not yet completely understood \cite{DSBR86,GSB90,GS03}. In this paper, we are interested in analyzing the homogeneous steady state of a driven granular fluid. Our thermostat is composed of two different terms: (i) a drag force proportional to the velocity of the particle and (ii) a stochastic force with the form of a Gaussian white noise where the particles are randomly kicked between collisions \cite{WM96}. Under these conditions, our kinetic equation has the structure of a Fokker-Planck equation \cite{VK92} plus the corresponding (inelastic) collisional operator of the Enskog-Boltzmann equation. The viscous drag force allows us to model the friction from a surrounding fluid over a moderately dense set of spheres \cite{GTSH12}. The stochastic force would model the energy transfer from the surrounding fluid molecules to the granular particles, due to molecular thermal motion (much in the same way as for a Brownian particle). Thus, our study has obvious applications to the dynamics of colloids and suspensions \cite{J00,K90,KH01,BGP11,GTSH12}. In particular, and since the volume forces act homogeneously in the granular gas, our system may show homogeneous steady states if there is no additional energy input from the boundaries. The same type of thermostat was used in previous works by other authors \cite{Puglisi}. In particular, Gradenigo \emph{et al.} \cite{GSVP11} carried out Langevin dynamics simulations for hard disks to measure the static and dynamic structure factors for shear and longitudinal modes. The corresponding best fit of their simulation results allowed them to identify the kinematic and longitudinal viscosities and the thermal diffusivity.
For the sake of simplicity, they neglect non-Gaussian corrections to the (homogeneous) distribution function and use the forms of the \emph{elastic} Enskog transport coefficients to compare with simulations. More recently, the expressions of the \emph{inelastic} transport coefficients of driven granular fluids have been derived \cite{GCV13} by means of the Chapman-Enskog method \cite{CC70}. In this case, the inherently homogeneous steady state of our system emerges as the zeroth-order approximation $f^{(0)}$ in the Chapman-Enskog perturbative scheme. In order to characterize the deviation of the distribution $f^{(0)}$ from its Maxwellian form, a Sonine polynomial expansion was considered. As usual, for practical purposes, we retained only the first nontrivial contribution to the expansion and derived an explicit expression for the second Sonine coefficient $a_2$. In the present work, we want to focus on the properties of the homogeneous steady state, describing in more detail the features of the velocity distribution function. More specifically, our aim here is two-fold. First, as noted in our previous work \cite{GCV13}, we \emph{assume} that in the steady state the homogeneous distribution function $f_\text{s}$ admits a \emph{scaling} solution \begin{equation} \label{1.1} f_\text{s}\to n v_{0,\text{s}}^{-d} \varphi(c,\xi^*), \end{equation} where $n$ is the number density, $v_{0,\text{s}}=\sqrt{2T_\text{s}/m}$ is the thermal velocity ($T_\text{s}$ being the steady granular temperature), $\mathbf{c}\equiv \mathbf{v}/v_{0,\text{s}}$ is a dimensionless velocity and $\xi^*$ (defined below in Eq.\ \eqref{2.13}) is the (dimensionless) noise strength of the stochastic term of the thermostat. According to the scaling form \eqref{1.1}, and in contrast to the results obtained in the homogeneous cooling state (undriven gas) \cite{NE98}, the dependence of the reduced distribution $\varphi$ on temperature is encoded not only through the (reduced) velocity $c$ but also through the driven parameter $\xi^*$. In this paper, we perform Monte Carlo simulations \cite{B94} of the Enskog-Boltzmann equation to confirm that indeed the scaled distribution $\varphi$ presents this universal character for arbitrary values of the coefficient of restitution $\al$ and the external driven parameters. As a second goal, we shall characterize the behavior of $\varphi(c,\xi^*)$ in the domain of thermal velocities by evaluating the first two nontrivial coefficients ($a_2$ and $a_3$) of an expansion of $\varphi$ in Sonine polynomials. Given that both coefficients cannot be exactly obtained, we will propose two different approximations to estimate $a_2$ and $a_3$. In particular, we provide expressions for the coefficient $a_2$ obtained by a more accurate calculation method than in our previous work \cite{GCV13}. Therefore, we give an analytical expression for the distribution function with one more term, and in a more refined approximation. As we will see, the comparison with the direct simulation Monte Carlo (DSMC) results obtained specifically for this work shows that the analytical expression of the distribution function derived here describes the system very well in a wide range of velocities. A preliminary report of part of the results offered in this paper has been published elsewhere \cite{M12}. The plan of the paper is as follows. In section \ref{sec2} we describe the system, the thermostat, and the corresponding kinetic equation.
Next, in section \ref{sec3} we obtain explicit expressions for the first two Sonine coefficients $a_2$ and $a_3$, while the numerical solution of the Enskog-Boltzmann equation for the system studied here is presented in section \ref{comparison} for disks and spheres. A comparison with the approximate theoretical expressions derived in section \ref{sec3} is also carried out, showing in general good agreement between theory and simulation. The paper is closed in section \ref{discussion} with some concluding remarks. \section{Enskog-Boltzmann kinetic theory for homogeneous driven states} \label{sec2} Let us then consider a set of identical smooth hard disks/spheres ($d$ is the dimension of the system) with mass $m$ and diameter $\sigma$ that collide inelastically. At moderate densities, one can still assume that there are no correlations between the velocities of two particles that are about to collide (molecular chaos hypothesis) \cite{GS95}, so that the two-body distribution function factorizes into the product of the one-particle velocity distribution functions $f(\mathbf{r}, \mathbf{v}, t)$. For a spatially uniform state, the Enskog kinetic equation for $f(v, t)$ reads \begin{equation} \partial_{t}f+{\cal F}f=\chi J[f,f], \label{2.1} \end{equation} where $J[f,f]$ is the collision operator, given by \begin{equation} \label{2.2} J\left[\mathbf{v}_1|f(t), f(t)\right] =\sigma^{d-1}\int d\mathbf{v}_{2}\int d\widehat{\boldsymbol {\sigma}}\Theta (\widehat{\boldsymbol {\sigma}}\cdot \mathbf{g}_{12})(\widehat{ \boldsymbol {\sigma }}\cdot \mathbf{g}_{12})\left[ \alpha^{-2}f(\mathbf{v}_{1}^{\prime })f(\mathbf{v}_{2}^{\prime})-f(\mathbf{v}_{1})f(\mathbf{v}_{2})\right]. \end{equation} Here, ${\cal F}$ is an operator representing the effect of an external force, $\chi$ is the pair correlation function at contact \cite{CS69}, $\widehat{\boldsymbol {\sigma}}$ is a unit vector along the line joining the centers of the colliding spheres, $\Theta $ is the Heaviside step function and ${\bf g}_{12}={\bf v}_{1}-{\bf v}_{2}$ is the relative velocity. In addition, the primes on the velocities in equation \eqref{2.2} denote the initial values $\{\mathbf{v}_1', \mathbf{v}_2'\}$ that lead to $\{\mathbf{v}_1, \mathbf{v}_2\}$ following a binary collision: \begin{equation} \label{2.3} \mathbf{v}_{1}'=\mathbf{v}_{1}-\frac{1}{2}(1+\alpha^{-1})(\widehat{{\boldsymbol {\sigma }}}\cdot \mathbf{g}_{12})\widehat{\boldsymbol {\sigma}}, \quad \mathbf{v}_{2}'=\mathbf{v}_{2}+\frac{1}{2}(1+\alpha^{-1})(\widehat{{\boldsymbol {\sigma }}}\cdot \mathbf{g}_{12})\widehat{\boldsymbol {\sigma}}, \end{equation} where $\alpha \leq 1$ is the (constant) coefficient of normal restitution. Except for the presence of the factor $\chi$ (which accounts for the increase of the collision frequency due to excluded volume effects), the Enskog equation for \emph{uniform} states is identical to the Boltzmann equation for a low-density gas. For this reason, henceforth we will refer to Eq.\ \eqref{2.1} as the Enskog-Boltzmann equation. As we said in the Introduction, our granular gas is subjected to homogeneous volume forces that try to mimic the interaction with a surrounding molecular fluid \cite{Puglisi}. These forces, usually called ``thermostats'' \cite{E90}, show up in the kinetic equation \eqref{2.1} through the term $\mathcal{F}$. Here, we will consider a volume force composed of two independent terms.
One term corresponds to a Gaussian white-noise force ($\mathbf{F}^\text{st}$) that tries to simulate the kinetic energy gain due to occasional collisions with the (faster) molecules of the surrounding fluid. It does this by adding a ``random'' velocity to each particle. This additional velocity is drawn from a Maxwellian distribution with a characteristic variance determined by the ``noise intensity'' $\xi_\text{b}^2$ \cite{WM96}. The other term corresponds to a drag force ($\mathbf{F}^\text{drag}$) of the form $-\gamma_\text{b}\mathbf{v}_i(t)$, which tries to capture the effect of the surrounding fluid viscosity ($\gamma_\text{b}$ is a drag coefficient). This kind of thermostat composed of two different forces has been used by other authors in previous works \cite{Puglisi}. The total thermostat force $\mathbf{F}^\text{th}(t)$ is \begin{equation} \label{2.5} {\bf F}_i^{\text{th}}(t)={\bf F}_i^{\text{st}}(t)+{\bf F}_i^{\text{drag}}(t)= {\bf F}_i^{\text{st}}(t)-\gamma_\text{b}{\bf v}_i(t). \end{equation} Since ${\bf F}_i^{\text{st}}(t)$ is a Gaussian white noise \cite{WM96}, it fulfills the conditions \cite{MS00} \begin{equation} \label{2.6} \langle {\bf F}_i^{\text{st}}(t) \rangle ={\bf 0}, \quad \langle {\bf F}_i^{\text{st}}(t) {\bf F}_j^{\text{st}}(t') \rangle =\mathbf{1}m^2 \xi_\text{b}^2 \delta_{ij}\delta(t-t'), \end{equation} where $\mathbf{1}$ is the $d\times d$ unit matrix and $\delta_{ij}$ is the Kronecker delta. The corresponding term in the Enskog-Boltzmann equation associated with the stochastic force ${\bf F}_i^{\text{st}}$ is represented by the Fokker-Planck operator $-\frac{1}{2}\xi_\text{b}^2\partial^2/\partial v^2$ \cite{NE98}. Therefore, the stochastic and drag forces contribute to the kinetic equation with terms of the form \begin{equation} \label{calF} \mathcal{F}f=\mathcal{F}^{\text{st}}f+\mathcal{F}^{\text{drag}}f, \quad \mathcal{F}^{\text{st}}f=-\frac{1}{2}\xi_\text{b}^2\frac{\partial^2}{\partial v^2} f, \quad \mathcal{F}^{\text{drag}}f=-\frac{\gamma_\text{b}}{m}\frac{\partial}{\partial\mathbf{v}}\cdot\mathbf{v}f. \end{equation} Notice that the thermostat terms $\mathcal{F}^\text{st}$ and $\mathcal{F}^\text{drag}$ introduce in the kinetic equation \eqref{2.1} two new and independent time scales given by $\tau_\text{st}=v_0^2/\xi_\text{b}^2$ and $\tau_\text{drag}=m/\gamma_\text{b}$, respectively. Here, $v_0=\sqrt{2T/m}$ is the thermal velocity defined in terms of the granular temperature $T$. A similar external driving force to that of equation \eqref{2.6} has been recently proposed to model the effect of the interstitial fluid on grains in monodisperse gas-solid suspensions \cite{GTSH12}. The Enskog-Boltzmann equation \eqref{2.1} can be more explicitly written when one takes into account the form \eqref{calF} of the forcing term $\mathcal{F}f$. It is given by \begin{equation} \partial_{t}f-\frac{\gamma_\text{b}}{m} \frac{\partial}{\partial{\bf v}}\cdot {\bf v} f-\frac{1}{2}\xi_\text{b}^2\frac{\partial^2}{\partial v^2}f=\chi J[f,f]. \label{2.7} \end{equation} The density $n$ and temperature $T$ fields are defined as usual (except that in our case the mean flow velocity vanishes) \begin{equation} \label{n} n(t)=\int\;\dd\mathbf{v}f(\mathbf{v},t), \end{equation} \begin{equation} \label{2.8} T(t)=\frac{m}{d n}\; \int\; \dd \mathbf{v}\; v^2 f(\mathbf{v},t). \end{equation} The balance equation for the homogeneous temperature can be easily obtained by multiplying both sides of equation \eqref{2.7} by $v^2$ and integrating over velocity.
The result is \begin{equation} \label{Tdt} \partial_tT=-\frac{2T}{m}\gamma_\text{b} +m\xi_\text{b}^2- \zeta T, \end{equation} where \begin{equation} \label{zeta} \zeta=-\frac{m}{dnT}\int\; \dd\mathbf{v}\; v^2 J[f,f] \end{equation} is the cooling rate. It is proportional to $1-\alpha^2$ and is due to the inelastic character of the collisions. We will now assume that a \emph{normal} (or hydrodynamic) solution to equation\ \eqref{2.7} exists. This means that all the time dependence of the distribution function $f(v,t)$ occurs through a functional dependence on the hydrodynamic fields \cite{CC70,McL89}. Since the temperature is the only relevant field in this problem, the time dependence of $f(v,t)$ occurs only through $T(t)$. Therefore, according to equation\ \eqref{Tdt}, one gets \begin{equation} \label{fhd} \partial_t f=\frac{\partial f}{\partial T}\partial_tT= -\left(\frac{2\gamma_\text{b}}{m} -\frac{m}{T}\xi_\text{b}^2+ \zeta\right)T \frac{\partial f}{\partial T}. \end{equation} Substitution of equation\ (\ref{fhd}) into equation\ (\ref{2.7}) yields \begin{equation} \label{2.12} -\left( \frac{2}{m}\gamma_\text{b}-\frac{m}{T}\xi_\text{b}^2+ \zeta\right)T \frac{\partial f}{\partial T}-\frac{\gamma_\text{b}}{m} \frac{\partial}{\partial {\bf v}}\cdot {\bf v} f-\frac{1}{2}\xi_\text{b}^2\frac{\partial^2}{\partial v^2}f=\chi J[f,f]. \end{equation} After a transient regime, it is expected that the gas reaches a steady state characterized by a constant granular temperature $T_\text{s}$. In this case, $\partial_t T=0$ and the balance equation \eqref{Tdt} leads to \begin{equation} \label{2.9} T_\text{s}=\frac{m\xi_\text{b}^2}{\zeta_\text{s}+\frac{2\gamma_\text{b}}{m}}, \end{equation} where the subscript $\text{s}$ means that the quantities are evaluated in the steady state. Given that equation \eqref{2.9} establishes a relation between the two driven parameters $\gamma_\text{b}$ and $\xi_\text{b}^2$, only one of the above parameters will be considered as independent. Henceforth, we will take $\xi_\text{b}^2$ as the relevant driven parameter. For elastic collisions ($\al=1$), $\zeta_\text{s}=0$ and so equation \eqref{2.9} yields $T_\text{s}=T_\text{b}$, where \beq \label{2.10.1} T_\text{b}=\frac{m^2\xi_\text{b}^2}{2\gamma_\text{b}}. \eeq As in the work of Gradenigo \emph{et al.} \cite{GSVP11}, equation \eqref{2.10.1} defines a ``bath temperature''. Its name may be justified since it is determined by the two thermostat parameters ($\gamma_\text{b}$ and $\xi_\text{b}^2$), and thus it can be considered as a remnant of the physical temperature of the surrounding ordinary (elastic) fluid. In this sense, for elastic collisions, $T=T_\text{b}$, and so energy equipartition is fulfilled (in accordance with equilibrium statistical mechanics principles). For inelastic gases (i.e., for $\alpha<1$, $\zeta_\text{s}>0$), equation \eqref{2.9} yields $T_\text{s}<T_\text{b}$. From a physical point of view, it makes sense that the inelastic granular gas is cooler than the surrounding ordinary fluid. The kinetic equation for the steady distribution function $f_\text{s}$ can be easily obtained by using the relation \eqref{2.9} in equation \eqref{2.12}: \begin{equation} \label{2.11} \frac{1}{2}\zeta_\text{s} \frac{\partial}{\partial {\bf v}}\cdot {\bf v} f_\text{s}-\frac{m\xi_\text{b}^2}{2T_\text{s}} \frac{\partial}{\partial {\bf v}}\cdot {\bf v} f_\text{s}-\frac{1}{2}\xi_\text{b}^2\frac{\partial^2}{\partial v^2}f_\text{s}=\chi J[f_\text{s},f_\text{s}].
\end{equation} Equation \eqref{2.11} clearly shows that $f_\text{s}(v)$ must also depend on the model parameter $\xi_\text{b}^2$ and the steady temperature $T_\text{s}$, apart from its dependence on the coefficient of restitution $\al$. Note that the steady cooling rate $\zeta_\text{s}$ is defined in terms of the steady distribution $f_\text{s}$ (see equation \eqref{zeta}). Based on previous results obtained for undriven \cite{NE98,GS95} and driven \cite{NE98,MS00,GCV13} systems, it is expected that equation\ \eqref{2.11} admits a scaling solution of the form given by equation \eqref{1.1}, where $\varphi$ is an unknown function of the dimensionless parameters \begin{equation} \label{2.13} \mathbf{c}\equiv \frac{\mathbf{v}}{v_{0,\text{s}}}, \quad \xi^*\equiv \frac{m\ell}{\chi T_\text{s}v_{0,\text{s}}}\xi_\text{b}^2. \end{equation} Here, \begin{equation} \label{mfp} \ell=\frac{1}{n\sigma^{d-1}} \end{equation} is the mean free path for hard spheres. In the steady state, it is also convenient to define the collision frequency \begin{equation} \label{cf} \nu_s=\frac{v_{0,\text{s}}}{\ell}=\sqrt{\frac{2T_\text{s}}{m}}n\sigma^{d-1}, \end{equation} and the reduced drag coefficient \begin{equation} \label{reddrag} \gamma^*=\frac{\gamma_\text{b}}{\chi m\nu_s}. \end{equation} In terms of the (reduced) distribution function $\varphi$, equation\ \eqref{2.11} may be rewritten as \begin{equation} \label{2.15} \frac{1}{2}\zeta^* \frac{\partial}{\partial {\bf c}}\cdot {\bf c}\varphi-\frac{1}{2}\xi^*\frac{\partial}{\partial {\bf c}}\cdot {\bf c} \varphi- \frac{1}{4}\xi^* \frac{\partial^2}{\partial c^2}\varphi= J^*[\varphi,\varphi], \end{equation} where we have introduced the dimensionless quantities \begin{equation} \label{2.16} \zeta^*\equiv\frac{\zeta_\text{s}}{\chi\nu_s}, \quad J^*[\varphi,\varphi] \equiv \frac{ v_{0,\text{s}}^{d}}{n\nu_s}J[f,f]. \end{equation} Equation \eqref{2.15} clearly shows that the dependence of the scaled distribution function $\varphi$ on the temperature is encoded through two \emph{different} parameters: the dimensionless velocity $c$ and the (reduced) noise strength $\xi^*$. This scaling differs from the one assumed in the free cooling case \cite{NE98}, where only the dimensionless velocity $c$ is required to characterize the distribution $\varphi$. A similar scaling solution to the form \eqref{1.1} has been recently found \cite{GMT12} at all times (also for unsteady states) in the particular case $\gamma_\text{b}=0$. Thus, our guess in equation \eqref{1.1} seems to be reasonable. In the case of elastic particles ($\al=1$), the cooling rate $\zeta^*$ vanishes and the solution of equation\ \eqref{2.15} is the Maxwellian distribution \begin{equation} \label{2.17} \varphi_\text{M}(c)= \pi^{-d/2}e^{-c^2}. \end{equation} However, if the particles collide inelastically ($\alpha <1$), then $\zeta^*\neq 0$ and the exact form of $\varphi(c)$ is not known. One of the objectives of the present work is to find an accurate analytical solution to the reduced distribution function $\varphi$ from equation \eqref{2.15}. As usual, the behavior of $\varphi(c,\xi^*)$ in the region of thermal velocities ($c\simeq 1$) can be well characterized by the first two nontrivial coefficients ($a_2$ and $a_3$) of an expansion in Sonine polynomials. This will be done in the next section by considering two different approaches. In reduced units, the steady-state condition \eqref{2.9} can be simply written as \begin{equation} \label{ss} 2\gamma^*=\xi^*-\zeta^*.
\end{equation} Since $\gamma^* \geq 0$, equation \eqref{ss} requires $\xi^*\geq \zeta^*$. Thus, at a given value of the coefficient of restitution $\al$, there is a minimum threshold value $\xi^*_\text{th}(\al)$ of the noise intensity needed to reach a steady state. The value of $\xi^*_\text{th}$ coincides with the (reduced) cooling rate $\zeta^*(\al)$. Given that the latter cannot be exactly determined, a good estimate of it is obtained when one replaces the true distribution $\varphi$ by its Maxwellian form $\varphi_\text{M}$. In this case, $\zeta^*\to \zeta_\text{M}^*$, where \cite{GS95} \beq \label{ss.1} \zeta_\text{M}^*=\frac{\sqrt{2}}{d}\frac{\pi^{(d-1)/2}}{\Gamma(d/2)}(1-\al^2). \eeq Before closing this section, let us make some observations. First, due to the equivalence between the Enskog and Boltzmann equations in homogeneous states, the solution to equation \eqref{2.15} does not depend explicitly on the pair correlation function $\chi$ (which is a function of the packing fraction). This means that the reduced distribution function $\varphi$ has the same universal form for arbitrary values of the packing fraction of the granular fluid. Thus, we do not need to provide explicit expressions for $\chi$ for the purpose of this work, although they may be found elsewhere \cite{CS69,T95}. Also, as mentioned before, the steady-state equation \eqref{2.9} leads to a relation between $\xi_\text{b}^2$ and $\gamma_\text{b}$, so that the scaled distribution $\varphi$ depends on both parameters only through the reduced noise strength $\xi^*$. Therefore, providing $\xi^*$ (and not $\xi_\text{b}^2$, $\gamma_\text{b}$, nor $\chi$) is enough to determine uniquely the steady distribution function. We will check in section \ref{comparison} that this is indeed the case, by comparison with simulation data. \section{Analytical solution of the scaled distribution function} \label{sec3} The goal of this section is to determine a perturbative (although sufficiently accurate) analytic solution of the distribution function $\varphi(c,\xi^*)$. As said before, a convenient and useful way of characterizing $\varphi(c,\xi^*)$ in the range of low to intermediate velocities is through the Sonine polynomial expansion \begin{equation} \label{3.1} \varphi(c,\xi^*)=\varphi_\text{M}(c)\left[1+\sum_{p=1}^\infty\; a_p(\xi^*)\; S_p(c^2)\right], \end{equation} where $S_p$ are generalized Laguerre or Sonine polynomials. They are defined as \cite{BP04} \beq \label{3.2} S_p(x)=\sum_{k=0}^p\;\frac{(-1)^k \left(\frac{d}{2}-1+p\right)!}{\left(\frac{d}{2}-1+k\right)!(p-k)!k!}x^k, \eeq and satisfy the orthogonality relations \cite{Abramowitz} \beq \label{3.6} \int\; \dd \mathbf{c}\; \varphi_\text{M}(c)\; S_p(c^2)\; S_{p'}(c^2)={\cal N}_p\;\delta_{p p'}, \eeq where ${\cal N}_p$ is a normalization constant. The first few Sonine polynomials relevant for our study are \beq \label{3.4} S_0(x)=1, \quad S_1(x)=-x+\frac{d}{2}, \quad S_2(x)=\frac{1}{2}x^2-\frac{d+2}{2}x+\frac{d(d+2)}{8}, \eeq \beq \label{3.5} S_3(x)=-\frac{1}{6}x^3+\frac{d+4}{4}x^2-\frac{(d+2)(d+4)}{8}x+\frac{d(d+2)(d+4)}{48}. \eeq The coefficients $a_p$ appearing in equation \eqref{3.1} (also called \textit{cumulants}) are the corresponding velocity moments of the scaling function $\varphi$, i.e., \beq \label{3.6_31} a_p(\xi^*)=\frac{1}{{\cal N}_p}\int\; \dd \mathbf{c}\; S_p(c^2)\; \varphi(c,\xi^*).
\eeq In particular, the temperature definition \eqref{2.8} implies $\langle c^2 \rangle =\frac{d}{2}$ and therefore, \beq \label{3.7} a_1=\frac{2}{d}\langle S_1(c^2) \rangle=0. \eeq Here, $\langle \cdots \rangle$ denotes an average over the scaled distribution $\varphi$, namely, \beq \label{3.8} \langle c^p \rangle\equiv \int\; \dd \mathbf{c}\; c^p\; \varphi(c). \eeq In the present work, we will retain up to the first two nontrivial coefficients $a_2$ and $a_3$. They are related to the fourth and sixth velocity moments as \begin{equation} \label{3.9} \langle c^4 \rangle =\frac{d(d+2)}{4}(1+a_2), \end{equation} \begin{equation} \label{3.10} \langle c^6 \rangle =\frac{d(d+2)(d+4)}{8}(1+3a_2-a_3). \end{equation} In order to determine the coefficients $a_p$, we construct a set of equations for the velocity moments $\langle c^{2p} \rangle$. The hierarchy for the moments can be easily derived by multiplying both sides of equation\ \eqref{2.15} by $c^{2p}$ and integrating over $\mathbf{c}$. The result is \begin{equation} \label{3.11} p(\zeta^*-\xi^*)\langle c^{2p} \rangle+\frac{p(2p+d-2)}{2}\xi^*\langle c^{2p-2} \rangle=\mu_{2p}, \end{equation} where \begin{equation} \label{3.12} \mu_{2p}=-\int\; d{\bf c}\;c^{2p}\; J^*[\varphi,\varphi]. \end{equation} In writing equation\ \eqref{3.11}, use has been made of the results \begin{equation} \label{3.13} \int\; d{\bf c}\; c^{2p}\; \frac{\partial}{\partial {\bf c}}\cdot {\bf c}\varphi(\mathbf{c})=-2p\langle c^{2p} \rangle, \end{equation} \begin{equation} \label{3.14} \int\; \dd {\bf c}\; c^{2p}\; \frac{\partial^2}{\partial c^2}\varphi(\mathbf{c})=2p(2p+d-2)\langle c^{2p-2} \rangle. \end{equation} Note that, according to equations \eqref{zeta} and \eqref{3.12}, the (reduced) cooling rate is $\zeta^*=\frac{2}{d}\mu_2$. The cumulants $a_p$ can be obtained from the \emph{exact} set of moment equations \eqref{3.11}. However, given that the collisional moments $\mu_{2p}$ are functionals of the distribution $\varphi$, equation\ \eqref{3.11} becomes an infinite hierarchy of moment equations. In other words, \emph{all} the Sonine coefficients $a_p$ are coupled and so one has to resort to some kind of truncation in the series \eqref{3.1} to get explicit forms for $a_p$. Thus, based on the expectation that the Sonine coefficients are small, one usually \emph{approximates} the first few collisional moments $\mu_{2p}$ by inserting the expansion \eqref{3.1} into equation\ \eqref{3.12}, truncating the expansion at a given order and, in some cases, neglecting nonlinear terms. In particular, in the case of the collisional moments defined by equation\ \eqref{3.12} with $p=1$, 2, and 3, one gets \begin{equation} \label{3.15} \mu_2\to A_0+A_2 a_2+A_3 a_3, \end{equation} \begin{equation} \label{3.16} \mu_4\to B_0+B_2 a_2+B_3 a_3, \end{equation} \begin{equation} \label{3.17} \mu_6\to C_0+C_2 a_2+C_3 a_3. \end{equation} The expressions of the coefficients $A_i$, $B_i$, and $C_i$ as functions of the coefficient of restitution $\alpha$ and the dimensionality $d$ were independently derived by van Noije and Ernst \cite{NE98} and by Brilliantov and P\"oschel \cite{BP06}. They are displayed in Appendix \ref{appA} for the sake of completeness. Note that in equations \eqref{3.15}--\eqref{3.17} we have neglected the coefficients $a_p$ with $p\geq 4$ and nonlinear terms (like $a_2^2$, $a_2 a_3$, and $a_3^2$). The exact moment equation \eqref{3.11} becomes an approximation when it is linearized with respect to $a_2$ and $a_3$.
For $p=2$, one gets \begin{equation} \label{3.18} \left[B_2-(d+2)(A_0+A_2)+\frac{d(d+2)}{2}\xi^*\right]a_2+\left[B_3-(d+2)A_3\right]a_3=(d+2)A_0-B_0, \end{equation} while the result for $p=3$ is \begin{equation} \label{3.19} \left[C_2+\frac{3}{4}(d+2)(d+4)(d\xi^*-3A_0-A_2)\right]a_2+\left[C_3-\frac{3}{4}(d+2)(d+4) \left(A_3-A_0+\frac{d}{2}\xi^*\right)\right]a_3=\frac{3}{4}(d+2)(d+4)A_0-C_0. \end{equation} The (reduced) thermostat parameter $\xi^*$ depends on the value of the (steady) granular temperature, which is a function of $a_2$ and $a_3$. Since it is expected that both coefficients are quite small, we evaluate $\xi^*$ by assuming $a_2=a_3=0$. In this case, the set of Eqs.\ \eqref{3.18} and \eqref{3.19} becomes a simple linear algebraic set of equations that can be easily solved to give $a_2$ and $a_3$ in terms of $d$, $\al$ and $\xi^*$. As noted previously by Montanero and Santos \cite{MS00,SM09}, there is a certain degree of ambiguity in the approximations used in the determination of $a_2$ and $a_3$. Here, in order to solve the set of equations \eqref{3.18} and \eqref{3.19}, we consider two basic classes of approximations. In Approximation I, we first assume that $a_3\ll a_2$, so that $a_3$ can be neglected versus $a_2$ in Eq.\ \eqref{3.18} but not in Eq.\ \eqref{3.19}. This is equivalent to neglecting $a_3$ in Eqs.\ \eqref{3.15} and \eqref{3.16} for $\mu_2$ and $\mu_4$, respectively. Given that $\mu_6$ is expected to be smaller than $\mu_4$, it seems to be more accurate to neglect $a_3$ in Eq.\ \eqref{3.18} rather than in Eq.\ \eqref{3.19}. The comparison with computer simulations confirms this expectation. In Approximation II, both Sonine coefficients $a_2$ and $a_3$ are considered as being of the same order of magnitude. Since the latter does not assume negligible contributions of $a_3$ to the expression of $a_2$, this approximation should be more accurate. In Approximation I, the expression of the second Sonine coefficient $a_2$ may be calculated independently of $a_3$, from equation \eqref{3.18} with $a_3=0$. In fact, $a_2^{(I)}$ was obtained in previous works \cite{M12,GCV13}. Its explicit expression is given by equation \eqref{b2} while $a_3^{(I)}$ is \begin{equation} \label{b3bis} a_3^{(I)}(\alpha,\xi^*)=F\left(\alpha,a_2^{(I)}(\alpha),\xi^*\right), \end{equation} where the function $F(\al, a_2, \xi^*)$ is given by equation \eqref{b4}. The expressions in Approximation II have the following forms \begin{equation} \label{b5} a_2^{(II)}(\alpha,\xi^*)=\frac{M(\alpha,\xi^*)}{N(\alpha,\xi^*)}, \end{equation} \begin{equation} \label{b8.1} a_3^{(II)}(\alpha,\xi^*)=F\left(\alpha,a_2^{(II)}(\alpha),\xi^*\right), \end{equation} where the explicit (and rather lengthy) expressions of $M(\alpha,\xi^*)$ and $N(\alpha,\xi^*)$ are given by equations \eqref{b6} and \eqref{b7}, respectively. \section{Computer simulation results. Comparison with theoretical approaches} \label{comparison} In this section, the relevant quantities of the problem ($a_2$, $a_3$ and $\varphi$) will be computed by numerically solving the Enskog-Boltzmann equation by means of the DSMC method \cite{B94,P05}. The DSMC method has proven to be a very efficient tool for numerically solving the Enskog-Boltzmann equation \cite{B94,MS00} for inelastic collisions. The numerical results will also be compared with the theoretical predictions described in section \ref{sec3}. Before doing so, let us provide some details on the implementation of the DSMC method for the problem considered in this paper.
\subsection{Direct simulation Monte Carlo method} By means of the DSMC method we can obtain a numerical solution of the kinetic equation \eqref{2.7}. This solution has the following advantages: 1) it also determines homogeneous non-steady states; and 2) it does not assume \textit{a priori} either a \emph{normal} solution or the specific scaling form \eqref{1.1} of the distribution function, as we did for the analytical solution. Therefore, a comparison of both numerical and analytical solutions is a direct way of validating (for steady states) the hypotheses of the existence of a normal solution and of the special scaling form \eqref{1.1} of the solution to equation \eqref{2.11}. Also, the comparison will allow us to assess the accuracy of the approximate expressions for $a_2$ and $a_3$. Additionally, it will allow us to present a first preliminary analysis of the transient regime towards the steady state for the kind of thermostat we are using. \begin{figure*} \begin{center} \begin{tabular}{lr} \resizebox{7.5cm}{!}{\includegraphics{fig1a.eps}}&\resizebox{7.5cm}{!} {\includegraphics{fig1b.eps}} \end{tabular} \end{center} \caption{Time evolution for hard disks of the reduced temperature $T(t)/T_\text{s}$ (left panel) and the scaled distribution function $\varphi(c_0)$ (right panel) for $\xi^*=0.478$, $\gamma^*=0.014$, and $\alpha=0.8$. Three different initial temperatures have been considered: $T(0)/T_\text{s}=0.25 (\times), 1 (\cdots),$ and $4 (\square)$. Here, $T_\text{s}$ is the steady value of the temperature and $c_0(t)=v_{0,\text{s}}/v_0(t)$, $v_{0,\text{s}}=\sqrt{2T_\text{s}/m}$ being the steady value of the thermal speed. The symbols correspond to the simulation results while the horizontal lines refer to the theoretical predictions for $T_\text{s}$ and $\varphi(c_0)$. The latter has been obtained by retaining the first three Sonine polynomials (see equation \eqref{4.1}) and evaluating $a_2$ and $a_3$ with Approximation II. Time is measured in units of $\nu^{-1}$ ($t^*=t\nu$). \label{fig1}} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{lr} \resizebox{7.5cm}{!}{\includegraphics{fig2a.eps}}&\resizebox{7.5cm}{!} {\includegraphics{fig2b.eps}} \end{tabular} \end{center} \caption{Plot of the second Sonine coefficient $a_2$ versus the coefficient of restitution $\al$ for hard disks (left panel) and hard spheres (right panel). The symbols refer to three different systems with different values of the simulation parameters $\gamma_\text{sim}^*$ and $\xi_\text{sim}^*$ but with the same value of $\xi^*$ ($\xi^*=1.26$ for disks and $\xi^*=1.68$ for spheres). The solid and dashed lines are the values obtained for $a_2$ by means of Approximation I and Approximation II, respectively. \label{fig2}} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{lr} \resizebox{7.5cm}{!}{\includegraphics{fig3a.eps}}&\resizebox{7.5cm}{!} {\includegraphics{fig3b.eps}} \end{tabular} \end{center} \caption{Plot of the third Sonine coefficient $a_3$ versus the coefficient of restitution $\al$ for hard disks (left panel) and hard spheres (right panel). The symbols refer to three different systems with different values of the simulation parameters $\gamma_\text{sim}^*$ and $\xi_\text{sim}^*$ but with the same value of $\xi^*$ ($\xi^*=1.26$ for disks and $\xi^*=1.68$ for spheres). The solid and dashed lines are the values obtained for $a_3$ by means of Approximation I and Approximation II, respectively.
\label{fig3}} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{lr} \resizebox{7.5cm}{!}{\includegraphics{fig4a.eps}}&\resizebox{7.5cm}{!} {\includegraphics{fig4b.eps}} \end{tabular} \end{center} \caption{Plot of the second Sonine coefficient $a_2$ versus the (reduced) noise strength $\xi^*$ for $\al=0.7$ in the case of hard disks (left panel) and hard spheres (right panel). The symbols refer to simulation results while the solid and dashed lines are the values obtained for $a_2$ by means of Approximation I and Approximation II, respectively. The vertical lines indicate the threshold values $\xi^*_\text{th}$. \label{fig4}} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{lr} \resizebox{7.5cm}{!}{\includegraphics{fig5a.eps}}&\resizebox{7.5cm}{!} {\includegraphics{fig5b.eps}} \end{tabular} \end{center} \caption{Plot of the third Sonine coefficient $a_3$ versus the (reduced) noise strength $\xi^*$ for $\al=0.7$ in the case of hard disks (left panel) and hard spheres (right panel). The symbols refer to simulation results while the solid and dashed lines are the values obtained for $a_3$ by means of Approximation I and Approximation II, respectively. The vertical lines indicate the threshold values $\xi^*_\text{th}$. \label{fig5}} \end{figure*} The DSMC algorithm is composed in its basic form of a collision step, which takes care of all particle collisions, and a free drift step between particle collisions \cite{B94}. If volume forces act on the system, their corresponding steps need to be incorporated into the algorithm. Although the DSMC method has been explained elsewhere \cite{B94,MS00,GCV13}, we will give here some details of the specific method we have used to solve the uniform Enskog-Boltzmann equation \eqref{2.7}. The velocity distribution function is represented by the velocities $\left\{\mathbf{v}_i \right\}$ of $N$ ``simulated'' particles: \beq \label{f} f(\mathbf{v},t)\to \frac{n}{N}\; \sum_{i=1}^N\; \delta (\mathbf{v}_i(t)-\mathbf{v}). \eeq The system is always initialized with a Maxwellian velocity distribution with temperature $T_0$. In the collision stage, a sample of $\frac{1}{2}N\omega_\text{max} dt$ pairs is chosen at random with equiprobability, where $dt$ is the time step (which is much smaller than the mean free time) and $\omega_\text{max}$ is an upper bound estimate of the probability that a particle collides per unit of time. For each pair $(i,j)$ belonging to this sample, a given direction $\widehat{\boldsymbol {\sigma}}_{ij}$ is chosen at random with equiprobability. Then, the collision between particles $i$ and $j$ is accepted with a probability equal to $\Theta (\mathbf{g}_{ij}\cdot \widehat{\boldsymbol {\sigma}}_{ij})\omega_{ij}/\omega_\text{max}$, where $\omega_{ij}=(4\pi n \sigma^2 \chi)|\mathbf{g}_{ij}\cdot \widehat{\boldsymbol {\sigma}}_{ij}|$ for hard spheres and $\omega_{ij}=(2\pi n \sigma \chi)|\mathbf{g}_{ij}\cdot \widehat{\boldsymbol {\sigma}}_{ij}|$ for hard disks. Here, $\mathbf{g}_{ij}=\mathbf{v}_i-\mathbf{v}_j$ is the relative velocity. If the collision is accepted, postcollisional velocities are assigned according to the scattering rule \eqref{2.3}. In the case that $\omega_{ij}>\omega_\text{max}$, the estimate $\omega_\text{max}$ is updated as $\omega_\text{max}=\omega_{ij}$.
Thus, notice that the acceptance probability $\Theta (\mathbf{g}_{ij}\cdot \widehat{\boldsymbol {\sigma}}_{ij})\omega_{ij}/\omega_\text{max}$ is independent of the pair correlation function, and for this reason the DSMC algorithm is formally identical for both the Boltzmann and Enskog-Boltzmann equations if the system is homogeneous, as in our case \cite{MS00}. In the streaming stage, the velocity of every particle is changed according to the thermostat, which is composed of two different forces. These two force updates are applied consecutively (their order is not relevant), in addition to the collision step. We only need to take care that the intrinsic time scales produced by the two forces ($\tau_\text{drag}=m/\gamma_\text{b}$ and $\tau_\text{st}=v_0^2/\xi_\text{b}^2$) are not too short compared to the algorithm time step $dt$, which in turn needs to be small compared to the characteristic collision time in order to describe properly the collision integral of the Enskog-Boltzmann equation \cite{B94,GCV13}. In other words, we need that $\tau_\text{drag}\gtrsim \nu^{-1}$ and $\tau_\text{st}\gtrsim \nu^{-1}$, where $\nu=v_0/\ell$. As said in section \ref{sec2}, our thermostat is constituted by a deterministic external force proportional to the particle velocity plus a stochastic force. Consequently, the thermostat updates the particle velocities following the rule \begin{equation} \label{updateF} \mathbf{v}_i\to\mathbf{v}_i+\mathbf{w}_i^{\text{th}}, \quad \mathbf{w}_i^{\text{th}}=\mathbf{w}_i^{\text{drag}}+\mathbf{w}_i^{\text{st}}. \end{equation} Here, $\mathbf{w}_i^{\text{drag}}$ and $\mathbf{w}_i^{\text{st}}$ denote the velocity increments due to the drag and stochastic forces, respectively. The increment $\mathbf{w}_i^{\text{st}}$ is drawn from a Gaussian distribution with a variance characterized by the noise intensity $\xi_\text{b}^2$, fulfilling the conditions \begin{equation} \label{fstcond} \left<\mathbf{w}_i^{\text{st}}\right>=\mathbf{0}, \quad \left<\mathbf{w}_i^{\text{st}}\mathbf{w}_j^{\text{st}}\right>=\xi_\text{b}^2\,dt\,\delta_{ij}, \end{equation} where \begin{equation} \label{fst} P(w_i^{\text{st}})=(2\pi\xi_\text{b}^2dt)^{-3/2}e^{-{w_i^{\text{st}}}^2/(2\xi_\text{b}^2dt)} \end{equation} is a Gaussian probability distribution \cite{MS00}. The velocity increment $\mathbf{w}_i^{\text{drag}}$ due to the drag force is given by \begin{equation} \label{fdrag} \mathbf{w}_i^{\text{drag}}=-\gamma_\text{b}\mathbf{v}_idt. \end{equation} In the simulations carried out in this work we have used $N=2\times 10^6$ particles and a time step $dt=5\times 10^{-2}\nu_0^{-1}$, where $\nu_0=(2T_0/m)^{1/2}n\sigma^{d-1}$. Moreover, for the sake of convenience, we introduce the following dimensionless quantities ($\gamma_\text{sim}^*$ and $\xi_\text{sim}^*$) characterizing the driven parameters used in the different simulations \beq \label{sim1} \gamma_\text{sim}^*=\frac{\gamma_\text{b}}{\chi m \nu_0}=\left(\frac{T_\text{s}}{T_0}\right)^{1/2}\gamma^*, \eeq \beq \label{sim2} \xi_\text{sim}^*=\frac{m\xi_\text{b}^2}{\chi T_0 \nu_0}=\left(\frac{T_\text{s}}{T_0}\right)^{3/2}\xi^*. \eeq The last equality in equations \eqref{sim1} and \eqref{sim2} provides the relation between the simulation (reduced) quantities $\gamma_\text{sim}^*$ and $\xi_\text{sim}^*$ and their corresponding theoretical ones $\gamma^*$ and $\xi^*$, respectively.
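To make the streaming stage concrete, the following minimal Python sketch applies the thermostat update \eqref{updateF} to an ensemble of particles with the collision stage switched off; since elastic collisions conserve the kinetic energy, the granular temperature must then relax to the bath temperature $T_\text{b}$ of equation \eqref{2.10.1}. This is only an illustrative sketch: all parameter values are arbitrary choices, and the drag increment carries the $1/m$ factor of equation \eqref{2.7}, so that equation \eqref{fdrag} corresponds to setting $m=1$.
\begin{verbatim}
# Sketch of the thermostat streaming step, Eqs. (updateF)-(fdrag).
# Collisions are omitted; the temperature must relax to
# T_b = m^2 xi_b^2/(2 gamma_b), Eq. (2.10.1).
import numpy as np

rng = np.random.default_rng(0)

def thermostat_step(v, gamma_b, xi_b2, dt, m=1.0):
    # Drag increment (Eq. (fdrag), with the 1/m factor of Eq. (2.7))
    # plus a stochastic kick of variance xi_b^2*dt per velocity
    # component (Eq. (fstcond)).
    w_drag = -(gamma_b / m) * v * dt
    w_st = rng.normal(0.0, np.sqrt(xi_b2 * dt), size=v.shape)
    return v + w_drag + w_st

N, d, m = 20000, 3, 1.0
T0 = 4.0                                 # initial temperature (illustrative)
v = rng.normal(0.0, np.sqrt(T0 / m), size=(N, d))  # Maxwellian start
gamma_b, xi_b2, dt = 0.1, 0.2, 5.0e-3    # dt << tau_drag = tau_st = 10
for _ in range(10000):                   # total time 50 >> m/(2 gamma_b) = 5
    v = thermostat_step(v, gamma_b, xi_b2, dt, m)
T = m * np.mean(np.sum(v**2, axis=1)) / d   # granular temperature, Eq. (2.8)
print(T, m**2 * xi_b2 / (2.0 * gamma_b))    # both close to T_b = 1
\end{verbatim}
In the actual DSMC runs this update is simply interleaved with the collision stage described above.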
\begin{figure*} \begin{center} \begin{tabular}{lr} \resizebox{7.5cm}{!}{\includegraphics{fig6a_bis.eps}}&\resizebox{7.5cm}{!} {\includegraphics{fig6b_bis.eps}} \end{tabular} \end{center} \caption{(Color Online) Plot of the scaled distribution function $\varphi(c,\xi^*)/\varphi_\text{M}(c)$ in the steady state for $\al=0.8$. The left panel is for hard disks while the right panel corresponds to hard spheres. The symbols refer to DSMC data obtained for three different systems with parameters: $\{\gamma_\text{sim}^*, \xi_\text{sim}^*\}=\{(1.4\times 10^{-2}, 5.2\times 10^{-5}), (9.8\times 10^{-3}, 1.8\times 10^{-5}), (7\times 10^{-3}, 6.5\times 10^{-6})\}$ for $d=2$ and $\{\gamma_\text{sim}^*, \xi_\text{sim}^*\}=\{(7.1\times 10^{-3}, 2.9\times 10^{-6}), (5\times 10^{-3}, 9.8\times 10^{-7}), (3.6\times 10^{-3}, 3.6\times 10^{-7})\}$ for $d=3$. These values yield a common value of $\xi^*$: $\xi^*=1.263$ for $d=2$ and $\xi^*=1.688$ for $d=3$. The lines correspond to equation \eqref{4.1} with expressions for the cumulants given by Approximation I (solid lines) and Approximation II (dashed lines). \label{fig6}} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{lr} \resizebox{7.5cm}{!}{\includegraphics{fig7a_bis.eps}}&\resizebox{7.5cm}{!} {\includegraphics{fig7b_bis.eps}} \end{tabular} \end{center} \caption{(Color Online) Plot of the scaled distribution function $\varphi(c,\xi^*)/\varphi_\text{M}(c)$ in the steady state for $\al=0.6$. The left panel is for hard disks while the right panel corresponds to hard spheres. The symbols refer to DSMC data obtained for three different systems with parameters: $\{\gamma_\text{sim}^*, \xi_\text{sim}^*\}=\{(1.4\times 10^{-2}, 2.9\times 10^{-4}), (9.8\times 10^{-3}, 10^{-4}), (7\times 10^{-3}, 3.6\times 10^{-5})\}$ for $d=2$ and $\{\gamma_\text{sim}^*, \xi_\text{sim}^*\}=\{(7.1\times 10^{-3}, 1.5\times 10^{-5}), (5\times 10^{-3}, 5.4\times 10^{-6}), (3.6\times 10^{-3}, 1.9\times 10^{-6})\}$ for $d=3$. These values yield a common value of $\xi^*$: $\xi^*=1.263$ for $d=2$ and $\xi^*=1.688$ for $d=3$. The lines correspond to equation \eqref{4.1} with expressions for the cumulants given by Approximation I (solid lines) and Approximation II (dashed lines). \label{fig7}} \end{figure*} \subsection{Comparison between theory and simulations} Although we are mainly interested in evaluating all the relevant quantities of the problem ($a_2$, $a_3$ and $\varphi$) in the (asymptotic) steady state, it is also interesting to analyze the approach of some of these quantities towards the steady state. Figure \ref{fig1} shows the time evolution of both the (reduced) temperature $T(t)/T_\text{s}$ (left panel) and the distribution function $\varphi(c_0)$ (right panel) for the (dimensionless) velocity $c_0=v_{0,\text{s}}/v_0(t)$. Here, $T_\text{s}$ and $v_{0,\text{s}}=\sqrt{2T_\text{s}/m}$ refer to the theoretical steady values of the granular temperature and thermal velocity, respectively. The solid horizontal lines correspond to the theoretical predictions by considering the first two non-Gaussian corrections (third Sonine approximation) to the distribution $\varphi$ (see equation \eqref{4.1}). We have made runs of identical systems except that they are initialized with different temperatures. After a transient regime, as expected, we observe that all simulation data tend to collapse to the same steady values for sufficiently long times.
In addition, the corresponding steady values obtained from the simulation for both the temperature and the distribution function practically coincide with those predicted by the Sonine solution. It is also to be noticed that the convergence to the steady values occurs approximately at the same time for both $T(t)/T_\text{s}$ and $\varphi(c_0)$ (thermal fluctuations make it difficult to determine the exact point of steady-state convergence for the distribution function). This is another, indirect way of checking that the normal solution indeed exists in the simulations, since its existence implies, from equation \eqref{fhd}, that we reach the scaled form \eqref{1.1} when the temperature is stationary. Some previous works on a granular gas heated by the stochastic thermostat \cite{GMT12} and on the simple shear flow \cite{AS07} have shown that, before reaching the steady state, the system evolves towards a universal \emph{unsteady} state that depends on a new parameter measuring the distance to the steady state. A similar behavior is expected here, where the different solutions to the Enskog-Boltzmann equation \eqref{2.7} would be attracted by the universal distribution function $f(v,t)\to n v_0(t)^{-d} \varphi (\mathbf{c},\tilde{\gamma}(t),\tilde{\xi}(t))$, where $\mathbf{c}=\mathbf{v}/v_0(t)$, and \beq \label{4.0} \tilde{\gamma}(t)\equiv \frac{\ell \gamma_\text{b}}{\chi m v_0(t)}, \quad \tilde{\xi}(t)\equiv \frac{\ell \xi_\text{b}^2}{\chi T(t) v_0(t)}. \eeq The dimensionless driven parameters $\tilde{\gamma}(t)$ and $\tilde{\xi}(t)$ measure the distance to the steady state. Of course, for asymptotically long times, the steady state is eventually reached, i.e., $\varphi(\mathbf{c},\tilde{\gamma}(t),\tilde{\xi}(t))\to \varphi_\text{s}(\mathbf{c},\xi^*)$, where $\xi^*$ is defined by Eq.\ \eqref{2.13}. The above unsteady hydrodynamic regime (for which the system has forgotten its initial condition) is expected to be achieved after a certain number of collisions per particle. On the other hand, although the characterization of this unsteady state is a very interesting problem, its study lies beyond the goal of the present paper. Now, we will focus on the steady-state values of the relevant quantities of the problem. In particular, the basic quantities measuring the deviation of the distribution function from its Maxwellian form are the second and third Sonine coefficients $a_2$ and $a_3$, respectively. The dependence of $a_2$ and $a_3$ on the coefficient of restitution $\al$ is shown in figures \ref{fig2} and \ref{fig3}, respectively, for hard disks (left panels) and spheres (right panels). Three different systems with different values of the simulation parameters $\gamma_\text{sim}^*$ and $\xi_\text{sim}^*$ but with the same value of $\xi^*$ ($\xi^*=1.263$ for disks and $\xi^*=1.688$ for spheres) have been considered. We observe that, at a given value of $\al$, the corresponding three simulation data sets collapse onto a common curve, showing that indeed both Sonine coefficients are always of the form $a_i(\alpha, \xi^*)$. Regarding the comparison between theory and simulation, it is quite apparent that while both Approximations I and II compare quantitatively quite well with simulations in the case of $a_2$, Approximation II has a better performance than Approximation I in the case of $a_3$, especially at very strong dissipation. This is the expected result since Approximation II is in principle more accurate than Approximation I, although the latter is simpler than the former.
In this sense, and with respect to the $\al$-dependence of $a_2$ and $a_3$, Approximation I could perhaps be preferable to Approximation II since it offers an optimal compromise between simplicity and accuracy. On the other hand, more quantitative discrepancies between both Approximations are found when one analyzes both Sonine coefficients vs.\ $\xi^*$ at constant $\al$. Figures \ref{fig4} and \ref{fig5} show $a_2$ and $a_3$, respectively, versus $\xi^*$ at $\al=0.7$. We see that Approximation I exhibits a poor agreement with simulations, since it predicts a dependence on the noise strength opposite to the one found in the simulations. On the other hand, Approximation II agrees very well with simulation data in the whole range of values of $\xi^*$ (note that $\xi^*\gtrsim 0.639$ for $d=2$ and $\xi^*\gtrsim 0.852$ for $d=3$ to achieve a steady state for $\al=0.7$). It must also be noted that for the systems studied in figures \ref{fig4} and \ref{fig5}, although the magnitudes of both Sonine coefficients are very small, $|a_2|$ is of the order of ten times smaller than $|a_3|$. \vicente{This may indicate that in certain ranges the cumulant $a_3$ is relevant compared to $a_2$, which justifies our Approximation II.} The small values of the coefficients $a_2$ and $a_3$ support the assumption of a low-order truncation in the polynomial expansion and suggest that the scaled distribution function $\varphi(c,\xi^*)$ for thermal velocities can be well represented by the first three contributions (note that $a_1=0$) in the Sonine polynomial expansion \eqref{3.1}. To confirm it, we have measured the deviation of $\varphi(c,\xi^*)$ from its Maxwellian form $\varphi_\text{M}(c)$. In figures \ref{fig6} and \ref{fig7} we plot the ratio $\varphi(c,\xi^*)/\varphi_\text{M}(c)$ versus the reduced velocity $c$ in the steady state for two values of the coefficient of restitution ($\al=0.8$ and $\al=0.6$). As before, we have considered a system of inelastic hard disks (left panels, with $\xi^*=1.26$) and inelastic hard spheres (right panels, with $\xi^*=1.69$). As in figures \ref{fig2}--\ref{fig5}, symbols correspond to simulation results obtained for different values of $\gamma_\text{sim}^*$ and $\xi_\text{sim}^*$. The solid and dashed lines are obtained from equation \eqref{3.1} with the series truncated at $p=3$, i.e., \beqa \label{4.1} \frac{\varphi(c,\xi^*)}{\varphi_\text{M}(c)}&\to& 1+a_2(\xi^*)\left(\frac{1}{2}c^4-\frac{d+2}{2}c^2+\frac{d(d+2)}{8}\right)\nonumber\\ & & -a_3(\xi^*)\left( \frac{1}{6}c^6-\frac{d+4}{4}c^4+\frac{(d+2)(d+4)}{8}c^2-\frac{d(d+2)(d+4)}{48}\right). \eeqa The coefficients $a_2$ and $a_3$ in equation \eqref{4.1} are determined by using Approximation I (solid lines) and Approximation II (dashed lines). First, it is quite apparent that simulations confirm that the reduced distribution function $\varphi(c,\xi^*)$ is a universal function of $\xi^*$, since all simulation series at constant $\xi^*$ collapse onto the same curve (within the margin of error). We also see that the simulation curves agree very well with the corresponding third-degree Sonine polynomial in this range of velocities, especially in the two-dimensional case. Surprisingly, in the high-velocity region, the curves obtained from Approximation I fit the simulation data slightly better than those obtained by using the improved Approximation II. In any case, the agreement between theory and simulation is again excellent, especially taking into account the very small discrepancies we are measuring.
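For readers who wish to reproduce these curves, the truncated expansion \eqref{4.1} is straightforward to evaluate numerically. The following minimal Python sketch is for illustration only (it is not part of our DSMC code, and the values of $a_2$ and $a_3$ are placeholders to be replaced by the predictions of Approximation I or II at the desired $\al$ and $\xi^*$):
\begin{verbatim}
import numpy as np

def sonine_ratio(c, a2, a3, d):
    # Eq. (4.1): 1 + a2*S2 - a3*S3, with the second and third
    # Sonine polynomials written out explicitly.
    S2 = 0.5*c**4 - 0.5*(d + 2)*c**2 + d*(d + 2)/8.0
    S3 = (c**6/6.0 - 0.25*(d + 4)*c**4
          + (d + 2)*(d + 4)/8.0*c**2 - d*(d + 2)*(d + 4)/48.0)
    return 1.0 + a2*S2 - a3*S3

# Hard spheres (d = 3); a2 and a3 are placeholder values only.
c = np.linspace(0.0, 3.0, 61)
ratio = sonine_ratio(c, a2=0.01, a3=-0.001, d=3)
\end{verbatim}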
\section{Concluding remarks} \label{discussion} In this paper we have performed Monte Carlo simulations of the Enskog-Boltzmann equation for a granular fluid in a homogeneous state. The system is driven by a stochastic bath with friction. One of the primary objectives of this work has been to check the velocity scaling and the form assumed for the distribution function in the steady state. As equation \eqref{1.1} indicates, the new feature of the scaled distribution $\varphi$ is that it depends on the granular temperature $T$ not only through the (scaled) velocity $c$ but also through the (reduced) noise strength $\xi^*$ (defined in equation \eqref{2.13}). The simulation results reported here (see figures \ref{fig6} and \ref{fig7}) have confirmed the above dependence, since different systems sharing the same values of $\xi^*$ and $\al$ lead to the same distribution function $\varphi$. This is consistent with the existence of a \emph{normal} solution in the long-time limit. Apart from performing Monte Carlo simulations to confirm the validity of a hydrodynamic description for a finite degree of collisional dissipation, we have also characterized the distribution $\varphi$ through its first velocity moments. More specifically, we have obtained the second, $a_2$, and third, $a_3$, Sonine coefficients. While the coefficient $a_2$ measures the fourth-degree velocity moment of $\varphi$, the coefficient $a_3$ is defined in terms of the sixth-degree velocity moment of $\varphi$. Both Sonine coefficients provide information on the deviation of $\varphi$ from its Maxwellian form $\varphi_\text{M}$. Moreover, the knowledge of those coefficients is important, for instance, in the precise determination of the transport coefficients \cite{GCV13}. On the other hand, given that the Sonine coefficients cannot be \emph{exactly} determined (they obey an infinite hierarchy of moments), one has to truncate the corresponding Sonine polynomial expansion in order to estimate them. Here, we have considered two different approaches (Approximations I and II) to get explicit expressions of $a_2$ and $a_3$ in terms of the dimensionality of the system $d$, the coefficient of restitution $\al$ and the driven parameter $\xi^*$. Approximation II is more involved than Approximation I, since it considers both Sonine coefficients as being of the same order of magnitude. The comparison between the analytical solution and the DSMC results shows in general a good agreement, even for high inelasticity. Moreover, as expected, the improved Approximation II for $a_2$ and $a_3$ shows a better agreement with simulations than Approximation I (see figures \ref{fig2}--\ref{fig5}). Thus, taking into account all the above comparisons, we can conclude that a good compromise between accuracy and simplicity is represented by Approximation I. The results derived in this paper show clearly that the combination of analytical and computational tools (based on the DSMC method) turns out to be a useful way to characterize properties in granular flows. On the other hand, given that most of the Sonine coefficients could be directly measured by DSMC, one could in principle make a least-squares fit to obtain explicit forms for those coefficients. However, this procedure would not be satisfactory from a more fundamental point of view, especially if one is interested in capturing the behavior of $\varphi(c)$ and its Sonine expansion.
In this context, our analytical solution of the distribution function (redundant as it may seem) has the advantage of providing a rational description of the physical properties of the kinetic equation of the system. This is not accomplished by the numerical solution. However, the fact that the DSMC method gives an accurate numerical solution of the Enskog-Boltzmann equation makes it complementary to the theoretical one, and thus both together constitute a complete description of the kinetic equation of our system. \acknowledgments The present work has been supported by the Ministerio de Educaci\'on y Ciencia (Spain) through grants No. FIS2010-16587 (V.G., M.G.Ch. and F.V.) and No. MAT2009-14351-C02-02 (F.V.). The first Grant has been partially financed by FEDER funds and by the Junta de Extremadura (Spain) through Grant No. GRU10158. The research of M. G. Chamorro has been supported by the predoctoral fellowship BES-2011-045869 from the Spanish Government (Spain).
\section{Introduction} Cooperative control of multi-agent systems has traditionally focused on designing local control laws in order to achieve tasks such as consensus, formation, network connectivity, and collision avoidance (\cite{ren_beard_concensus, olfati_murray_concensus, jadbabaie_morse_coordination, zavlanos_connectivity, egerstedt_formation, tanner_flocking, dimos_rendezvous_problem}). Over the last decade or so, the field of control of multi-agent systems with complicated behavior under complex high-level task specifications has been gaining significant research attention. Such high-level tasks may have the form of ``Periodically survey regions A, B, C while avoiding region D", or ``Visit regions A, B, C, in this order", and many others. Multiple robotic vehicles may then perform these types of tasks faster and more efficiently than a single robot. In this work, we aim to introduce specific time bounds into the complex tasks, such as ``Periodically survey regions A, B, C, avoid region D and always keep the longest time between two consecutive visits to A below 5 time units", or ``Visit regions A, B, C, in this order within 10 time units". The team of agents is usually associated with a set of tasks that should be fulfilled by the group of agents as a whole at the discrete level. A three-step hierarchical procedure to address such a problem is described as follows (\cite{vardi_2011_planning, fainekos_planning}): First, the robot dynamics is abstracted into a finite or countable, discrete transition system using sampling or cell decomposition methods based on triangulations, rectangular or other partitions. Second, invoking ideas from verification methods, a discrete plan that meets the high-level task is synthesized. Third, the discrete plan is translated into a sequence of continuous controllers for the original system. The specification language that has been used extensively to express the tasks is Linear Temporal Logic (LTL) (see, e.g., \cite{loizou_2004}). LTL has proven a valuable tool for controller synthesis, because it provides a compact mathematical formalism for specifying desired behaviors of a system. There is a rich body of literature containing algorithms for verification and synthesis of systems obeying temporal logic specifications (\cite{guo_2015_reconfiguration, frazzoli_vehicle_routing}). A common approach in multi-agent planning under LTL specifications is the consideration of a centralized, global task specification for the team of agents, which is then decomposed into local tasks to be accomplished by the individual agents. For instance, the authors in \cite{belta_2010_product_system} utilized the parallel composition (synchronous products) of multi-robot systems in order to decompose a global specification that is given to a team of robots into individual specifications. This method has proven computationally expensive due to the state-space explosion problem and, in order to relax the computational burden, the authors in \cite{belta_regular1, belta_regular2} proposed a method that does not require the computation of the parallel composition. This method, however, is restricted to specifications that can be expressed in certain subclasses of LTL. In \cite{belta_cdc_reduced_communication}, the specification formula was given in LTL, in parallel with the problem of minimizing inter-robot communication.
Explicit time constraints in the system modeling have been included, e.g., in \cite{belta_optimality}, where a method of automated planning of optimal paths of a group of agents satisfying a common high-level mission specification was proposed. The mission was given in LTL and the goal was the minimization of a cost function that captures the maximum time between successive satisfactions of the formula. The authors in \cite{quottrup_timed_automata, quadrup2} used a different approach, representing the motion of each agent in the environment with a timed automaton. The composition of the team automaton was achieved through synchronization, and the UPPAAL verification tool (\cite{uppal}) was utilized for specifications given in Computation Tree Logic (CTL). In the same direction, the authors in \cite{belta_2011_timed_automata} modeled the multi-robot framework with timed automata and weighted transition systems considering LTL specifications, and an optimal motion of the robots satisfying instances of the optimizing proposition was then proposed. Most of the previous works on multi-agent planning consider temporal properties which essentially treat time in a qualitative manner. For real applications, a multi-agent team might be required to perform a specific task within a certain time bound, rather than at some arbitrary time in the future (quantitative manner). Timed specifications have been considered in \cite{liu_MTL, murray_2015_stl, baras_MTL, frazzoli_MTL}. In \cite{liu_MTL}, the authors addressed the problem of designing high-level planners to achieve tasks for switching dynamical systems under Metric Temporal Logic (MTL) specifications and in \cite{murray_2015_stl}, the authors utilized a counterexample-guided synthesis for cyber-physical systems subject to Signal Temporal Logic (STL) specifications. In \cite{baras_MTL}, the MTL formula for a single agent was translated into linear constraints and a Mixed Integer Linear Programming (MILP) problem was solved. However, these works are restricted to single-agent motion planning and are not extendable to multi-agent systems in a straightforward way. In \cite{frazzoli_MTL}, the vehicle routing problem was considered under the presence of MTL specifications. The approach does not rely on an automata-based approach to verification, as it constructs a set of linear inequalities from the MTL specification formula in order to solve an MILP problem. In this work, we aim at designing an automated planning procedure for a team of agents, each of which is given an individual, independent timed temporal specification, along with a single global team specification. This constitutes a first step towards including time constraints in temporal logic-based multi-agent control synthesis. We consider a quantitative logic called Metric Interval Temporal Logic (MITL) (\cite{alur_mitl}) in order to specify explicit time constraints. The proposed solution is fully automated and completely desynchronized, in the sense that a faster agent is not required to stay in a region and wait for a slower one. It is decentralized in handling the individual specifications and centralized only in handling the global team specification. To the best of the authors' knowledge, this is the first work that addresses cooperative task planning for multi-agent systems under individual and global timed linear temporal logic specifications. The remainder of the paper is structured as follows. In Sec.
\ref{sec: preliminaries} a description of the necessary mathematical tools, the notations and the definitions are given. Sec. \ref{sec: prob_formulation} provides the model of the multi-agent system, the task specification, several motivating examples, as well as the formal problem statement. Sec. \ref{sec: solution} discusses the technical details of the solution. Sec. \ref{sec: simulation_results} is devoted to an illustrative example. Finally, the conclusions and future work directions are discussed in Sec. \ref{sec: conclusions}. \vspace{-1mm} \section{Notation and Preliminaries} \label{sec: preliminaries} Given a set $S$, we denote by $|S|$ its cardinality and by $2^S$ the set of all its subsets. An infinite sequence of elements of $S$ is called an infinite word over the set $S$ and it is denoted by $w = w(0)w(1) \ldots$ The $i$-th element of the sequence is denoted by $w(i)$. We denote by $\mathbb{Q}_+, \mathbb{N}$ the sets of positive rational and natural numbers including 0, respectively. Let us also define $\mathbb{T}_{\infty} = \mathbb{T} \cup \{\infty\}$ for a set of numbers $\mathbb{T}$. \begin{definition} (\cite{alur1994}) A \emph{time sequence} $\tau = \tau(0) \tau(1) \cdots$ is an infinite sequence of time values $\tau(j) \in \mathbb{T} = \mathbb{Q}_{+}$, satisfying the following constraints: \begin{itemize} \item Monotonicity: $\tau(j) < \tau(j+1)$ for all $j \geq 0$. \item Progress: For every $t \in \mathbb{T}$, $\exists \ j \geq 1$, such that $\tau(j) > t$. \end{itemize} \end{definition} An \emph{atomic proposition} $p$ is a statement over the problem variables and parameters that is either True $(\top)$ or False $(\bot)$ at a given time instance. \begin{definition} (\cite{alur1994}) Let $\AP$ be a finite set of atomic propositions. A \emph{timed word} $w$ over the set $\AP$ is an infinite sequence $w = (w(0), \tau(0)) (w(1), \tau(1)) \cdots$ where $w(0) w(1) \ldots$ is an infinite word over the set $2^{\AP}$ and $\tau(0) \tau(1) \ldots$ is a time sequence with $\tau(j) \in \mathbb{T}, \ j \geq 0$. A \emph{timed language} $\mathit{Lang}_\mathcal{T}$ over $\AP$ is a set of timed words over $\AP$. \end{definition} \subsection{Weighted Transition System} \begin{definition} A \emph{Weighted Transition System} (WTS) is a tuple $(S, S_0, \xrightarrow[~~]{}, d, AP, L)$ where $S$ is a finite set of states; $S_0 \subseteq S$ is a set of initial states; $\xrightarrow[~~]{} \subseteq S \times S$ is a transition relation; $d: \xrightarrow[~~]{} \xrightarrow[~~]{} \mathbb{T}$ is a map that assigns a positive weight to each transition; $\AP$ is a finite set of atomic propositions; and $L: S \xrightarrow[~~]{} 2^{AP}$ is a labeling function. \end{definition} \vspace{-2mm} For simplicity, we use $s \rightarrow s'$ to denote the fact that $(s,s') \in \rightarrow$. \begin{definition} \label{run_of_WTS} A \emph{timed run} of a WTS is an infinite sequence $r^t = (r(0), \tau(0))(r(1), \tau(1)) \ldots$, such that $r(0) \in S_0$, and for all $j \geq 0$, $r(j) \in S$ and $r(j) \rightarrow r(j+1)$. The \emph{time stamps} $\tau(j), j \geq 0$ are inductively defined as \begin{enumerate} \item $\tau(0) = 0$. \item $\displaystyle \tau(j+1) = \tau(j) + d(r(j), r(j+1)), \ \forall \ j \geq 0.$ \end{enumerate} \label{eq: timed_word_WTS} Every timed run $r^t$ generates a \emph{timed word} $w(r^t) = (L(r(0)), \tau(0)) \ (L(r(1)), \tau(1))\ldots$ over the set $2^{\AP}$ where $w(j) = L(r(j))$, $\forall \ j \geq 0$ is the subset of atomic propositions that are true at state $r(j)$ at time $\tau(j)$.
\end{definition} \subsection{Metric Interval Temporal Logic and Timed Automata} The syntax of \emph{Metric Interval Temporal Logic (MITL)} over a set of atomic propositions $AP$ is defined by the grammar \begin{equation} \label{eq: grammar} \varphi := p \ | \ \neg \varphi \ | \ \varphi_1 \wedge \varphi_2 \ | \bigcirc_I \ \varphi \mid \Diamond_I \varphi \mid \square_I \varphi \mid \varphi_1 \ \mathcal{U}_I \ \varphi_2 \end{equation} where $p \in \AP$, and $\bigcirc$, $\Diamond$, $\square$ and $\mathcal U$ are the next, future, always and until temporal operators, respectively. $I \subseteq \mathbb{T}$ is a non-empty time interval in one of the following forms: $[i_1, i_2], [i_1, i_2),(i_1, i_2], $ $ (i_1, i_2), [i_1, \infty], (i_1, \infty)$ where $i_1, i_2 \in \mathbb{T}$ with $i_1 < i_2$. MITL can be interpreted either in continuous or point-wise semantics. We utilize the latter and interpret MITL formulas over timed runs such as the ones produced by a WTS (Def.~\ref{run_of_WTS}). \begin{definition} (\cite{pavithra_expressiveness}, \cite{quaknine_decidability}) Given a run $r^t = (r(0),\tau(0))(r(1),\tau(1)) \dots$ of a WTS and an MITL formula $\varphi$, we define $(r^t, i) \models \varphi$, for $\ i \geq 0$ (read $r^t$ satisfies $\varphi$ at position $i$) as follows \begin{align*} \label{eq: for1} (r^t, i) &\models p \Leftrightarrow p \in L(r(i)) \\ (r^t, i) &\models \neg \varphi \Leftrightarrow (r^t, i) \not \models \varphi \\ (r^t, i) &\models \varphi_1 \wedge \varphi_2 \Leftrightarrow (r^t, i) \models \varphi_1 \ \text{and} \ (r^t, i) \models \varphi_2 \\ (r^t, i) &\models \bigcirc_I \ \varphi \Leftrightarrow (r^t, i+1) \models \varphi \ \text{and} \ \tau(i+1) - \tau(i) \in I\\ (r^t, i) & \models \Diamond_I \varphi \Leftrightarrow \exists j, i \leq j, \ \text{s.t. } (r^t, j) \models \varphi, \tau(j)-\tau(i) \in {I} \\ (r^t, i) & \models \square_I \varphi \Leftrightarrow \forall j, i \leq j, \ \tau(j)-\tau(i) \in {I} \Rightarrow (r^t, j) \models \varphi \\ (r^t, i) &\models \varphi_1 \ \mathcal{U}_I \ \varphi_2 \Leftrightarrow \exists j, i \leq j, \ \text{s.t. } (r^t, j) \models \varphi_2, \\ & \tau(j)-\tau(i) \in I \ \text{and } (r^t, k) \models \varphi_1 \ \text{for every} \ i \leq k < j \end{align*} \end{definition} \emph{Timed B\"uchi Automata (TBA)} were introduced in \cite{alur1994}; in this work, we also partially adopt the notation from \cite{bouyer_phd, tripakis_tba}. Let $X = \{x_1, x_2, \ldots, x_M\}$ be a finite set of \emph{clocks}. The set of \emph{clock constraints} $\Phi(X)$ is defined by the grammar \begin{equation} \phi := \top \mid \ \neg \phi \ | \ \phi_1 \wedge \phi_2 \ | \ x \bowtie c \ \end{equation} where $x \in X$ is a clock, $c \in \mathbb{T}$ is a clock constant and $\bowtie \in \{ <, >, \geq, \leq, = \}$. A clock \emph{valuation} is a function $\nu: X \rightarrow\mathbb{T}$ that assigns a time value to each clock. A clock $x_i$ has valuation $\nu_i$ for $i \in \{1, \ldots, M\}$, and $\nu = (\nu_1, \ldots, \nu_M)$. We denote by $\nu \models \phi$ the fact that the valuation $\nu$ satisfies the clock constraint $\phi$.
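To make the mechanics of clock valuations concrete, the following small Python sketch (ours, for illustration only; the encoding of constraints as (clock, operator, constant) triples is an arbitrary choice, and only conjunctions of atomic constraints are handled) implements valuations over $\mathbb{T}$, time elapse, resets, and the satisfaction check $\nu \models \phi$:
\begin{verbatim}
from fractions import Fraction

# An atomic constraint is a triple (clock, op, const); a conjunctive
# clock constraint phi is represented as a list of such triples.
OPS = {'<':  lambda a, b: a < b,  '>':  lambda a, b: a > b,
       '<=': lambda a, b: a <= b, '>=': lambda a, b: a >= b,
       '=':  lambda a, b: a == b}

def satisfies(nu, phi):
    # nu |= phi for a conjunction of atomic constraints x ~ c
    return all(OPS[op](nu[x], Fraction(c)) for (x, op, c) in phi)

def delay(nu, delta):
    # time elapse: advance every clock by delta
    return {x: v + Fraction(delta) for x, v in nu.items()}

def reset(nu, R):
    # reset the clocks in R to zero (e.g., when taking an edge)
    return {x: (Fraction(0) if x in R else v) for x, v in nu.items()}

nu = {'x1': Fraction(0), 'x2': Fraction(0)}
nu = delay(nu, '3/2')                    # 1.5 time units elapse
assert satisfies(nu, [('x1', '<=', 2)])  # x1 <= 2 holds
nu = reset(nu, {'x1'})                   # x1 is reset, x2 keeps 3/2
\end{verbatim}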
\begin{definition} A \emph{TBA} is a tuple $\mathcal{A} = (S, S^{\text{init}}, X, I, E, F, AP, \mathcal{L})$ where $S$ is a finite set of locations; $S^{\text{init}} \subseteq S$ is the set of initial locations; $X$ is a finite set of clocks; $I: S \rightarrow \Phi(X)$ is the invariant; $E \subseteq S \times \Phi(X) \times 2^X \times S$ gives the set of transitions; $F \subseteq S$ is a set of accepting locations; $\AP$ is a finite set of atomic propositions; and $\mathcal{L}: S \rightarrow 2^{AP}$ labels every location with a subset of atomic propositions. \end{definition} A state of $\mathcal{A}$ is a pair $(s, \nu)$ where $s \in S$ and $\nu$ satisfies the \emph{invariant} $I(s)$, i.e., $\nu \models I(s)$. The initial state of $\mathcal{A}$ is $(s(0), (0,\ldots,0))$, where $s(0) \in S^{\text{init}}$. Given two states $(s, \nu)$ and $(s', \nu')$ and an edge $e = (s, \gamma, R, s')$, there exists a \emph{discrete transition} $(s, \nu) \xrightarrow{e} (s', \nu')$ iff $\nu$ satisfies the \emph{guard} of the transition $\gamma$, i.e., $\nu \models \gamma$, $\nu' \models I(s')$, and $R$ is the \emph{reset set}, i.e., $\nu'_i = 0$ for $x_i \in R$ and $\nu'_i = \nu_i$ for $x_i \notin R$. Given $\delta \in \mathbb{T}$, there exists a \emph{time transition} $(s, \nu) \xrightarrow{\delta} (s', \nu')$ iff $s = s', \nu' = \nu+\delta$ and $\nu' \models I(s)$. An infinite run of $\mathcal{A}$ starting at state $(s(0), \nu)$ is an infinite sequence of time and discrete transitions $(s(0), \nu(0))\xrightarrow{\delta_0} (s(0)', \nu(0)')\xrightarrow{e_0} (s(1), \nu(1)) \xrightarrow{\delta_1} (s(1)', \nu(1)') \ldots$, where $(s(0),\nu(0))$ is an initial state. This run produces the timed word $w = (\mathcal{L}(s(0)), \tau(0)) (\mathcal{L}(s(1)), \tau(1)) \ldots$ with $\tau(0) = 0$ and $\tau(i+1) = \tau(i) +\delta_i$, $\forall \ i \geq 0$. The run is called \emph{accepting} if $s(i) \in F$ for infinitely many $i$. A timed word is \emph{accepted} if there exists an accepting run that produces it. The problem of deciding the emptiness of the language of a given TBA $\mathcal{A}$ is \emph{PSPACE}-complete \cite{alur1994}. In other words, we can synthesize an accepting run of a given TBA $\mathcal{A}$, if one exists. \begin{remark} Traditionally, the clock constraints and the TBAs are defined with $\mathbb T = \Nat$; however, they can be extended to accommodate $\mathbb T = \mathbb Q_+ \cup \{0\}$. By multiplying all the rational numbers appearing in the state invariants and the edge constraints by the least common multiple of their denominators, we equivalently obtain a TBA in which only natural numbers occur. For the sake of physical understanding of the timed properties of the framework under investigation, we will be working with $\mathbb{T} = \mathbb{Q}_{+} \cup \{0\}$. \end{remark} Any MITL formula $\varphi$ over $AP$ can be algorithmically translated to a TBA with the alphabet $2^\AP$, such that the language of timed words that satisfy $\varphi$ is the language of timed words produced by the TBA \cite{alur_mitl, maler_MITL_TA, MITL_ata}. \vspace{-2mm} \section{Problem Formulation} \label{sec: prob_formulation} \subsection{System Model} Consider a multi-agent team composed of $N$ agents operating in a bounded workspace $\mathcal{W}_0 \subseteq \mathbb{R}^n$. Let $\mathcal{I} = \left\{ 1,\ldots, N\right\}$ denote the index set of the agents.
We assume that the workspace $\mathcal{W}_0$ is partitioned into a finite number (say $W$) of regions of interest $\pi_1, \ldots, \pi_W$ where \begin{equation} \label{eq: partition} \mathcal{W}_0 = \mathop{\bigcup} \limits_{i \in \mathcal{W}}^{} \pi_i \ \ \text{and} \ \ \pi_i \cap \pi_j = \emptyset , \ \forall \ i \neq j \ \ \text{with} \ \ i,j \in \mathcal{W} \end{equation} for the index set $\mathcal{W} = \{1,\ldots,W\}$. We denote by $\pi_i^k$ the fact that agent $k$ is at region $\pi_i$, where $k \in \mathcal{I}, i \in \mathcal{W}$. In this work, we focus on interaction and high-level control strategies rather than on nonlinear models, and we assume that the dynamics of each agent is given by a single integrator \begin{equation} \label{eq: system} \dot{x}_i = u_i, \ i \in \mathcal{I}. \end{equation} The partitioned environment \eqref{eq: partition} is a discretization that allows us to control the agents with dynamics \eqref{eq: system} using finite models such as finite transition systems (e.g., \cite{fainekos_planning, tabuada_book_verification, girard_approximation, alur_pappas_abstractions}). We define a weighted transition system (see Def. \ref{def: transition_systems}) so that \begin{itemize} \item if there exists a controller $u_k, \ k \in \mathcal{I}$ such that agent $k$ can be driven from any point within the region $\pi_i$ to a neighboring region $\pi_j$, then we allow for a transition $\pi_i^k \rightarrow_k \pi_j^k$ between the respective system states, and \item the weight of each transition estimates the time each agent needs in order to move from one region to another. In particular, the travel time is here determined as the worst-case shortest time needed to travel from an arbitrary point of the current region to the boundary of the following region. This estimate is indeed conservative; however, it is sufficient for the specifications that we are generally interested in within multi-agent control. Namely, it is suitable for scenarios where tasks are given deadlines and upper rather than lower bound requirements are associated with events along the agents' runs. \end{itemize} \begin{definition} \label{def: transition_systems} The motion of each agent $k \in \mathcal{I}$ in the workspace is modeled by a WTS $\mathcal{T}_k =(\Pi_k, \Pi_{k}^{\text{init}}, \rightarrow_k,d_k, AP_k, L_k)$ where \begin{itemize} \item $\Pi_k = \left\{ \pi_1^k, \pi_2^k, \ldots, \pi_W^k \right\}$ is the set of states of agent $k$. Any state of agent $k$ can be denoted as $\pi_j^k \in \Pi_k$ for $k \in \mathcal{I}, j \in \mathcal{W}$. The number of states for each agent is $|\Pi_k| = W$. \item $\Pi_k^{\text{init}} \subseteq \Pi_k$ is the set of initial states of agent $k$, i.e.\ the set of regions where agent $k$ may start. \item $\rightarrow_k \subseteq \Pi_k \times \Pi_k$ is the transition relation. For example, by $\pi_3^3 \rightarrow_3 \pi_5^3$ we mean that agent $3$ can move from region $\pi_3$ to region $\pi_5$. \item $d_k: \rightarrow_k \rightarrow \mathbb{T}$ is a map that assigns a positive weight (duration) to each transition. For example, $d_2(\pi_2^2, \pi_5^2) = 0.7, \ \text{where} \ \pi_2^2 \rightarrow_2 \pi_5^2$, means that agent $2$ needs at most $0.7$ time units to move from any point of region $\pi_2$ to the boundary of the neighboring region $\pi_5$. \item $\AP_k$ is a finite set of atomic propositions known to agent $k$. Without loss of generality, we assume that $\AP_k \cap \AP_{k'} = \emptyset$ for all $k \neq k' \in \mathcal{I}$.
\item $L_k: \Pi_k \rightarrow 2^{\AP_k}$ is a labeling function that assigns to each state $\pi^k_j \in \Pi_k$ the subset of atomic propositions of $AP_k$ that are satisfied when agent $k$ is in region $\pi_j$. \end{itemize} \end{definition} \subsubsection{Individual Timed Runs and Words} The behaviors of the individual agents can be captured through their timed runs and timed words. The timed run $r^t_k = (r_k(0), \tau_k(0))(r_k(1), \tau_k(1)) \cdots, \ k \in \mathcal{I}$ of each WTS $\mathcal{T}_k, \ k \in \mathcal{I}$ and the corresponding timed words $w(r_k^t) = (L_k(r_k(0)), \tau_k(0)) \ (L_k(r_k(1)), \tau_k(1)) \ \cdots$ are defined by using the terminology of Def. \ref{run_of_WTS}. \smallskip \subsubsection{Collective Timed Run and Word} \label{sec: collective_run} At the same time, the agents form a team and we are interested in their global, collective behaviors, which we formalize through the following definition. \begin{definition} \label{def: collective_run} Let $r_1^t, \ldots, r_N^t$ be individual timed runs of the agents $1, \ldots, N$, respectively, as defined above. Then, the \emph{collective timed run} $r_G = (r_G(0), \tau_G(0)) (r_G(1), \tau_G(1)) \ldots$ of the team of agents is defined inductively as follows \begin{enumerate} \item $(r_G(0), \tau_G(0)) = ( (r_1(0), \ldots, r_N(0)) , \tau_G(0))$, with $\tau_G(0) = 0$. \item Let $(r_G(i), \tau_G(i)) = ((r_1(i_1), \ldots, r_N(i_N)) , \tau_G(i))$, where $i \geq 0$ be the current state and time stamp of the collective timed run. Then the next state and time stamp $(r_G(i+1), \tau_G(i+1)) = ((r_1(j_1), \ldots, r_N(j_N)) , \tau_G(i+1))$ are given by the following \begin{itemize} \item $\ell = \underset{k \in \mathcal{I}}{\text{argmin}}\{ \tau_k(i_k+1)\}$. \item $\tau_G(i+1) = \tau_\ell(i_\ell+1)$. \item $r_k(j_k) = \begin{cases} r_\ell(i_\ell+1) & \ \text{if} \ k = \ell \\ r_k(i_k) & \ \text{if} \ k \neq \ell. \end{cases} $ \end{itemize} \end{enumerate} Intuitively, given the current states $r_1(i_1),\ldots,r_N(i_N)$ and the next states $r_1(i_1+1),\ldots,r_N(i_N+1)$ of the individual agents at time $\tau_G(i)$, $\ell$ is the index of the agent that will finish its current transition from $r_\ell(i_\ell)$ to $r_\ell(i_\ell + 1)$ the soonest among all agents. The time of agent $\ell$'s arrival at its next state $r_\ell(i_\ell + 1)$ becomes the new time stamp $\tau_G(i+1)$ of the collective timed run. The next state of the collective timed run reflects that each agent $k$ which cannot complete its transition from $r_k(i_k)$ to $r_k(i_k+1)$ before $\tau_G(i+1)$ remains in $r_k(i_k)$. \end{definition} \vspace{-3mm} In what follows, $r_G^t = (r_G(0), \tau_G(0)) (r_G(1), \tau_G(1)) \ldots,$ where $r_G(i) = (r_1(i_1), \ldots, r_N(i_N)), \ i, i_k \geq 0$ and $k \in \mathcal{I}$ denotes the collective timed run. \vspace{-3mm} \begin{definition} We define the global set of atomic propositions $\AP_G = \displaystyle \bigcup_{k=1}^{N} \AP_k$ and for every state $r_G(i) = (r_1(i_1), \ldots, r_N(i_N))$ of a collective timed run, where $i, i_k \geq 0$ and $k \in \mathcal{I}$, we define the labeling function $L_G:\Pi_1 \times \cdots \times \Pi_N \rightarrow 2^{\AP_G}$ as $L_G(r_G(i)) = \bigcup_{k=1}^{N} L_k(r_k(i_k))$. \end{definition} A collective timed run $r_G^t$ thus naturally produces a timed word $w_G^t = (L_G(r_G(0)), \tau_G(0)) (L_G(r_G(1)), \tau_G(1)) \ldots$ over $\AP_G$.
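The inductive construction of Def.~\ref{def: collective_run} amounts to a time-ordered merge of the individual runs. A minimal Python sketch is given below (for illustration only; it assumes finite run prefixes represented as lists of (state, time) pairs, and, whenever several arrival times coincide, it advances all such agents at once, which matches the collective run displayed in the example that follows):
\begin{verbatim}
def collective_run(runs):
    # runs[k] is the timed run of agent k as a list of (state, time)
    # pairs with runs[k][0][1] == 0; returns the collective timed run.
    N = len(runs)
    idx = [0] * N                              # current positions i_k
    states = [runs[k][0][0] for k in range(N)]
    out = [(tuple(states), 0)]                 # (r_G(0), tau_G(0))
    while all(idx[k] + 1 < len(runs[k]) for k in range(N)):
        # earliest completion time of a pending transition
        t_next = min(runs[k][idx[k] + 1][1] for k in range(N))
        for k in range(N):
            if runs[k][idx[k] + 1][1] == t_next:
                idx[k] += 1                    # agent k arrives now
                states[k] = runs[k][idx[k]][0]
        out.append((tuple(states), t_next))    # the rest remain in place
    return out

r1 = [('pi1', 0.0), ('pi2', 1.0), ('pi3', 2.5), ('pi2', 3.0), ('pi1', 5.0)]
r2 = [('pi1', 0.0), ('pi2', 2.0), ('pi3', 2.5), ('pi2', 4.5), ('pi3', 5.0)]
rG = collective_run([r1, r2])   # reproduces r_G^t of the example below
\end{verbatim}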
\vspace{-2mm} \begin{example} \label{example: team_run} Consider $N=2$ robots operating in a workspace with $\mathcal{W}_0 = \pi_1 \cup \pi_2 \cup \pi_3$, $W = 3$ and $\mathcal{I} = \{1,2\}$, modeled as the WTSs illustrated in Fig. \ref{fig: trans_system_example}. Let $\AP_1 = \{\mathit{green}\}$, and $\AP_2 = \{\mathit{red}\}$. The labeling functions are $L_1(\pi_1^1) = \{green\}, L_1(\pi_2^1) = L_1(\pi_3^1) = \emptyset$, and $L_2(\pi_1^2) = L_2(\pi_2^2) = \emptyset, L_2(\pi_3^2) = \{red\}$. \vspace{-7mm} \begin{figure}[ht!] \centering \begin{tikzpicture}[scale = 1.0] \node(pseudo1) at (-1.2,0){}; \node(0) [line width = 1.0] at (0,0)[shape=circle,draw][fill=green!20] {$\pi_1^1$}; \node(1) [line width = 1.0] at (2,0)[shape=circle,draw][fill=blue!20] {$\pi_2^1$}; \node(5) [line width = 1.0] at (4,0)[shape=circle,draw][fill=blue!20] {$\pi_3^1$}; \node(pseudo2) at (-1.2,-2.0){}; \node(2) [line width = 1.0] at (0,-2.0)[shape=circle,draw][fill=blue!20] {$\pi_1^2$}; \node(3) [line width = 1.0] at (2,-2.0)[shape=circle,draw][fill=blue!20] {$\pi_2^2$}; \node(6) [line width = 1.0] at (4,-2.0)[shape=circle,draw][fill=red!20] {$\pi_3^2$}; \path [->] [line width = 1.0] (0) edge [bend left = 15] node [above] {$1.0$} (1) (1) edge [bend right = -15] node [below] {$2.0$} (0) (1) edge [bend right = -15] node [above] {$1.5$} (5) (5) edge [bend right = -15] node [below] {$0.5$} (1) (2) edge [bend left = 15] node [above] {$2.0$} (3) (3) edge [bend right = -15] node [below] {$1.5$} (2) (3) edge [bend right = -15] node [above] {$0.5$} (6) (6) edge [bend right = -15] node [below] {$2.0$} (3) (pseudo1) edge (0) (pseudo2) edge (2); \node at (-1.9, 0.0) {WTS $\T_1$ :}; \node at (-1.9, -2.0) {WTS $\T_2$ :}; \end{tikzpicture} \caption{WTSs $\T_1, \T_2$ representing two agents in $\mathcal{W}_0$. $\Pi_1 = \{\pi_1^1, \pi_2^1, \pi_3^1\}$, $\Pi^{\text{init}}_1 = \{\pi_1^1\}$, $\Pi_2 = \{\pi_1^2, \pi_2^2, \pi_3^2\}, \Pi^{\text{init}}_2 = \{\pi_1^2\}$; the transitions are depicted as arrows which are annotated with the corresponding weights. } \label{fig: trans_system_example} \end{figure} \noindent Examples of the agents' runs are: \begin{align} r_1^t = & (r_1(0) = \pi_1^1, \tau_1(0) = 0.0)(r_1(1) = \pi_2^1, \tau_1(1) = 1.0) \notag \\ &(r_1(2) = \pi_3^1, \tau_1(2) = 2.5) (r_1(3) = \pi_2^1, \tau_1(3) = 3.0) \notag \\ & (r_1(4) = \pi_1^1, \tau_1(4) = 5.0) \ldots \notag \\ r_2^t = & (r_2(0) = \pi_1^2, \tau_2(0) = 0.0)(r_2(1) = \pi_2^2, \tau_2(1) = 2.0) \notag \\ & (r_2(2) = \pi_3^2, \tau_2(2) = 2.5) (r_2(3) = \pi_2^2, \tau_2(3) = 4.5) \notag \\ & (r_2(4) = \pi_3^2, \tau_2(4) = 5.0) \ldots \notag \end{align} Given $r_1^t$ and $r_2^t$, the collective run $r_G^t$ is given according to Def. \ref{def: collective_run} as follows: \begin{align} r_G^t = & (\underbrace{(\pi_1^1, \pi_1^2)}_{r_G(0)}, \tau_G(0) = 0.0)(\underbrace{(\pi_2^1, \pi_1^2)}_{r_G(1)}, \tau_G(1) = 1.0) \notag \\ & (\underbrace{(\pi_2^1, \pi_2^2)}_{r_G(2)}, \tau_G(2) = 2.0) (\underbrace{(\pi_3^1, \pi_3^2)}_{r_G(3)}, \tau_G(3) = 2.5) \notag \\ & (\underbrace{(\pi_2^1, \pi_3^2)}_{r_G(4)}, \tau_G(4) = 3.0) (\underbrace{(\pi_2^1, \pi_2^2)}_{r_G(5)}, \tau_G(5) = 4.5) \notag \\ &(\underbrace{(\pi_1^1, \pi_3^2)}_{r_G(6)}, \tau_G(6) = 5.0) \ldots \notag \end{align} The produced collective timed word is \begin{align*} w_G^t= & (\{green\},0.0)(\emptyset,1.0)(\emptyset,2.0) (\{red\},2.5) \\ &(\{red\},3.0)(\emptyset,4.5)(\{green,red\},5.0)\ldots.
\end{align*} \end{example} \subsection{Specification} Several different logics have been designed to express timed properties of real-time systems, such as MTL \cite{koymans_MTL}, which extends the until operator of LTL with a time interval. Here, we consider a fragment of MTL, called MITL (see Sec. \ref{sec: preliminaries} for the definition), which has been proposed in \cite{alur_mitl}. Namely, we utilize its point-wise semantics and interpret its formulas over timed runs. Unlike MTL, MITL excludes punctual constraints on the until operator. For instance, the formula $\square (a \Rightarrow \Diamond_{=1}b)$, saying that every $a$ is followed by a $b$ precisely 1 time unit later, is not allowed in MITL, whereas $\square(a \Rightarrow \Diamond_{(0, 1]} b)$, saying that every $a$ is followed by a $b$ within 1 time unit, is. While MTL formulas cannot in general be translated into TBAs, MITL formulas can \cite{alur_mitl}. \smallskip \subsubsection{Local Agent's Specification} Each agent $k$, $k \in \mathcal{I}$ is given an individual, local, independent specification in the form of an MITL formula $\varphi_k$ over the set of atomic propositions $\AP_k$. The satisfaction of $\varphi_k$ is decided from the agent's own perspective, i.e., on the timed run $r_k^t$. \smallskip \subsubsection{Global Team Specification} \label{sec: spec_satisf} In addition, the team of agents is given a global team specification, which is an MITL formula $\varphi_G$ over the set of atomic propositions $\AP_G$. The team specification satisfaction is decided on the collective timed run~$r_G^t$. \addtocounter{example}{-1} \begin{example}[Continued] Recall the two agents from Example \ref{example: team_run}. Each of the agents is given a local, independent specification and, at the same time, the team is given an overall goal that may require collaboration or coordination. Examples of local specification formulas are $\varphi_1 = \square \Diamond_{\leq 10} (\mathit{green})$ and $\varphi_2 = \square( \mathit{red} \Rightarrow \bigcirc \square_{\leq 5} (\neg \mathit{red}))$ stating that ``The green region is periodically visited with at most 10 time units between two consecutive visits'' and ``Whenever the red region is visited, it will not be visited again for the following 5 time units'', respectively. While $\varphi_1$ is satisfied on $r_1^t$, $\varphi_2$ is not satisfied on $r_2^t$. An example of the global specification is $\varphi_G = \square \Diamond_{\leq 5} (\mathit{green} \wedge \mathit{red})$ that imposes a requirement on the agents' collaboration; it states that agents 1 and 2 will periodically simultaneously visit the green and the red region, respectively, with at most 5 time units between two consecutive visits. \end{example} \subsection{Problem Statement} \begin{problem}[Run Synthesis] \label{problem: basic_prob} Given $N$ agents governed by dynamics as in \eqref{eq: system}, a task specification MITL formula $\varphi_G$ for the team of robots over a set of atomic propositions $AP_G$, and $N$ local task specifications $\varphi_k$ over $AP_k, \ k \in \mathcal{I}$, synthesize a sequence of individual timed runs $r_1^t, \ldots, r_N^t$ such that the following hold \begin{equation} \label{eq: problem_adf} \left(r_G^t \models \varphi_G \right) \wedge \left( r_1^t \models \varphi_1 \wedge \ldots \wedge r_N^t \models \varphi_N \right).
\end{equation} \end{problem} Though it might seem that the satisfaction of the individual specifications $\varphi_1,\ldots,\varphi_N$ can be treated as the satisfaction of the formula $\bigwedge_{k \in \mathcal{I}}{\varphi_k}$ on the collective timed run $r_G^t$, this is generally not the case, as demonstrated through the following example: \addtocounter{example}{-1} \begin{example}[Continued] Recall the two agents from Example \ref{example: team_run} and a local specification $\varphi_2 = \square( \mathit{red} \Rightarrow \bigcirc \square_{\leq 2} (\neg \mathit{red}))$. While this specification is satisfied on $r_2^t$ since $w(r_2^t) = (\emptyset,0.0)(\emptyset,2.0) (\{red\},2.5)(\emptyset,4.5)(\{red\},5.0) \ldots$, it can be easily seen that it is not satisfied on $r_G^t$. \end{example} Formally, we have \begin{equation} r_G^t \models \bigwedge_{k \in \mathcal{I}}{\varphi_k} \nLeftrightarrow r_1^t \models \varphi_1 \wedge \ldots \wedge r_N^t \models \varphi_N. \label{eq: remark_import} \end{equation} Hence, Problem~\ref{problem: basic_prob} may not be treated in a straightforward, fully centralized way. We propose a two-stage solution that first pre-computes all timed runs of the individual agents in a decentralized way and stores them efficiently in weighted transition systems enhanced with a B\"uchi acceptance condition. Second, these are combined and inspected with respect to guaranteeing the satisfaction of the team specification by the collective timed run. \section{Proposed Solution} \label{sec: solution} In this section, we introduce a systematic solution to Problem~\ref{problem: basic_prob}. Our overall approach builds on the following steps: \begin{enumerate} \item We construct TBAs $\mathcal{A}_k, \ k \in \mathcal{I}$ and $\mathcal{A}_G$ that accept all the timed words satisfying the specification formulas $\varphi_k, \ k \in \mathcal{I}$ and $\varphi_G$, respectively (Sec. \ref{sec: 4a}). \item We construct a \emph{local B\"uchi WTS} $\widetilde{\mathcal{T}}_k = \mathcal{T}_k \otimes \mathcal{A}_k$, for all $\ k \in \mathcal{I}$. The accepting timed runs of $\widetilde{\T}_k$ are the timed runs of $\mathcal{T}_k$ that satisfy the corresponding local specification formula $\varphi_k, k \in \mathcal{I}$ (Sec. \ref{sec: 4b}). \item We construct a \emph{product B\"uchi WTS} $\mathcal{T}_G = \widetilde{\mathcal{T}}_1 \otimes \cdots \otimes \widetilde{\mathcal{T}}_N$ such that its timed runs are collective timed runs of the team and their projections onto the agents' individual timed runs are admissible by the local B\"uchi WTSs $\widetilde \T_1, \ldots, \widetilde \T_N$, respectively (Sec. \ref{sec: 4c}). \item We construct a \emph{global B\"uchi WTS} $\widetilde{\mathcal{T}}_G = \mathcal{T}_G \otimes \mathcal{A}_G$. The accepting timed runs of $\widetilde{\mathcal{T}}_G$ are the timed runs of $\T_G$ that satisfy the team formula $\varphi_G$ (Sec. \ref{sec: product_buchi_tba}). \item We find an accepting timed run $\widetilde{r}_G^t$ of the global B\"uchi WTS $\widetilde{\mathcal{T}}_G$ and project it onto timed runs of the product B\"uchi WTS $\T_G$, then onto timed runs of the local B\"uchi WTSs $\widetilde \T_1,\ldots, \widetilde \T_N$, and finally onto individual timed runs $r_1^t, \ldots, r_N^t$ of the original WTSs $\T_1,\ldots, \T_N$. By construction, $r_1^t, \ldots, r_N^t$ are guaranteed to satisfy $\varphi_1,\ldots, \varphi_N$, respectively, and furthermore $r_G^t$ satisfies $\varphi_G$ (Sec. \ref{sec: projection}).
\end{enumerate} { \subsection{Construction of TBAs} \label{sec: 4a} As stated in Sec. \ref{sec: preliminaries}, every MITL formula $\varphi$ can be translated into a language-equivalent TBA. Several approaches have been proposed for that purpose, for instance \cite{maler_MITL_TA, alur_mitl, nickovic_timed_aut, MITL_ata}. Here, we translate each local specification $\varphi_k$, where $k \in \mathcal I$, into a TBA $\mathcal{A}_k = (S_k, S^\text{init}_k, X_k, I_k, E_k, \mathcal F_k, AP_k, \mathcal{L}_k)$, and the global specification $\varphi_G$ into a TBA $\A_G = (S_G, S^\text{init}_G, X_G, I_G, E_G, \mathcal F_G, AP_G, \mathcal{L}_G)$. } \subsection{Construction of the local B\"uchi WTSs $\widetilde \T_1,\ldots,\widetilde \T_N$} \label{sec: 4b} \begin{definition} Given a WTS $\mathcal{T}_k =(\Pi_k, \Pi_{k}^{\text{init}}, \rightarrow_k, d_k, AP_k, L_k)$, and a TBA $\A_k = (S_k, S^\text{init}_k, X_k, I_k, E_k, F_k, AP_k, \mathcal{L}_k)$ with $M_k = |X_k|$ and $C^{\mathit{max}}_k$ being the largest constant appearing in $\A_k$, we define their \emph{local B\"uchi WTS} $\widetilde{\T}_k = \mathcal{T}_k \otimes \A_k = (Q_k, Q_{k}^{\mathit{init}}, {\rightsquigarrow}_{k}, \widetilde{d}_k, \widetilde{F}_k, AP_k, \widetilde{L}_k)$ as follows: \begin{itemize} \item {$Q_k \subseteq \{(r_k,s_k) \in \Pi_k \times S_k : L_k(r_k) = \mathcal{L}_k(s_k)\} \times \mathbb{T}_\infty^{M_k} $.} \item $Q_{k}^{\mathit{init}} = \Pi_k^{\mathit{init}} \times S_k^{\mathit{init}} \times \underbrace{\{0\} \times \ldots \times \{0\}}_{M_k \text{ times}}$. \item $q \, {\rightsquigarrow}_k \, q'$ iff \begin{itemize} \item[$\circ$] $q = (r,s,\nu_1,\ldots,\nu_{M_k}) \in Q_k$, \\ $q' = (r',s',\nu_1',\ldots,\nu_{M_k}') \in Q_k$, \item[$\circ$] $r \, \rightarrow_k r'$, and \item[$\circ$] there exist $\gamma, R$, such that $(s,\gamma,R,s') \in E_k$, $\nu_1,\ldots,\nu_{M_k} \models \gamma$, $\nu_1',\ldots,\nu_{M_k}' \models I_k(s')$, and for all $i\in \{1,\ldots, M_k\}$ \begin{equation*} \nu_i' = \begin{cases} 0, & \text{if } x_i \in R \\ \nu_i + d_k(r, r'), & \text{if } x_i \not \in R \text{ and } \\ & \nu_i + d_k(r, r') \leq C^{\mathit{max}}_k \\ \infty, & \text{otherwise}. \end{cases} \end{equation*} \end{itemize} Then $\widetilde{d}_k(q,q') = d_k(r,r')$. \item $\widetilde{F}_k = \{(r_k,s_k,\nu_1,\ldots,\nu_{M_k}) \in Q_k : s_k \in F_k\}$. \item $\widetilde{L}_k(r_k, s_k, \nu_1, \ldots, \nu_{M_k}) = L_k(r_k)$. \end{itemize} \label{def:localBWTS} \end{definition} Each local B\"uchi WTS $\widetilde \T_k, k \in \mathcal I$ is in fact a WTS with a B\"uchi acceptance condition $\widetilde{F}_k$. A timed run of $\widetilde \T_k$ can be written as $\widetilde{r}_k^t = (q_k(0), \tau_k(0))(q_k(1), \tau_k(1)) \ldots$ using the terminology of Def. \ref{run_of_WTS}. It is \emph{accepting} if $q_k(i) \in \widetilde F_k$ for infinitely many $i \geq 0$. An accepting timed run of $\widetilde{\T}_k$ projects onto a timed run of $\T_k$ that satisfies the local specification formula $\varphi_k$ by construction. Formally, the following lemma, whose proof follows directly from the construction and the principles of automata-based LTL model checking (see, e.g., \cite{katoen}), holds: \vspace{-2mm} \begin{lemma} \label{eq: lemma_1} Consider an accepting timed run $\widetilde{r}_k^t = (q_k(0), \tau_k(0))(q_k(1), \tau_k(1)) \ldots$ of the local B\"uchi WTS $\widetilde \T_k$ defined above, where $q_k(i) = (r_k(i), s_k(i), \nu_{k, 1}, \ldots, \nu_{k, M_k})$ denotes a state of $\mathcal{\widetilde T}_k$, for all $i \geq 0$.
The timed run $\widetilde{r}_k^t$ projects onto the timed run $r_k^t = (r_k(0), \tau_k(0))(r_k(1), \tau_k(1)) \ldots $ of the WTS $\mathcal{T}_k$ that produces the timed word $w(r_k^t) = (L_k(r_k(0)), \tau_k(0))(L_k(r_k(1)), \tau_k(1)) \ldots $ accepted by the TBA $\mathcal{A}_k$ via its run $\rho_k = s_k(0)s_k(1) \ldots$ Vice versa, if there exists a timed run $r_k^t = (r_k(0),\tau_k(0))(r_k(1),\tau_k(1))\ldots$ of the WTS $\T_k$ that produces a timed word $w(r_k^t) = (L_k(r_k(0)), \tau_k(0))(L_k(r_k(1)), \tau_k(1)) \ldots$ accepted by the TBA $\A_k$ via its run $\rho_k = s_k(0)s_k(1)\ldots$ then there exists an accepting timed run $\widetilde{r}_k^t = (q_k(0),\tau_k(0))(q_k(1),\tau_k(1)) \ldots$ of $\widetilde{\T}_k$, where $q_k(i)$ denotes $(r_k(i),s_k(i),\nu_{k,1}(i), \ldots, \nu_{k,M_k}(i))$ in $\widetilde{\T}_k$. \end{lemma} \subsection{Construction of the product B\"uchi WTS $\mathcal{T}_G$} \label{sec: 4c} Now we aim to construct a finite product WTS $\mathcal{T}_G$ whose timed runs represent the collective behaviors of the team and whose B\"uchi acceptance condition ensures that the accepting timed runs account for the local specifications. In other words, $\mathcal{T}_G$ is a product of all the local B\"uchi WTSs $\widetilde{\T}_k$ built above. In the construction of $\mathcal{T}_G$, we need to specifically handle the cases when transitions of different agents are associated with different time durations, i.e., different transition weights. To this end, we introduce a vector $b = (b_1, \ldots, b_N) \in \mathbb{T}^N$. Each element of the vector is a rational number $b_k \in \mathbb{T}, k \in \mathcal{I}$, which is either $0$, when agent $k$ has just completed its transition, or the time elapsed from the beginning of the agent's current transition, if this transition is not yet completed. The state of the team of agents is then of the form $q_G = (q_1, \ldots, q_N, b_1, \ldots, b_N, \ell)$ where $q_k$ is a state of $\widetilde \T_k$, for all $k \in \mathcal{I}$, and $\ell \in \mathcal{I}$ has a special meaning in relation to the acceptance condition of $\T_G$ that will become clear shortly. Taking the above into consideration, we define the global model $\T_G$ as follows: \begin{definition} Given $N$ local B\"uchi WTSs $\widetilde{\T}_1,\ldots,\widetilde{\T}_N$ from Def.~\ref{def:localBWTS}, their \emph{product B\"uchi WTS} $\mathcal{T}_G = \widetilde{\T}_1 \otimes \ldots \otimes \widetilde{\T}_N =(Q_G, Q_{G}^{\mathit{init}}, \rightarrow_G, d_G, F_G, \AP_G, L_G)$ is defined as follows: \begin{itemize} \item {$Q_G \subseteq Q_1 \times \cdots \times Q_N \times \mathbb{T}^N \times \{1, \ldots, N\}$.} \item $Q_{G}^{\mathit{init}} = Q_{1}^{\mathit{init}} \times \ldots \times Q_{N}^{\mathit{init}} \times \underbrace{\{0\} \times \ldots \times \{0\}}_{N \text{ times}} \times \{1\}$.
\item $q_G \rightarrow_G q_G'$ iff \begin{itemize} \item[$\circ$] $q_G =(q_1, \ldots, q_N, b_1, \ldots, b_N,\ell) \in Q_G, \\ q_G' = (q'_1, \ldots, q'_N, b'_1, \ldots, b'_N,\ell') \in Q_G$, \item[$\circ$] {$\exists \ q''_k \in Q_k : \, q_k {\rightsquigarrow}_k \, q''_k$, for some $k \in \mathcal{I}$}, \item[$\circ$] \[b_k' = \begin{cases} 0, & \text{if } b_k + d_{\mathit{min}} = \widetilde{d}_k(q_k,q_k'') \\ &\text{and } q_k' = q_k'' \\ b_k + d_{\mathit{min}}, &\text{if } b_k + d_{\mathit{min}} < \widetilde{d}_k(q_k,q_k'') \\ &\text{and } q_k' = q_k \end{cases} \] where $d_{\mathit{min}} = \underset{k\in \{1,\ldots,N\}}{\text{min}} (\widetilde{d}_k(q_k,q_k'') - b_k)$ is (loosely speaking) the smallest time step that can be applied, and \item[$\circ$] \[\ell' = \begin{cases} \ell, & \text{if } q_\ell \not \in \widetilde{F}_\ell \\ ((\ell \mod N) + 1), & \text{otherwise} \end{cases} \] \end{itemize} Then $d_G(q_G,q_G') = d_{\mathit{min}}$. \item $F_G = \{(q_1, \ldots, q_N, b_1, \ldots, b_N, N) \in Q_G : q_N \in \widetilde{F}_N\}$. \item $AP_G = \displaystyle \bigcup_{k=1}^{N} AP_k$. \item $L_G((q_1,\ldots,q_N,b_1,\ldots, b_N,\ell)) = \displaystyle \bigcup_{k=1}^{N} \widetilde{L}_k(q_k)$. \end{itemize} \end{definition} The product WTS $\T_G$ is again a WTS with a B\"uchi acceptance condition. Informally, the index $\ell$ in a state $q_G =(q_1, \ldots, q_N, b_1, \ldots, b_N,\ell) \in Q_G$ allows us to project an accepting timed run of $\T_G$ onto an accepting run of every one of the local B\"uchi WTSs. The construction is based on the standard definition of B\"uchi automata intersection (see, e.g.,~\cite{katoen}). The following lemma follows directly from the construction and the principles of automata-based LTL model checking (see, e.g., \cite{katoen}): \vspace{-1mm} \begin{lemma} \label{eq: lemma_2} For all $k \in \mathcal I$, an accepting timed run ${r}_G^t$ of the product B\"uchi WTS ${\mathcal{T}}_G$ projects onto an accepting timed run $r_k^t$ of the local B\"uchi WTS $\widetilde\T_k$ that produces a timed word $w(r_k^t)$ accepted by the corresponding TBA $\mathcal{A}_k$. Vice versa, if there exists a timed run $r_k^t$ of the local B\"uchi WTS $\widetilde\T_k$ that produces a timed word $w(r_k^t)$ accepted by the TBA $\A_k$ for each $k \in \mathcal I$, then there exists an accepting timed run ${r}_G^t$ of ${\T}_G$. \end{lemma} \vspace{-2mm} \subsection{Construction of the global B\"uchi WTS $\widetilde{\mathcal{T}}_G$} \label{sec: product_buchi_tba} \begin{definition} Finally, given the product B\"uchi WTS $\mathcal{T}_G =(Q_G, Q_{G}^{\text{init}}, \rightarrow_G, d_G, F_G, AP_G, L_G)$, and a TBA $\A_G = (S_G, S^\text{init}_G, X_G, I_G, E_G, \mathcal F_G, AP_G, \mathcal{L}_G)$ that corresponds to the team specification formula $\varphi_G$, with $M_G = |X_G|$ and $C^{\mathit{max}}_G$ being the largest constant appearing in $\A_G$, we define their product WTS $\widetilde{\T}_G = \mathcal{T}_G \otimes \A_G = (\widetilde{Q}_G, \widetilde{Q}_{G}^{\mathit{init}}, \rightsquigarrow_{G},$ $\widetilde{d}_G, \widetilde{F}_G, AP_G, \widetilde{L}_G)$ as follows: \begin{itemize} \item $\widetilde{Q}_G \subseteq \{(q,s) \in Q_G \times S_G : L_G(q) = \mathcal{L}_G(s)\} \times \mathbb{T}_\infty^{M_G} \times \{1,2\}$. \item $\widetilde{Q}_{G}^{\mathit{init}} = Q_G^{\mathit{init}} \times S_G^{\mathit{init}} \times \underbrace{\{0\} \times \ldots \times \{0\}}_{M_G \text{ times}} \times \{1,2\} $.
\item $q \rightsquigarrow_G q'$ iff \begin{itemize} \item[$\circ$] $q = (r,s,\nu_1,\ldots,\nu_{M_G}, \ell) \in \widetilde{Q}_G$ , \\ $q' = (r',s',\nu_1',\ldots,\nu_{M_G}',\ell') \in \widetilde{Q}_G$, \item[$\circ$] $r \rightarrow_G r'$, and \item[$\circ$] there exist $\gamma, R$, such that $(s,\gamma,R,s') \in E_G$, $\nu_1,\ldots,\nu_{M_G} \models \gamma$, $\nu_1', \ldots, \nu_{M_G}' \models I_G(s')$, and for all $i\in \{1,\ldots, M_G\}$ \begin{equation*} \nu_i' = \begin{cases} 0, & \text{if } x_i \in R \\ \nu_i + d_G(r, r'), & \text{if } x_i \not \in R \text{ and } \\ & \nu_i + d_G(r, r') \leq C^{\mathit{max}}_G \\ \infty, & \text{otherwise} \end{cases} \end{equation*} \item[$\circ$] \[\ell' = \begin{cases} 1 \text{ if } \ell = 1 \text { and } r \not \in {F}_G, \text{ or } \ell = 2 \text{ and } s \in \mathcal F_G \\ 2 \text{ otherwise} \end{cases} \] \end{itemize} Then $\widetilde{d}_G(q,q') = d_G(r,r')$. \item {$\widetilde{F}_G = \{(r,s,\nu_1,\ldots,\nu_{M_G},1) \in \widetilde{Q}_G : r \in F_G\}$.} \item $\widetilde{L}_G(r_G, s_G, \nu_1, \ldots, \nu_{M_G}, \ell) = L_G(r_G)$. \end{itemize} \end{definition} Analogously to the above, the global B\"uchi WTS $\widetilde \T_G$ is a WTS with a B\"uchi acceptance condition. An accepting timed run of $\widetilde{\T}_G$ guarantees the satisfaction of the team specification formula $\varphi_G$ by construction. Furthermore, the projected individual timed runs of the original $\T_1, \ldots, \T_N$ satisfy their respective local specifications. The following lemma follows directly from the construction and the principles of automata-based LTL model checking (see, e.g., \cite{katoen}): \vspace{-1mm} \begin{lemma} \label{eq: lemma_3} An accepting timed run $\widetilde{r}_G^t$ of the global B\"uchi WTS $\widetilde{\mathcal{T}}_G$ projects onto an accepting timed run $r_G^t$ of the product B\"uchi WTS $\mathcal{T}_G$ that produces a timed word $w(r_G^t)$ accepted by the TBA $\mathcal{A}_G$. Vice versa, if there exists a timed run $r_G^t$ of the product B\"uchi WTS $\mathcal{T}_G$ that produces a timed word $w(r_G^t)$ accepted by the TBA $\A_G$, then there exists an accepting timed run $\widetilde{r}_G^t$ of $\widetilde{\T}_G$. \end{lemma} \vspace{-3mm} \subsection{Projection to the desired timed runs of $\T_1,\ldots, \T_N$} \label{sec: projection} An accepting run $\widetilde r_G^t$ of the global B\"uchi WTS $\widetilde \T_G$ can be found efficiently by leveraging ideas from automata-based LTL model checking \cite{katoen}. Namely, $\widetilde \T_G$ is viewed as a graph that is searched for a so-called accepting lasso: a cycle containing an accepting state that is reachable from the initial state. Once $\widetilde r_G^t$ is obtained, Lemmas \ref{eq: lemma_3}, \ref{eq: lemma_2}, and \ref{eq: lemma_1} directly provide guidelines for the projection of $\widetilde r_G^t$ onto the individual timed runs of $\T_1,\ldots,\T_N$. In particular, $\widetilde r_G^t$ is projected onto a timed run $r_G^t$ of $\T_G$, which is projected onto timed runs $\widetilde r_1^t,\ldots,\widetilde r_N^t$ of $\widetilde \T_1,\ldots,\widetilde \T_N$, which are finally projected onto timed runs $ r_1^t,\ldots, r_N^t$ of $\T_1,\ldots, \T_N$, respectively. Such a projection guarantees that $ r_1^t,\ldots, r_N^t$ are a solution to Problem \ref{problem: basic_prob}.
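For completeness, we note that the lasso search itself requires only standard graph reachability. The following Python fragment is a minimal illustration (ours, not tied to any particular tool): it treats $\widetilde{\T}_G$ as an explicit finite graph, where \texttt{succ} and \texttt{accepting} are assumed callbacks encoding the transition relation $\rightsquigarrow_G$ and the acceptance set $\widetilde{F}_G$, and the transition weights are ignored since they do not affect reachability:
\begin{verbatim}
from collections import deque

def reachable(sources, succ):
    # BFS; returns a parent map usable for path reconstruction
    parent = {q: None for q in sources}
    queue = deque(sources)
    while queue:
        q = queue.popleft()
        for q2 in succ(q):
            if q2 not in parent:
                parent[q2] = q
                queue.append(q2)
    return parent

def path_to(parent, target):
    path = []
    q = target
    while q is not None:
        path.append(q)
        q = parent[q]
    return list(reversed(path))

def accepting_lasso(initial, succ, accepting):
    # prefix to a reachable accepting state f, plus a cycle back to f
    fwd = reachable(list(initial), succ)
    for f in [q for q in fwd if accepting(q)]:
        back = reachable(list(succ(f)), succ)
        if f in back:
            return path_to(fwd, f), path_to(back, f)
    return None
\end{verbatim}
The returned prefix leads from an initial state to an accepting state $f$, and the returned cycle leads from a successor of $f$ back to $f$; their concatenation, with the cycle repeated indefinitely, yields an accepting lasso. \vspace{-2mm} \section{Illustrative Example} \label{sec: simulation_results} For an illustrative example, consider $2$ robots in the shared workspace of Fig. \ref{fig: illustrative_example}.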
The workspace is partitioned into $W = 21$ cells and a robot's state is defined by the cell it currently occupies. Agent 1 (R1) is depicted in green and is two times faster than Agent 2 (R2), which is depicted in red. We assume that the environment imposes moving constraints such that traveling right and up is faster than traveling left and down. Let Agent 1 need 1 time unit for up and right moves and 2 time units for down and left moves. Let also Agent 2 need 2 time units for up and right moves and 4 time units for down and left moves. We consider a scenario where the robots have to eventually meet at a yellow region (global team task), and at the same time, they have to recharge within a certain time interval at recharge locations (blue squares with the circles in the respective color). The individual specifications are $\varphi_1 = \Diamond_{\leq 6} (\mathit{recharge1})$ and $\varphi_2 = \Diamond_{\leq 12} (\mathit{recharge2})$ stating that agent 1 has to recharge within 6 time units and agent 2 within 12 time units, respectively, and the team task is $\varphi_G = \Diamond_{\leq 30} \{(\mathit{meet_1^A} \wedge \mathit{meet_2^A}) \vee (\mathit{meet_1^B} \wedge \mathit{meet_2^B} ) \}$ stating that the agents have to meet either in yellow region $A$ or $B$ within 30 time units. \begin{figure}[ht!] \centering \begin{tikzpicture}[scale = 0.7] \draw[step=1.5, line width=.04cm] (-7.5,-1.5) grid (3,3); \draw[-latex, draw=green!90, line width = 1.0] (-2.25, 3.30) -- (-2.25,2.50); \draw[-latex, draw=red!70, line width = 1.0] (-2.25, -1.8) -- (-2.25,-1.05); \draw [green!90, line width = 1.5] (-2.90, 3.30) -- (-2.25,3.30); \draw [red!50, line width = 1.0] (-2.90, -1.8) -- (-2.25,-1.8); \filldraw[fill=green!90, line width=.04cm] (-2.25,2.30) circle (0.25cm); \node at (-3.20,3.25) {$R_1$}; \filldraw[fill=red!70, line width=.04cm] (-2.25,-0.80) circle (0.25cm); \node at (-3.20,-1.80) {$R_2$}; \filldraw[fill=yellow!90, line width=.04cm] (0.28, 1.67) rectangle +(1.0, 1.0); \node (agent 1) at (0.75, 2.20) [label=center:\textbf{$A$}] {}; \filldraw[fill=yellow!90, line width=.04cm] (-7.25, -1.2) rectangle +(1.0, 1.0); \node (agent 1) at (-6.75,-0.7) [label=center:\textbf{$B$}] {}; \filldraw[fill=blue!50, line width=.04cm] (-4.28, 1.7) rectangle +(1.0, 1.0); \filldraw[fill=red!70, line width=.04cm] (-3.75, 2.2) circle (0.25cm); \node at (-3.75, 2.2) {$2$}; \filldraw[fill=blue!50, line width=.04cm] (-5.68, 0.3) rectangle +(1.0, 1.0); \filldraw[fill=green!90, line width=.04cm] (-5.15, 0.8) circle (0.25cm); \node at (-5.15, 0.8) {$1$}; \draw[-latex, draw=green!90, line width = 1.0] (-2.25,2.30) -- (-2.25,0.80); \draw[-latex, draw=green!90, line width = 1.0] (-2.25,0.80) -- (-3.90,0.80); \draw[-latex, draw=green!90, line width = 1.0] (-3.90,0.80) -- (-5.1,0.80); \draw[-latex, draw=green!90, line width = 1.0] (-5.10,0.80) -- (-5.1,-0.40); \draw[-latex, draw=green!90, line width = 1.0] (-5.10,-0.40) -- (-3.8,-0.40); \draw[-latex, draw=green!90, line width = 1.0] (-3.8,-0.40) -- (-2.2,-0.40); \draw[-latex, draw=green!90, line width = 1.0] (-2.2,-0.40) -- (-0.6,-0.40); \draw[-latex, draw=green!90, line width = 1.0] (-0.6,-0.40) -- (0.7,-0.40); \draw[-latex, draw=green!90, line width = 1.0] (0.7,-0.40) -- (0.7,0.7); \draw[-latex, draw=green!90, line width = 1.0] (0.7,0.7) -- (0.7,1.9); \draw[-latex, draw=red!70, line width = 1.0] (-2.25,-0.80) -- (-3.70,-0.8); \draw[-latex, draw=red!70, line width = 1.0] (-3.70,-0.8) -- (-3.70,0.7); \draw[-latex, draw=red!70, line width = 1.0] (-3.70,0.7) -- (-3.70,2.2); \draw[-latex,
draw=red!70, line width = 1.0] (-3.70,2.2) -- (-2.2,2.2); \draw[-latex, draw=red!70, line width = 1.0] (-2.2,2.2) -- (-0.7,2.2); \draw[-latex, draw=red!70, line width = 1.0] (-0.7,2.2) -- (0.6,2.2); \filldraw[fill=blue!50, line width=.04cm] (1.65, 1.6) rectangle +(1.2, 1.2); \filldraw[fill=green!90, line width=.04cm] (2.0, 2.1) circle (0.20cm); \filldraw[fill=red!70, line width=.04cm] (2.5, 2.1) circle (0.20cm); \node at (2.0, 2.1) {$1$}; \node at (2.5, 2.1) {$2$}; \node at (-7.2, 2.7) {$\pi_1$}; \node at (1.9, -0.3) {$\pi_{21}$}; \node at (1.9, 1.30) {$\pi_{14}$}; \node at (-7.2, 1.20) {$\pi_{8}$}; \end{tikzpicture} \caption{An illustrative example with $2$ robots evolving in a common workspace. Let $\mathcal{W}_0 = \pi_1 \cup \ldots \cup \pi_{21}$. We enumerate the regions starting from the left region in every row and ending in the right. The initial positions of robots $R_1, R_2$ are depicted by a green and a red circle, respectively, the desired meeting points in yellow and the recharging spots by the agents' respective colors inside a blue box. The accepting runs for task specifications $\phi_1$, $\phi_2$, $\phi_G$ are depicted with green and red arrows for agent 1 and agent 2 respectively.} \label{fig: illustrative_example} \end{figure} \begin{figure*} [htp!] \centering \begin{tikzpicture} [scale = 0.75] \draw[dashed, line width = 1.0, red] (0,-2.5) -- (0,5); \draw[dashed, line width = 1.0, red] (4,-2.5) -- (4,5); \draw[dashed, line width = 1.0, red] (6,-2.5) -- (6,5); \draw[dashed, line width = 1.0, red] (8,-2.5) -- (8,5); \draw[dashed, line width = 1.0, red] (10,-2.5) -- (10,5); \draw[dashed, line width = 1.0, red] (12,-2.5) -- (12,5); \draw[dashed, line width = 1.0, red] (14,-2.5) -- (14,5); \draw[-latex, line width = 1.0] (0, -2.5) -- (1.95,-2.5); \draw[-latex, line width = 1.0] (1.95, -2.5) -- (3.95,-2.5); \draw[-latex, line width = 1.0] (3.95, -2.5) -- (5.95,-2.5); \draw[-latex, line width = 1.0] (5.95, -2.5) -- (7.95,-2.5); \draw[-latex, line width = 1.0] (7.95, -2.5) -- (8.95,-2.5); \draw[-latex, line width = 1.0] (8.95, -2.5) -- (9.95,-2.5); \draw[-latex, line width = 1.0] (9.95, -2.5) -- (10.95,-2.5); \draw[-latex, line width = 1.0] (10.95, -2.5) -- (11.95,-2.5); \draw[-latex, line width = 1.0] (11.95, -2.5) -- (12.95,-2.5); \draw[-latex, line width = 1.0] (12.95, -2.5) -- (13.95,-2.5); \draw[-latex, line width = 1.0] (0, 2.5) -- (3.95,2.5); \draw[-latex, line width = 1.0] (3.95, 2.5) -- (5.95,2.5); \draw[-latex, line width = 1.0] (5.95, 2.5) -- (7.95,2.5); \draw[-latex, line width = 1.0] (7.95, 2.5) -- (9.95,2.5); \draw[-latex, line width = 1.0] (9.95, 2.5) -- (11.95,2.5); \draw[-latex, line width = 1.0] (11.95, 2.5) -- (13.95,2.5); \draw[-latex, line width = 1.0] (0, 5) -- (1.95,5); \draw[-latex, line width = 1.0] (1.95, 5) -- (3.95,5); \draw[-latex, line width = 1.0] (3.95, 5) -- (5.95,5); \draw[-latex, line width = 1.0] (5.95, 5) -- (7.95,5); \draw[-latex, line width = 1.0] (7.95, 5) -- (8.95,5); \draw[-latex, line width = 1.0] (8.95, 5) -- (9.95,5); \draw[-latex, line width = 1.0] (9.95, 5) -- (10.95,5); \draw[-latex, line width = 1.0] (10.95, 5) -- (11.95,5); \draw[-latex, line width = 1.0] (11.95, 5) -- (13.95,5); \draw[line width = 1.0] (0,0) -- (14,0); \fill[blue] (0,0) circle (2pt); \fill[blue] (1,0) circle (2pt); \fill[blue] (2,0) circle (2pt); \fill[blue] (3,0) circle (2pt); \fill[blue] (4,0) circle (2pt); \fill[blue] (5,0) circle (2pt); \fill[blue] (6,0) circle (2pt); \fill[blue] (7,0) circle (2pt); \fill[blue] (8,0) circle (2pt); \fill[blue] (9,0) circle 
(2pt); \fill[blue] (10,0) circle (2pt); \fill[blue] (11,0) circle (2pt); \fill[blue] (12,0) circle (2pt); \fill[blue] (13,0) circle (2pt); \fill[blue] (14,0) circle (2pt); \fill[orange] (0,-2.5) circle (2pt); \fill[orange] (2,-2.5) circle (2pt); \fill[orange] (4,-2.5) circle (2pt); \fill[orange] (6,-2.5) circle (2pt); \fill[orange] (8,-2.5) circle (2pt); \fill[orange] (9,-2.5) circle (2pt); \fill[orange] (10,-2.5) circle (2pt); \fill[orange] (11,-2.5) circle (2pt); \fill[orange] (12,-2.5) circle (2pt); \fill[orange] (13,-2.5) circle (2pt); \fill[orange] (14,-2.5) circle (2pt); \fill[green] (0,2.5) circle (2pt); \fill[green] (4,2.5) circle (2pt); \fill[green] (6,2.5) circle (2pt); \fill[green] (8,2.5) circle (2pt); \fill[green] (10,2.5) circle (2pt); \fill[green] (12,2.5) circle (2pt); \fill[green] (14,2.5) circle (2pt); \fill[red] (0, 5) circle (2pt); \fill[red] (2, 5) circle (2pt); \fill[red] (4, 5) circle (2pt); \fill[red] (6, 5) circle (2pt); \fill[red] (8, 5) circle (2pt); \fill[red] (9, 5) circle (2pt); \fill[red] (10,5) circle (2pt); \fill[red] (11,5) circle (2pt); \fill[red] (12,5) circle (2pt); \fill[red] (13,5) circle (2pt); \fill[red] (14,5) circle (2pt); \node at (0.2, -0.4) {\small $0$}; \node at (1, -0.4) {\small $1$}; \node at (2, -0.4) {\small $2$}; \node at (3, -0.4) {\small $3$}; \node at (4, -0.4) {\small $4$}; \node at (5, -0.4) {\small $5$}; \node at (6, -0.4) {\small $6$}; \node at (7, -0.4) {\small $7$}; \node at (8, -0.4) {\small $8$}; \node at (9, -0.4) {\small $9$}; \node at (10, -0.4) {\small $10$}; \node at (11, -0.4) {\small $11$}; \node at (12, -0.4) {\small $12$}; \node at (13, -0.4) {\small $13$}; \node at (14, -0.4) {\small $14$}; \node at (0.4, -2.9) {\small $(\pi_4^1, 0)$}; \node at (2.0, -2.1) {\small $(\pi_{11}^1, 2)$}; \node at (4, -2.9) {\small $(\pi_{10}^1, 4)$}; \node at (6, -2.1) {\small $(\pi_{9}^1, 6)$}; \node at (8, -2.9) {\small $(\pi_{16}^1, 8)$}; \node at (9, -2.1) {\small $(\pi_{17}^1, 9)$}; \node at (10, -2.9) {\small $(\pi_{18}^1, 10)$}; \node at (11, -2.1) {\small $(\pi_{19}^1, 11)$}; \node at (12, -2.9) {\small $(\pi_{20}^1, 12)$}; \node at (13, -2.1) {\small $(\pi_{13}^1, 13)$}; \node at (14, -2.9) {\small $(\pi_{6}^1, 14)$}; \node at (0.7, 2.9) {\small $(\pi_{18}^2, 0)$}; \node at (4, 2.1) {\small $ (\pi_{17}^2, 4)$}; \node at (6, 2.9) {\small $(\pi_{10}^2, 6)$}; \node at (8, 2.1) {\small $(\pi_{3}^2, 8)$}; \node at (10, 2.9) {\small $(\pi_{4}^2, 10)$}; \node at (12, 2.1) {\small $(\pi_{5}^2, 12)$}; \node at (14, 2.9) {\small $(\pi_{6}^2, 14)$}; \node at (0.6, 5.4) {\tiny $((\pi_{4}^1, \pi_{18}^2), 0)$}; \node at (2, 4.6) {\tiny $((\pi_{11}^1, \pi_{18}^2), 2)$}; \node at (4, 5.4) {\tiny $((\pi_{10}^1, \pi_{17}^2), 4)$}; \node at (6, 4.6) {\tiny $((\pi_{9}^1, \pi_{10}^2), 6)$}; \node at (8, 5.4) {\tiny $((\pi_{16}^1, \pi_{3}^2), 8)$}; \node at (9, 4.6) {\tiny $((\pi_{17}^1, \pi_{3}^2), 9)$}; \node at (10, 5.4) {\tiny $((\pi_{18}^1, \pi_{4}^2), 10)$}; \node at (11, 4.6) {\tiny $((\pi_{9}^1, \pi_{4}^2), 11)$}; \node at (12, 5.4) {\tiny $((\pi_{20}^1, \pi_{5}^2), 12)$}; \node at (13, 4.6) {\tiny $((\pi_{13}^1, \pi_{5}^2), 13)$}; \node at (14, 5.4) {\tiny $((\pi_{6}^1, \pi_{6}^2), 14)$}; \node at (1.0, 1.5) {\boxed{$\text{Agent} \ 2$}}; \node at (1.0, -1.0) {\boxed{$\text{Agent} \ 1$}}; \node at (0.8, 4.05) {\boxed{$\text{Team}$}}; \end{tikzpicture} \caption{The accepting runs $\widetilde{r}_1^t, \widetilde{r}_2^t$, the collective run $\widetilde{r}_G^t$ and the corresponding timed stamps. 
We denote with red dashed lines the time instants at which both agents have the same time stamps.} \label{fig: run_robots2} \end{figure} By following the process described in Section \ref{sec: solution} step by step, we obtain the accepting timed run $\widetilde{r}_G^t = ((\pi_4^1, \pi_{18}^2), 0)((\pi_{11}^1, \pi_{18}^2), 2) \ldots ((\pi_9^1,\pi_{10}^2), 6) ((\pi_{16}^1,\pi_{3}^2), 8) \ldots \\ ((\pi_{13}^1, \pi_{5}^2), 13)((\pi_6^1, \pi_6^2), 14) \ldots$ with corresponding timed word $w(\widetilde{r}_G^t) = (\emptyset, 0)(\emptyset, 2) \ldots (\{\mathit{recharge1}\}, 6) \\ (\{\mathit{recharge2}\}, 8) \ldots (\emptyset, 13)(\{\mathit{meet}_1^A, \mathit{meet}_2^A \}, 14) \ldots$, which satisfies the formula $\varphi_G$. The run $\widetilde{r}_G^t$ can be projected onto the individual timed runs $\widetilde{r}_1^t = (\pi_4^1, 0)(\pi_{11}^1, 2)(\pi_{10}^1, 4) \\ (\pi_{9}^1, 6)(\pi_{16}^1, 8)(\pi_{17}^1, 9)(\pi_{18}^1, 10)(\pi_{19}^1, 11)(\pi_{20}^1, 12)(\pi_{13}^1, 13) \\ (\pi_{6}^1, 14) \ldots$ and $\widetilde{r}_2^t = (\pi_{18}^2, 0)(\pi_{17}^2, 4)(\pi_{10}^2, 6)(\pi_{3}^2, 8)(\pi_{4}^2, 10)\\(\pi_{5}^2, 12)(\pi_{6}^2, 14) \ldots$ (they are depicted in Fig. \ref{fig: illustrative_example} with green and red arrows, respectively) with corresponding timed words $w(\widetilde{r}_1^t) = (\emptyset, 0)(\emptyset, 2)(\emptyset, 4)(\{\mathit{recharge1}\}, 6)(\emptyset, 8)(\emptyset, 9)(\emptyset, 10)(\emptyset, 11)\\(\emptyset, 12)(\emptyset, 13)(\{\mathit{meet}_1^A\}, 14) \ldots$ and $w(\widetilde{r}_2^t) = (\emptyset, 0)(\emptyset, 4)\\(\emptyset, 6)(\{\mathit{recharge2}\}, 8)(\emptyset, 10)(\emptyset, 12)(\{\mathit{meet}_2^A\}, 14)\ldots$, which satisfy the formulas $\varphi_1$ and $\varphi_2$, respectively. All conditions from \eqref{eq: problem_adf} are satisfied. The runs and the words of the illustrative example are depicted in Fig. \ref{fig: run_robots2}. Consider now the alternative runs of the agents, where they first meet at meeting point $A$ (after 8 time units) and then recharge in the region $\pi_{7}$ (after 9 and 10 time units, respectively). Regardless of how the agents continue, they have accomplished the untimed formulas $\varphi_1' = \Diamond(\mathit{recharge1})$, $\varphi_2' = \Diamond (\mathit{recharge2})$, and $\varphi_G' = \Diamond \{(\mathit{meet_1^A} \wedge \mathit{meet_2^A}) \vee (\mathit{meet_1^B} \wedge \mathit{meet_2^B} ) \}$. Although this is in fact a more efficient way to satisfy the untimed formulas $\varphi_1', \varphi_2'$, and $\varphi_G'$ than the one described above, the formula $\varphi_1$ is violated due to its time constraint. \vspace{-3mm} \section{Conclusions and Future Work} \label{sec: conclusions} We have proposed a systematic method for multi-agent controller synthesis aiming at cooperative planning under high-level specifications given as MITL formulas. The solution involves a sequence of algorithmic automata constructions such that not only the team specification but also the individual specifications are fulfilled. Future research directions include the consideration of more complicated dynamics than the fully actuated ones in \eqref{eq: system}, decentralized solutions in which every agent has information only from its neighbors, as well as the modeling of the system with Markov Decision Processes (MDPs) and probabilistic verification. \vspace{-1mm}
\section{Introduction} The ongoing accelerator-based experiments in the search for new physics can solve some of the unanswered problems of fundamental physics, like the matter-antimatter asymmetry. Complementary to these high-energy experiments is the search for violation of spatial inversion (${\mathcal{P}}$) and time reversal (${\mathcal{T}}$) symmetries in nuclei, atoms, or molecules in the low-energy domain using non-accelerator experiments \cite{ginges_2004, sandars_1965, sandars_1967, labzovskii_1978, barkov_1980, shapiro_1968, pospelov_2005}. One such ${\mathcal{P,T}}$-violating interaction results in the electric dipole moment of the electron (eEDM) \cite{bernreuther_1991, tl_edm, ybf_edm, tho_edm}. The eEDM predicted by the standard model (SM) of elementary particle physics is too small ($< 10^{-38}$ e cm) \cite{khriplovich_2011} to be observed by today's experiments. However, many extensions of the SM predict the value of the eEDM to be in the range of $10^{-26} - 10^{-29}$ e cm \cite{commins_1999}, and the sensitivity of modern eEDM experiments also lies in the same range. To date, the experiment done by the ACME collaboration \cite{tho_edm} using ThO yields the best upper bound on the eEDM. The high sensitivity of modern eEDM experiments is mainly due to the fact that heavy paramagnetic diatomic molecules offer a very high internal effective electric field ($E_\mathrm{eff}$), which enhances the eEDM effects \cite{sushkov_1978, flambaum_1976}. In the experiment, both the eEDM and the coupling interaction between the scalar-hadronic current and the pseudoscalar electronic current contribute to the P,T-odd frequency shift. Therefore, it is impossible to decouple the individual contributions of these two effects in a single experiment. However, it is possible to disentangle these two contributions from each other, and independent limits on the value of the eEDM ($d_e$) and the scalar-pseudoscalar (S-PS) coupling constant ($k_s$) can be obtained by using data from two different experiments, as suggested by Dzuba {\it et al.} \cite{dzuba_2011}. Accurate values of $E_\mathrm{eff}$ and of the S-PS ${\mathcal{P,T}}$-odd interaction constant ($W_\mathrm{s}$) are therefore needed, since these two quantities cannot be measured by means of any experiment. Therefore, one has to rely on an accurate {\it ab initio} theory that can simultaneously take care of the effects of relativity and electron correlation for the calculation of these quantities.\par The best way to include the effects of special relativity in electronic structure calculations is to solve the Dirac-Hartree-Fock (DHF) equation in the four-component framework. The DHF method considers an average electron-electron interaction and thus misses the correlation between electrons of opposite spin. On the other hand, the single reference coupled-cluster (SRCC) method is the most preferred many-body theory to incorporate the dynamic part of the electron correlation. The calculation of properties in the SRCC framework can be done either numerically or analytically. In the numerical method (also known as the finite-field (FF) method), the coupled-cluster amplitudes are functions of the external field parameters \cite{monkhorst_1977} and thus, for the calculation of each property, a separate set of CC calculations is needed. The error associated with the FF method is also dependent on the details of the calculation, i.e., the number of data points considered for the numerical differentiation.
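To illustrate the FF procedure, the following minimal Python sketch extracts a first-order property as a numerical derivative of a toy energy function $E(\lambda)$. The polynomial model for $E(\lambda)$ and all numbers in it are made-up placeholders standing in for separate CC calculations at each field strength $\lambda$, with the (assumed) convention that the perturbation enters the Hamiltonian as $-\lambda O_N$; only the differentiation logic is meaningful.
\begin{verbatim}
# Minimal sketch of finite-field (FF) property extraction. Every call to
# E(lam) below stands in for a full CC calculation at field strength lam;
# the toy model E(lam) = E0 - d*lam - (alpha/2)*lam**2 + O(lam**4) is a
# made-up placeholder.
def E(lam, d_exact=0.31, alpha=1.7, E0=-42.0):
    return E0 - d_exact * lam - 0.5 * alpha * lam**2 + 0.02 * lam**4

h = 1e-3  # field step; the FF error depends on h and the stencil used

# 2-point central difference: error O(h^2)
d_2pt = -(E(h) - E(-h)) / (2 * h)

# 4-point central difference: error O(h^4)
d_4pt = -(-E(2*h) + 8*E(h) - 8*E(-h) + E(-2*h)) / (12 * h)

print(d_2pt, d_4pt)  # both approximate the exact slope d_exact = 0.31
\end{verbatim}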
On the contrary, in the analytical method, the CC amplitudes are independent of the external field of perturbation and therefore, one needs to solve only one set of CC equations for the calculation of any number of properties. The normal CC (NCC) method, being non-variational, does not satisfy the generalized Hellmann-Feynman (GHF) theorem and thus, the expectation value and the energy derivative approaches are two different formalisms for the calculation of first-order properties. However, the energy derivative in the NCC framework is the corresponding expectation value plus some additional terms, which makes it closer to the property value obtained in the full configuration interaction (FCI) method. Thus, the property value obtained in the energy derivative method is much more reliable than that of the corresponding expectation value method. Another disadvantage of the expectation value method is that it leads to a non-terminating series, and any truncation further introduces an additional error. The Z-vector method \cite{schafer_1984, zvector_1989} (an energy derivative method), on the other hand, leads to a naturally terminating series at any level of approximation. Higher order derivatives in the NCC framework can be calculated by using the Lagrange multiplier method \cite{koch_1990}, and for the first-order energy derivative, it leads to equations identical to those of the Z-vector method. It is worth noting that there are alternative options like the expectation value CC (XCC) \cite{bartlett_xcc}, unitary CC (UCC) \cite{bartlett_ucc}, and extended CC (ECC) \cite{arponen_ecc, bishop_ecc} methods to solve the SRCC equations. All these methods are known in the literature as variational coupled-cluster (VCC) methods \cite{szalay_1995}. These VCC methods are well established in the non-relativistic framework but are not as popular in the relativistic domain; a few are documented in the literature, like the relativistic UCC by Sur {\it et al.} \cite{mukherjee_ucc,rajat_ucc}, applicable only to atomic calculations. Recently, Sasmal {\it et al.} implemented ECC in the four-component relativistic domain to calculate the magnetic hyperfine structure (HFS) constants of both atoms and molecules in their open-shell ground state configurations \cite{sasmal_ecc}. The ECC method, being variational, satisfies the GHF theorem; therefore, the expectation value and the energy derivative approaches are identical to each other. However, in the ECC method the amplitude equations for the excitation and de-excitation operators are coupled to each other, whereas in the Z-vector method, the amplitude equations of the excitation operator are decoupled from those of the de-excitation operator. This accelerates the convergence of the Z-vector method, at a lower computational cost as compared to ECC. \par In this work, we have calculated the $E_\text{eff}$ and $W_\text{s}$ of RaF in its ground electronic ($^2\Sigma$) state using the Z-vector method in the CC framework. We have also calculated these properties with the expectation value method to show the superiority of the Z-vector method over the expectation value method. We have chosen the RaF molecule for the following reasons: it has been proposed for ${\mathcal{P}}$-odd and ${\mathcal{P,T}}$-odd experiments \cite{isaev_2010, isaev_2013, kudashov_2014} due to its high Schiff moment, $E_\mathrm{eff}$, and $W_\text{s}$. The $E_\text{eff}$ of the $^2\Sigma$ state of RaF is even higher than that of the ground state ($^2\Sigma$) of YbF.
Therefore, more precise values of $E_\text{eff}$ and $W_\text{s}$ and of their ratio are very important for the eEDM experiment using this molecule. RaF can be directly laser cooled as it has a highly diagonal Franck-Condon matrix element between the ground and the first excited electronic state, and the corresponding transition frequency lies in the visible region with a reasonable lifetime \cite{isaev_2010}. \par The manuscript is organized as follows. A brief overview of the expectation value and the Z-vector method in the CC framework, including concise details of the properties calculated in this work, is given in Sec. \ref{theory}. Computational details are given in Sec. \ref{comp}. We present our calculated results and discuss them in Sec. \ref{res_dis} before making concluding remarks. Atomic units are used consistently unless stated otherwise. \section{Theory}\label{theory} \subsection{Expectation value and Z-vector method}\label{corr} The DHF wavefunction is the best description of the ground state in a single determinant theory and thus, it is used as the reference function for the correlation calculation, where the Dirac-Coulomb (DC) Hamiltonian is used, which is given by \begin{eqnarray} {H_{DC}} &=&\sum_{i} \Big [-c (\vec {\alpha}\cdot \vec {\nabla})_i + (\beta -{\mathbb{1}_4}) c^{2} + V^{nuc}(r_i)+ \nonumber\\ && \sum_{j>i} \frac{1}{r_{ij}} {\mathbb{1}_4}\Big] \end{eqnarray} Here, {\bf$\alpha$} and $\beta$ are the usual Dirac matrices, $c$ is the speed of light, ${\mathbb{1}_4}$ is the 4$\times$4 identity matrix, and the sum is over all the electrons, denoted by $i$. The Gaussian charge distribution is used as the nuclear potential function ($V^{nuc}(r_i)$). The DHF method approximates the electron-electron repulsion in an average way and thus misses the correlation between electrons of opposite spin. In this article, we have used the SRCC method to incorporate the dynamic part of the electron correlation. The SRCC wavefunction is given by $|\Psi_{cc}\rangle=e^{T}|\Phi_0\rangle$, where $\Phi_0$ is the DHF wavefunction and $T$ is the coupled-cluster excitation operator, which is given by \begin{eqnarray} T=T_1+T_2+\dots +T_N=\sum_n^N T_n , \end{eqnarray} with \begin{eqnarray} T_m= \frac{1}{(m!)^2} \sum_{ij\dots ab \dots} t_{ij \dots}^{ab \dots}{a_a^{\dagger}a_b^{\dagger} \dots a_j a_i} , \end{eqnarray} where $i,j$ ($a,b$) are the hole (particle) indices and $t_{ij..}^{ab..}$ are the cluster amplitudes corresponding to the cluster operator $T_m$. In the coupled-cluster single and double (CCSD) model, $T=T_1+T_2$. The equations for T$_1$ and T$_2$ are given as \begin{eqnarray} \langle \Phi_{i}^{a} | (H_Ne^T)_c | \Phi_0 \rangle = 0 , \,\, \langle \Phi_{ij}^{ab} | (H_Ne^T)_c | \Phi_0 \rangle = 0 , \label{cc_amplitudes} \end{eqnarray} where H$_N$ is the normal-ordered DC Hamiltonian and the subscript $c$ means that only the connected terms survive in the contraction between H$_N$ and T. Size-extensivity is ensured by this connectedness. \par Once the cluster amplitudes are solved, the expectation value of any property operator of interest, $\langle O_N \rangle$, can be calculated by the following expression, as given in Ref. \cite{cizek_1967}, \begin{eqnarray} \langle O_N \rangle=\frac{\langle \Psi_{cc} | O_N | \Psi_{cc} \rangle}{\langle \Psi_{cc} | \Psi_{cc} \rangle} &=& \frac{\langle \Phi_0 e^{T^{\dagger}} | O_N | e^{T} \Phi_0 \rangle}{\langle \Phi_0 | e^{T^{\dagger}} e^{T} | \Phi_0 \rangle} \nonumber\\ &=& \langle \Phi_0 | (e^{T^{\dagger}} O_N e^{T})_c | \Phi_0 \rangle.
\end{eqnarray} The above series is non-terminating. Since the dominant contribution comes from the linear terms, the linear approximation is the most favored choice. The detailed diagrammatic expression considering only linear terms within the CCSD approximation is given in Fig. \ref{lin_expec} and the corresponding algebraic expression is given in Eq. \ref{expect_eqn}. We use the Einstein summation convention, i.e., repeated indices are summed over in the expression. The $t$ amplitudes with particle (hole) indices in the subscript (superscript) are the corresponding amplitudes of the $T^{\dagger}$ operator. It is interesting to note that there are no possible diagrams (nor algebraic expressions) of the kind $T_2^{\dagger}O$ or $OT_2$, since closed connected diagrams cannot be constructed from these two expressions. \begin{widetext} \begin{eqnarray} \langle O \rangle &=& O(i,a) \cdot t_{i}^{a} + t_{a}^{i} \cdot O(a,i) + t_{a}^{i} \cdot O(a,b) \cdot t_{i}^{b} - t_{a}^{i} \cdot O(j,i) \cdot t_{j}^{a} + t_{ab}^{ij} \cdot O(b,j) \cdot t_{i}^{a}+\nonumber \\ && t_{a}^{i} \cdot O(j,b) \cdot t_{ij}^{ab}-\frac{1}{2} t_{ab}^{ij} \cdot O(k,j) \cdot t_{ik}^{ab} + \frac{1}{2} t_{ab}^{ij} \cdot O(b,c) \cdot t_{ij}^{ac} . \label{expect_eqn} \end{eqnarray} \end{widetext} \begin{figure}[ht] \centering \begin{center} \includegraphics[scale=.1, height=4.0cm]{expect_lin} \caption {Diagrams for the expectation value approach using the linear truncation scheme} \label{lin_expec} \end{center} \end{figure} \par The CC amplitudes are solved in a nonvariational way (using Eq. \ref{cc_amplitudes}) and thus, the CC energy is not minimized with respect to the determinantal coefficients and the molecular orbital coefficients in the expansion of the many-electron correlated wavefunction for a fixed nuclear geometry \cite{monkhorst_1977}. Therefore, the calculation of the CC energy derivative needs to include the derivatives of the energy with respect to these two sets of coefficients, in addition to the derivatives of these coefficients with respect to the external field of perturbation. \begin{figure}[ht] \centering \begin{center} \includegraphics[scale=.1, height=6.0cm]{z_vec_prop} \caption {Diagrams for the energy derivative in the Z-vector method} \label{z_vec_prop} \end{center} \end{figure} However, the derivative terms associated with the determinantal coefficients can be absorbed by the introduction of a perturbation-independent linear operator, $\Lambda$ \cite{zvector_1989}. $\Lambda$ is an antisymmetrized de-excitation operator whose second quantized form is given by \begin{eqnarray} \Lambda=\Lambda_1+\Lambda_2+ \dots+\Lambda_N=\sum_n^N \Lambda_n , \end{eqnarray} where \begin{eqnarray} \Lambda_m= \frac{1}{(m!)^2} \sum_{ij \dots ab \dots} \lambda_{ab \dots}^{ij \dots}{a_i^{\dagger}a_j^{\dagger} \dots a_b a_a} , \end{eqnarray} where $\lambda_{ab \dots}^{ij \dots}$ are the cluster amplitudes corresponding to the operator $\Lambda_m$. A detailed description of the $\Lambda$ operator and its amplitude equations is given in Ref. \cite{zvector_1989}. In the CCSD model, $\Lambda=\Lambda_1+\Lambda_2$.
The explicit equations for the amplitudes of the $\Lambda_1$ and $\Lambda_2$ operators are given by \begin{eqnarray} \langle \Phi_0 |[\Lambda (H_Ne^T)_c]_c | \Phi_{i}^{a} \rangle + \langle \Phi_0 | (H_Ne^T)_c | \Phi_{i}^{a} \rangle = 0, \end{eqnarray} \begin{eqnarray} \langle \Phi_0 |[\Lambda (H_Ne^T)_c]_c | \Phi_{ij}^{ab} \rangle + \langle \Phi_0 | (H_Ne^T)_c | \Phi_{ij}^{ab} \rangle \nonumber \\ + \langle \Phi_0 | (H_Ne^T)_c | \Phi_{i}^{a} \rangle \langle \Phi_{i}^{a} | \Lambda | \Phi_{ij}^{ab} \rangle = 0. \label{lambda_2} \end{eqnarray} It is interesting to note that the third term of Eq. \ref{lambda_2} is of a disconnected nature and it eventually produces one disconnected diagram in the $\Lambda_2$ amplitude equation (for details see Refs. \cite{zvector_1989, sasmal_pra_rapid}). Although the diagram is disconnected, it does not have any closed part. This ensures that the corresponding energy diagram is linked, which restores the size extensivity. The energy derivative can be given as \begin{eqnarray} \Delta E' = \langle \Phi_0 | (O_Ne^T)_c | \Phi_0 \rangle + \langle \Phi_0 | [\Lambda (O_Ne^T)_c]_c | \Phi_0 \rangle \end{eqnarray} where $O_N$ is the derivative of the normal-ordered perturbed Hamiltonian with respect to the external field of perturbation. The detailed diagrammatic expression is given in Fig. \ref{z_vec_prop} and the corresponding algebraic equation is given in Eq. \ref{z_vec_prop_eqn}, \begin{widetext} \begin{eqnarray} \Delta E'&=& O(i,a) \cdot t_{i}^{a} + \lambda_{a}^{i} \cdot O(a,i) + \lambda_{a}^{i} \cdot O(a,b) \cdot t_{i}^{b} + \lambda_{a}^{i} \cdot O(j,i) \cdot t_{j}^{a} + \lambda_{a}^{i} \cdot O(j,b) \cdot t_{ij}^{ab} - \lambda_{a}^{i} \cdot O(j,b) \cdot t_{i}^{b} \cdot t_{j}^{a}- \nonumber \\ && \frac{1}{2} \lambda_{ab}^{ij} \cdot O(k,j) \cdot t_{ik}^{ab} + \frac{1}{2} \lambda_{ab}^{ij} \cdot O(b,c) \cdot t_{ij}^{ac} - \frac{1}{2} \lambda_{bc}^{ik} \cdot O(j,a) \cdot t_{i}^{a} \cdot t_{jk}^{bc} - \frac{1}{2} \lambda_{ac}^{jk} \cdot O(i,b) \cdot t_{i}^{a} \cdot t_{jk}^{bc}. \label{z_vec_prop_eqn} \end{eqnarray} \end{widetext} \subsection{One electron property operators}\label{prop} The $E_{\text{eff}}$ can be obtained by evaluating the following matrix element \begin{eqnarray} E_{\text{eff}} = |W_d \Omega| = | \langle \Psi_{\Omega} | \sum_j^n \frac{H_d(j)}{d_e} | \Psi_{\Omega} \rangle |, \label{E_eff} \end{eqnarray} where $\Omega$ is the component of the total angular momentum along the molecular axis and $\Psi_{\Omega}$ is the wavefunction of the corresponding $\Omega$ state. $n$ is the total number of electrons and H$_d$ is the interaction Hamiltonian of $d_e$ with the internal electric field, which is given by \cite{kozlov_1987, titov_2006} \begin{eqnarray} H_d = 2icd_e \gamma^0 \gamma^5 {\bf \it p}^2 , \label{H_d} \end{eqnarray} where $\gamma$ are the usual Dirac matrices and {\bf \it p} is the momentum operator.
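As an aside, the contractions in Eqs. \ref{expect_eqn} and \ref{z_vec_prop_eqn} can be transcribed almost term by term into \texttt{numpy.einsum} calls. The following Python sketch does so for toy-sized, randomly filled arrays; the array contents are placeholders (a real calculation would use converged $T$ and $\Lambda$ amplitudes and the actual property integrals over molecular spinors), so only the index structure is meaningful.
\begin{verbatim}
# Sketch: Eq. (expect_eqn) and Eq. (z_vec_prop_eqn) as einsum contractions.
# Index convention: i,j,k = occupied; a,b,c = virtual.
import numpy as np

no, nv = 4, 6                       # toy numbers of occupied/virtual spinors
rng = np.random.default_rng(1)
O_oo = rng.normal(size=(no, no))    # O(i,j) block of the property integrals
O_vv = rng.normal(size=(nv, nv))    # O(a,b)
O_ov = rng.normal(size=(no, nv))    # O(i,a)
O_vo = rng.normal(size=(nv, no))    # O(a,i)
t1 = rng.normal(size=(no, nv))           # t_i^a
t2 = rng.normal(size=(no, no, nv, nv))   # t_ij^ab (placeholder values)
l1 = rng.normal(size=(no, nv))           # lambda_a^i, stored as [i,a]
l2 = rng.normal(size=(no, no, nv, nv))   # lambda_ab^ij, stored as [i,j,a,b]

# Eq. (expect_eqn); t1.conj()/t2.conj() play the role of T1^dag/T2^dag
expval = (np.einsum('ia,ia->', O_ov, t1)
          + np.einsum('ia,ai->', t1.conj(), O_vo)
          + np.einsum('ia,ab,ib->', t1.conj(), O_vv, t1)
          - np.einsum('ia,ji,ja->', t1.conj(), O_oo, t1)
          + np.einsum('ijab,bj,ia->', t2.conj(), O_vo, t1)
          + np.einsum('ia,jb,ijab->', t1.conj(), O_ov, t2)
          - 0.5 * np.einsum('ijab,kj,ikab->', t2.conj(), O_oo, t2)
          + 0.5 * np.einsum('ijab,bc,ijac->', t2.conj(), O_vv, t2))

# Eq. (z_vec_prop_eqn): the same one-body blocks contracted with Lambda
dE = (np.einsum('ia,ia->', O_ov, t1)
      + np.einsum('ia,ai->', l1, O_vo)
      + np.einsum('ia,ab,ib->', l1, O_vv, t1)
      + np.einsum('ia,ji,ja->', l1, O_oo, t1)
      + np.einsum('ia,jb,ijab->', l1, O_ov, t2)
      - np.einsum('ia,jb,ib,ja->', l1, O_ov, t1, t1)
      - 0.5 * np.einsum('ijab,kj,ikab->', l2, O_oo, t2)
      + 0.5 * np.einsum('ijab,bc,ijac->', l2, O_vv, t2)
      - 0.5 * np.einsum('ikbc,ja,ia,jkbc->', l2, O_ov, t1, t2)
      - 0.5 * np.einsum('jkac,ib,ia,jkbc->', l2, O_ov, t1, t2))

print(expval, dE)
\end{verbatim}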
\begin{table*}[ht] \caption{ Cutoffs used and correlation energies of the ground states of Ra$^{+}$ and RaF in different basis sets } \begin{ruledtabular} \newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}} \begin{center} \begin{tabular}{lccccccccr} \mc{4}{c}{Basis} & \mc{2}{c}{Cutoff (a.u.)} & \mc{2}{c}{Spinor} & \mc{2}{c}{Correlation Energy (a.u.)}\\ \cline{1-4} \cline{5-6} \cline{7-8} \cline{9-10} Name & Nature & Ra & F & Occupied & Virtual & Occupied & Virtual & MBPT(2) & CCSD \\ \hline Ra$^{+}$ \\ A & TZ & dyall.cv3z & -- & -30 & 500 & 51 & 323 & -1.74841495 & -1.57235409 \\ B & TZ & dyall.cv3z & -- & -130 & 500 & 69 & 323 & -2.42790147 & -2.20700361 \\ C & TZ & dyall.cv3z & -- & & 500 & 87 & 323 & -2.78897499 & -2.55468917 \\ D & QZ & dyall.cv4z & -- & -30 & 20 & 51 & 349 & -1.43221422 & -1.31515023 \\ E & QZ & dyall.cv4z & -- & -130 & 20 & 69 & 349 & -1.49747209 & -1.37242346 \\ F & QZ & dyall.cv4z & -- & & 20 & 87 & 349 & -1.50382815 & -1.37827038 \\ RaF \\ G & TZ & dyall.cv3z & cc-pCVTZ & -30 & 500 & 61 & 415 & -2.09671991 & -1.91684123 \\ H & TZ & dyall.cv3z & cc-pCVTZ & -130 & 500 & 79 & 415 & -2.77624243 & -2.55153111 \\ I & TZ & dyall.cv3z & cc-pCVTZ & & 500 & 97 & 415 & -3.13733209 & -2.89923481 \\ J & QZ & dyall.cv4z & cc-pCVQZ & -30 & 20 & 61 & 449 & -1.76368821 & -1.63988444 \\ K & QZ & dyall.cv4z & cc-pCVQZ & -130 & 20 & 79 & 449 & -1.82908547 & -1.69728677 \\ L & QZ & dyall.cv4z & cc-pCVQZ & & 20 & 97 & 449 & -1.83544557 & -1.70314714\\ \end{tabular} \end{center} \end{ruledtabular} \label{basis} \end{table*} \par The matrix element of the scalar-pseudoscalar P,T-odd interaction constant, $W_{\text{s}}$, is given by \begin{eqnarray} W_{\text{s}}=\frac{1}{\Omega k_\text{s}}\langle \Psi_{\Omega}|\sum_j^n H_{\text{SP}}(j)| \Psi_{\Omega} \rangle, \label{W_s} \end{eqnarray} where $k_s$ is the dimensionless electron-nucleus scalar-pseudoscalar coupling constant, defined as Z$k_s$ = (Z$k_{s,p}$ + N$k_{s,n}$), where $k_{s,p}$ and $k_{s,n}$ are the electron-proton and electron-neutron coupling constants, respectively.\par The interaction Hamiltonian is defined as \cite{hunter_1991} \begin{eqnarray} H_{\text{SP}}= i\frac{G_{F}}{\sqrt{2}}Zk_{s} \gamma^0 \gamma^5 \rho_N(r) , \label{H_SP} \end{eqnarray} where $\rho_N(r)$ is the nuclear charge density normalized to unity and G$_F$ is the Fermi constant. The calculation of the above matrix elements depends on an accurate wavefunction in the core (near-nuclear) region, and the standard way to determine the accuracy of the electronic wavefunction in that region is to compare the theoretically calculated hyperfine structure (HFS) constant with the experimental value. The magnetic hyperfine constant of the $J^{th}$ electronic state of an atom is given by \begin{eqnarray} A_J = \frac{\vec{\mu_k}}{IJ} \cdot \langle \Psi_J | \sum_i^n \left( \frac{\vec{\alpha}_i \times \vec{r}_i}{r_i^3} \right) | \Psi_J \rangle, \label{hfs_atom} \end{eqnarray} where $\Psi_J$ is the wavefunction of the $J^{th}$ electronic state, $I$ is the nuclear spin quantum number and $\vec{\mu}_k$ is the magnetic moment of the nucleus $k$.
The parallel ($A_{\|}$) and perpendicular ($A_{\perp}$) magnetic hyperfine constants of a diatomic molecule can be written as \begin{eqnarray} A_{\|(\perp)}= \frac{\vec{\mu_k}}{I\Omega} \cdot \langle \Psi_{\Omega} | \sum_i^n \left( \frac{\vec{\alpha}_i \times \vec{r}_i}{r_i^3} \right)_{z(x/y)} | \Psi_{\Omega(-\Omega)} \rangle, \label{hfs_mol} \end{eqnarray} where the value of $\Omega$ is 1/2 for the ground electronic state ($^{2}\Sigma$) of RaF. \par \section{Computational details}\label{comp} A locally modified version of the DIRAC10 \cite{dirac10} program package is used to solve the DHF equation and to construct the one-body and two-body matrix elements and the one-electron property integrals of interest. A finite-size nucleus with a Gaussian charge distribution is considered as the nuclear model, where the nuclear parameters \cite{visscher_1997} are taken as the default values of DIRAC10. Small-component basis functions are generated from the large component by applying the restricted kinetic balance (RKB) \cite{dyall_2007} condition. The basis functions are represented in scalar basis and unphysical solutions are removed by means of the diagonalization of the free-particle Hamiltonian. This generates the electronic and positronic solutions in a 1:1 manner. In our calculations, we have used the following uncontracted basis sets: triple zeta (TZ): dyall.cv3z \cite{dyall_s} for Ra and cc-pCVTZ \cite{ccpcvxz_b-ne} for F; quadruple zeta (QZ): dyall.cv4z \cite{dyall_s} for Ra and cc-pCVQZ \cite{ccpcvxz_b-ne} for F. In the TZ basis, three calculations are done for the magnetic HFS constant of Ra$^{+}$ using 51, 69, and 87 correlated electrons; these are denoted by A, B, and C, respectively. In the QZ basis, three more calculations are done using 51, 69, and 87 correlated electrons; these are denoted by D, E, and F, respectively. The properties of RaF are calculated using the two basis sets. In the TZ basis, three calculations are done using 61, 79, and 97 correlated electrons, denoted by G, H, and I, respectively; similarly, in the QZ basis, the calculations using 61, 79, and 97 correlated electrons are denoted by J, K, and L, respectively. The bond length of RaF is taken as 4.23$a_0$ (2.24 \AA) \cite{kudashov_2014} in all our calculations. \par \section{Results and discussion}\label{res_dis} \begin{table}[b] \caption{Hyperfine coupling constant (in MHz) of $^{223}$Ra$^{+}$} \begin{ruledtabular} \begin{center} \begin{tabular}{lccr} Basis & Expectation & Z-vector & Expt. \cite{wendt_1987, neu_1989}\\ \hline A & 3458 & 3418 & -- \\ B & 3504 & 3464 & -- \\ C & 3547 & 3506 & 3404(2)\\ D & 3434 & 3394 & -- \\ E & 3448 & 3409 & -- \\ F & 3453 & 3414 & -- \\ \end{tabular} \end{center} \label{ra_hfs} \end{ruledtabular} \end{table} The aim of the present study is to exploit the RaF molecule for the eEDM experiment and to provide more accurate values of the P,T-odd interaction constants of RaF. Since there is no experimental analogue of the P,T-odd interaction constants $E_\text{eff}$ and $W_\text{s}$, the accuracy of these theoretically obtained quantities can be assessed by comparing theoretically obtained HFS values with the corresponding experimental values. Unfortunately, experimental HFS results for Ra in RaF are not available. Therefore, we compare the experimental HFS value of $^{223}$Ra$^{+}$ \cite{wendt_1987, neu_1989} with the value obtained by theory using the same basis for Ra as used for the calculation of RaF.
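The relative deviations discussed below (cf. Fig. \ref{delta}) follow directly from the entries of Table \ref{ra_hfs}; as a check, the following Python lines reproduce them (all numbers transcribed from the table):
\begin{verbatim}
# Relative deviation (delta%) of the computed Ra+ HFS constants in
# Table II from the experimental value 3404 MHz.
expt = 3404.0
expectation = {'A': 3458, 'B': 3504, 'C': 3547,
               'D': 3434, 'E': 3448, 'F': 3453}
zvector     = {'A': 3418, 'B': 3464, 'C': 3506,
               'D': 3394, 'E': 3409, 'F': 3414}

for basis in 'ABCDEF':
    d_exp = 100 * abs(expectation[basis] - expt) / expt
    d_zv  = 100 * abs(zvector[basis] - expt) / expt
    print(f'{basis}: expectation {d_exp:.2f}%  Z-vector {d_zv:.2f}%')
# Basis F, Z-vector: |3414 - 3404|/3404 = 0.29%, as quoted in the text.
\end{verbatim}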
\par In Table \ref{basis}, we present the information regarding the employed basis sets, the cutoffs used for the occupied and virtual orbitals, and the number of active spinors in the correlation calculation. We have also compiled the correlation energies obtained from second-order many-body perturbation theory (MBPT(2)) and the CCSD method in the same table. \par \begin{figure}[ht] \centering \begin{center} \includegraphics[height=5cm, width=8cm]{relative_error} \caption {Comparison of the relative deviations of the expectation value and Z-vector results from experiment.} \label{delta} \end{center} \end{figure} In Table \ref{ra_hfs}, we present the ground state ($^{2}$S) magnetic HFS constant of $^{223}$Ra$^{+}$ using both the expectation value and the Z-vector method. Our results are compared with the available experimental value \cite{wendt_1987, neu_1989}. The deviations of the Z-vector and expectation values from experiment are presented in Fig. \ref{delta}. It is clear that the deviations of the expectation value method are always greater than those of the Z-vector method. This is expected because the Z-vector method is better suited than the expectation value method for ground state properties; in fact, the Z-vector value is the corresponding expectation value plus some additional terms, which makes it closer to the FCI property value. It is interesting to note that when we go from the TZ to the QZ basis with the same number of correlated electrons (i.e., from A to D, B to E, and C to F), the relative deviations of both the Z-vector and expectation values decrease. This is because the QZ basis, in comparison to TZ, further improves the configuration space by adding one higher angular momentum basis function. It is also interesting to see that in the TZ basis, as we go from A to B and from B to C, the addition of 18 electrons (4s+3d+4p and 1s-3p) changes the Z-vector HFS constant by 46 MHz and 42 MHz, respectively. Similarly, in the QZ basis, as we go from D to E and from E to F, the addition of 18 electrons changes the Z-vector HFS constant by 15 MHz and 5 MHz, respectively. From this observation, we can conclude that core polarization plays a definite role in the correlation contribution to the HFS constant, and the effect is severe for smaller basis sets. Further, the enlargement of the basis set and the addition of core electrons have opposite effects on the calculated HFS value of Ra$^{+}$. However, the magnetic HFS constant obtained in the all-electron Z-vector calculation using the QZ basis (basis F) is very close to the experimental value ($\delta$\% = 0.29).\par \begin{table}[ht] \caption{ Molecular dipole moment ($\mu$) and magnetic HFS constants of $^{223}$Ra in RaF } \begin{ruledtabular} \newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}} \begin{center} \begin{tabular}{lcccccr} Basis & \mc{2}{c}{$\mu$ (D)} & \mc{2}{c}{A$_{\perp}$ (MHz)} & \mc{2}{c}{A$_{\|}$ (MHz)} \\ \cline{2-3} \cline{4-5} \cline{6-7} & Expect. & Z-vector & Expect. & Z-vector & Expect. & Z-vector \\ \hline G & 3.7059 & 3.7220 & 2031 & 1987 & 2123 & 2078 \\ H & 3.7028 & 3.7207 & 2059 & 2014 & 2152 & 2107 \\ I & 3.7017 & 3.7201 & 2084 & 2038 & 2178 & 2132 \\ J & 3.8404 & 3.8474 & 2029 & 1982 & 2119 & 2072 \\ K & 3.8375 & 3.8459 & 2037 & 1991 & 2128 & 2082 \\ L & 3.8374 & 3.8459 & 2040 & 1993 & 2131 & 2085 \\ \end{tabular} \end{center} \end{ruledtabular} \label{raf_hfs} \end{table} The properties described by Eqs. \ref{E_eff}, \ref{W_s} and \ref{hfs_mol} strongly depend on the electronic configuration of the given (heavy) atom and are also known as ``atom in compound (AIC)'' properties \cite{AIC}.
The accuracy of theoretically calculated AIC properties depends on an accurate evaluation of the electron density in the near-nuclear region. From the accuracy of our calculated HFS constant of Ra$^{+}$ ($\delta$\% = 0.29), we can conclude that the all-electron Z-vector calculation produces an accurate wavefunction in the vicinity of the Ra nucleus, and we expect the same kind of accuracy for the RaF molecule. \par We have calculated the molecular-frame dipole moment ($\mu$) of RaF and the perpendicular (A$_{\perp}$) and parallel (A$_{\|}$) magnetic HFS constants of $^{223}$Ra in RaF using both the expectation value and the Z-vector method. The results are compiled in Table \ref{raf_hfs}. From this table, it is clear that the inclusion of more core electrons decreases the value of $\mu$ but increases the value of the magnetic HFS constants of $^{223}$Ra in RaF. On the other hand, in going from the TZ to the QZ basis, the $\mu$ value increases but the magnetic HFS values decrease. This observation shows that the increase of the correlation space, either by the addition of core electrons or of higher angular momentum functions, has opposite effects on the near-nuclear and outer regions of the molecular wavefunction of RaF. We can thus conclude that the enlargement of the basis set and the addition of core electrons have opposite effects on the properties of RaF. \begin{table}[ht] \caption{P,T-odd interaction constants of RaF and their ratio} \begin{ruledtabular} \newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}} \begin{center} \begin{tabular}{lcccccr} Basis & \mc{2}{c}{W$_\mathrm{s}$ (kHz)} & \mc{2}{c}{E$_\mathrm{eff}$ (GV/cm)} & \mc{2}{c}{R (10$^{18}$/e cm)}\\ \cline{2-3} \cline{4-5} \cline{6-7} & Expect. & Z-vector & Expect. & Z-vector & Expect. & Z-vector\\ \hline G & 144.7 & 143.6 & 53.9 & 53.5 & 90.1 & 90.1 \\ H & 147.4 & 146.3 & 54.9 & 54.5 & 90.1 & 90.1 \\ I & 149.3 & 148.1 & 55.6 & 55.1 & 90.0 & 90.0 \\ J & 141.2 & 140.4 & 52.6 & 52.3 & 90.1 & 90.1 \\ K & 141.9 & 141.1 & 52.8 & 52.5 & 90.0 & 90.0 \\ L & 142.0 & 141.2 & 52.8 & 52.5 & 89.9 & 89.9 \end{tabular} \end{center} \end{ruledtabular} \label{raf_pt} \end{table} \par In Table \ref{raf_pt}, we present the two P,T-odd interaction constants, namely E$_\mathrm{eff}$ and W$_\mathrm{s}$. The E$_\mathrm{eff}$ value of RaF in the QZ basis from the all-electron Z-vector calculation (basis L) is 52.5 GV/cm. This E$_\mathrm{eff}$ value of RaF is even higher than the E$_\mathrm{eff}$ value of YbF in its ground state \cite{koxlov_1994, kozlov_1997, titov_1996, quiney_1998, parpia_1998, mosyagin_1998}. The W$_\mathrm{s}$ value of RaF using the Z-vector method in the same basis (QZ, all electron) is 141.2 kHz. This high value of W$_\mathrm{s}$ suggests that the S-PS interaction will also be responsible for a significant part of the P,T-odd frequency shift in the eEDM experiment. These results reveal the possibility of using RaF in a future eEDM experiment. The ratio (R) of E$_\mathrm{eff}$ to W$_\mathrm{s}$ is also calculated, as it is a very important quantity for obtaining independent limits on d$_e$ and k$_s$ from two independent experiments. Our calculated value of R using the all-electron Z-vector method in the QZ (L) basis is 89.9 in units of 10$^{18}$/e cm. Using this ratio, the relation between the independent d$_e$ and k$_s$ and the experimentally determined d$_e^{expt}$ becomes (for more details see Ref.
\cite{sasmal_hgh}) \begin{eqnarray} d_e + 5.56 \times 10^{-21} k_s = d_e^{expt}|_{\!_{k_s=0}}, \label{relation} \end{eqnarray} where $d_e^{expt}|_{\!_{k_s=0}}$ is the eEDM limit derived from the experimentally measured P,T-odd frequency shift in the limit k$_s$ = 0. \begin{table}[ht] \caption{ Comparison of the magnetic HFS constants ($^{223}$Ra), W$_\mathrm{s}$, and E$_\mathrm{eff}$ of RaF } \begin{ruledtabular} \newcommand{\mc}[3]{\multicolumn{#1}{#2}{#3}} \begin{center} \begin{tabular}{lcccr} Method & A$_{\perp}$ & A$_{\|}$ & W$_\mathrm{s}$ & E$_\mathrm{eff}$ \\ & (MHz) & (MHz) & (kHz) & (GV/cm) \\ \hline ZORA-GHF \cite{isaev_2013} & 1860 & 1900 & 150 & 45.5 \\ SODCI \cite{kudashov_2014} & 1720 & 1790 & 131 & 49.6 \\ FS-RCC \cite{kudashov_2014} & 2020 & 2110 & 139 & 52.9 \\ This work (QZ basis, all electron) & & & & \\ Expect. & 2040 & 2131 & 142.0 & 52.8 \\ Z-vector & 1993 & 2085 & 141.2 & 52.5 \end{tabular} \end{center} \end{ruledtabular} \label{comparison} \end{table} \par We have compared our calculated results with other theoretically obtained values in Table \ref{comparison}. The first {\it ab initio} calculation of W$_\mathrm{s}$ of RaF was performed by Isaev {\it et al.} \cite{isaev_2013}. They employed the two-component zeroth-order regular approximation (ZORA) generalized Hartree-Fock (GHF) method and obtained the value of W$_\mathrm{s}$ as 150 kHz. They also obtained the value of E$_\mathrm{eff}$ as 45.5 GV/cm by using the ZORA-GHF value of W$_\mathrm{s}$ and the approximate ratio between E$_\mathrm{eff}$ and W$_\mathrm{s}$. Kudashov {\it et al.} \cite{kudashov_2014} employed two different methods to incorporate relativistic and electron correlation effects: (i) the spin-orbit direct configuration interaction (SODCI) method and (ii) the relativistic two-component Fock-space coupled-cluster approach (FS-RCC) within the single- and double-excitation approximation. However, it is worth remembering that truncated CI is not size extensive and thus cannot treat electron correlation properly, especially for a heavy electronic system like RaF, where the number of electrons is large. In their FS-RCC method, Kudashov {\it et al.} \cite{kudashov_2014} calculated the properties of RaF using the finite-field method, which is a numerical technique. They corrected the errors associated with higher-order correlation effects and with the basis set by including partial triples in the CCSD model (CCSD(T)) and by enlarging the basis set, respectively. They applied those corrections only to the ground state ((0,0) sector of Fock space) coupled-cluster amplitudes. On the other hand, we have calculated the property values of RaF via two analytical methods (the expectation value and Z-vector methods) in the relativistic coupled-cluster framework within the four-component formalism. We also calculated the E$_\text{eff}$ and W$_\text{s}$ values directly by using Eqs. \ref{E_eff} and \ref{W_s}, respectively. \par \section{Conclusion}\label{conc} In conclusion, we have applied both the Z-vector and the expectation value method in the relativistic coupled-cluster framework to calculate the parallel and perpendicular magnetic HFS constants of $^{223}$Ra in RaF, and the E$_\mathrm{eff}$ and W$_\mathrm{s}$ of RaF. We have also calculated the magnetic HFS constant of $^{223}$Ra$^{+}$ to show the reliability of our results. Our most reliable values of E$_\mathrm{eff}$ and W$_\mathrm{s}$ of RaF are 52.5 GV/cm and 141.2 kHz, respectively. This shows that RaF can be a potential candidate for an eEDM experiment.
We also showed that core electrons play a significant role and that the effect is notable for smaller basis sets. Our results also show that the Z-vector method, being an energy derivative method, is much more reliable than the expectation value method. \section*{Acknowledgement} The authors acknowledge a grant from the CSIR 12th Five Year Plan project on Multi-Scale Simulations of Material (MSM) and the resources of the Center of Excellence in Scientific Computing at CSIR-NCL. S.S. and H.P. acknowledge the CSIR for their fellowships. S.P. acknowledges funding from the J. C. Bose Fellowship grant of the Department of Science and Technology (India).
\section*{Introduction} A smooth projective surface \(X\) over an algebraically closed field is said to have \emph{Bounded Negativity} if there exists a positive integer \(b(X)\) such that \(C^2 \geq -b(X)\) for any reduced curve \(C \subset X\). A folklore conjecture, going back to Enriques and discussed in \cite[Conjecture I.2.1]{Harbourne:Cracow} and \cite[Conjecture 1.1]{BNC}, is the \textbf{Bounded Negativity Conjecture. ---} \emph{Any smooth projective surface in characteristic \(0\) has Bounded Negativity.} The assumption on the characteristic cannot be dropped: if \(C\) is a curve over \(\bar{\mathbf F}_p\), then the graph \(\Gamma_{F^e} \subseteq C \times C\) of the \(p^e\)-th power Frobenius endomorphism has self-intersection \(p^e(2-2g)\), which becomes arbitrarily negative as \(e \to \infty\) when \(g \geq 2\). Nonetheless, it is conceivable that certain geometric assumptions on the surface may still guarantee Bounded Negativity in positive characteristic. For instance, \cite[discussion preceding Example 3.3.3]{RecentDevelopments} and \cite[Conjecture 2.1.2]{Harbourne-IMPAN} ask whether smooth rational surfaces over a field of positive characteristic have Bounded Negativity. We give a negative answer to this question: \newcounter{intro} \refstepcounter{intro}\textbf{Main Theorem. ---}\label{theorem} \emph{Let \(k\) be an algebraically closed field of characteristic \(p > 0\), let \(m\) be a positive integer invertible in \(k\), and let \(R_m\) be the blowup of \(\mathbf{P}^2\) along \[ Z_m \coloneqq \Set{[x_0:x_1:x_2] | x_0^m = x_1^m = x_2^m}. \] Let \(C_1 = V(x_0+x_1+x_2) \subseteq \mathbf P^2\), and for \(d \geq 1\) invertible in \(k\), write \(C_d \subseteq \mathbf P^2\) for the image of \begin{align*} \phi_d \colon C_1 &\to \mathbf{P}^2\\ [x_0:x_1:x_2] &\mapsto [x_0^d:x_1^d:x_2^d]. \end{align*} If \(dm = p^e-1\) for some positive integer \(e\), then the strict transform \(\widetilde{C}_d \subseteq R_m\) of \(C_d\) is a smooth rational curve with \(\widetilde{C}_d^2 = d(3 - m) - 1\). Thus, if \(m > 3\), the rational surface \(R_m\) does not have Bounded Negativity over \(k\).} Since \(\mathbf{P}^2\) has Bounded Negativity, this shows that \cite[Problem 1.2]{BNC:Arrangements} has a negative answer in positive characteristic: \textbf{Corollary. ---} \emph{Bounded Negativity is not a birational property of smooth projective surfaces in positive characteristic.} \qed In fact, since every smooth projective surface \(X\) admits a finite morphism \(X \to \mathbf P^2\), pulling back the blowup \(R_m \to \mathbf P^2\) gives a blowup \(\widetilde X \to X\) with a finite morphism \(\widetilde X \to R_m\). Pulling back the curves \(\widetilde C_d\) to \(\widetilde X\) shows: \textbf{Corollary. ---} \emph{If \(X\) is a smooth projective surface over an algebraically closed field \(k\) of positive characteristic, then there exists a blowup \(\widetilde X \to X\) such that \(\widetilde X\) does not have Bounded Negativity.} \qed In \parnameref{\S\!\!}{S:proof}, we give a direct proof of the \hyperref[theorem]{\textbf{Main Theorem}}. In \parnameref{\S\!\!}{S:line-config}, we realise the plane curves \(C_d\) as norms of line configurations, thereby deriving equations for them. In \parnameref{\S\!\!}{S:FerFr}, we view \(R_m\) as an isotrivial family of diagonal curves over \(C_1\) and relate the curves \(\widetilde C_d\) on \(R_m\) to graphs of Frobenius morphisms on Fermat curves. Finally, we close in \parnameref{\S\!\!}{S:char-0} with some questions and remarks towards characteristic zero.
Sections \parref{S:line-config} and \parref{S:FerFr} each give alternative methods for computing the self-intersections of \(\widetilde C_d\). Given the simplicity of the formulas for \(\widetilde C_d\) and the many connections to other well-studied examples, it is surprising that these curves have not been found before. \section*{Notation} Throughout the paper, \(k\) will be an algebraically closed field of arbitrary characteristic, and \(m\) and \(d\) will denote positive integers invertible in \(k\). We will use the notation of the \hyperref[theorem]{\textbf{Main Theorem}} throughout. \section{Proof of Main Theorem}\label{S:proof} In this section, fix \(m\) and write \(R \coloneqq R_m\) for the blowup of \(\mathbf{P}^2\) along \( Z \coloneqq Z_m\). The generators \(s_0 = x_1^m-x_2^m\), \(s_1 = x_2^m-x_0^m\), and \(s_2 = x_0^m-x_1^m\) of the ideal of~\(Z\) give a closed immersion \(R \hookrightarrow \mathbf P^2 \times \mathbf P^2\). Since \(s_0+s_1+s_2=0\), one of the \(s_i\) can be eliminated at the expense of breaking the symmetry in the computations below. \tpoint{Lemma}\label{about-R} \emph{The embedding \(R \hookrightarrow \mathbf P^2 \times \mathbf P^2\) realises \(R\) as the complete intersection \[ \Set{\big([x_0:x_1:x_2],[y_0:y_1:y_2]\big) \in \mathbf P^2 \times \mathbf P^2 | \begin{array}{c} y_0+y_1+y_2 = 0\\ x_0^my_0 + x_1^my_1 + x_2^my_2 = 0 \end{array}} \] of degrees \((0,1)\) and \((m,1)\) in \(\mathbf{P}^2 \times \mathbf{P}^2\). In particular, \(K_{R} = \mathcal{O}_{R}(m-3,-1)\).} \emph{Proof.} The generators \(s_0, s_1, s_2\) of the ideal of \(Z\) identify \(R\) as \[ \Set{\big([x_0:x_1:x_2], [y_0:y_1:y_2]\big) \in \mathbf{P}^2 \times \mathbf{P}^2 | \begin{array}{c} y_0(x_2^m-x_0^m) = y_1(x_1^m-x_2^m) \\ y_1(x_0^m-x_1^m) = y_2(x_2^m-x_0^m) \\ y_2(x_1^m-x_2^m) = y_0(x_0^m-x_1^m) \end{array}}. \] The relation \(s_0+s_1+s_2 = 0\) shows that \(R\) is contained in the locus \(y_0+y_1+y_2=0\). The equation \(y_0(x_2^m-x_0^m) = y_1(x_1^m-x_2^m)\) can be rewritten as \((y_0+y_1)x_2^m = x_0^my_0 + x_1^my_1\), which is equivalent to \(x_0^my_0+x_1^my_1+x_2^my_2 = 0\) since \(y_0+y_1+y_2=0\). The same holds for the other two equations by symmetry. The final statement then follows from the adjunction formula since \(K_{\mathbf P^2 \times \mathbf P^2} = \mathcal O(-3,-3)\). \qed Alternatively, one can observe that the complete intersection of \parnameref{Lemma}{about-R} maps birationally onto its first factor, where the fibres are points when \([x_0^m:x_1^m:x_2^m] \neq [1:1:1]\) and lines otherwise. \tpoint{Lemma}\label{coord-phi} \emph{If \(\Char k = p > 0\) and \(dm = p^e-1\) for some positive integer \(e\), then the map \(\widetilde\phi_d \colon C_1 \to \mathbf P^2 \times \mathbf P^2\) given by \[ [x_0:x_1:x_2] \mapsto \big([x_0^d:x_1^d:x_2^d], [x_0:x_1:x_2]\big) \] lands in \(R\). In particular, it is the unique map lifting \(\phi_d \colon C_1 \to \mathbf P^2\). }% \emph{Proof.} Since \(x_0+x_1+x_2=0\), the image of \(\widetilde\phi_d\) is contained in the locus \(y_0+y_1+y_2=0\). Since \(dm = p^e-1\), the expression \(x_0^my_0+x_1^my_1+x_2^my_2\) pulls back to \(x_0^{p^e}+x_1^{p^e}+x_2^{p^e}\), which vanishes because the \(p^e\)-th power Frobenius is an endomorphism. Thus \(\widetilde\phi_d\) is a lift of \(\phi_d\) to \(R\), and it is the unique lift since the first projection \(\pr_1 \colon R \to \mathbf P^2\) is birational. 
\qed \tpoint{Corollary}\label{self-intersection} \emph{The map \(\widetilde\phi_d \colon C_1 \to R\) is a closed immersion, whose image \(\widetilde{C}_d\) is a smooth rational curve in \(R\) with \(\widetilde{C}_d^2 = d(3-m)-1\).} \emph{Proof.} The first two statements follow from the coordinate expression in \parnameref{Lemma}{coord-phi}, since~\(\widetilde\phi_d\) embeds \(C_1\) linearly into the second factor. The same expression shows that \(\widetilde\phi_d^*\mathcal{O}_R(a,b) = \mathcal{O}_{C_1}(da+b)\), so \(K_{R} \cdot \widetilde{C}_d = d(m-3)-1\) by \parnameref{Lemma}{about-R}. Then the adjunction formula shows that \tagsleft@false\let\veqno\@@eqno \[ \widetilde{C}_d^2 = -2 - K_{R} \cdot \widetilde{C}_d = d(3-m)-1.\tag*{\(\blacksquare\)} \] \tagsleft@true\let\veqno\@@leqno% This completes the proof of the \hyperref[theorem]{\textbf{Main Theorem}}. \qed A consequence of \parnameref{Corollary}{self-intersection} is that the singularities of \(C_d\) are contained in \(Z\). However, the individual multiplicities are not so easy to determine. For example, in \parnameref{Lemma}{zeta-Fermat} we will compute the multiplicity of \(C_d\) at \([1:1:1]\) in terms of point counts on Fermat curves. \section{Relation with line configurations}\label{S:line-config} In this section, we observe that the \(d\)-th power maps \(\pi_d \colon \mathbf{P}^2 \to \mathbf{P}^2\) are finite Galois morphisms such that \(\pi_d^*C_d\) is the union of the Galois translates of \(C_1\). Thus the \(C_d\) are norms of line configurations, from which we derive in \parnameref{Corollary}{cor-norm} a formal product formula for the equation of the plane curves \(C_d\). In the second half of this section, we observe that in characteristic \(p > 0\) and for \(q\) a power of \(p\), the curve \(C_{q - 1}\) comes from a subconfiguration of the set of all \(\mathbf{F}_q\)-rational lines. This allows us to show in \parnameref{Corollary}{cor-eqn-char-p} that an equation of \(C_{q-1}\) in this case is the complete homogeneous polynomial of degree \(q-1\). \bpoint{Power Maps}\label{finite-morphisms} For any integer \(a \geq 1\) invertible in \(k\), write \(\pi_a\) for the \(a\)-th power map \(\mathbf P^2 \to \mathbf P^2\). Since \(\pi_a^* Z_m = Z_{am}\), the map \(\pi_a\) lifts to a finite morphism \(\widetilde\pi_a \colon R_{am} \to R_m\) given by \[ \big([x_0:x_1:x_2],[y_0:y_1:y_2]\big) \mapsto \big([x_0^a:x_1^a:x_2^a],[y_0:y_1:y_2]\big). \] Since \(a\) is invertible in \(k\), both \(\pi_a\) and \(\widetilde\pi_a\) are finite Galois with group \(G = \pmb\mu_a^3/\pmb\mu_a\), where \((\zeta_0,\zeta_1,\zeta_2) \in G\) acts on \(\mathbf P^2\) via \[ [x_0:x_1:x_2] \mapsto [\zeta_0x_0:\zeta_1x_1:\zeta_2x_2]. \] This gives a tower of extensions \begin{equation*} \begin{tikzcd}[column sep=.1em,row sep=.8em,arrows=-] R_4 \ar{rd} & R_6 \ar{d}\ar{rd} & R_9 \ar{d} & \ \scalebox{.6}[1]{$\iddots$} \ar{ld}\\ & R_2 \ar{rd} & R_3 \ar{d} & \ \scalebox{.6}[1]{$\iddots$}\ar{ld}\\ & & R_1 & & \end{tikzcd} \end{equation*} indexed by the poset of positive integers invertible in \(k\) under the divisibility relation. \tpoint{Lemma}\label{totally-split} \emph{If \(a,d \geq 1\) are invertible in \(k\), then \begin{enumerate} \item\label{split-i} The map \(\phi_d \colon C_1 \to \mathbf P^2\) is unramified and birational onto its image; \item\label{split-ii} The inverse image \(\pi_a^*C_{ad}\) is totally split into the \(G\)-translates of \(C_d\).
\end{enumerate} } \emph{Proof.} The Jacobian \((d \cdot x_0^{d-1},d \cdot x_1^{d-1},d \cdot x_2^{d-1})\) of \(\phi_d\) only vanishes when \(x_0=x_1=x_2=0\), showing that \(\phi_d\) is unramified. Then the map \(C_1 \to C_d^\nu\) to the normalisation of \(C_d\) is unramified, hence an isomorphism since it is an \'etale map of smooth projective rational curves, proving \ref{split-i}. Since \(\pi_a \circ \phi_d = \phi_{ad}\), part \ref{split-i} shows that \(\pi_a\) maps \(C_d\) birationally onto its image. This shows that the decomposition group of \(C_d\) is trivial, so no two \(G\)-translates \(\zeta C_d\) of \(C_d\) coincide and \(C_{ad}\) is totally split under \(\pi_a\). \qed \tpoint{Corollary}\label{cor-norm} \emph{If \(d\) is invertible in \(k\), then the homogeneous ideal of \(C_d \subseteq \mathbf P^2\) is generated by \[ f_d \coloneqq N_{\pi_{d,*} \mathcal O_{\mathbf P^2}/\mathcal O_{\mathbf P^2}} \big(x_0^{1/d}+x_1^{1/d}+x_2^{1/d}\big) = \prod_{\zeta,\zeta' \in \pmb\mu_d} \big(x_0^{1/d}+\zeta x_1^{1/d}+\zeta' x_2^{1/d}\big). \] }% \emph{Proof.} By \parnameref{Lemma}{totally-split} \ref{split-ii}, the inverse image \(\pi_d^{-1}(C_d)\) is the union of lines \(\bigcup_{\zeta \in \pmb\mu_d^3/\pmb\mu_d} \zeta C_1\). The result follows since \(C_1\) is cut out by \(x_0+x_1+x_2=0\). \qed \pointheader~ In general, the \(f_d\) are complicated symmetric polynomials. However, in \parnameref{Corollary}{cor-eqn-char-p} we will show that the coefficients of \(f_{q-1}\) are congruent to \(1\) modulo \(p\) if \(q\) is a power of a prime \(p\). For example, for \(q = p = 3\), we get \begin{align*} N\big(x_0^{\tfrac{1}{2}}+x_1^{\tfrac{1}{2}}+x_2^{\tfrac{1}{2}}\big) &= \Big(x^{\tfrac{1}{2}}_0+x^{\tfrac{1}{2}}_1+x^{\tfrac{1}{2}}_2\Big)\Big(x_0^{\tfrac{1}{2}}+x_1^{\tfrac{1}{2}}-x_2^{\tfrac{1}{2}}\Big)\Big(x_0^{\tfrac{1}{2}}-x_1^{\tfrac{1}{2}}+x_2^{\tfrac{1}{2}}\Big)\Big(x_0^{\tfrac{1}{2}}-x_1^{\tfrac{1}{2}}-x_2^{\tfrac{1}{2}}\Big) \\ &= x_0^2+x_1^2+x_2^2-2x_0x_1-2x_1x_2-2x_2x_0\\ &\equiv x_0^2+x_1^2+x_2^2+x_0x_1+x_1x_2+x_2x_0 \pmod 3. \end{align*} In the remainder of this section, assume \(\Char k = p > 0\) and let \(q\) be a power of \(p\). \bpoint{Finite Field Line Configurations}\label{line-config} The configuration of \(\mathbf{F}_q\)-rational lines in \(\mathbf{P}^2\) is the union of the lines \(L_a = \{a_0x_0+a_1x_1+a_2x_2 = 0\}\) indexed by \(a = [a_0:a_1:a_2] \in \check{\mathbf P}^2(\mathbf F_q)\). Their union is the divisor in \(\mathbf{P}^2\) cut out by the polynomial \[ \det\left(\begin{smallmatrix} x_0 & x_1 & x_2 \\ x_0^q & x_1^q & x_2^q \\ x_0^{q^2} & x_1^{q^2} & x_2^{q^2} \end{smallmatrix}\right) = x_0^q x_1^{q^2} x_2 - x_0^{q^2} x_1^q x_2 + x_0 x_1^q x_2^{q^2} - x_0 x_1^{q^2} x_2^q + x_0^{q^2} x_1 x_2^q - x_0^q x_1 x_2^{q^2}, \] since the three columns become linearly dependent when \(x_0\), \(x_1\), and \(x_2\) satisfy a linear relation over \(\mathbf F_q\), and the degree equals \(q^2+q+1 = |\check{\mathbf P}^2(\mathbf F_q)|\). Now \parnameref{Lemma}{totally-split}\ref{split-ii} shows that \(\pi_{q-1}^*C_{q-1}\) consists of the \(q^2 - 2q + 1\) lines \(L_a\) with all coordinates of \(a = [a_0:a_1:a_2]\) nonzero. We can thus derive an equation for \(C_{q-1}\) by extracting factors cutting out the lines \(L_a\) in which \(a\) has a vanishing coordinate.
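The \(q = 3\) computation above is easily checked by machine. The following Python snippet (using sympy; included only as a sanity check) expands the norm in the auxiliary variables \(u_i = x_i^{1/2}\), substitutes \(u_i^2 = x_i\) on the exponents, and verifies the congruence modulo \(3\):
\begin{verbatim}
# Check of the q = 3 example: expand the norm over mu_2 x mu_2 in the
# variables u_i = x_i^(1/2), then substitute u_i^2 = x_i on exponents.
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
u0, u1, u2 = sp.symbols('u0 u1 u2')

F = sp.expand(sp.prod([u0 + s*u1 + t*u2
                       for s in (1, -1) for t in (1, -1)]))

# every exponent in F is even by symmetry, so the substitution is exact
f2 = sum(c * x0**(e0 // 2) * x1**(e1 // 2) * x2**(e2 // 2)
         for (e0, e1, e2), c in sp.Poly(F, u0, u1, u2).terms())

assert sp.expand(f2 - (x0**2 + x1**2 + x2**2
                       - 2*x0*x1 - 2*x1*x2 - 2*x2*x0)) == 0

# modulo 3, f2 agrees with the complete homogeneous polynomial g_2
g2 = sum(x0**a * x1**b * x2**(2 - a - b)
         for a in range(3) for b in range(3 - a))
assert sp.Poly(f2 - g2, x0, x1, x2, modulus=3).is_zero
\end{verbatim}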
A neat description of the result comes from the following polynomial identity, also observed in \cite[p.\ 90]{RVVZ}: \tpoint{Lemma}\label{lem-generating-function} \emph{For any nonnegative integer \(n\), define the polynomials \[ g_n \coloneqq \sum_{n_0 + n_1 + n_2 = n} x_0^{n_0} x_1^{n_1} x_2^{n_2} \quad\text{and}\quad h_n \coloneqq x_0x_1^n - x_0^nx_1 + x_1x_2^n - x_1^nx_2 + x_2x_0^n - x_2^nx_0 \] in \(\mathbf Z[x_0,x_1,x_2]\). Then \(h_2 = (x_1-x_0)(x_2-x_1)(x_2-x_0)\) and \(h_n = h_2g_{n-2}\) for \(n \geq 3\). } \emph{Proof.} Let \(G(t) \coloneqq \sum_{n \geq 0} g_n t^n\) and \(H(t) \coloneqq \sum_{n \geq 0} h_n t^n\) be the generating functions of the \(g_n\) and \(h_n\), respectively. A standard computation gives \[ G(t) = \frac{1}{(1 - x_0t)(1 - x_1t)(1 - x_2t)}. \] On the other hand, writing \(h_n = (x_2 - x_1) x_0^n + (x_0 - x_2) x_1^n + (x_1 - x_0) x_2^n\) gives \[ H(t) = \frac{x_2 - x_1}{1 - x_0t} + \frac{x_0 - x_2}{1 - x_1t} + \frac{x_1 - x_0}{1 - x_2t} = \frac{(x_2 - x_1)x_2x_1 + (x_0 - x_2)x_0x_2 + (x_1 - x_0)x_0x_1}{(1 - x_0t)(1 - x_1t)(1 - x_2t)} t^2. \] The result follows by recognising the numerator as \(h_2\). \qed \tpoint{Corollary}\label{cor-eqn-char-p} \emph{Suppose \(\Char k = p > 0\) and \(q\) is a power of \(p\). Then \(g_{q-1}\) generates the homogeneous ideal of \(C_{q-1} \subseteq \mathbf P^2\). In particular, \(f_{q-1} \equiv g_{q-1} \pmod{p}\). } \emph{Proof.} Since \(C_1 = L_{[1:1:1]}\) is among the \(\mathbf{F}_q\)-rational lines of \parref{line-config} and is not a coordinate axis, the points \([x_0:x_1:x_2]\) of \(C_1 = V(x_0+x_1+x_2)\) satisfy the determinantal equation of \parref{line-config} divided by \(x_0x_1x_2\): \[ x_0^{q-1}x_1^{q^2-1}-x_0^{q^2-1}x_1^{q-1}+x_1^{q-1}x_2^{q^2-1}-x_1^{q^2-1}x_2^{q-1} +x_2^{q-1}x_0^{q^2-1}-x_2^{q^2-1}x_0^{q-1} = 0. \] Since \(C_1 \to C_{q-1}\) is \([x_0:x_1:x_2]\mapsto[x_0^{q-1}:x_1^{q-1}:x_2^{q-1}]\), any \([x_0:x_1:x_2] \in C_{q-1}\) satisfies \[ x_0x_1^{q+1}-x_0^{q+1}x_1+x_1x_2^{q+1}-x_1^{q+1}x_2+x_2x_0^{q+1}-x_2^{q+1}x_0 = 0. \] So \(h_{q+1}\) vanishes on \(C_{q-1}\), which by \parnameref{Lemma}{lem-generating-function} equals \((x_1-x_0)(x_2-x_1)(x_2-x_0)g_{q-1}\). The result follows since \(C_{q-1}\) is not contained in any of the lines \(\{x_2=x_1\}\), \(\{x_0=x_2\}\), or \(\{x_1=x_0\}\), and \(\deg g_{q-1} = q-1 = \deg C_{q-1}\). \qed \bpoint{Negative Curves via Equations}\label{intersections-components} If \(m > 3\) and \(q\) is a power of \(p\) congruent to \(1\) modulo \(m\), then the curves \(\widetilde C_d \subseteq R_m\) with \(dm = q^e - 1\) of the \hyperref[theorem]{\textbf{Main Theorem}} can therefore be obtained by starting with the very explicit equations \[ C_{q^e - 1} = V\left( \sum_{n_0+n_1+n_2=q^e-1} x_0^{n_0} x_1^{n_1} x_2^{n_2} \right) \subseteq \mathbf P^2, \] blowing up at \([1:1:1]\), pulling back along \(\widetilde \pi_m \colon R_m \to R_1\), and taking one of the \(m^2\) isomorphic components \(\zeta \widetilde C_d\) for \(\zeta \in \pmb\mu_m^3/\pmb\mu_m\). From this point of view, the self-intersection may be computed as \[ \widetilde C_d^2 = \widetilde C_d \cdot \widetilde \pi_m^*(\widetilde C_{q^e-1}) - \sum_{\zeta \neq 1} \widetilde C_d \cdot (\zeta \widetilde C_d) = (2dm-1) - 3(m-1)d = d(3-m)-1, \] since the intersection number between \(\widetilde C_d\) and a Galois translate by \(\zeta = (\zeta_0,\zeta_1,\zeta_2) \in G\setminus \{1\}\) is \[ \widetilde C_d \cdot (\zeta \widetilde C_d) = \begin{cases} d, & \zeta_0 = \zeta_1 \text{ or } \zeta_1 = \zeta_2 \text{ or } \zeta_2 = \zeta_0,\\ 0, & \text{otherwise}.
\end{cases} \] Indeed, \(\zeta\widetilde C_d\) is the image of the morphism \(\zeta \circ \widetilde \phi_d\) given by \[ [x_0:x_1:x_2] \mapsto \big([\zeta_0x_0^d:\zeta_1x_1^d:\zeta_2x_2^d],[x_0:x_1:x_2]\big). \] Thus, \(\widetilde C_d\) and \(\zeta\widetilde C_d\) only intersect when \(\zeta\widetilde \phi_d([x_0:x_1:x_2]) = \widetilde \phi_d([x_0:x_1:x_2])\). At most one of the \(x_i\) can vanish since \(x_0+x_1+x_2=0\), so there are no intersections when \(\zeta_i \neq \zeta_j\) for \(i \neq j\), and a single intersection with multiplicity \(d\) at \(V(x_k)\) when \(\zeta_i = \zeta_j\) and \(\{i,j,k\} = \{0,1,2\}\). \section{Relation with Fermat varieties and Frobenius morphisms}\label{S:FerFr} By \parnameref{Lemma}{about-R}, the second projection \(\pr_2 \colon R_m \to V(y_0+y_1+y_2)\) realises \(R_m\) as the family of diagonal degree \(m\) curves over \(C_1 \cong \mathbf P^1\) given by \[ x_0^my_0 + x_1^my_1 + x_2^my_2 = 0. \] If \(\Char k = p > 0\) and \(m\) is invertible in \(k\), then the curves \(\widetilde C_d \subseteq R_m\) for \(dm = p^e-1\) are given by sections \(\widetilde \phi_d \colon C_1 \to R_m\) of \(\pr_2\). In this section, we pull back the family \(R_m \to C_1\) and the sections \(\widetilde \phi_d\) along finite covers of \(C_1\). Pulling back along covers by Fermat curves allows us to relate the \(\widetilde C_d\) in \parnameref{Corollary}{Cd-and-graphs} with graphs of Frobenius on products of Fermat curves. Pulling back along the Frobenius morphism of \(C_1\) allows us to realise the \(\widetilde C_d\) in \parnameref{Corollary}{rel-Fr} as pullbacks of a constant section \(\widetilde C_0\) under powers of a horizontal Frobenius morphism of \(R_m\) over \(C_1\). \bpoint{Intermediate Surfaces} For positive integers \(m\) and \(n\) invertible in \(k\) and \(r \in \mathbf N\), denote by \(R_{m,n,r}\) the normal surface \[ R_{m,n,r} = \Set{\big([x_0:x_1:x_2],[y_0:y_1:y_2]\big) \in \mathbf P^2 \times \mathbf P^2 | \begin{array}{c} y_0^n+y_1^n+y_2^n = 0\\ x_0^my_0^r + x_1^my_1^r + x_2^my_2^r = 0 \end{array}}. \] It is smooth if and only if \(m = 1\) or \(r \in\{0,1\}\); in all other cases, the singular locus \(V(x_0y_0,x_1y_1,x_2y_2)\) consists of the \(3n\) points \[ \left\{\big([1:0:0],[0:s:t]\big), \big([0:1:0],[s:0:t]\big), \big([0:0:1],[s:t:0]\big)\ \Big|\ s^n+t^n = 0\right\}. \] Note that \(R_{m,1,1}\) is none other than the surface \(R_m\) of \parnameref{Lemma}{about-R}. If \(X_n\) denotes the Fermat curve \(V(y_0^n+y_1^n+y_2^n) \subseteq \mathbf P^2\) of degree \(n\), then \(R_{m,n,0}\) coincides with \(X_m \times X_n\). The surfaces \(R_{m,n,r}\) for \(r > 0\) come with a projection \[ \pr_2 \colon R_{m,n,r} \to X_n \] that is smooth away from the \(3n\) fibres above \(V(y_0y_1y_2) \subseteq X_n\), and whose singular fibres consist of \(m\) lines meeting at a point. \bpoint{Generalized Power Maps}\label{pullback} For positive integers \(a\) and \(b\) invertible in \(k\), define the finite morphism \begin{align*} \pi_{a,b} \colon R_{am,bn,br} &\to R_{m,n,r} \\ \big([x_0:x_1:x_2],[y_0:y_1:y_2]\big) &\mapsto \big([x_0^a:x_1^a:x_2^a],[y_0^b:y_1^b:y_2^b]\big). \end{align*} For \(b=1\) and \(n=r=1\), it coincides with the morphism \(\widetilde \pi_a\) from \parref{finite-morphisms}. 
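A quick check that \(\pi_{a,b}\) indeed lands in \(R_{m,n,r}\): evaluating the two defining equations of \(R_{m,n,r}\) at the image point gives
\[
(y_0^b)^n+(y_1^b)^n+(y_2^b)^n = y_0^{bn}+y_1^{bn}+y_2^{bn}, \qquad (x_0^a)^m(y_0^b)^r+(x_1^a)^m(y_1^b)^r+(x_2^a)^m(y_2^b)^r = x_0^{am}y_0^{br}+x_1^{am}y_1^{br}+x_2^{am}y_2^{br},
\]
and the right-hand sides are precisely the defining equations of \(R_{am,bn,br}\), hence vanish on it.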
When \(a=1\), these fit into pullback squares \begin{equation*} \begin{tikzcd} R_{m,bn,br} \ar{r}{\pi_{1,b}}\ar{d}[swap]{\pr_2} & R_{m,n,r}\ar{d}{\pr_2}\\ X_{bn} \ar{r} & X_n\punct{.} \end{tikzcd} \end{equation*} If \(F^e \colon X_n \to X_n\) is the \(p^e\)-th power Frobenius morphism of \(X_n\), there are pullback squares \begin{equation*} \begin{tikzcd} R_{m,n,p^er} \ar{r}\ar{d}[swap]{\pr_2} & R_{m,n,r} \ar{d}{\pr_2}\\ X_n \ar{r}{F^e} & X_n\punct{,} \end{tikzcd} \end{equation*} so \(R_{m,n,p^er}\) is the Frobenius twist \(R_{m,n,r}^{(e)}\) of \(R_{m,n,r}\) over \(X_n\). We denote the top map by \(\pi^{(e)}\). \tpoint{Lemma}\label{birational} \emph{Let \(m\) and \(n\) be positive integers invertible in \(k\), let \(a,r \in \mathbf Z\), and assume that \(r\) and \(r+am\) are nonnegative. Then the map \begin{align*} \psi_a \colon \mathbf{P}^2 \times \mathbf{P}^2 & \stackrel{\sim}{\dashrightarrow} \mathbf{P}^2 \times \mathbf{P}^2 \\ \big([x_0:x_1:x_2],[y_0:y_1:y_2]\big) &\longmapsto \big([x_0y_0^a:x_1y_1^a:x_2y_2^a],[y_0:y_1:y_2]\big) \end{align*} maps \(R_{m,n,r+am}\) birationally onto \(R_{m,n,r}\).} \emph{Proof.} Note that \(\psi_a\) is a birational map with rational inverse \(\psi_{-a}\). The result follows since \(\psi_a\) takes \(R_{m,n,r+am}\) into \(R_{m,n,r}\) and \(\psi_{-a}\) does the opposite, and neither surface is contained in the locus where \(\psi_a\) or \(\psi_{-a}\) is undefined. \qed This allows us to relate \(R_{m,m,m}\) and \(X_m \times X_m\): \tpoint{Corollary}\label{Cd-and-graphs} \emph{The surfaces \(X_m \times X_m \cong R_{m,m,0}\) and \(R_{m,m,m}\) are birational via \begin{align*} \psi \colon X_m \times X_m &\stackrel\sim\dashrightarrow R_{m,m,m}\\ \big([x_0:x_1:x_2],[y_0:y_1:y_2]\big) &\longmapsto \Big(\big[\tfrac{x_0}{y_0}:\tfrac{x_1}{y_1}:\tfrac{x_2}{y_2}\big],[y_0:y_1:y_2]\Big). \end{align*} The composition \(\rho \colon X_m \times X_m \dashrightarrow R_m\) of \(\psi\) with \(\pi_{1,m}\) is given by \[ \big([x_0:x_1:x_2],[y_0:y_1:y_2]\big) \longmapsto \Big(\big[\tfrac{x_0}{y_0}:\tfrac{x_1}{y_1}:\tfrac{x_2}{y_2}\big],[y_0^m:y_1^m:y_2^m]\Big). \] If \(\Char k = p > 0\), \(m\) is invertible in \(k\), and \(dm = p^e-1\) for some positive integer \(e\), then the strict transform of \(\pi_{1,m}^* \widetilde C_d\) under \(\psi\) is the transpose \(\Gamma_{F^e}^\top\) of the graph of the \(p^e\)-power Frobenius. } \emph{Proof.} The first statement follows by applying \parnameref{Lemma}{birational} to \(m=n=r\) and \(a=-1\), and the second is immediate from the definitions. For the final statement, recall that \(\Gamma_{F^e}^\top\) is given by the section \(s \colon X_m \to X_m \times X_m\) of \(\pr_2\) given by \[ [y_0:y_1:y_2] \mapsto \Big(\big[y_0^{p^e}:y_1^{p^e}:y_2^{p^e}\big],[y_0:y_1:y_2]\Big). \] By the first pullback square of \parref{pullback}, the curve \(\pi_{1,m}^* \widetilde C_d\) is the image of the section \(X_m \to R_{m,m,m}\) given by \[ [y_0:y_1:y_2] \mapsto \big([y_0^{dm}:y_1^{dm}:y_2^{dm}],[y_0:y_1:y_2]\big), \] which agrees with \(\psi \circ s\). \qed\pagebreak \pointheader~ The curves \(\Gamma_{F^e} \subseteq X_m \times X_m\) are the standard example of curves with unbounded negative self-intersection: the condition \(m > 3\) of the \hyperref[theorem]{\textbf{Main Theorem}} is exactly the condition \(g(X_m) > 1\) that makes \(\Gamma_{F^e}^2 = p^e(2-2g)\) negative. 
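With \(g = g(X_m) = \tfrac{(m-1)(m-2)}{2}\) and \(p^e = dm+1\), this self-intersection is consistent with \parnameref{Corollary}{self-intersection}:
\[
\Gamma_{F^e}^2 = p^e(2-2g) = (dm+1)\,m(3-m) = m^2\big(d(3-m)-1\big)+3m = m^2\,\widetilde C_d^{\,2}+3m.
\]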
In fact, since \(\Gamma_{F^e}^\top\) passes through \(3m\) of the \(3m^2\) points of indeterminacy of \(\psi\), resolving the map shows that \(m^2\,\widetilde C_d^2 = \Gamma_{F^e}^2 - 3m\). On the other hand, \(R_m \to C_1\) is an isotrivial family of diagonal degree \(m\) curves that becomes rationally trivialised over the \(m\)-th power cover \(X_m \to C_1\). Thus, we can also look directly at the pullback \(\pi^{(e)} \colon R_m^{(e)} \to R_m\) of the Frobenius \(F^e \colon C_1 \to C_1\). Note that \(R_m^{(e)} = R_{m,1,p^e}\) by \parref{pullback}, so we get: \tpoint{Corollary}\label{rel-Fr} \emph{If \(p^e = dm+1\), then \(R_m^{(e)}\) is birational to \(R_m\) via \begin{align*} \psi \colon R_m &\stackrel\sim\dashrightarrow R_m^{(e)} \\ \big([x_0:x_1:x_2],[y_0:y_1:y_2]\big) &\longmapsto \Big(\big[\tfrac{x_0}{y_0^d}:\tfrac{x_1}{y_1^d}:\tfrac{x_2}{y_2^d}\big],[y_0:y_1:y_2]\Big). \end{align*} If \(\widetilde \phi_0 \colon C_1 \to R_m\) denotes the constant section \([y_0:y_1:y_2] \mapsto \big([1:1:1],[y_0:y_1:y_2]\big)\) and \(\widetilde C_0 \subseteq R_m\) denotes its image, then \(\widetilde C_d\) is the strict transform of \(\pi^{(e),*} \widetilde C_0\) under \(\psi\). }% \emph{Proof.} The first statement follows from \parnameref{Lemma}{birational} applied to \(n=1\), \(r = p^e\), and \(a = -d\). For the second, by the second pullback square of \parref{pullback}, the curve \(\pi^{(e),*}\widetilde C_0\) is the image of the constant section \(C_1 \to R_m^{(e)}\) given by \[ [y_0:y_1:y_2] \mapsto \big([1:1:1],[y_0:y_1:y_2]\big), \] which agrees with \(\psi \circ \widetilde \phi_d\). \qed \pointheader~ Instead of the transpose \(\Gamma_{F^e}^\top\) of the graph of \(F^e\colon X_m \to X_m\), one can also look at the negative curves \(\Gamma_{F^e} \subseteq X_m \times X_m\), which are given by pulling back the diagonal along the relative Frobenius of \(\pr_2 \colon X_m \times X_m \to X_m\). Their images under the rational map \(\rho\) of \parnameref{Corollary}{Cd-and-graphs} are given by the parametrised rational curves \begin{align*} C_1 &\to R_m \\ [y_0:y_1:y_2] &\mapsto \Big(\big[y_1^dy_2^d:y_0^dy_2^d:y_0^dy_1^d\big],\big[y_0^{p^e}:y_1^{p^e}:y_2^{p^e}\big]\Big), \end{align*} where \(dm = p^e-1\) as usual. These are obtained from the curve \(\widetilde C_0\) of \parnameref{Corollary}{rel-Fr} by pulling back the strict transform of \(\widetilde C_0\) under \(\psi^{-1} \colon R_m^{(e)} \dashrightarrow R_m\) along the relative Frobenius \(F_{R_m/C_1}^e\). Note that the images of these curves in \(\mathbf P^2\) differ from the curves \(C_d\) by the Cremona transformation \[ [x_0:x_1:x_2] \mapsto [x_0^{-1}:x_1^{-1}:x_2^{-1}]. \] Finally, we relate the multiplicity of \(C_d\) at \([1:1:1]\) to point counts on the Fermat curve \(X_m\) if \(dm = p^e-1\) for some positive integer \(e\). \tpoint{Lemma}\label{zeta-Fermat} \emph{If \(dm = p^e-1\), then a point \(x \in C_1\) maps to \([1:1:1]\) in \(C_d\) if and only if there exists \(y \in X_m(\mathbf F_{p^e})\) with nonzero coordinates mapping to \(x\) under the \(m\)-th power map \(X_m \to C_1\). In particular, \[ \operatorname{mult}_{[1:1:1]} C_d = \frac{|X_m(\mathbf F_{p^e})|-3m}{m^2}. \] }% \emph{Proof.} The first statement follows since \(X_m \to C_1\) is surjective and a point \(y = [y_0:y_1:y_2]\) on \(X_m\) with nonzero coordinates maps to \([1:1:1]\) under the \((p^e-1)\)-st power map \(X_m \to C_d\) if and only if \(y \in X_m(\mathbf F_{p^e})\). 
For the second statement, note that \(\operatorname{mult}_{[1:1:1]} C_d\) equals the number of preimages of \([1:1:1]\) in \(C_1\), since \(\widetilde \phi_d \colon C_1 \to \widetilde C_d\) is an isomorphism by \parnameref{Corollary}{self-intersection}. The result now follows since \(X_m \to C_1\) is finite \'etale of degree \(m^2\) away from the coordinate axes, so each point \(x \in C_1 \setminus V(x_0x_1x_2)\) has exactly \(m^2\) preimages in \(X_m\). \qed For example, if \(\nu\) is the smallest positive integer such that \(p^\nu \equiv -1 \pmod m\), then \(X_m\) is maximal over \(\mathbf F_{p^{2\nu}}\), and \[ |X_m(\mathbf F_{p^e})| = 1 - (-1)^{e/2\nu}\,(m-1)(m-2)\, p^{e/2} + p^e \] whenever \(p^e \equiv 1 \pmod m\) \cite[Lem.\ 3.3]{ShiodaKatsura}. \section{Remarks towards characteristic \texorpdfstring{\(0\)}{0}}\label{S:char-0} \pointheader~ Although the Bounded Negativity Conjecture is currently still open in characteristic \(0\), the Weak Bounded Negativity Conjecture is known \cite{Hao}: for any smooth projective complex surface \(X\) and any \(g \in \mathbf N\), there exists a constant \(b(X,g)\) such that \(C^2 \geq -b(X,g)\) for every reduced curve \(C = \sum_i C_i\) whose components \(C_i\) have geometric genus at most \(g\). Our examples in the \hyperref[theorem]{\textbf{Main Theorem}} certainly violate this in positive characteristic, and, as we now verify, arise from the failure of the logarithmic Bogomolov--Miyaoka--Yau inequality for the pair \((R_m,\widetilde C_d)\) when \(d\) is large with respect to \(m\). In the next three paragraphs, assume \(\Char k = p > 0\) and \(dm = p^e - 1\). To ease notation, write \((R,\widetilde C)\) for \((R_m, \widetilde C_d)\). We will use logarithmic sheaves of differentials; see for example \cite[\S2]{EsnaultViehweg}. \tpoint{Lemma}\label{chern-numbers} \emph{The Chern numbers of the pair \((R, \widetilde C)\) are \begin{align*} c_1^2(R,\widetilde C) & \coloneqq c_1^2\big(\Omega^1_{R}(\log \widetilde C)\big) = d(m-3) - m^2 + 6, \\ c_2(R,\widetilde C) & \coloneqq c_2\big(\Omega^1_{R}(\log \widetilde C)\big) = m^2 + 1. \end{align*} In particular, the Chern slopes \(c_1^2(R,\widetilde C)/c_2(R,\widetilde C)\) are unbounded for fixed \(m\) and growing \(d\).} \emph{Proof.} The logarithmic sheaf of differentials fits into a short exact sequence \[ 0 \to \Omega^1_{R} \to \Omega^1_{R}(\log \widetilde C) \to \mathcal{O}_{\widetilde C} \to 0, \] so \(c_1^2(R,\widetilde C) = (K_{R} + \widetilde C)^2\) and \(c_2(R,\widetilde C) = c_2(\Omega^1_{R}) + \widetilde C(K_{R} + \widetilde C)\). Since \(R\) is the blowup of \(\mathbf P^2\) in \(m^2\) points, we get \(K_R^2=9-m^2\) and \(c_2(\Omega^1_R) = 3+m^2\), so the result follows from the computations of the intersection numbers in \parnameref{Corollary}{self-intersection}. \qed \tpoint{Lemma}\label{KC-pseff} \emph{If \(m > 3\) and \(d\) is such that \[ \chi\big(2(K_{R} + \widetilde C)\big) = d(m-3) - m^2 + 5 > 0 \] then \(\mathrm{H}^0(R, 2(K_{R} + \widetilde C)) \neq 0\). In particular, \(K_{R} + \widetilde C\) is pseudoeffective.} \emph{Proof.} The Euler characteristic statement follows from Riemann--Roch, so it remains to show that \(\mathrm{H}^0\big(R,2(K_{R} + \widetilde C)\big) \neq 0\) once \(\chi\big(2(K_{R} + \widetilde C)\big) > 0\). But \(\mathrm{H}^2\big(R,2(K_R+\widetilde C)\big) = \mathrm{H}^0(R,-K_R-2\widetilde C)^\vee\), and the latter vanishes since \(\widetilde C\) is effective and \(-K_R = \mathcal{O}_R(3-m,1)\) by \parnameref{Lemma}{about-R}.
\qed For \(d\) large with respect to \(m\), this shows that \((R, \widetilde C)\) falls into the final case considered in \cite[\S 1.2, Case 2]{Hao}, and that the failure of Weak Bounded Negativity stems from the failure of the logarithmic Bogomolov--Miyaoka--Yau inequality: \tpoint{Corollary} \emph{If \(m > 3\) and \(d > \frac{5m^2 - 2}{m - 3}\), then \(K_{R} + \widetilde C\) is pseudoeffective and \[ c_1^2(R,\widetilde C)/c_2(R,\widetilde C) > 4. \] Moreover, the pair \((R, \widetilde C)\) does not lift to the second Witt vectors \(W_2(k)\).} \emph{Proof.} The first part follows from \parnameref{Lemma}{chern-numbers} and \parnameref{Lemma}{KC-pseff}. The final statement follows from \cite[Proposition 4.3]{Langer}, since \((R, \widetilde C)\) violates the logarithmic Bogomolov--Miyaoka--Yau inequality. \qed \pointheader~ On the other hand, the surface \(R_m\) itself does lift to characteristic \(0\). This gives new examples of surfaces \(X \to \Spec \mathbf Z\) such that almost all special fibres \(X_{\bar{\mathbf F}_p}\) (namely those with \(p \nmid m\)) violate bounded negativity. The same property holds for the square \(C \times C\) of a curve \(C \to \Spec \mathbf Z\) of genus \(\geq 2\), which is the classical counterexample to bounded negativity in positive characteristic. However, the rational surface \(X = R_m\) has the additional property that the specialisation maps \(\NS(X_{\bar{\mathbf Q}}) \to \NS(X_{\bar{\mathbf F}_p})\) are isomorphisms for every prime \(p \nmid m\). \tpoint{Question} \emph{Is it possible to determine the effective cone of \(R_m\) for some \(m \geq 4\)? How does it depend on the characteristic of \(k\)? } For example, the curves in \(\mathbf P^2\) cut out by the polynomials \(g_{m-1}\) of \parnameref{Lemma}{lem-generating-function} are smooth of genus \(\tfrac{(m-2)(m-3)}{2}\) in characteristic \(0\) \cite[Thm.\ 1]{RVVZ}, and the equation \(g_{m-1}h_2 = h_{m+1}\) shows that \(V(g_{m-1}) \cup V(x_0-x_1) \cup V(x_1-x_2) \cup V(x_2-x_0)\) contains \[ Z_m' \coloneqq V\left(\begin{array}{c} x_0x_1^{m+1}-x_0^{m+1}x_1,\\ x_1x_2^{m+1}-x_1^{m+1}x_2,\\ x_2x_0^{m+1}-x_2^{m+1}x_0\end{array}\right) = Z_m \cup \left\{[s:t:0],[s:0:t],[0:s:t]\ \big|\ s^m=t^m\right\}. \] Since \(V(g_{m-1})\) has self-intersection \((m-1)^2\) and passes through the \(m^2-3m+2\) points of \(Z_m\) whose coordinates are pairwise distinct, its strict transform on \(R_m\) has self-intersection \(m-1\). On the further blowup \(R'_m\) of \(\mathbf P^2\) in \(Z'_m\), the strict transform has self-intersection \(-2m+2\), but unlike the situation described in \parref{intersections-components}, there does not appear to be an obvious way to produce infinitely many negative curves on a \emph{single} rational surface this way. When \(m = p^e\) for some prime \(p\), the specialisation to characteristic \(p\) collapses \(Z_m\) onto the point \([1:1:1]\), and the smooth curve \(V(g_{m-1})\) becomes a rational curve that is highly singular at \([1:1:1]\). Even though these curves are not negative yet (see \parref{intersections-components}), taking different values of \(e\) does give infinitely many curves on the same rational surface. \pointheader~ As far as we are aware, all known counterexamples to bounded negativity on a smooth projective surface \(X\) over an algebraically closed field \(k\) of characteristic \(p > 0\) consist of a family \(C_i\) of curves on \(X\) for which there exist constants \(a,b\) such that \(C_i^2 = ap^i + b\) for all \(i \in \mathbf N\). 
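For instance, the classical family of graphs of Frobenius on \(C \times C\) fits this pattern with a single pair:
\[
\Gamma_{F^i}^2 = (2-2g(C))\,p^i, \qquad\text{i.e.}\quad (a,b) = \big(2-2g(C),\,0\big),
\]
while the curves of the \hyperref[theorem]{\textbf{Main Theorem}} realise \((a,b) = \big(\tfrac{3-m}{m},-\tfrac{3}{m}\big)\), as noted after \parnameref{Question}{q-uniform} below.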
\tpoint{Question} \emph{If \(X\) is a surface over an algebraically closed field \(k\) of characteristic \(p > 0\), is there a finite set \(\{(a_i,b_i) \in \mathbf Q^2\ |\ i \in I\}\) such that all integral curves \(C \subseteq X\) with \(C^2 < 0\) satisfy \[ C^2 = a_ip^e+b_i \] for some positive integer \(e\) and some \(i \in I\)? If not, is there some other way in which the self-intersections of negative curves on \(X\) are ``not too scattered''? } We can also consider the following uniform version: \tpoint{Question}\label{q-uniform} \emph{If \(X \to S\) is a smooth projective surface over a finitely generated integral base scheme \(S\), does there exist a finite set \(\{(a_i,b_i) \in \mathbf Q^2\ |\ i \in I\}\) such that every geometrically integral curve \(C \subseteq X_s\) of negative self-intersection in a fibre \(X_s\) with \(\Char \kappa(s) > 0\) satisfies \[ C^2 = a_ip^e+b_i \] for some positive integer \(e\) and some \(i \in I\), where \(p = \Char \kappa(s)\)? } For example, for the surfaces \(R_m \to \Spec\mathbf Z [1/m]\) and the curves \(\widetilde C_d\) of the \hyperref[theorem]{\textbf{Main Theorem}}, we may take \(a=\tfrac{3-m}{m}\) and \(b=\tfrac{-3}{m}\), which do not depend on the characteristic of \(\kappa(s)\). \pointheader~ Despite the failure of bounded negativity in positive characteristic, a positive answer to \parnameref{Question}{q-uniform} still implies bounded negativity in characteristic \(0\) via reduction modulo primes. Indeed, the minimum \(b_{\text{min}} = \min\{b_i\ |\ i \in I\}\) is a lower bound for the self-intersection \(C^2\) of a geometrically integral curve \(C\) in the generic fibre, since the specialisations \(C_s\) of \(C\) satisfy \(C_s^2=C^2\) for all \(s \in S\) and remain geometrically integral for \(s\) in a dense open set \(U \subseteq S\), and \[ \bigcap_{p \in P} \big\{a_ip^e+b_i\ \big|\ i \in I, e \in \mathbf Z_{>0}\big\} \subseteq [b_{\text{min}},\infty) \] for any infinite set of primes \(P\). To see the displayed inclusion, note that a value \(x < b_{\text{min}}\) can only be written as \(a_ip^e+b_i\) with \(a_i < 0\), which pins down \(p^e = (x-b_i)/a_i\), so for each \(i\) at most finitely many primes \(p\) can occur. Thus, \parnameref{Question}{q-uniform} is a natural analogue of the Bounded Negativity Conjecture in positive characteristic. \section*{Acknowledgements} {\small We thank Johan de Jong, Joaqu\'in Moraga, Takumi Murayama, Will Sawin, and John Sheridan for helpful discussions. RvDdB was partly supported by the Oswald Veblen Fund at the Institute for Advanced Study. } \bibliographystyle{alphaurledit}
\section{Introduction} At low temperatures, a three-dimensional Bose gas undergoes Bose-Einstein condensation, characterized by macroscopic occupation of the zero-momentum state. This feature enabled Bogoliubov in 1947 to develop a mean field theory of the weakly interacting Bose gas \cite{bogoliubov47,stringaribook}. In this theory, the excitation spectrum acquires the so-called Bogoliubov form: \begin{align}\label{bog disp} \varepsilon_q=\sqrt{v^2 q^2+\left(\frac{q^2}{2m}\right)^2}. \end{align} Here $v$ is the sound velocity, $m$ denotes the mass of bosonic particles, while $q$ is the momentum. At low momenta, $q\ll mv$, Bogoliubov quasiparticles are phonons with linear spectrum. At high momenta, $q\gg mv$, the quasiparticle energy (\ref{bog disp}) reproduces the quadratic spectrum of the physical particles forming the Bose gas. Bogoliubov's mean field approach neglects the residual interaction between the quasiparticles. As a result of these interactions, quasiparticles are not entirely free and eventually decay. In three dimensions, the leading mechanism is the decay of a quasiparticle into two others. For quasiparticle excitations of low momenta, $q\ll mv$, the decay rate at zero temperature was found in 1958 by Beliaev \cite{beliaev58,stringaribook}. It scales with the fifth power of the quasiparticle momentum. The decay of quasiparticles is reflected in the dynamic structure factor of interacting bosons. It does not have the form of an infinitely sharp delta function, but rather that of a peak with the width determined by the decay rate. Alternatively, the decay rate can be probed by measuring the cross section for collisions of a quasiparticle with the particles of the condensate. The latter technique was used recently \cite{katz+02-beliaevdamping3d-PhysRevLett.89.220401} (see also Ref.~\cite{hodby+01PhysRevLett.86.2196}) to confirm the predictions of the Beliaev theory in three-dimensional Bose-Einstein condensates. In contrast to the three-dimensional case, bosons in one dimension do not condense due to the enhanced role of quantum fluctuations. Therefore, the Bogoliubov mean-field approach cannot be applied. Instead, Lieb and Liniger \cite{lieb1963exact} studied the model of one-dimensional bosons with contact repulsion, which allows an exact solution. This enabled them to study both the ground state properties of the system \cite{lieb1963exact} and its elementary excitations \cite{lieb1963excitations}. Importantly, unlike the three-dimensional case, there are two branches of elementary excitations, see Fig.~\ref{figLL}. The excitation of type I behaves qualitatively similarly to the Bogoliubov mode in three dimensions, and in the limit of weak interaction has been shown \cite{kulish+1976comparison} to have the dispersion (\ref{bog disp}). The second, type II excitation exists in a limited range of momenta determined by the density and describes the so-called dark soliton \cite{kulish+1976comparison,stringaribook}. At the lowest momenta the two branches approach each other, sharing a common linear part of the spectrum, see Fig.~\ref{figLL}. \begin{figure} \includegraphics[width=0.7\columnwidth]{fig1} \caption{Two branches of excitations in a one-dimensional system of bosons with contact repulsion. At small momenta the excitations on both branches are characterized by the linear spectrum, $\varepsilon_q=v |q|$, represented by the dotted line. At weak interaction, the dispersion of type I excitations deviates from linearity as $|q|^3$, while for type II as $|q|^{5/3}$.
These forms of the deviation actually hold only above a very small quantum crossover momentum, as we discuss further below.}\label{figLL} \end{figure} The type II branch bends down and thus represents the lowest energy state of the system for a given momentum. Therefore, at zero temperature these excitations cannot decay. On the other hand, momentum and energy conservation laws do not forbid the decay of the excitation of type I. A simple analysis shows that these excitations still cannot decay into two others, but decay into three quasiparticles is allowed. In addition to momentum and energy, integrable models possess a macroscopic number of additional conserved quantities. This prevents any quasiparticle decay. On the other hand, in practice no system is exactly integrable, and even the smallest deviation from integrability leads to a finite decay rate of the quasiparticles. Decay of quasiparticle excitations in one-dimensional quantum liquids is a subject of great current interest \cite{khodas+07PhysRevB.76.155402, gangardt+10PhysRevLett.104.190402, tan+10relaxation, karzig+10PhysRevLett.105.226407, micklitz2011thermalization, ristivojevic_relaxation_2013, Lin+13PhysRevLett.110.016401, matveevfurusaki13, ristivojevic_decay_2014,protopopov2014relaxation, PhysRevB.91.195110}. In this paper we study the decay of Bogoliubov quasiparticles in a system of weakly interacting bosons. In the limit of high energy of the initial quasiparticle, $q\gg mv$, this problem was addressed in Ref.~\cite{tan+10relaxation}. The integrability of the Lieb-Liniger model was broken by the addition of a weak three-body interaction \cite{muryshev_dynamics_2002,mazets_breakdown_2008}. It was shown that this perturbation leads to a finite decay rate that does not depend on the quasiparticle momentum. Unlike Ref.~\cite{tan+10relaxation}, our theory enables one to study analytically the decay of quasiparticles of arbitrary momenta. Furthermore, in addition to the effects of the three-body interaction, we study another integrability-breaking perturbation, which accounts for a finite range of the two-body interaction. This complementary term turns out to be an important factor that also affects the decay rate. A summary of our results for the decay of quasiparticles of small momenta, $q\ll mv$, has been reported in Ref.~\cite{ristivojevic_decay_2014}, where we relied on certain phenomenological properties of one-dimensional quantum liquids. The approach of the present paper is fully microscopic and enables us to find the decay rate of Bogoliubov quasiparticles in the whole range of momenta. In the cases $q\gg mv$ and $q\ll mv$, we recover the results of Refs.~\cite{tan+10relaxation} and \cite{ristivojevic_decay_2014}. The description of the excitation spectrum of a weakly interacting Bose gas in terms of Bogoliubov quasiparticles and dark solitons is applicable only at sufficiently high momenta, $q\gg q^*$, where $q^*\sim (mv)^{3/2}(\hbar n_0)^{-1/2}\ll mv$ and $n_0$ is the mean particle density \cite{khodas_photosolitonic_2008, imambekov+12RevModPhys.84.1253, pustilnik+14PhysRevB.89.100504}. Below the momentum scale $q^*$ the excitations are effective fermions \cite{rozhkov2005fermionic, imambekov+12RevModPhys.84.1253}, with type I and type II branches corresponding to quasiparticles and quasiholes, respectively. At zero temperature, fermionic quasiparticles decay at a rate that scales as the eighth power of the momentum \cite{khodas+07PhysRevB.76.155402, matveevfurusaki13}.
We apply the results of Ref.~\cite{matveevfurusaki13} to evaluate this rate in our system, thereby presenting a complete theory of the decay of type I excitations at zero temperature. The paper is organized as follows. In Sec.~\ref{sec:model} we present the hydrodynamic description of the system of weakly interacting bosons. We discuss various terms in the gradient expansion and split the Hamiltonian into a harmonic part describing the Bogoliubov quasiparticles and the anharmonic part that accounts for their interactions. In Sec.~\ref{sec:amplitude} we calculate and analyze the scattering matrix describing the decay of Bogoliubov quasiparticles with momenta $q\gg q^*$. The rate of decay is evaluated in Sec.~\ref{sec:decayrate}. In Sec.~\ref{sec:fermions} we obtain the rate of decay of fermionic quasiparticles at momenta $q\ll q^*$. We discuss our results in Sec.~\ref{sec:discussions}. Some technical details of our work are presented in the appendices. \section{Hamiltonian of weakly interacting bosons}\label{sec:model} \subsection{Microscopic model} In the representation of second quantization, the system of interacting bosons in one dimension is described by the Hamiltonian \begin{equation} \label{H} H=H_{\text{kin}}+H_{\text{int}}, \end{equation} where \begin{gather}\label{Hkin1} H_{\text{kin}}=\frac{\hbar^2}{2m} \int{d} x (\nabla\Psi^\dagger)(\nabla\Psi),\\ \label{Hint1} H_{\text{int}}=\frac{1}{2}\int{d} x{d} x'\,g(x-x')n(x)n(x'). \end{gather} Here Eq.~(\ref{Hkin1}) is the kinetic energy, while Eq.~(\ref{Hint1}) describes the interaction between the bosons. By $\Psi(x)$ and $\Psi^\dagger(x)$ we denote the bosonic single particle field operators that satisfy the standard commutation relation $[\Psi(x),\Psi^\dagger(x')]=\delta(x-x')$. The mass of bosonic particles is $m$. The repulsive two-particle interaction in Eq.~(\ref{Hint1}) is described by the short-ranged function $g(x)$, while $n=\Psi^\dagger\Psi$ denotes the density of particles. In the following we consider the case of weak interaction. This regime is defined by the condition \begin{align}\label{cweak} \int{d} x g(x)\ll \frac{\hbar^2 n_0}{m}, \end{align} where $n_0$ denotes the mean density. The Hamiltonian $H$ provides a microscopic description for an arbitrary system of bosons in one dimension interacting via a pairwise interaction. In some special cases Eqs.~(\ref{H})--(\ref{Hint1}) describe the so-called integrable models. Throughout this paper, we will be particularly interested in the Lieb-Liniger model, which is defined by the contact interaction $g(x)=g \delta(x)$. The integrability of this model allows an exact solution by means of the Bethe ansatz technique \cite{lieb1963exact,lieb1963excitations}. On the other hand, because of the integrability there is no decay of quasiparticle excitations in this model. In this paper we consider leading corrections to the Lieb-Liniger model that break the integrability and thus ensure the decay of quasiparticles. Since there is no well-established way to develop perturbation theory starting from the Bethe ansatz, here we develop an alternative theoretical description. It is based on the microscopic hydrodynamic approach that enables us to study both the excitations and their decay. Unlike the Bethe ansatz, this approach is limited to weak interactions, but it has the advantage that its applicability is not limited to integrable models. Experimentally, the system of one-dimensional bosons can nowadays be routinely realized with cold atomic gases \cite{blochRevModPhys.80.885}.
Starting from the three-dimensional system of bosons, one applies an external potential to confine the particle motion to one dimension. At energies smaller than the inter-subband spacing of the confining potential, one effectively obtains a one-dimensional system of interacting bosons. In such situations, making use of the Hamiltonian in the form (\ref{H})--(\ref{Hint1}) to describe the system is a priori not justified. Instead, one must carefully derive the corresponding one-dimensional model. For a typical experimental situation of bosonic atoms in a harmonic confining potential interacting via a short-range potential \cite{olshanii1998}, the effective one-dimensional model has been derived in several papers \cite{muryshev_dynamics_2002,mazets_breakdown_2008,tan+10relaxation}. The kinetic energy in the effective model of bosons is still described by Eq.~(\ref{Hkin1}). However, the interaction term takes a more complicated form \begin{align}\label{Hint} H_\text{int}'=\frac{1}{2}\int{d} x{d} x'\,g(x-x')n(x)n(x') -\frac{\hbar^2}{m}\alpha\int{d} x\,n^3. \end{align} In Refs.~\cite{muryshev_dynamics_2002, mazets_breakdown_2008, tan+10relaxation}, the two-body interaction in Eq.~(\ref{Hint}) was found to be of the contact type, $g(x)=g\delta(x)$. In comparison to Eq.~(\ref{Hint1}), the last term in Eq.~(\ref{Hint}) is new and has the meaning of an effective three-body interaction. It was obtained \cite{muryshev_dynamics_2002, mazets_breakdown_2008, tan+10relaxation} by accounting for the effect of virtual transitions of bosons into higher radial modes. An important property of the last term in the interaction Hamiltonian (\ref{Hint}) is that it breaks the integrability of the Lieb-Liniger model, and thus enables the decay of quasiparticles. In addition, we modify the interaction Hamiltonian by assuming that the two-body interaction potential $g(x)$ has a finite width, which amounts to adding another integrability-breaking perturbation. In the following we refer to $H$ as defined by $H=H_{\text{kin}}+H_{\text{int}}'$ and treat both perturbations on an equal footing. \subsection{The density-phase representation} \label{density-phase} The Hamiltonian of the system of weakly interacting bosons, given by Eqs.~(\ref{H}), (\ref{Hkin1}), and (\ref{Hint}), is expressed in terms of the bosonic field operators $\Psi(x)$ and $\Psi^\dagger(x)$. For our purposes it is convenient to apply the hydrodynamic approach \cite{popov72,haldane81prl,cazalilla+2011RMP}, in which the field operators are expressed in terms of the particle density $n(x)$ and its conjugate field $\theta(x)$ that can be thought of as the superfluid phase. In the regime of weak interaction the resulting Hamiltonian is then naturally expressed as a sum of the contribution $H_0$ that is quadratic in the new fields and the higher-order perturbations $V_3$, $V_4$, etc. In this representation $H_0$ naturally accounts for the Bogoliubov quasiparticles, while the perturbations describe the interactions between quasiparticles that enable their decay. We start by expressing the bosonic field operators in terms of the density and phase fields using the so-called Madelung representation \cite{popov72,haldane81prl} \begin{align}\label{madelung} \Psi=e^{-i\theta}\sqrt n,\quad \Psi^\dagger=\sqrt n\,e^{i\theta}. \end{align} The operators $\Psi(x)$ and $\Psi^\dagger(x)$ expressed in this fashion have the usual bosonic commutation relations provided $[n(x),\theta(x')]=-i\delta(x-x')$.
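As a guide to the algebra that follows, note that at the level of classical fields, where the ordering of the operators $n$ and $\theta$ may be ignored, the gradient of the field (\ref{madelung}) is
\[
\nabla\Psi=e^{-i\theta}\big(\nabla\sqrt{n}-i\sqrt{n}\,\nabla\theta\big), \qquad (\nabla\Psi^\dagger)(\nabla\Psi)=\frac{(\nabla n)^2}{4n}+n(\nabla\theta)^2,
\]
where we used $(\nabla\sqrt{n})^2=(\nabla n)^2/4n$.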
Substituting Eq.~(\ref{madelung}) into the kinetic energy (\ref{Hkin1}) of the Hamiltonian, one obtains \cite{popov72} \begin{align}\label{Hkin} H_{\mathrm{kin}}=\frac{\hbar^2}{2m}\int{d} x\left[n(\nabla\theta)^2+\frac{(\nabla n)^2}{4n}\right]. \end{align} The next step is to express the density as \begin{align}\label{density} n=n_0+\frac{1}{\pi}\nabla\varphi, \end{align} where $n_0$ is the mean particle density and the new bosonic field $\varphi$ satisfies the commutation relation \begin{equation} \label{eq:commutator_phi_theta} [\nabla\varphi(x),\theta(x')]=-i\pi\delta(x-x'). \end{equation} The hydrodynamic approach is applicable as long as the length scale associated with the density fluctuations is large compared with the distance between particles $n_0^{-1}$. In this regime the density fluctuations are small, $|\nabla\varphi|\ll n_0$, and the square root in Eq.~(\ref{madelung}) is real. We now take advantage of the smallness of $|\nabla\varphi|/n_0$ and expand the Hamiltonian in powers of the bosonic fields $\varphi$ and $\theta$. The expansion starts with quadratic contributions. The standard Luttinger liquid form \begin{align}\label{HLL} H_{LL}=\int{d} x \left[\frac{\hbar^2n_0}{2m}(\nabla\theta)^2 +\frac{g}{2\pi^2}(\nabla\varphi)^2\right] \end{align} is obtained from the first term in the kinetic energy (\ref{Hkin}) and the first term in Eq.~(\ref{Hint}). Here $g=g_0$, where $g_q=\int{d} x e^{-iq x/\hbar}g(x)$ denotes the Fourier transform of the interaction potential. Apart from Eq.~(\ref{HLL}), there are a number of additional quadratic terms in the Hamiltonian. First, the three-body interaction in Eq.~(\ref{Hint}) upon substitution of Eq.~(\ref{density}) generates the contribution \begin{align}\label{H03} -\frac{3\alpha\hbar^2n_0}{\pi^2 m}\int{d} x (\nabla\varphi)^2. \end{align} Second, the so-called quantum pressure, given by the second term in Eq.~(\ref{Hkin}), and the two-particle interaction term in Eq.~(\ref{Hint}) give rise to \begin{align}\label{H2kinint} \frac{\chi^2\hbar^2}{8\pi^2 m n_0}\int{d} x(\nabla^2\varphi)^2, \end{align} where \begin{align}\label{chi2} \chi^2=1+2mn_0 \frac{{d}^2 g_q}{{d} q^2}\bigg{|}_{q=0}. \end{align} The interaction contribution to Eq.~(\ref{chi2}) arises from expanding the Fourier transform of the potential, $g_q\approx g_0+\frac{1}{2}q^2\,{{d}^2 g_q}/{{d} q^2}\big|_{q=0}$, in the two-body term of Eq.~(\ref{Hint}); the $q^2$ correction translates into the $(\nabla^2\varphi)^2$ term, while the quantum pressure is responsible for the unity in Eq.~(\ref{chi2}). For contact interaction, $g_q=\mathrm{const}$, i.e., $\chi^2=1$. In this special case the two-particle interaction does not contribute to Eq.~(\ref{H2kinint}). Finally, for noncontact interactions the first term in Eq.~(\ref{Hint}) generates contributions proportional to $(\nabla^3\varphi)^2$, $(\nabla^4\varphi)^2$, etc. Such contributions become important only at very high momenta and therefore will be neglected. Collecting the terms of Eqs.~(\ref{HLL}), (\ref{H03}), and (\ref{H2kinint}), we obtain the quadratic Hamiltonian \begin{align}\label{H0} H_0=\frac{\hbar v}{2\pi}\int{d} x \left\{K(\nabla\theta)^2 +\frac{1}{K}\left[(\nabla\varphi)^2 +\frac{2\chi^2\hbar^2}{{q}_0^2}(\nabla^2\varphi)^2\right] \right\}. \end{align} In Eq.~(\ref{H0}), the sound velocity $v$ satisfies \begin{align}\label{v} v^2=\frac{gn_0}{m}-\frac{6\alpha\hbar^2 n_0^2}{m^2}, \end{align} the crossover momentum ${q}_0$ is introduced as \begin{gather} \label{q0} {q}_0=\sqrt{8}mv, \end{gather} while the Luttinger liquid parameter is defined as \begin{align}\label{K} K=\frac{\pi\hbar n_0}{mv}. \end{align} The regime of weak interactions considered in this paper corresponds to $K\gg1$, cf. Eq.~(\ref{cweak}). The strength of the three-particle interaction is quantified by the dimensionless coupling constant $\alpha$ [see Eq.~(\ref{Hint})].
In this paper we will require this perturbation to have only a weak effect on the physical properties of the Bose gas. It is instructive to consider the effect of the three-particle interaction on the sound velocity. From Eq.~(\ref{v}) we conclude that the correction to $v$ is small provided \begin{align}\label{A} A=K^2\alpha \ll 1. \end{align} Since $K\gg1$, this condition is more restrictive than the naive expectation $\alpha\ll1$. We will see below that other physical quantities of interest are also controlled by the parameter $A$ rather than $\alpha$. In addition to $H_0$, the original hydrodynamic Hamiltonian contains a number of contributions of higher order in $\varphi$ and $\theta$ that describe the interactions between quasiparticles. The cubic correction to $H_0$ is \begin{align}\label{V3general} V_3={}&\frac{\hbar^2}{m}\int{d} x \biggl[a_1 (\nabla\varphi)(\nabla\theta)^2- \frac{a_2}{n_0^2}(\nabla^2\varphi)^2(\nabla\varphi)\notag\\ &-\frac{\alpha}{\pi^3}(\nabla\varphi)^3 \biggr], \end{align} where for convenience we introduced \begin{align}\label{a1a2} a_1=\frac{1}{2\pi},\quad a_2=\frac{1}{8\pi^3}. \end{align} The first term in Eq.~(\ref{V3general}) arises from the first term in the kinetic energy (\ref{Hkin}). The second term in Eq.~(\ref{V3general}) emerges from the expansion of the second term in Eq.~(\ref{Hkin}). The last term in Eq.~(\ref{V3general}) originates from the second term in Eq.~(\ref{Hint}). In order to evaluate the decay rate of excitations with momenta $q\sim q_0$ one has to account for the contributions to the Hamiltonian that are quartic in $\varphi$ and $\theta$. We write the corresponding term as \begin{align}\label{V4general} V_4=\frac{\hbar^2}{mn_0}\int{d} x \left[\frac{{a}_3}{n_0^2} (\nabla^2\varphi)^2(\nabla\varphi)^2+\beta (\nabla\varphi)^4\right], \end{align} where \begin{align}\label{a3} {a}_3=\frac{1}{8\pi^4}, \quad \beta=0. \end{align} The first term in Eq.~(\ref{V4general}) appears from the expansion of the quantum pressure term in Eq.~(\ref{Hkin}). The second term in $V_4$ is not generated in the formal expansion of the Hamiltonian given by Eqs.~(\ref{Hint}) and (\ref{Hkin}). We added it to Eq.~(\ref{V4general}) with a formally vanishing coefficient for completeness and future convenience (see Appendix \ref{section:lagrange}). So far we have expanded our Hamiltonian to the fourth order in the bosonic fields. The terms $V_3$ and $V_4$ will be used to evaluate the decay rate of Bogoliubov quasiparticles with momenta of order $q_0$, where the crossover from linear to quadratic behavior of the quasiparticle dispersion (\ref{bog disp}) occurs. To understand why the subsequent higher-order terms can be neglected, one can analyze the low-energy scaling of the Hamiltonian. Such an analysis is performed in Appendix \ref{scaling}, where we show that our expansion of the Hamiltonian in powers of the bosonic fields $\varphi$ and $\theta$ is in fact an expansion in the small parameter $1/\sqrt K$. In particular, we find $V_3\propto 1/\sqrt{K}$ and $V_4\propto 1/K$. \subsection{Normal mode expansion} \label{normal} Our next goal is to obtain Bogoliubov quasiparticles as normal modes of the quadratic Hamiltonian (\ref{H0}).
To this end we express the fields $\varphi$ and $\theta$ in terms of bosonic quasiparticle operators $b_q$ and $b_q^\dagger$ via the relations \begin{gather}\label{nablaphi} \nabla\varphi(x)=\sum_q \sqrt{\frac{\pi^2 n_0}{2Lm\varepsilon_q}}|q|e^{i q x/\hbar} (b_{-q}^\dagger+b_q),\\ \label{nablatheta} \nabla\theta(x)=\sum_q \sqrt{\frac{m \varepsilon_q}{2 L\hbar^2 n_0}}\,\text{sgn}(q)e^{i q x/\hbar} (b^\dagger_{-q}-b_{q}). \end{gather} Here $L$ denotes the system size. As a result, the Hamiltonian (\ref{H0}) takes the diagonal form \begin{align}\label{H0diag} H_{0}=\sum_q\varepsilon_q b_q^\dagger b_q, \end{align} with the excitation spectrum given by \begin{align}\label{Eq} \varepsilon_q=\sqrt{v^2q^2+\chi^2\left(\frac{q^2}{2m}\right)^2}. \end{align} For the Lieb-Liniger model, we have $\chi=1$, and the spectrum coincides with the well-known expression (\ref{bog disp}) of Ref.~\cite{kulish+1976comparison}. A deviation of the spectrum (\ref{Eq}) from the form (\ref{bog disp}) appears in the case of a nonvanishing range of the interaction between the bosons. This deviation is most important at high momenta $q\gg q_0$, where $\varepsilon_q \simeq \chi q^2/2m$ rather than $q^2/2m$. The latter expression represents the energy of a highly excited boson, which essentially does not interact with other bosons because of its high momentum $q$. This physics is not captured by the hydrodynamic theory, which is applicable only at $q\ll \hbar n_0$. As we show in Appendix~\ref{scaling}, the anharmonic terms (\ref{V3general}) and (\ref{V4general}) represent corrections to the quadratic Hamiltonian $H_0$ that are small as $1/\sqrt K$ and $1/K$, respectively. As a result, they do not affect the excitation spectrum significantly. On the other hand, they represent the residual interactions between the quasiparticles that enable a finite decay rate. Using the normal mode representation (\ref{nablaphi}) and (\ref{nablatheta}), the cubic anharmonic term (\ref{V3general}) becomes \begin{align}\label{V3} V_3={}&\frac{\pi v^2}{\sqrt{8Lmn_0}} \sum_{q_1,q_2,q_3}\frac{|q_1q_2q_3|} {\sqrt{\varepsilon_{q_1}\varepsilon_{q_2}\varepsilon_{q_3}}} \delta_{q_1+q_2+q_3,0}\notag\\ &\times\biggl[\frac{1}{3}f_+\left(q_1,q_2,q_3\right) (b^\dagger_{q_1}b^\dagger_{q_2}b^\dagger_{q_3} + \text{h.c.})\notag\\ &+f_-\left(q_1,q_2,q_3\right) (b^\dagger_{q_1} b^\dagger_{q_2}b_{-q_3} + \text{h.c.}) \biggr], \end{align} where the dimensionless functions are \begin{align}\label{fpm} f_{\pm}(q_1,q_2,q_3)={}&\frac{{a}_1}{v^2}\left( \frac{\varepsilon_{q_1}\varepsilon_{q_2}} {q_1q_2} \pm \frac{\varepsilon_{q_1}\varepsilon_{q_3}} {q_1q_3} \pm\frac{\varepsilon_{q_2}\varepsilon_{q_3}} {q_2q_3}\right)\notag\\ &+\frac{8\pi^2{a}_2}{q_0^2}(q_1q_2+q_1q_3+q_2q_3) -\frac{3A}{\pi^3}. \end{align} Similarly, the quartic anharmonic term (\ref{V4general}) transforms to \begin{align}\label{V4} V_4={}&\frac{\pi^2 v^2}{4Lmn_0} \sum_{q_1,q_2,q_3,q_4}\biggl[f(q_1,q_2,q_3,q_4) \delta_{q_1+q_2+q_3+q_4,0}\notag\\ &\times\prod_{i=1}^{4}\frac{|q_i|}{\sqrt{\varepsilon_{q_i}}} (b^\dagger_{q_i}+b_{-q_i})\biggr], \end{align} where \begin{align}\label{f3} f(q_1,q_2,q_3,q_4) ={}&-\frac{4\pi^2{a}_3}{3q_0^2} (q_1q_2+ q_1q_3+q_1q_4\notag\\ &+q_2q_3+q_2q_4+q_3q_4)+B, \end{align} with $B=K^2\beta$. We will now apply the results (\ref{V3})--(\ref{f3}) to the evaluation of the decay rate of Bogoliubov quasiparticles. \section{Scattering matrix element}\label{sec:amplitude} The spectrum of a Bogoliubov quasiparticle in a weakly interacting Bose gas is given by Eq.~(\ref{Eq}).
The presence in the Hamiltonian of weak anharmonic perturbations, such as $V_3$ and $V_4$, means that the quasiparticles are weakly interacting. This generally leads to their decay. Our goal is to study the decay of a state with a single quasiparticle as a function of its momentum $Q$. For one-dimensional particles with the spectrum (\ref{Eq}), decay into two quasiparticles is incompatible with simultaneous conservation of energy and momentum. Indeed, since $\varepsilon_q/|q|$ grows with $|q|$, two quasiparticles propagating in the same direction carry more energy than a single quasiparticle with the same total momentum, and a final state containing a counterpropagating quasiparticle is even more costly. The simplest allowed decay process corresponds to three particles in the final state, see Fig.~\ref{fig1}. It will become clear below that this is the dominant decay channel in a weakly interacting Bose gas. \begin{figure} \includegraphics[width=0.6\columnwidth]{fig2} \caption{In a one-dimensional Bose gas, a quasiparticle excitation of momentum $Q$ decays into three excitations with momenta $q_1,q_2$, and $q_3$. Using the conservation laws, one finds that two quasiparticles in the final state propagate in the direction of the initial quasiparticle, while the remaining one propagates in the opposite direction.}\label{fig1} \end{figure} We start our evaluation by considering the scattering matrix element $\mathcal{A}_{fi}$ for the decay of the initial state $|i\rangle=b_Q^\dagger|0\rangle$ into the final one $|f\rangle=b_{q_1}^\dagger b_{q_2}^\dagger b_{q_3}^\dagger|0\rangle$. $\mathcal{A}_{fi}$ is defined in terms of the $T$-matrix as \begin{align} \mathcal{A}_{fi} =\langle 0| b_{q_1} b_{q_2} b_{q_3}|T|b^\dagger_{Q}|0\rangle. \label{matrix_element} \end{align} Such a matrix element can be obtained in a number of ways. The simplest contribution is of first order in the quartic term $V_4$ [Eq.~(\ref{V4})], which allows for a direct transition between the initial and final states. Alternatively, the same transition can be accomplished in second order in the cubic perturbation $V_3$ [Eq.~(\ref{V3})]. In a weakly interacting Bose gas, i.e., at $K\gg1$, the two perturbations are small, $V_3\propto 1/\sqrt K$ and $V_4\propto1/K$. As a result, the two contributions to the matrix element (\ref{matrix_element}) appear in the same order, $\mathcal{A}_{fi}\propto1/K$. A straightforward argument shows that higher-order anharmonic perturbations to the Hamiltonian $H_0$ give rise to parametrically smaller contributions to the matrix element (\ref{matrix_element}). Accounting only for the leading contributions, we find \begin{align}\label{Afi} \mathcal{A}_{fi}=\langle f|V_4|i\rangle+\sum_m\frac{\langle f|V_3|m\rangle\langle m|V_3|i\rangle} {\varepsilon_Q-E_m}. \end{align} Here the summation is over the intermediate states $|m\rangle$, whose energies are denoted by $E_m$. The contribution to the scattering matrix element due to the quartic anharmonic term (\ref{V4}) arises from the combinations of operators in $V_4$ that contain three creation and one annihilation operator. There are four such terms. After a simple calculation one obtains \begin{align}\label{A4full} \langle f|V_4|i\rangle ={}&\frac{6\pi^2 v^2}{L m n_0} \frac{|Qq_1q_2q_3|}{\sqrt{\varepsilon_{Q} \varepsilon_{q_1} \varepsilon_{q_2} \varepsilon_{q_3}}}\notag\\ &\times f(Q,-q_1,-q_2,-q_3)\delta_{Q,q_1+q_2+q_3}, \end{align} where the function $f$ is defined in Eq.~(\ref{f3}). The calculation of the contribution to the scattering matrix element (\ref{Afi}) that arises from $V_3$ is more involved and deferred to Appendix \ref{Appendix:amplitude}.
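Physically, two types of intermediate states $|m\rangle$ contribute to the second-order term in Eq.~(\ref{Afi}). In processes of the first type, the $b^\dagger b^\dagger b$ part of $V_3$ splits the initial quasiparticle into two, one of which subsequently splits again, so that $E_m$ is the energy of a two-quasiparticle state. In processes of the second type, the $b^\dagger b^\dagger b^\dagger$ part of $V_3$ creates three quasiparticles in addition to the initial one, which is subsequently absorbed at the second vertex; in this case $\varepsilon_Q-E_m$ reduces to minus the sum of three quasiparticle energies. Both structures are recognizable in the two energy denominators of the function $F$ introduced below.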
Accounting for Eq.~(\ref{A4full}), the final result for the scattering matrix element (\ref{Afi}) is \begin{align}\label{Afifinal} \mathcal{A}_{fi}={}&\frac{\pi^2 v^2}{2Lmn_0} \frac{|Qq_1q_2q_3|}{\sqrt{\varepsilon_Q \varepsilon_{q_1} \varepsilon_{q_2} \varepsilon_{q_3}}} \bigl[F(Q,q_1,q_2,q_3)\notag\\ &+F(Q,q_2,q_1,q_3) +F(Q,q_3,q_2,q_1)\notag\\ &+12f(-Q,q_1,q_2,q_3)\bigr] \delta_{Q,q_1+q_2+q_3}, \end{align} where we introduced the dimensionless function \begin{align}\label{Fdef} F&(q_1,q_2,q_3,q_4)= \frac{v^2(q_1-q_2)^2}{\varepsilon_{q_1-q_2}}\notag\\ &\times\biggl[ \frac{f_-(q_4, q_3,-q_3-q_4)f_-(q_1-q_2,q_2,-q_1)} {\varepsilon_{q_1}-\varepsilon_{q_2}-\varepsilon_{q_1-q_2}} \notag\\ &-\frac{f_-(q_1,-q_1+q_2,-q_2)f_+(-q_3-q_4,q_3,q_4)} {\varepsilon_{q_3}+\varepsilon_{q_4}+\varepsilon_{q_1-q_2}} \biggr]. \end{align} Here $\varepsilon_q$ and the functions $f_\pm$ are defined by Eqs.~(\ref{Eq}) and (\ref{fpm}), respectively. The scattering matrix element (\ref{Afifinal}) has some important general properties. Since $F(q_1,q_2,q_3,q_4)=F(q_1,q_2,q_4,q_3)$, the matrix element (\ref{Afifinal}) is symmetric with respect to the exchanges of the momenta of the excitations in the final state. This is a manifestation of the fact that Bogoliubov quasiparticles obey bosonic statistics. More importantly, one can show that at \begin{align}\label{LLlimit} A=B=0,\quad \chi=1 \end{align} the result (\ref{Afifinal}) vanishes, provided that $q_1$, $q_2$, $q_3$, and $Q$ satisfy the conservation laws of momentum and energy. This is because under the conditions (\ref{LLlimit}) our theory describes the weakly interacting Lieb-Liniger model. The latter is integrable, and its quasiparticles do not decay. We now simplify the scattering matrix element (\ref{Afifinal}) in the regimes of small and large momenta. \subsection{Small momentum region} At small momentum of the initial excitation, $Q\ll {q}_0$, the other three momenta are also small compared to $q_0$. In this regime we have been able to simplify the expression (\ref{Afifinal}) considerably, as discussed in Appendix \ref{appendix:amplitudeexpansion}. The final result takes the form \begin{align}\label{AfifinallowQ} \mathcal{A}_{fi}=\frac{\Lambda}{2Lm n_0}\sqrt{|Qq_1q_2q_3|}\delta_{Q,q_1+q_2+q_3}, \end{align} where the momentum-independent coefficient $\Lambda$ is given by \begin{align}\label{Lambda<general} \Lambda={}&12\pi^2B-{6\pi^2{a}_1^2}+\frac{24\pi^4{a}_1{a}_2}{\chi^2}\notag\\ &- \frac{{A}}{\pi} \left(18{a}_1+ \frac{24\pi^2{a}_2}{\chi^2}\right). \end{align} Using the values of $a_1$, $a_2$, and $\beta\equiv B/K^2$ given by Eqs.~(\ref{a1a2}) and (\ref{a3}), in the leading order in small $1-\chi$ we obtain \begin{align}\label{LambdalowQ} \Lambda= -\frac{3\Omega}{\pi^2}, \end{align} where we defined \begin{align}\label{Omega} \Omega=4A-\pi^2(1-\chi). \end{align} We observe again that in the Lieb-Liniger limit (\ref{LLlimit}) the scattering matrix element vanishes. \subsection{Large momentum region} At large momentum of the initial excitation, $Q\gg {q}_0$, we have also been able to considerably simplify the matrix element (\ref{Afifinal}).
The main steps are described in Appendix \ref{appendix:amplitudeexpansion}, resulting in \begin{gather}\label{AfilargeQ} \mathcal{A}_{fi}=\frac{2mv^2}{L n_0}\Xi \delta_{Q,q_1+q_2+q_3}, \end{gather} where \begin{align}\label{Lambda>} \Xi={}&12\pi^2B-\frac{23\pi^2{a}_1^2}{8}+\frac{13\pi^4{a}_1{a}_2}{\chi^2}+\frac{26\pi^6{a}_2^2}{\chi^4} \notag\\ &-\frac{4\pi^4{a}_3}{\chi^2} -\frac{A}{\pi}\left(\frac{21}{2}{a}_1 +\frac{30\pi^2{a}_2}{\chi^2}\right). \end{align} Substituting the specific values of the parameters of our Hamiltonian from Eqs.~(\ref{a1a2}) and (\ref{a3}), in the leading order in small $1-\chi$ we find \begin{align} \label{LambdalargeQ} \Xi=-\frac{9\Omega}{4\pi^2}. \end{align} As expected, in the Lieb-Liniger case (\ref{LLlimit}) the scattering matrix element $\mathcal{A}_{fi}=0$. \section{Decay rate}\label{sec:decayrate} Let us now evaluate the rate of decay of a quasiparticle of momentum $Q>0$ at zero temperature. The dominant decay process is illustrated in Fig.~\ref{fig1}. The corresponding rate of decay is given by the Fermi golden rule expression \begin{align}\label{rate def} \frac{1}{\tau}=\frac{2\pi}{\hbar} \sum_{q_1,q_2,q_3}\!\!\!\!'\,|\mathcal{A}_{fi}|^2 \delta(\varepsilon_{Q}-\varepsilon_{q_1}- \varepsilon_{q_2}-\varepsilon_{q_3}). \end{align} The matrix element $\mathcal{A}_{fi}$ describing the decay of the initial quasiparticle excitation of momentum $Q$ into three quasiparticles with momenta $q_1$, $q_2$, and $q_3$ is given by Eq.~(\ref{Afifinal}). The prime symbol in Eq.~(\ref{rate def}) denotes the summation over distinct final states. The conservation laws of energy and momentum \begin{gather}\label{momentumcons} Q=q_1+q_2+q_3,\\ \label{energycons} \varepsilon_Q=\varepsilon_{q_1}+ \varepsilon_{q_2}+ \varepsilon_{q_3}, \end{gather} ensure that out of the three new quasiparticles two propagate in the same direction as the initial quasiparticle, $q_1,q_2>0$, while the third one is counterpropagating, $q_3<0$, see Fig.~\ref{fig1}. Conditions (\ref{momentumcons}) and (\ref{energycons}) enable us to express the momentum of the counterpropagating quasiparticle as a function of $Q$ and one of the two remaining momenta, for example, $q_1$. We denote it as $q_3\equiv q_3(Q,q_1)$. With the help of the two conservation laws we now easily perform two summations in Eq.~(\ref{rate def}), yielding \begin{align}\label{rate final uneval} \frac{1}{\tau}= \frac{L^2}{4\pi\hbar^3} \int_0^{Q}{d} q_1 \frac{|\mathcal{{A}}(Q,q_1,Q-q_1-q_3,q_3)|^2} {|\varepsilon'_{Q-q_1-q_3}-\varepsilon'_{q_3}|}, \end{align} where \begin{align} \varepsilon'_{q}=v\,\text{sgn}(q) \frac{1+4\chi^2\frac{q^2}{{q}_0^2}} {\sqrt{1+2\chi^2\frac{q^2}{{q}_0^2}}}. \end{align} In the following we use Eq.~(\ref{rate final uneval}) to evaluate the quasiparticle decay rate as a function of $Q$. \subsection{Regime of low momenta} Let us first consider the case of low momentum of the initial excitation, $Q\ll{q}_0$, with ${q}_0$ defined in Eq.~(\ref{q0}). In this regime the excitation spectrum is almost linear and thus the denominator in Eq.~(\ref{rate final uneval}) simplifies to $2v$. Using the conservation laws (\ref{momentumcons}) and (\ref{energycons}) we find the leading order result for the momentum of the counterpropagating excitation \begin{align}\label{q3small} q_3= -\frac{3Qq_1}{2{q}_0^2}(Q-q_1).
\end{align} Substituting it into Eq.~(\ref{rate final uneval}) with the matrix element given by (\ref{AfifinallowQ}), after integration we obtain \begin{align}\label{ratefinallowQgeneral} \frac{1}{\tau}=\frac{9\sqrt{2}}{5\pi} \frac{\Omega^2}{K^4} \frac{T_d}{\hbar} \left(\frac{Q}{{q}_0}\right)^7. \end{align} Here we introduced the quantum degeneracy temperature $T_d=\hbar^2 n_0^2/m$. In the limit of contact interaction we have $\chi=1$, and the decay rate (\ref{ratefinallowQgeneral}) becomes \begin{align}\label{ratefinallowQ} \frac{1}{\tau}=\frac{144\sqrt{2}}{5\pi} \alpha^2\frac{T_d}{\hbar} \left(\frac{Q}{{q}_0}\right)^7. \end{align} This result was found earlier in Ref.~\cite{ristivojevic_decay_2014} using a phenomenological approach, in which the phonon is treated as a mobile impurity. Here we rederived that result fully microscopically and generalized it to the case of noncontact interaction. \subsection{Regime of high momenta} Now we consider the case of large momentum of the initial excitation, $Q\gg{q}_0$. The conservation laws (\ref{momentumcons}) and (\ref{energycons}) can be easily solved when all quasiparticles are in the quadratic part of the spectrum. One finds \begin{gather}\label{eq:q2bigQ} q_2=\frac{1}{2}\left[Q-q_1+\sqrt{(Q-q_1)(Q+3q_1)}\right],\\ \label{eq:q3bigQ} q_3=\frac{1}{2}\left[Q-q_1-\sqrt{(Q-q_1)(Q+3q_1)}\right]. \end{gather} The latter expressions enable us to simplify the denominator in Eq.~(\ref{rate final uneval}), which becomes $2\sqrt{2}v\sqrt{(Q-q_1)(Q+3q_1)}/q_0$. Here we have kept only the leading order in small $1-\chi$. Using the matrix element (\ref{AfilargeQ}) and the expression \begin{align} \int_0^Q \frac{{d} q_1}{\sqrt{(Q-q_1)(Q+3q_1)}}=\frac{2\sqrt{3}\pi}{9}, \end{align} we obtain the decay rate of quasiparticles of large momenta: \begin{align}\label{ratefinalhighQgeneral} \frac{1}{\tau}=\frac{9\sqrt{3}}{8} \frac{\Omega^2}{K^4} \frac{T_d}{\hbar}. \end{align} We note that the expression (\ref{rate final uneval}) contains regions of integration where the momentum $q_1$ is close either to zero or to $Q$. In these regions two quasiparticles of the final state are in the linear part of the spectrum, where the approximations (\ref{eq:q2bigQ}) and (\ref{eq:q3bigQ}) fail. We checked that the contributions arising from these boundary regions give only a subleading correction to the decay rate (\ref{ratefinalhighQgeneral}). In the limit of contact interaction, $\chi=1$, and Eq.~(\ref{ratefinalhighQgeneral}) reduces to \begin{align}\label{ratefinalhighQtan} \frac{1}{\tau}=18\sqrt{3}\alpha^2\frac{T_d}{\hbar}. \end{align} This result was obtained earlier in Ref.~\cite{tan+10relaxation} using a different approach. \subsection{The crossover regime} In the regime of intermediate momenta, $Q\sim {q}_0$, complete analytical evaluation of the decay rate (\ref{rate final uneval}) is a challenging problem. However, we are able to express it in the form \begin{align}\label{rate-general} \frac{1}{\tau}=\frac{\Omega^2}{K^4} \frac{T_d}{\hbar}\mathcal{F}\left(\frac{Q}{q_0}\right), \end{align} where $\Omega$ is given by Eq.~(\ref{Omega}). The analytical form of the function $\mathcal{F}$ is given by Eqs.~(\ref{E17})-(\ref{E20}) of Appendix \ref{section:lagrange}. It has the asymptotic behavior \begin{align}\label{FF} \mathcal{F}(X)=\begin{cases}\frac{9\sqrt{2}}{5\pi}X^7,\quad &X\ll 1,\\ \frac{9\sqrt{3}}{8},\quad &X\gg 1.
\end{cases} \end{align} The latter result agrees with the decay rates already calculated in the limiting cases of low [Eq.~(\ref{ratefinallowQgeneral})] and high [Eq.~(\ref{ratefinalhighQgeneral})] momenta. In Fig.~\ref{fig3crossover} we plot the function $\mathcal{F}$. \begin{figure} \includegraphics[width=0.95\columnwidth]{fig3} \caption{Plot of the function $\mathcal{F}(X)$ given by Eqs.~(\ref{E17})-(\ref{E20}) that enters the relaxation rate (\ref{rate-general}). The inset shows the limiting behavior of $\mathcal{F}(X)$ at $X\to 0$.}\label{fig3crossover} \end{figure} \section{Decay of fermionic excitations at low energies}\label{sec:fermions} The description of the elementary excitations of a weakly interacting Bose gas in terms of phonons with the Bogoliubov dispersion (\ref{Eq}) is applicable only at sufficiently high momenta. Indeed, the correction to the linear spectrum $\varepsilon_q=v|q|$ in Eq.~(\ref{Eq}) is due to the term proportional to $(\nabla^2\varphi)^2$ in the Hamiltonian (\ref{H0}). At $q\to0$ the relative significance of a perturbation in the Hamiltonian is determined by its scaling dimension, which for the operator $(\nabla^2\varphi)^2$ is four. On the other hand, perturbations $\nabla \varphi (\nabla\theta)^2$ and $(\nabla\varphi)^3$ of lower scaling dimension three are also present in the Hamiltonian, see Eq.~(\ref{V3general}). At the lowest energies, the latter perturbations control the physics of the elementary excitations and their spectrum \cite{rozhkov2005fermionic}. Specifically, the excitations at $q\to0$ are fermions with the spectrum \begin{equation} \label{eq:fermion_spectrum} \varepsilon_q=v|q|+\frac{q^2}{2m^*}+\frac16\lambda^* |q|^3+\ldots. \end{equation} Most importantly, unlike for the Bogoliubov dispersion (\ref{Eq}), the leading correction is quadratic, with a finite effective mass $m^*$. To determine $m^*$ and $\lambda^*$ it is sufficient to consider the low-momentum part of the hydrodynamic Hamiltonian, accounting for the right-moving excitations only. This is accomplished by substituting \begin{eqnarray} \label{eq:phi_chiral} \varphi&=&\frac{\sqrt K}{2} (\phi^L+\phi^R), \\ \label{eq:theta_chiral} \theta&=&\frac{1}{2\sqrt K} (\phi^L-\phi^R) \end{eqnarray} into Eqs.~(\ref{H0}), (\ref{V3general}), and (\ref{V4general}) and limiting oneself to terms containing only the right-moving field $\phi^R$. The leading operator of this form \begin{equation} \label{eq:tilde_H_LL} \tilde H_{LL}=\frac{\hbar v}{4\pi}\int dx \left(\nabla \phi^R\right)^2 \end{equation} is simply the right-moving part of the Luttinger liquid Hamiltonian (\ref{HLL}). It has scaling dimension two and is responsible for the linear part of the excitation spectrum in Eq.~(\ref{eq:fermion_spectrum}). The terms of scaling dimensions three and four can be combined into \begin{equation} \label{eq:H_KdV} H_{\rm KdV}=\frac{\hbar^2}{12\pi m^*}\int dx \left[\left(\nabla \phi^R\right)^3 + a^*\left(\nabla^2 \phi^R\right)^2\right], \end{equation} where \begin{eqnarray} \label{eq:effective_mass} \frac{1}{m^*}&=&\frac{1}{m}\, \frac{3}{4\sqrt K}\left(1-\frac{2}{\pi^2}A\right), \\ a^*&=&\frac{\hbar\chi^2 \sqrt K}{2mv}\left(1-\frac{2}{\pi^2}A\right)^{-1}. \end{eqnarray} The Hamiltonian (\ref{eq:H_KdV}) describes one of the possible realizations of the quantum KdV problem \cite{sasaki_field_1987, pogrebkov_boson-fermion_2003}. The spectrum of elementary excitations in this model has been recently studied in detail in Ref.~\cite{pustilnik_fate_2015}.
At $q\to0$ this spectrum has the Taylor expansion (\ref{eq:fermion_spectrum}) with $\lambda^*=\chi^2/4m^2v$. The crossover from fermionic excitations to phonons with the Bogoliubov dispersion occurs at the momentum scale $q^*\sim\hbar/a^*\sim q_0/\sqrt K\ll q_0$. At $Q\ll q^*$ type I and type II excitations (see Fig.~\ref{figLL}) correspond to fermionic quasiparticles and quasiholes, respectively. In the absence of integrability, quasiparticles can decay at zero temperature, with a rate that scales as the eighth power of momentum \cite{khodas+07PhysRevB.76.155402,matveevfurusaki13}, \begin{equation} \label{eq:fermionic_decay} \frac1\tau=\frac{3}{5120\pi^3}\, \frac{\tilde\Lambda^2Q^8}{\hbar^5m^* v^2}. \end{equation} A general expression for the coefficient $\tilde\Lambda$ in terms of the parameters $v$, $m^*$ and $\lambda^*$ was obtained in Ref.~\cite{matveevfurusaki13}. At weak interactions, $K\gg 1$, the expression for $\tilde\Lambda$ simplifies significantly, \begin{equation} \label{eq:tilde_Lambda} \tilde\Lambda=-\frac{2\pi}{3m^*} \frac{\partial}{\partial n_0}\left(a^*\sqrt K\right). \end{equation} This result was recently obtained for a one-dimensional Wigner crystal \cite{pustilnik_solitons_2015}, whose low-energy excitations are also described by the Hamiltonian in the form of Eqs.~(\ref{eq:tilde_H_LL}) and (\ref{eq:H_KdV}). In the integrable case of the Lieb-Liniger model, achieved at $A=0$ and $\chi=1$, one easily sees that $a^*\sqrt K$ does not depend on the particle density $n_0$, and the decay rate vanishes. Taking into consideration the integrability-breaking perturbations described by the parameters $A$ and $1-\chi$, which both scale linearly with $n_0$ [see Eqs.~(\ref{A}) and (\ref{chi2})], we obtain \begin{equation} \label{eq:tilde_Lambda_result} \tilde\Lambda=-\frac{2\hbar^2\Omega}{3m^*m^2v^2}. \end{equation} Substituting this expression into Eq.~(\ref{eq:fermionic_decay}) we find the decay rate of the fermionic quasiparticle in the form \begin{equation} \label{eq:fermionic_decay_result} \frac{1}{\tau}=\frac{9}{20\pi}\, \frac{\Omega^2}{K^{7/2}}\, \frac{T_d}{\hbar}\, \left(\frac{Q}{q_0}\right)^8. \end{equation} Reassuringly, at the crossover between Bogoliubov phonons and fermions, i.e., at $Q\sim q_0/\sqrt K$, both the expressions (\ref{ratefinallowQgeneral}) and (\ref{eq:fermionic_decay_result}) predict a very small rate $\tau^{-1}\sim\Omega^2(T_d/\hbar)K^{-15/2}$. \section{Discussion}\label{sec:discussions} In this paper we studied the decay of type~I excitations in a one-dimensional system of weakly interacting bosons at zero temperature. The approach we used was based on the hydrodynamic description of the system, which limits the momenta of the bosons to $Q\ll\hbar n_0$. Two additional momentum scales play important roles in this system. First, the momentum $q_0=\sqrt8mv\sim \hbar n_0/K$ determines the crossover between the linear and quadratic dependences of the excitation energy (\ref{Eq}) on momentum. Second, at the momentum scale $q^*\sim q_0/\sqrt K$ the nature of type~I excitations changes from fermionic quasiparticles at $Q\ll q^*$ to phonons at $Q\gg q^*$. We note that at weak interactions the Luttinger liquid parameter $K\gg1$, thus $q^*\ll q_0\ll \hbar n_0$. Our main result (\ref{rate-general}) applies in the region $q^*\ll Q\ll\hbar n_0$ and accurately describes the crossover region $Q\sim q_0$. In addition, we obtained the decay rate of the fermionic quasiparticles at $Q\ll q^*$.
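All of these results rest on the kinematics fixed by the conservation laws (\ref{momentumcons}) and (\ref{energycons}). As an independent consistency check, the following minimal Python sketch (our own illustration, not part of the derivation; it assumes units $\hbar=v=q_0=1$ and sets $\chi=1$, so that $\varepsilon_q=|q|\sqrt{1+2q^2}$) solves the conservation laws numerically for the momentum $q_3(Q,q_1)$ of the counterpropagating quasiparticle and compares the result with the limiting expressions (\ref{q3small}) and (\ref{eq:q3bigQ}):
\begin{verbatim}
# Numerical check of the decay kinematics (illustration only).
# Units: hbar = v = q0 = 1, chi = 1, so eps(q) = |q| sqrt(1 + 2 q^2).
import numpy as np
from scipy.optimize import brentq

def eps(q):
    return np.abs(q) * np.sqrt(1.0 + 2.0 * q**2)

def q3_exact(Q, q1):
    # Root of eps(q1) + eps(Q - q1 - q3) + eps(q3) = eps(Q) with q3 < 0.
    f = lambda q3: eps(q1) + eps(Q - q1 - q3) + eps(q3) - eps(Q)
    return brentq(f, -Q, -1e-12)

# Small-Q regime: compare with q3 = -(3/2) Q q1 (Q - q1), Eq. (q3small).
Q, q1 = 0.01, 0.004
print(q3_exact(Q, q1), -1.5 * Q * q1 * (Q - q1))

# Large-Q regime: compare with Eq. (eq:q3bigQ).
Q, q1 = 50.0, 20.0
print(q3_exact(Q, q1),
      0.5 * (Q - q1 - np.sqrt((Q - q1) * (Q + 3 * q1))))
\end{verbatim}
In both regimes the numerical root rapidly approaches the corresponding asymptotic expression as $Q/q_0$ is decreased or increased, respectively.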
Although we are not able to describe the crossover at $Q\sim q^*$, our results (\ref{ratefinallowQgeneral}) and (\ref{eq:fermionic_decay_result}) for $Q\gg q^*$ and $Q\ll q^*$, respectively, give decay rates of the same order of magnitude when extrapolated to $Q\sim q^*$. This strongly indicates that no additional crossover regions remain unexplored. It is instructive to compare our result (\ref{rate-general}) to those of earlier work on weakly interacting bosons. In the case of contact two-body repulsion the system is described by the Lieb-Liniger model, in which case the integrability prevents decay of excitations. A small perturbation commonly added to the system to break integrability is the three-body interaction given by the second term in Eq.~(\ref{Hint}). In this case the regimes $Q\gg q_0$ and $q^*\ll Q\ll q_0$ were studied in Refs.~\cite{tan+10relaxation} and \cite{ristivojevic_decay_2014}, respectively. Our main result (\ref{rate-general}) recovers the corresponding expressions (\ref{ratefinalhighQtan}) and (\ref{ratefinallowQ}) for the decay rate and accurately describes the crossover between them. An alternative way to break the integrability of the Lieb-Liniger model is to consider a two-body interaction of small but finite range. Our theory incorporates this perturbation on an equal footing with the three-body interactions. The relative significance of the two perturbations depends on the specific model of interacting bosons. In the case of atoms confined to one dimension by a trap, we expect the three-body interaction to dominate \cite{ristivojevic+matveev-unpublished}. On the other hand, noncontact interactions in a purely one-dimensional model should generate the three-body interactions in the effective low-energy theory, in which case both perturbations may be of the same order of magnitude. To illustrate this point, we have considered the hyperbolic Calogero-Sutherland model in the regime of weak short-range interaction. It is defined by the two-body interaction of the form \cite{sutherland} \begin{align}\label{sinh22} g(x)=\frac{\hbar^2}{m} \frac{\lambda(\lambda-1)\kappa^2} {\sinh^2(\kappa x)}. \end{align} In the limit when $\kappa\to+\infty$ and $\lambda\to+0$, such that $c=2\kappa\lambda$ is kept fixed, the scattering matrix of the potential (\ref{sinh22}) coincides with that of the potential $g(x)=(\hbar^2 c/m)\delta(x)$ \cite{sutherland}. Therefore, in this limit the model (\ref{sinh22}) is equivalent to the Lieb-Liniger model. We then obtained the excitation spectrum of the model (\ref{sinh22}) at large but finite $\kappa$, see Appendix \ref{sec:calogero-sutherland}. Using the latter, we have found the values of the parameters $\alpha$ and $\chi$ that quantify the two integrability-breaking perturbations: \begin{align} \alpha=-\frac{\pi^2c^2}{24\kappa^2},\quad \chi=1+\frac{\pi^2 cn_0}{6\kappa^2}. \end{align} We observe that for the integrable model (\ref{sinh22}), the combination (\ref{Omega}) becomes \begin{align}\label{univratio} \Omega=4K^2\alpha-\pi^2(1-\chi)=0. \end{align} We therefore conclude that the two perturbations give comparable contributions to the scattering amplitude corresponding to the decay process, which for the model (\ref{sinh22}) cancel each other. This cancellation was, of course, expected, as the hyperbolic Calogero-Sutherland model (\ref{sinh22}) is integrable for any $\kappa$ and $\lambda$ \cite{sutherland}. \section*{Acknowledgements} We acknowledge stimulating discussions with L.~I.~Glazman and M.~Pustilnik. K.~A.~M.
is grateful to Laboratoire de Physique Th\'{e}orique, Toulouse, where part of the work was performed, for hospitality. Work by K.~A.~M.~was supported by the U.S.~Department of Energy, Office of Science, Materials Sciences and Engineering Division.
2,869,038,155,821
arxiv
\section{Introduction} Two fundamental structures for understanding the geometrical aspects of quantum states are the quantum metric tensor formulated by Provost and Vallee~\cite{Provost1980,Wootters1981} and the geometric phases, in particular, the phase discovered by Berry~\cite{Berry45}. The quantum metric tensor is defined in the parameter space and measures the distance between two states corresponding to infinitesimally different parameters. Remarkably, the singularities of this metric are associated with quantum phase transitions exhibited by the corresponding system~\cite{Zanardi2007,SHI-JIAN2010}. Further, the geodesics induced by this metric can also indicate the presence of quantum phase transitions~\cite{Kumar2012,Kumar2014}. More generally, the quantum metric tensor has played an essential role in diverse physical phenomena (see Ref.~\cite{Ozawa2018} and references therein). Berry's phase is the extra phase acquired by the wave function when the system undergoes an adiabatic excursion along a closed path in the parameter space and can be understood as an integral of a curvature~\cite{Simon1983}, the so-called Berry curvature. This phase was analyzed in various contexts~\cite{Wilczek1984,Zhang2005,Mikko2008,Xiao2010}, and, interestingly, it is also connected with quantum phase transitions~\cite{Zhu2006}. These approaches to quantum phase transitions based on the metric and the Berry phase can be unified in terms of the critical singular behavior of the quantum geometric tensor~\cite{VenutiZanardi2007,SHI-LIANG2008}, whose real part gives the quantum metric tensor whereas the imaginary part gives the Berry curvature. On the other hand, Berry's phase possesses a classical counterpart known as Hannay's angle~\cite{Hannay1985}. For classical integrable systems, it is an extra angle shift picked up by the angle variables of the system when the parameters undergo a closed adiabatic excursion in the parameter space. This classical angle was investigated in a variety of systems~\cite{Khein1993,BerryMorgan1996,Golin1989,Chattopadhyay2018}, and the semiclassical relation between it and Berry's phase was established in Ref.~\cite{Berry1985} and has been verified in many systems~\cite{Berry1985,Datta1989,Biswas1990,Brihaye1993}. In the light of this and given the close relationship between the quantum metric tensor and Berry's curvature, a natural question arises: What about the classical analog of the quantum metric tensor? It is well known that, in the context of thermodynamic systems, Weinhold~\cite{Weinhold1975} and later Ruppeiner~\cite{Ruppeiner1979} proposed classical metrics in the parameter space which are defined as the Hessian of a thermodynamic potential. For Weinhold's metric, the potential is the internal energy, whereas for Ruppeiner's metric the potential is the entropy. In spite of the existence of these classical metrics, no analogous construction has so far been given in the context of classical mechanical systems. In this paper, we present a meaningful metric tensor for classical integrable systems, which is defined in the parameter space and is the classical analog of the quantum metric tensor. These metrics are analogous in the sense that both yield the same parameter structure, modulo the use of the Bohr-Sommerfeld quantization rule for action variables. This means that we can extract the same (or almost the same) ``relevant'' information from either of these metrics.
This important feature will be exhibited by the three examples that we have considered: the generalized harmonic oscillator, the generalized harmonic oscillator with a linear term, and the quartic anharmonic oscillator. Another important property of this classical metric, which is shared with Hannay's angle, is that it is gauge invariant in the parameter space in that it does not depend on the choice of the point of origin from which we measure the angle variables. The fundamental building blocks from which the classical metric is constructed are certain functions that generate displacements in the parameter space. By promoting these classical functions to quantum operators, we also find alternative expressions for the quantum metric tensor and Berry's connection. The paper is organized as follows. In Sec.~\ref{sec:QIM} we briefly review some basics about the quantum metric tensor. In Sec.~\ref{sec:classical} we define the notion of distance on the parameter space between points in phase space and derive the classical analog of the quantum metric tensor. In Sec.~\ref{Examples} we compute and compare this classical metric and the quantum metric tensor for the considered systems. Section \ref{sec:alternative} presents alternative expressions for the quantum metric tensor and Berry's connection. Finally, Sec. \ref{sec:Conclusions} is devoted to conclusions and directions for future research. \section{Quantum metric tensor}\label{sec:QIM} In this section, we briefly review the definition of the quantum metric tensor. We start by considering a quantum theory defined by a set of phase space operators $\hat{q}=\{\hat{q}^a\}$ and $\hat{p}=\{\hat{p}_a\}$ ($a,b,\dots\! =1,\dots,n$) together with a Hamiltonian operator $\hat{H}(\hat q, \hat p ; x)$, which depends on this set and also depends smoothly on a set of $N\geq2$ external parameters $x=\{x^i\}$ ($i,j,\dots\!=1,\dots,N$) that are regarded as slowly varying functions of the time~$t$ (adiabatic parameters) and parametrize some $N$-dimensional parameter manifold $\mathcal{M}$. Assuming that $\hat{H}[x(t)]$ has at least one eigenvector $\ket{n[x(t)]}$ with nondegenerate eigenvalue $E_n[x(t)]$, the adiabatic theorem states that if the system is initially prepared in $\ket{n[x(0)]}$, then during the quantum adiabatic evolution it will remain in the same state $\ket{n[x(t)]}$. This fact means that, under a small change of points $x \rightarrow x'=x+\delta x$ in $\mathcal{M}$, the state $\ket{n(x)}$ will become $\ket{n(x')}$. In consequence, the distance between the states $\ket{n(x)}$ and $\ket{n(x')}$ is defined by \begin{equation}\label{QIMdistance} dl^2\equiv 1 - | \braket{n(x)}{n(x')} |^2, \end{equation} where $f=| \braket{n(x)}{n(x')}|$ is the \textit{fidelity} and measures the similarity between states. After expanding $\ket{n(x')}$ into a second-order Taylor series, Eq.~(\ref{QIMdistance}) can be expressed as $dl^2\simeq g^{(n)}_{ij} \delta x^i \delta x^j$ where \begin{eqnarray}\label{QIM} g^{(n)}_{ij}(x) \equiv {\rm Re} \left(\braket{\partial_i n}{\partial_j n} - \braket{\partial_i n}{n}\braket{ n}{\partial_j n}\right), \end{eqnarray} is the \textit{(abelian) quantum metric tensor}~\cite{Provost1980}. An alternative expression for this metric derived from the Lagrangian formalism is given in Ref.~\cite{Alvarez-Jimenez2017}. Throughout this paper, we adopt the convention that repeated indices $i,j,\dots,$ are summed from $1$ to $N$, and $\partial_i := \partial/ \partial x^i$.
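As a simple illustration of the definition (\ref{QIM}) (our own example, not used in what follows), consider a single qubit in the state $\ket{n(x)}$ with components $(\cos(\theta/2),e^{i\phi}\sin(\theta/2))$ and parameters $x=(\theta,\phi)$, for which the quantum metric tensor is the Fubini-Study metric $g^{(n)}_{ij}=\mathrm{diag}(1,\sin^{2}\theta)/4$. The following minimal Python sketch recovers this result directly from the fidelity in Eq.~(\ref{QIMdistance}):
\begin{verbatim}
# Quantum metric of a qubit from the fidelity (illustration only).
import numpy as np

def state(x):
    th, ph = x
    return np.array([np.cos(th / 2), np.exp(1j * ph) * np.sin(th / 2)])

def dl2(x, dx):
    # Eq. (QIMdistance): dl^2 = 1 - |<n(x)|n(x + dx)>|^2.
    f = abs(np.vdot(state(x), state(x + dx)))
    return 1.0 - f**2

def metric(x, h=1e-5):
    # Polarization identity for the quadratic form dl^2 = g_ij dx^i dx^j.
    e = np.eye(2) * h
    g = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            g[i, j] = (dl2(x, e[i] + e[j]) - dl2(x, e[i])
                       - dl2(x, e[j])) / (2 * h * h)
    return g

x = np.array([0.7, 1.3])
print(metric(x))                                 # numerical g_ij
print(np.diag([0.25, 0.25 * np.sin(x[0])**2]))   # exact result
\end{verbatim}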
For the purposes of this paper, it is convenient to cast Eq.~(\ref{QIM}) in terms of operators. Let $\hat{P}_i$ be Hermitian operators and consider that $\hat{P}_i\delta x^i$ is the generator of the displacement $\ket{n(x)}\rightarrow\ket{n(x')}$. Thus, the translated state is \begin{equation}\label{QIMOperator} \ket{n(x')}=\exp\left(-\frac{{\rm i }}{\hbar} \delta x^i\hat{P}_i\right)\ket{n(x)}. \end{equation} From this equation, by expanding to first order in $\delta x^i$, we have \begin{equation}\label{QIMOperator2} {\rm i } \hbar \ket{\partial_i n(x)}=\hat{P}_i \ket{n(x)}, \end{equation} which, when substituted into Eq.~(\ref{QIM}), leads to~\cite{Provost1980} \begin{equation}\label{QIM2} g^{(n)}_{ij}(x) = \frac{1}{\hbar^2} {\rm Re} \left( \langle \hat{P}_i \hat{P}_j \rangle_n -\langle \hat{P}_i \rangle_n \langle \hat{P}_j\rangle_n \right), \end{equation} where $\langle \hat{X} \rangle_n\equiv\bra{n}\hat{X}\ket{n}$ is the expectation value of $\hat{X}$ with respect to the state $\ket{n}$. It should be noted that because of the Hermiticity of $\hat{P}_i$, the right-hand side (r.h.s) of Eq.~(\ref{QIM2}) is symmetric. Furthermore, the line element $dl^2 = g^{(n)}_{ij} \delta x^i \delta x^j$ now reads \begin{equation}\label{Qdistance} dl^2=\frac{1}{\hbar^2 }\langle \Delta \hat{P}^2 \rangle_n, \qquad (\Delta \hat{P}=\Delta \hat{P}_i \delta x^i), \end{equation} where $\Delta \hat{P}_i:=\hat{P}_i-\langle \hat{P}_i \rangle_n$. Then, using operators, the distance $dl^2$ can be seen as the variance of the generator $\hat{P}_i\delta x^i$. This last remark will be the key point in obtaining the classical counterpart of the quantum metric in the next section. \section{Classical analog of the quantum metric tensor}\label{sec:classical} We now turn to the classical setting. Let us consider a classical integrable system with $n$ degrees of freedom described by the time-dependent Hamiltonian $H[q,p;x(t)]$, where $q=\{q^a\}$ and $p=\{p_a\}$ are the canonical coordinates and momenta, and $x=\{x^i\}\in\mathcal{M}$ is the set of slow time-dependent parameters. Since the system is integrable (for all values of $x \in \mathcal{M}$), we can introduce the action-angle variables, $I=\{I_a\}$ and $\varphi=\{\varphi^a\}$, which satisfy Hamilton's equations of motion with the new Hamiltonian \begin{equation}\label{classical:K} K(\varphi,I;x)=H(I;x) - G_i(\varphi,I;x) \dot{x}^i, \end{equation} where $H(I;x) \equiv H[q(\varphi,I;x),p(\varphi,I;x);x]$ depends only on the action variables and the parameters, and $G_i(\varphi,I;x):= G_i[q(\varphi,I;x),I;x]$ with \begin{equation}\label{classical:G} G_i(q,I;x) := - ( \partial_i S^{(\alpha)} )_{q,I}, \end{equation} where $S^{(\alpha)}(q,I;x)$ is the generating function of the canonical transformation $(q,p) \rightarrow (\varphi,I)$. Also, $\dot{x}^i := dx^i/dt$, and $\alpha$ labels the different branches of the multivalued function $S^{(\alpha)}(q,I;x)$. We recall that the second term on the r.h.s of Eq.~(\ref{classical:K}) comes from $( \partial S^{(\alpha)}/ \partial t)_{q,I}=(\partial S^{(\alpha)}/\partial x^{i})_{q,I} \dot{x}^i$, which is a consequence of the fact that $H[q,p,x(t)]$ (and hence $S^{(\alpha)}[q,I;x(t)]$ also) depends explicitly on time through the parameters $x$.
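To make the definition (\ref{classical:G}) concrete, it is instructive to work out a toy example of ours: the ordinary harmonic oscillator $H=p^{2}/2m+m w^{2}q^{2}/2$ with the frequency $w$ as the only parameter, for which $q=(2I/mw)^{1/2}\sin\varphi$ and, in action-angle variables, $S=I(\varphi+\sin\varphi\cos\varphi)$. Using the equivalent form of $G_i$ given in Eq.~(\ref{classical:G2}) just below, a minimal symbolic sketch reads:
\begin{verbatim}
# Generator G_w for the ordinary harmonic oscillator (illustration only).
import sympy as sp

phi, I, m, w, q, p = sp.symbols('phi I m w q p', positive=True)

qf = sp.sqrt(2 * I / (m * w)) * sp.sin(phi)   # q(phi, I; w)
pf = sp.sqrt(2 * I * m * w) * sp.cos(phi)     # p(phi, I; w)
S = I * (phi + sp.sin(phi) * sp.cos(phi))     # independent of w

# G_w = p (d_w q)_{phi,I} - (d_w S)_{phi,I}:
G = sp.simplify(pf * sp.diff(qf, w) - sp.diff(S, w))
print(G)   # -I*sin(phi)*cos(phi)/w, i.e. G_w = -q p/(2 w)

# dG_w/dp reproduces (d_w q)_{phi,I} = -q/(2 w), cf. Eq. (classical:Gq):
G_qp = -q * p / (2 * w)
print(sp.simplify(sp.diff(G_qp, p) + q / (2 * w)))   # -> 0
\end{verbatim}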
The explicit form of $G_i$ in terms of the action-angle variables is \begin{equation}\label{classical:G2} G_i(\varphi,I;x) = p_a ( \partial_i q^a)_{\varphi,I} - ( \partial_i S )_{\varphi,I}, \end{equation} where $p_a=p_a(\varphi,I;x)$, $q^a=q^a(\varphi,I;x)$ and we defined the single-valued function $S(\varphi,I;x):=S^{(\alpha)}[q(\varphi,I;x),I;x]$ with $0\leq \varphi < 2\pi$. We use the notation that repeated indices $a,b,\dots,$ are summed from $1$ to $n$. As our first step towards the classical counterpart of Eq.~(\ref{QIM2}), we find that under the action of an infinitesimal displacement of the parameters $x \rightarrow x'=x+\delta x$ in $\mathcal{M}$, the function $G_i \delta x^i$ is the generator of the infinitesimal canonical transformation \begin{equation}\label{classical:inf} [q(x),p(x)] \rightarrow [q(x)+\bar{\delta}q ,p(x)+\bar{\delta}p], \end{equation} where \begin{subequations} \begin{eqnarray} \bar{\delta}q^a:=q^a(x')-q^a(x)=( \partial_i q^a )_{\varphi,I} \delta x^i, \label{classical:deltaq}\\ \bar{\delta}p_a:=p_a(x')-p_a(x)=( \partial_i p_a )_{\varphi,I} \delta x^i.\label{classical:deltap} \end{eqnarray} \end{subequations} Notice that another form of Eq.~(\ref{classical:deltaq}) is $\bar{\delta}q^a=\delta q^a-\tilde{\delta} q^a$, where $\delta q^a := q'^a(x')-q^a(x)$ is the total variation and $\tilde{\delta} q^a := q'^a(x)-q^a(x)$ is the variation with ``frozen'' parameters. A similar expression follows for $\bar{\delta}p_a$. To prove the above statement it is sufficient to show that $G_i$ satisfy \begin{subequations} \begin{eqnarray} ( \partial_i q^a )_{\varphi,I}&=& \{ q^a , G_i \}_{q,p} =\frac{\partial G_{i}}{\partial p_{a}}, \label{classical:Gq}\\ \left( \partial_i p_a \right)_{\varphi,I}&=& \{ p_a , G_i \}_{q,p}=-\frac{\partial G_{i}}{\partial q^{a}}, \label{classical:Gp} \end{eqnarray} \end{subequations} which are the equations of the infinitesimal canonical transformation (\ref{classical:inf})~\cite{KOLODRUBETZ20171}. Here $\{ \cdot , \cdot \}$ denotes the Poisson bracket. To do this, we first take the partial derivative with respect to $x^i$, holding $(\varphi,I)$ fixed, of the familiar relation $p_a \tilde{d} q^a - I_a \tilde{d} \varphi^a = \tilde{d} F$, where $F=S^{(\alpha)}(q,I;x)-I_a \varphi^a$ and $\tilde{d}$ is the fixed-time differential (or equivalently with fixed parameters $x$). From this we obtain \begin{equation}\label{classical:dG1} \left( \partial_i p_a\right)_{\varphi,I} \tilde{d} q^a + p_a \tilde{d} \left( \partial_i q^a \right)_{\varphi,I} = \tilde{d} \left( \partial_i S \right)_{\varphi,I}, \end{equation} where we used $(\partial_i \tilde{d} f)_{\varphi,I} =\tilde{d}(\partial_i f)_{\varphi,I} $. Next, combining Eq.~(\ref{classical:dG1}) with the differential of Eq.~(\ref{classical:G2}) at fixed $x$, namely \begin{equation} \tilde{d}G_i=\tilde{d}p_a ( \partial_i q^a)_{\varphi,I} +p_a \tilde{d}( \partial_i q^a)_{\varphi,I}- \tilde{d}( \partial_i S )_{\varphi,I}, \end{equation} we have \begin{equation}\label{classical:G3} \tilde{d}G_i=- \left( \partial_i p_a \right)_{\varphi,I} \tilde{d} q^a + \left( \partial_i q^a \right)_{\varphi,I} \tilde{d} p_a. \end{equation} Then, taking $G_i$ as a function of $(q,p)$, it follows that \begin{equation}\label{classical:G4} \tilde{d}G_i=\frac{\partial G_i}{\partial q^a} \tilde{d} q^a + \frac{\partial G_i}{\partial p_a} \tilde{d} p_a. 
\end{equation} Equating the coefficients of $\tilde{d} q^a$ and $\tilde{d} p_a$ on the r.h.s of Eqs.~(\ref{classical:G3}) and (\ref{classical:G4}), we read off Eqs.~(\ref{classical:Gq}) and (\ref{classical:Gp}), which completes the proof. Given the fact that $G_i \delta x^i$ generates an infinitesimal displacement in $x$ of points in phase space, and in complete analogy with the quantum case [see Eq.~(\ref{Qdistance})], we can naturally define the distance between the points $[q(x),p(x)]$ and $[q(x)+\bar{\delta}q ,p(x)+\bar{\delta}p]$ as \begin{equation}\label{gclas:distance} ds^2:=\left<\Delta G^2\right>\qquad (\Delta G=\Delta G_i \delta x^i), \end{equation} where $\Delta G_i :=G_i - \left<G_i\right>$ and \begin{equation} \left< f(\varphi,I;x) \right>=\frac{1}{(2 \pi)^{n}}\oint d\varphi f(\varphi,I;x), \end{equation} with $\oint d\varphi =\prod_{a=1}^{n} \int_{0}^{2 \pi}d\varphi^a$, is the average of $f(\varphi,I;x)$ over the (fast) angle variables. Defined in this way, the classical distance $ds^2$ is nothing more than the variance of the generator $G_i \delta x^i$. Clearly, if the parameters $x$ are frozen, then $G_i\delta x^i=0$, and hence $ds^2$ also vanishes, as expected. Notice that $ds^2$ depends only on the action variables~$I$ and the parameters~$x$. In this regard, it is important to emphasize that, according to the classical adiabatic theorem~\cite{Arnold2006}, while the parameters vary slowly with time, the action variables are adiabatic invariants\footnote{Nevertheless, for Hamiltonian systems with $n\geq2$ there may exist conditions for which the adiabatic approximation is not optimal~\cite{Arnold2006}.} $\dot{I}_a\approx0$. That is, during the adiabatic evolution from $[q(x),p(x)]$ to $[q(x)+\bar{\delta}q ,p(x)+\bar{\delta}p]$ the action variables $I$ remain constant. This effect is similar to the quantum case where the quantum number $n$ remains constant as the parameters vary. On the other hand, note also that in this scenario, the average $\left< \cdot \right>$ in Eq.~(\ref{gclas:distance}) is the classical counterpart of the quantum average $\langle \cdot \rangle_n$ in Eq.~(\ref{Qdistance}). By expanding Eq.~(\ref{gclas:distance}), we find that the distance $ds^2=g_{ij} \delta x^i \delta x^j$ induces the metric \begin{equation}\label{gclas:metric} g_{ij}(I;x):=\left< G_i G_j\right> - \left<G_i\right> \left<G_j\right>, \end{equation} where $G_i=G_i(\varphi,I;x)$ is given by Eq.~(\ref{classical:G2}). The metric $g_{ij}(I;x)$ corresponds to the \textit{classical analog of the quantum metric tensor}~(\ref{QIM}) [or Eq.~(\ref{QIM2})], and provides a measure of the distance between the nearby points $[q(x),p(x)]$ and $[q(x)+\bar{\delta}q ,p(x)+\bar{\delta}p]$ on the parameter manifold $\mathcal{M}$. It should be pointed out that, in contrast to the quantum metric tensor, the classical metric (\ref{gclas:metric}) is restricted to the case where classical motion is integrable. This restriction is to be expected since it is the same as that found in Hannay's angle~\cite{Hannay1985}, which also involves the action variables and is the classical counterpart of Berry's phase~\cite{Berry1985}. We now proceed to check some properties of $g_{ij}(I;x)$. Let us first show that, under a coordinate transformation, $g_{ij}(I;x)$ transforms as a tensor. By considering a coordinate change $y = y(x)$ and using Eq.~(\ref{classical:G2}), it follows that the transformation law for $G_i$ is \begin{equation} G'_{i}(\varphi,I;y)=\frac{\partial x^j}{\partial y^i} G_{j}[\varphi,I;x(y)]. 
\end{equation} This result, together with Eq.~(\ref{gclas:metric}), leads to the expected transformation law for the metric \begin{equation} g'_{ij}(I;y)=\frac{\partial x^k}{\partial y^i} \frac{\partial x^l}{\partial y^j} g_{kl}[I;x(y)]. \end{equation} We now prove that $g_{ij}(I;x)$ is positive semidefinite. This is straightforward and follows from the fact that $ds^2=\left<\Delta G^2\right>\geq0$ since the variance is nonnegative. In this light, it is interesting to note that the quantum metric tensor (\ref{QIM}) is also positive semidefinite~\cite{chruscinski2012geometric,amari2016information}. Just as the quantum metric $g^{(n)}_{ij}(x)$ is independent of the gauge transformation\footnote{In Ref.~\cite{Alvarez-Jimenez2016} it is shown, however, that under a more general gauge transformation, the quantum metric tensor depends on the gauge.} $\ket{n'(x)} = \exp[i \alpha_n(x)] \ket{n(x)}$ where $\alpha_n(x)$ is an arbitrary real function of $x$, the classical metric $g_{ij}(I;x)$ is invariant under the (gauge) canonical transformation \begin{equation}\label{gclas:gauge} \varphi'^a = \varphi^a + \frac{\partial \lambda(I;x)}{\partial I_a}, \qquad I'_a=I_a, \end{equation} which is generated by the function $F_2=\varphi^a I'_a + \lambda(I';x)$ where $\lambda(I';x)$ is an arbitrary function of $I'$ and $x$. The proof of this statement is as follows. The Hamiltonian for the new action-angle variables $(\varphi',I')$ is \begin{equation} K'(\varphi',I';x)=H(I';x) - G'_i(\varphi',I';x) \dot{x}^i, \end{equation} where $H(I';x)=H(I;x)$ with $I'=I$, and \begin{equation}\label{gclas:Gprima} G'_i(\varphi',I';x)=G_i(\varphi',I';x)-[\partial_i \lambda(I';x)]_{I'}, \end{equation} where $G_i(\varphi',I';x):=G_i[\varphi(\varphi',I';x),I';x]$ are the functions $G_i$ for $(\varphi,I)$ expressed in terms of the variables $(\varphi',I')$. Since $G'_i(\varphi',I';x)$ satisfy Eqs.~(\ref{classical:Gq}) and (\ref{classical:Gp}) with $(\varphi',I')$ instead of $(\varphi,I)$, it follows that $G'_i \delta x^i$ generates a canonical transformation of the same type as Eq.~(\ref{classical:inf}) with $\bar{\delta}'q^a=( \partial_i q^a )_{\varphi',I'} \delta x^i$ and $\bar{\delta}'p_a=( \partial_i p_a )_{\varphi',I'} \delta x^i$ instead of Eqs.~(\ref{classical:deltaq}) and (\ref{classical:deltap}), respectively. With this in mind, we can apply Eq.~(\ref{gclas:metric}), and write the classical metric associated with the variables $(\varphi',I')$ as \begin{equation}\label{gclas:metric2} g'_{ij}(I';x)=\left< G'_i G'_j\right>' - \left<G'_i\right>' \left<G'_j\right>', \end{equation} where $G'_i=G'_i(\varphi',I';x)$ and $\left< \cdot \right>'$ stands for the average over the angle variables $\varphi'$.
By using Eq.~(\ref{gclas:Gprima}), the average $\left<G'_i\right>'$ gives \begin{eqnarray}\label{gclas:promGprima} &&\left<G'_i(\varphi',I';x)\right>' \!=\!\frac{1}{(2 \pi)^{n}}\oint d\varphi' G_i(\varphi',I';x)\!-\![\partial_i \lambda(I';x)]_{I'} ,\nonumber\\ &&=\frac{1}{(2 \pi)^{n}} \int_{-b_n}^{2 \pi-b_n}\dots \int_{-b_1}^{2 \pi-b_1} d\varphi^1 \dots d\varphi^n G_i(\varphi,I;x) \nonumber \\ &&\ \ \ -[\partial_i \lambda(I;x)]_{I}, \nonumber\\ &&= \left<G_i(\varphi,I;x)\right> -[\partial_i \lambda(I;x)]_{I}, \end{eqnarray} where in the second line we made the change of variables from $\varphi'$ to $\varphi$ and defined $b_a:=\partial \lambda(I;x) / \partial I_a$, whereas in the last line we used the fact that $p$, $(\partial_i q)_{\varphi,I}$, and $(\partial_i S)_{\varphi,I}$ are periodic functions of each angle variable $\varphi^a$ with period $2\pi$, which by virtue of Eq.~(\ref{classical:G2}) implies that $G_i(\varphi,I;x)$ are also periodic functions of each $\varphi^a$. The periodicity of $(\partial_i S)_{\varphi,I}$ is easily seen by writing $S(q,I;x)=\sum_{a=1}^{n}S_{a}(q_a,I;x)$ and recalling that each $S_{a}(\varphi,I;x)\equiv S_{a}[q_a(\varphi,I;x),I;x]$ satisfies $S_a(\varphi+2\pi,I;x)-S_a(\varphi,I;x)=2\pi I_a$. In the same fashion, the average $\left<G'_iG'_j\right>'$ leads to \begin{eqnarray}\label{gclas:promGGprima} &&\left<G'_i(\varphi',I';x)G'_j(\varphi',I';x)\right>'=\left<G_i(\varphi,I;x)G_j(\varphi,I;x)\right> \nonumber \\ && - \left<G_i(\varphi,I;x)\right> [\partial_j \lambda(I;x)]_{I} - \left<G_j(\varphi,I;x)\right> [\partial_i \lambda(I;x)]_{I} \nonumber \\ && + [\partial_i \lambda(I;x)]_{I} [\partial_j \lambda(I;x)]_{I}. \end{eqnarray} It remains to substitute Eqs.~(\ref{gclas:promGprima}) and (\ref{gclas:promGGprima}) into Eq.~(\ref{gclas:metric2}). By doing so, all the terms involving the derivatives of $\lambda(I;x)$ cancel among themselves and thus the metric $g'_{ij}(I';x)$ becomes \begin{equation}\label{gclas:metricgauge} g'_{ij}(I';x)=\left< G_i G_j\right> - \left<G_i\right> \left<G_j\right>= g_{ij}(I;x), \end{equation} which is the desired result. Therefore, although the angle variables are not unique but only defined up to the canonical transformation (\ref{gclas:gauge}), the metric $g_{ij}(I;x)$ is unique and independent of this (gauge) transformation, as expected for a metric tensor on~$\mathcal{M}$. It is interesting to note from Eq.~(\ref{gclas:promGGprima}) that the term $\left< G_i G_j\right>$ is not invariant under the transformation~(\ref{gclas:gauge}), and hence it cannot be used alone to define a metric on~$\mathcal{M}$. As shown above, it must be combined with $\left<G_i\right> \left<G_j\right>$, in the precise form given by Eq.~(\ref{gclas:metric}), to produce a gauge invariant metric. This is the essence of the nontrivial gauge invariance of $g_{ij}(I;x)$ under Eq.~(\ref{gclas:gauge}); it arises as a consequence of the particular combination of both $\left< G_i G_j\right>$ and $\left<G_i\right> \left<G_j\right>$. This reinforces the analogy between the classical metric $g_{ij}(I;x)$ and the quantum metric tensor $g^{(n)}_{ij}(x)$: the latter is gauge invariant only as the particular combination of $\braket{\partial_i n}{\partial_j n}$ and $\braket{\partial_i n}{n}\braket{ n}{\partial_j n}$, whereas the term $\braket{\partial_i n}{\partial_j n}$ alone is not~\cite{Provost1980}. To end this section, let us add some comments on the significance of $G_i$.
Notice that Eq.~(\ref{gclas:Gprima}) reveals that under Eq.~(\ref{gclas:gauge}) the functions $G_i$ transform as an abelian gauge potential, which is not surprising since these functions are the generators of translations in~$\mathcal{M}$~\cite{KOLODRUBETZ20171}. The average of $G_i$ can be identified as the components of the connection 1-form on $\mathcal{M}$ associated with Hannay's angle, namely $A(I;x)=A_i dx^i$ with\footnote{In the literature, however, it is often found that the term $\left<( \partial_i S )_{\varphi,I}\right>$ is dropped from Eq.~(\ref{gclas:connection}) since it does not contribute to Hannay's angle~\cite{Gozzi1987,chruscinski2012geometric}.} \begin{equation}\label{gclas:connection} A_i(I;x):=\left<G_i(\varphi,I;x)\right>=\left<p_a ( \partial_i q^a)_{\varphi,I}\right> - \left<( \partial_i S )_{\varphi,I}\right>. \end{equation} This means, according to Eq.~(\ref{gclas:promGprima}), that under the transformation (\ref{gclas:gauge}) the components (\ref{gclas:connection}) transform as those of an abelian gauge potential~\cite{Littlejohn1988} \begin{equation} A'_i(I';x)=A_i(I;x)-[\partial_i \lambda(I;x)]_{I}. \end{equation} Besides, it follows from Eq.~(\ref{gclas:connection}) that the curvature 2-form $F(I;x)=dA(I;x)$ of this connection can be written as $F(I;x)=(1/2) F_{ij}dx^i\wedge dx^j$ with \begin{equation}\label{gclas:Curv} F_{ij}(I;x)=\left< \left( \partial_i p_a \right)_{\varphi,I} \left( \partial_j q^a \right)_{\varphi,I} - \left( \partial_j p_a \right)_{\varphi,I} \left( \partial_i q^a \right)_{\varphi,I}\right>. \end{equation} Upon using Eqs. (\ref{classical:Gq}) and (\ref{classical:Gp}), we find that these components take the form \begin{equation}\label{gclas:curvature} F_{ij}(I;x)=-\left< \{ G_i , G_j \}_{q,p} \right>. \end{equation} In this way the functions $G_i$ can be regarded as the fundamental building blocks that underlie the classical metric~(\ref{gclas:metric}) and Hannay's curvature~(\ref{gclas:curvature}). \section{Illustrative examples}\label{Examples} In this section, we set out some examples of classical integrable systems to illustrate the appearance of the metric~(\ref{gclas:metric}). At the same time, we compare the results of this classical metric with those found by using the quantum metric tensor~(\ref{QIM}) associated with the quantum counterpart of each system. We shall see that these results corroborate that the metric~(\ref{gclas:metric}) is the classical analog of~(\ref{QIM}) [or equivalently Eq.~(\ref{QIM2})]. \subsection{Generalized harmonic oscillator} As our first example, let us take the generalized harmonic oscillator, whose classical Hamiltonian is given by \begin{equation}\label{gho:classicalH} H=\frac{1}{2}\left(X q^2+2Yqp+Zp^2\right), \end{equation} where $x=\{x^{i}\}=(X,Y,Z)\in \mathbb{R}^3$ ($i,j,\dots\!=1,2,3$) are the adiabatic parameters, which are assumed to satisfy $XZ-Y^{2}>0$. The transformation from the variables $(q,p)$ to the action-angle variables $(\varphi,I)$ is well known and turns out to be \begin{subequations} \begin{eqnarray} &&q(\varphi,I;x)=\left(\frac{2ZI}{\omega}\right)^{1/2}\sin\varphi,\label{gho:q}\\ &&p(\varphi,I;x)=\left( \frac{2ZI}{\omega} \right)^{1/2} \left(-\frac{Y}{Z}\sin\varphi+\frac{\omega}{Z}\cos\varphi\right),\label{gho:p} \end{eqnarray} \end{subequations} where $\omega:=(XZ-Y^{2})^{1/2}$ is the parameter-dependent angular frequency. 
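One can quickly confirm that Eqs.~(\ref{gho:q}) and (\ref{gho:p}) indeed define action-angle variables: substituting them into Eq.~(\ref{gho:classicalH}) must give a Hamiltonian that depends only on the action variable and the parameters, namely $H=\omega I$. A minimal symbolic sketch of this check (our own verification):
\begin{verbatim}
# Check that Eqs. (gho:q), (gho:p) give H = omega*I (illustration only).
import sympy as sp

phi, I, X, Y, Z = sp.symbols('phi I X Y Z', positive=True)
w = sp.sqrt(X * Z - Y**2)

q = sp.sqrt(2 * Z * I / w) * sp.sin(phi)
p = sp.sqrt(2 * Z * I / w) * (-(Y / Z) * sp.sin(phi)
                              + (w / Z) * sp.cos(phi))

H = sp.Rational(1, 2) * (X * q**2 + 2 * Y * q * p + Z * p**2)
print(sp.simplify(H - w * I))   # -> 0
\end{verbatim}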
Moreover, the generating function of this transformation, in terms of action-angle variables, is \begin{equation}\label{gho:S} S(\varphi,I;x)=-\frac{YI}{\omega}\sin^{2}\varphi+I(\varphi+\sin\varphi\cos\varphi). \end{equation} Then, putting Eqs.~(\ref{gho:q}), (\ref{gho:p}), and (\ref{gho:S}) into Eq.~(\ref{classical:G2}), we obtain the functions $G_i(\varphi,I;x)$: \begin{subequations} \begin{eqnarray} &&G_{1}(\varphi,I;x)=-\frac{ZI}{2\omega^{2}}\sin\varphi\cos\varphi, \label{gho:G1}\\ &&G_{2}(\varphi,I;x)=\frac{I\sin\varphi}{\omega^{2}}\left(Y\cos\varphi+\omega\sin\varphi\right),\label{gho:G2}\\ &&G_{3}(\varphi,I;x)=\frac{I\sin\varphi}{2Z\omega^{2}}\left[(XZ-2Y^{2})\cos\varphi-2Y\omega\sin\varphi\right].\label{gho:G3}\nonumber\\ \end{eqnarray} \end{subequations} When rewritten in terms of the variables $(q,p)$, these functions satisfy Eqs.~(\ref{classical:Gq}) and (\ref{classical:Gp}). With this at hand, Eq.~(\ref{gclas:metric}) can now be readily applied to Eqs.~(\ref{gho:G1}), (\ref{gho:G2}), and (\ref{gho:G3}). This yields the components of the corresponding classical metric $g_{ij}(I;x)$, which can be expressed as \begin{equation}\label{gho:classmetric} g_{ij}(I;x)=\frac{I^{2}}{32\omega^{4}}\left(\begin{array}{ccc} Z^{2} & -2YZ & 2Y^{2}-XZ\\ -2YZ & 4XZ & -2XY\\ 2Y^{2}-XZ & -2XY & X^{2} \end{array}\right). \end{equation} The idea is now to compare this metric with that coming from the quantum metric tensor~(\ref{QIM}). In the quantum case, the time-dependent Hamiltonian operator $\hat{H}$ of the system is \begin{equation}\label{gho:quantumH} \hat{H}=\frac{1}{2}\left[X\hat{q}^{2}+Y(\hat{q}\hat{p}+\hat{p}\hat{q})+Z\hat{p}^{2}\right], \end{equation} and leads to the Schr\"odinger equation (with fixed parameters) \begin{equation} -\frac{Z\hbar^{2}}{2}\frac{d^{2}\psi_{n}}{dq^{2}}-i\hbar Yq\frac{d\psi_{n}}{dq}+\left(\frac{Xq^{2}}{2}-i\hbar\frac{Y}{2}\right)\psi_{n}=E_{n}\psi_{n}, \end{equation} which has the normalized solution \begin{equation}\label{gho:wavefunctgen} \psi_{n}(q;x)=\left(\frac{\omega}{Z\hbar}\right)^{1/4}\chi_{n}\left(q\sqrt{\frac{\omega}{Z\hbar}}\right)\exp\left(-\frac{iYq^{2}}{2Z\hbar}\right), \end{equation} where $\omega:=(XZ-Y^{2})^{1/2}$, which requires $XZ-Y^{2}>0$, and $\chi_{n}(\xi)=\left(2^{n}n!\sqrt{\pi}\right)^{-1/2}e^{-\xi^{2}/2}H_{n}(\xi)$ are the Hermite functions, with $H_{n}(\xi)=(-1)^n e^{\xi^2} \frac{d^n}{d\xi^n}e^{-\xi^2}$ being the Hermite polynomials. Furthermore, the energy eigenvalues are given by $E_{n}=(n+1/2)\hbar\omega$, where $n$ is a nonnegative integer. Substituting the wave function (\ref{gho:wavefunctgen}) into $\braket{n}{\partial_in}=\int_{-\infty}^{\infty}dq\psi_{n}^{*}(q,x) \partial_i\psi_{n}(q,x) $ and $\braket{\partial_i n}{\partial_j n}=\int_{-\infty}^{\infty}dq\partial_i\psi_{n}^{*}(q,x) \partial_j\psi_{n}(q,x)$, and bearing in mind the following properties of the Hermite functions \begin{eqnarray}\label{Hermite} &&\int_{-\infty}^{\infty}d\xi \ \chi_{m}(\xi)\chi_{n}(\xi)=\delta_{mn},\nonumber\\ &&\frac{d }{d\xi} \chi_{n} = \sqrt{\frac{n}{2}} \ \chi_{n-1} - \sqrt{\frac{n+1}{2}} \ \chi_{n+1}, \nonumber\\ &&\xi \, \chi_{n} = \sqrt{\frac{n}{2}} \ \chi_{n-1} + \sqrt{\frac{n+1}{2}} \ \chi_{n+1}, \end{eqnarray} the components of the quantum metric~(\ref{QIM}) become \begin{equation}\label{gho:QIM} g^{(n)}_{ij}(x)\!=\!\frac{n^{2}\!+\!n\!+\!1}{32\omega^{4}}\!\left(\!\begin{array}{ccc} Z^{2} & -2YZ & 2Y^{2}-XZ\\ -2YZ & 4XZ & -2XY\\ 2Y^{2}-XZ & -2XY & X^{2} \end{array}\!\right).
\end{equation} Comparing the metrics (\ref{gho:classmetric}) and (\ref{gho:QIM}), it is clear that they are related as follows: \begin{equation}\label{gho:relation} g^{(n)}_{ij}(x) = \gamma\, g_{ij}(I;x), \end{equation} where \begin{equation} \gamma:=\frac{n^{2}+n+1}{I^{2}}. \end{equation} Therefore, for the generalized harmonic oscillator, the quantum metric tensor $g^{(n)}_{ij}(x)$ can be determined from the classical metric $g_{ij}(I;x)$, modulo the parameter-independent constant factor~$\gamma$. This result is nontrivial and supports our claim that the metric (\ref{gclas:metric}) is the classical counterpart of the metric (\ref{QIM}). Note that if we take into account the Bohr-Sommerfeld quantization rule for the action variable \begin{equation}\label{gho:action} I=\left(n+\frac{1}{2}\right)\hbar, \end{equation} then $\gamma $ turns out to be proportional to~$1/\hbar^2$. Additionally, by using Eq.~(\ref{gho:action}), the metrics (\ref{gho:classmetric}) and (\ref{gho:QIM}) can also be related by \begin{equation}\label{gho:relation2} \frac{\partial}{\partial n} g^{(n)}_{ij}(x) = \frac{1}{\hbar} \frac{\partial}{\partial I} g_{ij}(I;x). \end{equation} On the other hand, it is worth pointing out that the determinants of the metrics (\ref{gho:classmetric}) and (\ref{gho:QIM}) are zero, which indicates that the corresponding Hamiltonians (\ref{gho:classicalH}) and (\ref{gho:quantumH}) involve more parameters than the effective ones. Actually, these metrics have rank two and hence, to have metrics with nonvanishing determinants, we must leave one of the parameters fixed (but different from zero). In this case, the degeneracy of the metric only indicates that one of the parameters is redundant and can be set equal to a constant. To show this explicitly, let us consider for a moment that $\{y^{i'}\}=(X,Y)$ ($i',j',\dots\!=1,2$) are the adiabatic parameters and suppose that $Z=Z_0$ is a nonvanishing constant. In this case, the classical metric (\ref{gclas:metric}) becomes \begin{equation}\label{gho:XY} g_{i'j'}(I;y)=\frac{Z_0 I^{2}}{32\omega^{4}}\left(\begin{array}{cc} Z_0 & -2Y \\ -2Y & 4X \end{array}\right), \end{equation} and its determinant, $\det\left[g_{i'j'}(I;y) \right]= \frac{Z_0^2 I^4}{256 \omega^{6} }$, is not zero. Besides, the corresponding quantum metric tensor $g^{(n)}_{i'j'}(y)$, which can be obtained from Eq.~(\ref{gho:XY}) by replacing $ I^{2}$ with $n^{2}+n+1$, also has a nonvanishing determinant. Before concluding this example, it may be interesting to obtain the components of the connection and curvature associated with Hannay's angle through Eqs.~(\ref{gclas:connection}) and (\ref{gclas:curvature}), respectively. Using the functions $G_i$ given by Eqs.~(\ref{gho:G1}), (\ref{gho:G2}), and (\ref{gho:G3}), the components of the connection (\ref{gclas:connection}) lead to \begin{equation}\label{gho:Hconnetion} A_{1}(I;x)=0,\qquad A_{2}(I;x)=\frac{I}{2\omega},\qquad A_{3}(I;x)=-\frac{YI}{2Z\omega}, \end{equation} whereas the components of the curvature (\ref{gclas:curvature}) give \begin{eqnarray}\label{gho:Hcurvature} &&F_{12}(I;x)=-\frac{ZI}{4\omega^{3}},\qquad F_{13}(I;x)=\frac{YI}{4\omega^{3}},\nonumber \\ &&F_{23}(I;x)=-\frac{XI}{4\omega^{3}}. \end{eqnarray} It is also instructive to compare Eqs.~(\ref{gho:Hconnetion}) and (\ref{gho:Hcurvature}) with their quantum analogues, namely Berry's connection and its curvature, respectively.
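Before carrying out this comparison, we note that Eqs.~(\ref{gho:classmetric}) and (\ref{gho:Hconnetion}) are easily confirmed numerically; the following minimal sketch (our own check, with arbitrary sample values of the parameters) evaluates the angle averages in Eqs.~(\ref{gclas:metric}) and (\ref{gclas:connection}) on a uniform grid:
\begin{verbatim}
# Numerical check of Eqs. (gho:classmetric) and (gho:Hconnetion).
import numpy as np

X, Y, Z, I = 1.3, 0.4, 0.9, 1.0
w2 = X * Z - Y**2                  # omega^2 > 0
w = np.sqrt(w2)
phi = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
s, c = np.sin(phi), np.cos(phi)

# The generators (gho:G1)-(gho:G3):
G = np.array([
    -Z * I * s * c / (2 * w2),
    I * s * (Y * c + w * s) / w2,
    I * s * ((X * Z - 2 * Y**2) * c - 2 * Y * w * s) / (2 * Z * w2),
])

A = G.mean(axis=1)                           # Hannay connection A_i
g = (G @ G.T) / phi.size - np.outer(A, A)    # classical metric g_ij

print(A)                        # -> (0, I/(2 w), -Y I/(2 Z w))
print(g * 32 * w2**2 / I**2)    # -> the matrix in Eq. (gho:classmetric)
\end{verbatim}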
Then, using the wave function~(\ref{gho:wavefunctgen}) and Eq.~(\ref{Hermite}), the components $A^{(n)}_i(x) := - {\rm Im} (\braket{n}{\partial_i n})$ of Berry's connection become \begin{equation}\label{gho:Bconnetion} A_{1}^{(n)}(x)=0,\qquad A_{2}^{(n)}(x)=\frac{c_n}{2\omega}, \qquad A_{3}^{(n)}(x)=-\frac{c_n Y}{2Z\omega}, \end{equation} and the components $F^{(n)}_{ij}(x):=\partial_i A^{(n)}_j-\partial_j A^{(n)}_i$ of its curvature yield \begin{eqnarray}\label{gho:Bcurvature} &&F_{12}^{(n)}(x)=-\frac{c_n Z}{4\omega^{3}},\qquad F_{13}^{(n)}(x)=\frac{c_nY}{4\omega^{3}}, \nonumber\\ &&F_{23}^{(n)}(x)=-\frac{c_n X}{4\omega^{3}}, \end{eqnarray} where $c_n:=\left(n+1/2\right)$. Comparing Eqs.~(\ref{gho:Hconnetion}) and (\ref{gho:Bconnetion}) as well as Eqs.~(\ref{gho:Hcurvature}) and (\ref{gho:Bcurvature}), it is straightforward to see the following relations: \begin{subequations} \begin{eqnarray} A_{i}^{(n)}(x)=\beta \, A_{i}(I;x), \label{gho:Arelation}\\ F_{ij}^{(n)}(x)=\beta \, F_{ij}(I;x), \label{gho:Frelation} \end{eqnarray} \end{subequations} where \begin{equation}\label{gho:beta} \beta:=\frac{n+\frac{1}{2}}{I}. \end{equation} This entails that $A_{i}^{(n)}(x)$ and $F_{ij}^{(n)}(x)$ can be obtained, respectively, from $A_{i}(I;x)$ and $F_{ij}(I;x)$, modulo the parameter-independent constant factor $\beta$. Moreover, after using Eq.~(\ref{gho:action}), this factor reduces to $\beta=1/\hbar$. Some comments are in order. First, it is worth emphasizing that the connection defined by dropping $\left<( \partial_i S )_{\varphi,I}\right>$ from Eq.~(\ref{gclas:connection}), namely $A_i(I;x)=\left<p_a ( \partial_i q^a)_{\varphi,I}\right>$, does not lead to Eqs.~(\ref{gho:Hconnetion}) and therefore does not satisfy the relation (\ref{gho:Arelation}). Of course, the curvature of such a connection, which is also given by Eq.~(\ref{gclas:Curv}), still leads to Eq.~(\ref{gho:Hcurvature}). Second, notice that Eq.~(\ref{gho:Frelation}) is in complete agreement with the semiclassical relation between Berry's curvature and the curvature associated with Hannay's angle reported in Ref.~\cite{Berry1985}. Finally, it is worth mentioning that the multiplicative constants involved in the relation (\ref{gho:relation}) and the relations (\ref{gho:Arelation}) and (\ref{gho:Frelation}) are different: while $\gamma$ is proportional to $1/I^2$, $\beta$ is proportional to $1/I$. \subsection{Generalized harmonic oscillator with a linear term} For our second example we shall consider the generalized harmonic oscillator with a linear term in the position. Thus the Hamiltonian under consideration is \begin{equation}\label{ghoWq:classicalH} H=\frac{1}{2}\left(X q^2+2Yqp+Zp^2\right)+Wq, \end{equation} where $x=\{x^{i}\}=(W,X,Y,Z)$ with $i,j,\dots\!=0,1,2,3$ are the adiabatic parameters. Assuming $XZ-Y^{2}>0$, we find that the variables $(q,p)$ in terms of action-angle variables $(\varphi,I)$ read \begin{subequations} \begin{eqnarray} &&q(\varphi,I;x)=\left(\frac{2ZI}{\omega}\right)^{1/2}\sin\varphi-\frac{WZ}{\omega^{2}},\label{ghoWq:q}\\ &&p(\varphi,I;x)=\left( \frac{2ZI}{\omega} \right)^{1/2} \left(-\frac{Y}{Z}\sin\varphi+\frac{\omega}{Z}\cos\varphi\right)+\frac{WY}{\omega^{2}},\nonumber\\\label{ghoWq:p} \end{eqnarray} \end{subequations} where $\omega:=(XZ-Y^{2})^{1/2}$ is the angular frequency of the system, which is independent of $W$.
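As in the previous example, one can verify symbolically that Eqs.~(\ref{ghoWq:q}) and (\ref{ghoWq:p}) are action-angle variables of the Hamiltonian (\ref{ghoWq:classicalH}); substituting them into $H$ now gives $\omega I-W^{2}Z/2\omega^{2}$, which again depends only on $I$ and the parameters. A minimal sketch of this check (our own verification):
\begin{verbatim}
# Check that Eqs. (ghoWq:q), (ghoWq:p) diagonalize (ghoWq:classicalH).
import sympy as sp

phi, I, W, X, Y, Z = sp.symbols('phi I W X Y Z', positive=True)
w = sp.sqrt(X * Z - Y**2)

q = sp.sqrt(2 * Z * I / w) * sp.sin(phi) - W * Z / w**2
p = (sp.sqrt(2 * Z * I / w) * (-(Y / Z) * sp.sin(phi)
                               + (w / Z) * sp.cos(phi)) + W * Y / w**2)

H = sp.Rational(1, 2) * (X * q**2 + 2 * Y * q * p + Z * p**2) + W * q
print(sp.simplify(H - (w * I - W**2 * Z / (2 * w**2))))   # -> 0
\end{verbatim}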
Furthermore, we get that the generating function $S(\varphi,I;x)$ of the transformation $(q,p) \rightarrow (\varphi,I)$ is \begin{eqnarray}\label{ghoWq:S} S(\varphi,I;x)&=&-\frac{Y}{2Z}\left[\left(\frac{2ZI}{\omega}\right)^{1/2} \sin\varphi-\frac{WZ}{\omega^{2}}\right]^{2} \nonumber\\ &&+I(\varphi+\sin\varphi\cos\varphi). \end{eqnarray} With these ingredients at hand, it is straightforward to obtain $G_i(\varphi,I;x)$ from Eq.~(\ref{classical:G2}). The resulting functions, in compact form, are \begin{equation}\label{ghoWq:Gi} G_i(\varphi,I;x)=f_i(x) qp +g_i(x) q^2 + h_i(x) \left(p + \frac{Y}{Z} q \right) , \end{equation} where $p=p(\varphi,I;x)$ and $q=q(\varphi,I;x)$ are given by Eqs.~(\ref{ghoWq:q}) and (\ref{ghoWq:p}), respectively, while \begin{subequations} \begin{eqnarray} &&f_i(x):= \frac{\omega}{2Z} \, \partial_i\left( \frac{Z}{\omega}\right), \label{ghoWq:f}\\ &&g_i(x):= \frac{Y}{Z} f_i(x) + \frac{1}{2} \, \partial_i \left(\frac{Y}{Z}\right), \label{ghoWq:g}\\ &&h_i(x):= \frac{W}{2\omega} \, \partial_i \left( \frac{Z}{\omega}\right) - \partial_i \left( \frac{WZ}{\omega^2}\right).\label{ghoWq:h} \end{eqnarray} \end{subequations} It can be verified that $G_i$ given by Eq.~(\ref{ghoWq:Gi}) satisfy Eqs.~(\ref{classical:Gq}) and (\ref{classical:Gp}). In addition, if the parameter $W$ is fixed to 0, it is not difficult to realize that these functions reduce to those of Eqs.~(\ref{gho:G1}), (\ref{gho:G2}), and~(\ref{gho:G3}). By inserting Eq.~(\ref{ghoWq:Gi}) into Eq.~(\ref{gclas:metric}), the corresponding components of the classical metric $g_{ij}(I;x)$ are \begin{widetext} \begin{multline}\label{ghoWq:classmetric} g_{ij}(I;x)=\frac{I^{2}}{32\omega^{4}}\begin{pmatrix}0 & 0 & 0 & 0\\ 0 & Z^{2} & -2YZ & 2Y^{2}-XZ\\ 0 & -2YZ & 4XZ & -2XY\\ 0 & 2Y^{2}-XZ & -2XY & X^{2} \end{pmatrix}\\ +\frac{I}{\omega^{7}}\begin{pmatrix}Z\omega^{4} & -WZ^{2}\omega^{2} & 2WYZ\omega^{2} & -WY^{2}\omega^{2}\\ -WZ^{2}\omega^{2} & W^{2}Z^{3} & -2W^{2}YZ^{2} & W^{2}Y^{2}Z\\ 2WYZ\omega^{2} & -2W^{2}YZ^{2} & W^{2}Z(3Y^{2}+XZ) & -W^{2}Y(Y^{2}+XZ)\\ -WY^{2}\omega^{2} & W^{2}Y^{2}Z & -W^{2}Y(Y^{2}+XZ) & W^{2}XY^{2} \end{pmatrix}. \end{multline} \end{widetext} Notice that this metric has an extra term, as compared to the metric~(\ref{gho:classmetric}), which is proportional to $I/\omega^{7}$ and is a consequence of the linear modification introduced in the Hamiltonian~(\ref{ghoWq:classicalH}). Certainly, by fixing $W=0$ in the above expression and eliminating the corresponding row and column, we can recover the metric (\ref{gho:classmetric}). To contrast Eq.~(\ref{ghoWq:classmetric}) with the quantum metric tensor, we consider the following Hamiltonian operator: \begin{equation}\label{ghoWq:quantumH} \hat{H}=\frac{1}{2}\left[X\hat{q}^{2}+Y(\hat{q}\hat{p}+\hat{p}\hat{q})+Z\hat{p}^{2}\right]+W\hat{q}. \end{equation} In this case the Schr\"odinger equation reads \begin{equation} -\frac{Z\hbar^{2}}{2}\frac{d^{2}\psi_{n}}{dq^{2}}-i\hbar Yq\frac{d\psi_{n}}{dq}+\left(\frac{Xq^{2}}{2}\!+\!Wq\!-\! i\hbar\frac{Y}{2}\right)\!\psi_{n}\!=\! E_{n}\psi_{n}, \end{equation} and the eigenfunctions $\psi_{n}(q;x)$ are of the form \begin{eqnarray}\label{ghoWq:wavefunctgen} &&\psi_{n}(q;x)\equiv\nonumber\\ &&\left(\frac{\omega}{Z\hbar}\right)^{1/4}\chi_{n}\! \left[\left(q+\frac{WZ}{\omega^{2}}\right)\sqrt{\frac{\omega}{Z\hbar}}\right]\exp\!\left(-\frac{iYq^{2}}{2Z\hbar}\right), \end{eqnarray} where once again $\omega=(XZ-Y^{2})^{1/2}$ which entails $XZ-Y^{2}>0$. 
By substituting Eq.~(\ref{ghoWq:wavefunctgen}) into Eq.~(\ref{QIM}) and using Eq.~(\ref{Hermite}), we find that the components of the quantum metric $g^{(n)}_{ij}(x)$ are given by \begin{widetext} \begin{multline}\label{ghoWq:QIM} g^{(n)}_{ij}(x)=\frac{n^{2}+n+1}{32\omega^{4}}\begin{pmatrix}0 & 0 & 0 & 0\\ 0 & Z^{2} & -2YZ & 2Y^{2}-XZ\\ 0 & -2YZ & 4XZ & -2XY\\ 0 & 2Y^{2}-XZ & -2XY & X^{2} \end{pmatrix}\\ +\frac{n+\frac{1}{2}}{\hbar\omega^{7}}\begin{pmatrix}Z\omega^{4} & -WZ^{2}\omega^{2} & 2WYZ\omega^{2} & -WY^{2}\omega^{2}\\ -WZ^{2}\omega^{2} & W^{2}Z^{3} & -2W^{2}YZ^{2} & W^{2}Y^{2}Z\\ 2WYZ\omega^{2} & -2W^{2}YZ^{2} & W^{2}Z(3Y^{2}+XZ) & -W^{2}Y(Y^{2}+XZ)\\ -WY^{2}\omega^{2} & W^{2}Y^{2}Z & -W^{2}Y(Y^{2}+XZ) & W^{2}XY^{2} \end{pmatrix}. \end{multline} \end{widetext} We can see that the classical metric~(\ref{ghoWq:classmetric}) and the quantum metric (\ref{ghoWq:QIM}) have exactly the same functional dependence on the adiabatic parameters. Hence we corroborate once again that the metric~(\ref{gclas:metric}) is the classical analog of the quantum metric tensor~(\ref{QIM}). Remarkably, by using the Bohr-Sommerfeld quantization rule~(\ref{gho:action}), it follows that the relation~(\ref{gho:relation2}) also holds for the metrics (\ref{ghoWq:classmetric}) and~(\ref{ghoWq:QIM}). Note that in this example, as well as in the previous one, the metrics $g_{ij}(I;x)$ and $g^{(n)}_{ij}(x)$ have vanishing determinant. However, here the rank of the metrics (\ref{ghoWq:classmetric}) and (\ref{ghoWq:QIM}) is three, which shows the existence of a redundant parameter. In particular, if we take $\{y^{i'}\}=(W,X,Y)$ ($i',j',\dots\!=0,1,2$) as the adiabatic parameters and $Z=Z_0$ as a nonvanishing constant, then the classical metric reads \begin{multline}\label{ghoWq:WXY} g_{i'j'}(I;y)=\frac{Z_0 I^{2}}{32\omega^{4}}\begin{pmatrix}0 & 0 & 0 \\ 0 & Z_0 & -2Y \\ 0 & -2Y & 4X \end{pmatrix}\\ +\frac{Z_0 I}{\omega^{7}}\begin{pmatrix}\omega^{4} & -WZ_0\omega^{2} & 2WY\omega^{2} \\ -WZ_0\omega^{2} & W^{2}Z_0^{2} & -2W^{2}YZ_0 \\ 2WY\omega^{2} & -2W^{2}YZ_0 & W^{2}(3Y^{2}+XZ_0) \end{pmatrix}, \end{multline} and its determinant $\det[g_{i'j'}(I;y)]=\frac{Z_0^3 I^4}{256 \omega^{12}} (I \omega^{3}+8W^2 Z_0)$ is different from zero. To conclude this example, let us obtain the corresponding classical and quantum connections and curvatures. Classically, by applying Eqs.~(\ref{gclas:connection}) and (\ref{gclas:curvature}) to the functions $G_i$ given by Eq.~(\ref{ghoWq:Gi}), we obtain the components of the connection, \begin{eqnarray}\label{ghoWq:Hconnetion} &&A_{0}(I;x)=A_{1}(I;x)=0,\quad A_{2}(I;x)=\frac{I}{2\omega}+\frac{W^{2}Z}{2\omega^{4}}, \nonumber\\ &&A_{3}(I;x)=-\frac{YI}{2Z\omega}-\frac{W^{2}Y}{2\omega^{4}}, \end{eqnarray} and the components of the curvature, which are displayed in matrix form, \begin{eqnarray}\label{ghoWq:Hcurvature} && F_{ij}(I;x)=\frac{I}{4\omega^{3}}\begin{pmatrix}0 & 0 & 0 & 0\\ 0 & 0 & -Z & Y\\ 0 & Z & 0 & -X\\ 0 & -Y & X & 0 \end{pmatrix} \nonumber\\ &&+\frac{1}{\omega^{6}}\begin{pmatrix}0 & 0 & WZ\omega^{2} & -WY\omega^{2}\\ 0 & 0 & -W^{2}Z^{2} & W^{2}YZ\\ -WZ\omega^{2} & W^{2}Z^{2} & 0 & -W^{2}Y^{2}\\ WY\omega^{2} & -W^{2}YZ & W^{2}Y^{2} & 0 \end{pmatrix} \!, \end{eqnarray} respectively.
On the quantum side, Berry's connection and curvature obtained from the eigenfunctions~(\ref{ghoWq:wavefunctgen}) are \begin{eqnarray}\label{ghoWq:Bconnetion} &&A_{0}^{(n)}(x)=A_{1}^{(n)}(x)=0,\qquad A_{2}^{(n)}(x)=\frac{n+\frac{1}{2}}{2\omega}+\frac{W^{2}Z}{2\hbar\omega^{4}},\nonumber\\ &&A_{3}^{(n)}(x)=-\frac{\left(n+\frac{1}{2}\right)Y}{2Z\omega}-\frac{W^{2}Y}{2\hbar\omega^{4}}, \end{eqnarray} and \begin{eqnarray}\label{ghoWq:Bcurvature} &&F^{(n)}_{ij}(x)=\frac{n+\frac{1}{2}}{4\omega^{3}}\begin{pmatrix}0 & 0 & 0 & 0\\ 0 & 0 & -Z & Y\\ 0 & Z & 0 & -X\\ 0 & -Y & X & 0 \end{pmatrix} \nonumber \\ && +\frac{1}{\hbar\omega^{6}} \! \begin{pmatrix}0 & 0 & WZ\omega^{2} & -WY\omega^{2}\\ 0 & 0 & -W^{2}Z^{2} & W^{2}YZ\\ -WZ\omega^{2} & W^{2}Z^{2} & 0 & -W^{2}Y^{2}\\ WY\omega^{2} & -W^{2}YZ & W^{2}Y^{2} & 0 \end{pmatrix}, \ \end{eqnarray} respectively. Here we used once again Eq.~(\ref{Hermite}). By comparing Eqs.~(\ref{ghoWq:Hconnetion}) and~(\ref{ghoWq:Bconnetion}) as well as Eqs.~(\ref{ghoWq:Hcurvature}) and~(\ref{ghoWq:Bcurvature}), it turns out that the relations (\ref{gho:Arelation}) and (\ref{gho:Frelation}) hold provided that the Bohr-Sommerfeld quantization rule~(\ref{gho:action}) is taken into account, i.e., when $\beta=1/\hbar$ in Eq.~(\ref{gho:beta}). \subsection{Quartic anharmonic oscillator} In this example, we focus on the classical quartic anharmonic oscillator, which is defined by the Hamiltonian \begin{eqnarray}\label{quartic:Hamiltonial} H=\frac{p^{2}}{2m}+\frac{k}{2}q^{2}+ \frac{\lambda}{4!}q^{4}, \end{eqnarray} where $x=\{x^{i}\}=(m,k,\lambda)$ with $i=1,2,3$ are the adiabatic parameters. In this case, in contrast to the previous examples, we need to resort to canonical perturbation theory in order to find the functions $G_i$. With this in mind, the starting point is to decompose the Hamiltonian~(\ref{quartic:Hamiltonial}) in the form $H=H_{0}+\lambda H_{1}$ where \begin{equation}\label{quartic:H0H1} H_{0}=\frac{p^{2}}{2m}+\frac{k}{2}q^{2}, \qquad H_{1}=\frac{q^{4}}{4!}, \end{equation} and we assume $0\leq\lambda\ll1$. Here, $H_{0}$ is the Hamiltonian of the unperturbed problem, for which the action-angle variables $(\varphi_{0},I_{0})$ are well known and allow us to express the variables $(q,p)$ as \begin{eqnarray} q(\varphi_{0},I_{0};x)&=&\left(\frac{2I_{0}}{m\omega_{0}}\right)^{1/2} \sin\varphi_{0}, \label{quartic:q}\\ p(\varphi_{0},I_{0};x)&=&\left(2m\omega_{0}I_{0} \right)^{1/2}\cos\varphi_{0},\label{quartic:p} \end{eqnarray} where $\omega_{0}=\left(k/m\right)^{1/2}$ is the unperturbed frequency. Furthermore, $H_{1}$ is the perturbative potential. Next we assume that the type 2 generating function $W(\varphi_{0},I;x)$ of the canonical transformation from $(\varphi_{0},I_{0})$ to the action-angle variables $(\varphi,I)$ of the total problem $H(I;x)$ can be expanded in a power series in $\lambda$: \begin{eqnarray} W(\varphi_{0},I;x)&=&\varphi_{0}I+\lambda W_{1}(\varphi_{0},I;x)+\lambda^{2}W_{2}(\varphi_{0},I;x)\nonumber\\ &&+\lambda^{3}W_{3}(\varphi_{0},I;x)+{\cal O}(\lambda^{4}), \end{eqnarray} where $W_{1},W_{2},\dots,$ are functions to be determined.
Thus, the equations of the canonical transformation, $\varphi(\varphi_{0},I;x)=\partial W(\varphi_{0},I;x)/\partial I$ and $I_{0}(\varphi_{0},I;x)=\partial W(\varphi_{0},I;x)/\partial \varphi_{0}$, take the form \begin{eqnarray} \varphi(\varphi_{0},I;x)&=&\varphi_{0}+\lambda\frac{\partial W_{1}(\varphi_{0},I;x)}{\partial I}+\lambda^{2}\frac{\partial W_{2}(\varphi_{0},I;x)}{\partial I}\nonumber\\ &&+\lambda^{3}\frac{\partial W_{3}(\varphi_{0},I;x)}{\partial I}+{\cal O}(\lambda^{4})\label{quartic:varphi0}, \end{eqnarray} and \begin{eqnarray} I_{0}(\varphi_{0},I;x)&=&I+\lambda\frac{\partial W_{1}(\varphi_{0},I;x)}{\partial\varphi_{0}}+\lambda^{2}\frac{\partial W_{2}(\varphi_{0},I;x)}{\partial\varphi_{0}}\nonumber\\ &&+\lambda^{3}\frac{\partial W_{3}(\varphi_{0},I;x)}{\partial\varphi_{0}}+{\cal O}(\lambda^{4}), \label{quartic:I0} \end{eqnarray} respectively. Following canonical perturbation theory and working up to third order in $\lambda$, the functions $W_{1}$, $W_{2}$, and $W_{3}$ can be obtained by solving the differential equations~\cite{goldstein2000,dittrich2017} \begin{equation}\label{quartic:diffW} \omega_{0}\frac{\partial W_{\mu}(\varphi_{0},I;x)}{\partial\varphi_{0}}=\left<\Phi_{\mu}(\varphi_{0},I;x)\right>_0-\Phi_{\mu}(\varphi_{0},I;x), \end{equation} where $\left< \cdot \right>_{0}$ denotes the average with respect to $\varphi_{0}$ and, in our case, $\Phi_{1}=H_{1}$, $\Phi_{2}=\frac{\partial W_{1}}{\partial\varphi_{0}} \frac{\partial H_{1}}{\partial I}$ and $\Phi_{3}=\frac{1}{2}\left(\frac{\partial W_{1}}{\partial\varphi_{0}}\right)^2\frac{\partial^2 H_{1}}{\partial I^2}+\frac{\partial W_{2}}{\partial\varphi_{0}}\frac{\partial H_{1}}{\partial I}$. Explicitly, these functions are given by \begin{subequations} \begin{eqnarray} &&\Phi_{1}(\varphi_{0},I;x)=\frac{I^{2}\sin^{4}\varphi_{0}}{6m^{2}\omega_{0}^{2}},\\ &&\Phi_{2}(\varphi_{0},I;x)=-\frac{I^{3}\sin^{4}\varphi_{0}}{144m^{4}\omega_{0}^{5}}\left(8\sin^{4}\varphi_{0}-3\right),\\ &&\Phi_{3}(\varphi_{0},I;x)=\frac{I^{4}\sin^{4}\varphi_{0}}{13824m^{6}\omega_{0}^{8}}\left(320\sin^{8}\varphi_{0}-144\sin^{4}\varphi_{0}-25\right), \nonumber\\ \end{eqnarray} \end{subequations} and together with Eq.~(\ref{quartic:diffW}) they imply \begin{subequations} \begin{eqnarray} &&W_{1}(\varphi_{0},I;x)=\frac{I^{2}}{192m^{2}\omega_{0}^{3}}(8\sin2\varphi_{0}-\sin4\varphi_{0}),\\ &&W_{2}(\varphi_{0},I;x)=\frac{I^{3}}{55296m^{4}\omega_{0}^{6}} \left(-384\sin2\varphi_{0}+132\sin4\varphi_{0}\right.\nonumber\\ &&\left.-32\sin6\varphi_{0}+3\sin8\varphi_{0}\right),\\ &&W_{3}(\varphi_{0},I;x)=\frac{I^{4}}{5308416m^{6}\omega_{0}^{9}}(9264\sin2\varphi_{0}-4101\sin4\varphi_{0}\nonumber\\ &&+1624\sin6\varphi_{0}\!-\!441\sin8\varphi_{0}\!+\!72\sin10\varphi_{0}\!-\!5\sin12\varphi_{0}). \ \ \ \ \ \ \end{eqnarray} \end{subequations} Having found $W(\varphi_{0},I;x)$, it is straightforward to obtain the generating function $S(\varphi,I;x)$ of the canonical transformation $(q,p) \rightarrow (\varphi,I)$.
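Before turning to $S(\varphi,I;x)$, let us verify the first of these expressions (a consistency check added for the reader's convenience). For $\mu=1$, using $\left<\sin^{4}\varphi_{0}\right>_{0}=3/8$ and the identity $\sin^{4}\varphi_{0}=\tfrac{3}{8}-\tfrac{1}{2}\cos2\varphi_{0}+\tfrac{1}{8}\cos4\varphi_{0}$, Eq.~(\ref{quartic:diffW}) reads
\begin{equation}
\omega_{0}\frac{\partial W_{1}(\varphi_{0},I;x)}{\partial\varphi_{0}}=\frac{I^{2}}{6m^{2}\omega_{0}^{2}}\left(\frac{1}{2}\cos2\varphi_{0}-\frac{1}{8}\cos4\varphi_{0}\right),
\end{equation}
and integrating over $\varphi_{0}$ (fixing the integration constant so that $W_{1}$ has vanishing average) indeed reproduces $W_{1}(\varphi_{0},I;x)$ above.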
Since the transformation $(q,p) \rightarrow (\varphi,I)$ can be regarded as the composition of the successive canonical transformations $(q,p)\rightarrow(\varphi_{0},I_{0})$ and $(\varphi_{0},I_{0})\rightarrow(\varphi,I)$, generated, respectively, by $S_{0}(q,I_{0};x)$ and $W(\varphi_{0},I;x)$, we have that $S$ is given by \begin{equation} \label{quartic:S} S(q,I;x)=S_{0}(q,I_{0};x)+W(\varphi_{0},I;x)-\varphi_{0}I_{0}, \end{equation} where the function $S_{0}$ in terms of the variables $(\varphi_{0},I_{0})$ reads \begin{equation}\label{quartic:S0} S_{0}(\varphi_{0},I_{0};x)=I_{0}(\varphi_{0}+\sin\varphi_{0}\cos\varphi_{0}). \end{equation} Note that substituting Eq.~(\ref{quartic:I0}) into Eqs.~(\ref{quartic:q}), (\ref{quartic:p}), and (\ref{quartic:S}), we can write the variables $q$ and $p$ and the function $S$ in terms of $\varphi_{0}$ and~$I$. Now, to compute the metric~(\ref{gclas:metric}) we first need to obtain the functions $G_{i}(\varphi,I;x)=p(\partial_{i}q)_{\varphi,I}-(\partial_{i}S)_{\varphi,I}$, where the derivatives are taken at fixed action-angle variables $(\varphi,I)$. These derivatives can be computed by employing the following useful formula: \begin{equation}\label{quartic:formula} \left(\partial_i \mathcal{F}\right)_{\varphi,I}=\left(\partial_i \mathcal{F}\right)_{\varphi_{0},I}-\frac{(\partial \mathcal{F}/\partial\varphi_{0})_{I,x}}{(\partial\varphi/\partial\varphi_{0})_{I,x}}\left(\partial_i \varphi\right)_{\varphi_{0},I}, \end{equation} where $\mathcal{F}(\varphi_{0},I;x)=\mathcal{F}(\varphi(\varphi_{0},I;x),I;x)$ is $q(\varphi_{0},I;x)$ or $S(\varphi_{0},I;x)$. Notice that we can use Eq.~(\ref{quartic:varphi0}) to compute $(\partial\varphi/\partial\varphi_{0})_{I,x}$ and $\left(\partial_i \varphi\right)_{\varphi_{0},I}$.
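For completeness, we sketch the origin of Eq.~(\ref{quartic:formula}) (an elementary chain-rule argument added here). Differentiating $\mathcal{F}(\varphi_{0},I;x)=\mathcal{F}(\varphi(\varphi_{0},I;x),I;x)$ with respect to $x^{i}$ at fixed $(\varphi_{0},I)$ gives
\begin{equation}
\left(\partial_{i}\mathcal{F}\right)_{\varphi_{0},I}=\left(\partial_{i}\mathcal{F}\right)_{\varphi,I}+\left(\frac{\partial\mathcal{F}}{\partial\varphi}\right)_{I,x}\left(\partial_{i}\varphi\right)_{\varphi_{0},I},
\end{equation}
while $\left(\partial\mathcal{F}/\partial\varphi\right)_{I,x}=\left(\partial\mathcal{F}/\partial\varphi_{0}\right)_{I,x}/\left(\partial\varphi/\partial\varphi_{0}\right)_{I,x}$; solving for $\left(\partial_{i}\mathcal{F}\right)_{\varphi,I}$ yields Eq.~(\ref{quartic:formula}).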
After carrying out these calculations and retaining terms correct to second order in $\lambda$ (since derivatives with respect to this parameter are involved), we arrive at the functions $G_i$ in terms of $\varphi_{0}$ and~$I$, namely \begin{equation}\label{quartic:G1} G_i(\varphi_{0},I;x)=\alpha_{i0}+\alpha_{i1} \lambda +\alpha_{i2} \lambda^2, \end{equation} where in the case $i=1$: \begin{subequations} \begin{eqnarray} \alpha_{10}&=&-\frac{I \sin 2 \varphi _0}{4 m},\\ \alpha_{11}&=&-\frac{I^2 \sin^3\varphi _0}{48 \sqrt{k^3 m^3}}\left(\cos 3 \varphi _0-2 \cos \varphi _0\right),\\ \alpha_{12}&=&-\frac{I^3 }{55296 k^3 m^2} \left(318 \sin 2 \varphi _0-204 \sin 4 \varphi _0\right.\nonumber\\ &&\left.+95 \sin 6 \varphi _0-27 \sin 8 \varphi _0+3 \sin 10 \varphi _0\right), \end{eqnarray} \end{subequations} in the case $i=2$: \begin{subequations} \begin{eqnarray} \alpha_{20}&=&-\frac{I \sin 2 \varphi _0}{4 k},\\ \alpha_{21}&=&\frac{I^2}{384 \sqrt{k^5 m}}\left(23 \sin 2 \varphi _0-7 \sin 4 \varphi _0+\sin 6 \varphi _0\right), \ \ \ \ \ \\ \alpha_{22}&=&-\frac{I^3 }{18432 k^4 m}\left(362 \sin 2 \varphi _0-156 \sin 4 \varphi _0\right.\nonumber\\ &&\left.+53 \sin 6 \varphi _0-11 \sin 8 \varphi _0+\sin 10 \varphi _0\right), \end{eqnarray} \end{subequations} and in the case $i=3$: \begin{subequations} \begin{eqnarray} &&\alpha_{30}=\frac{I^2 }{192 \sqrt{k^3 m}}\left(\sin 4 \varphi _0-8 \sin 2 \varphi _0\right)\\ &&\alpha_{31}=\frac{I^3}{27648 k^3 m}\left(384 \sin 2 \varphi _0-132 \sin 4 \varphi _0\right.\nonumber\\ &&\left.\hspace{9mm}+32 \sin 6 \varphi _0-3 \sin 8 \varphi _0\right),\\ &&\alpha_{32}=\frac{I^4 }{1769472 \sqrt{k^{9} m^{3}}} \left(-9264 \sin 2 \varphi _0+4101 \sin 4 \varphi _0\right.\nonumber\\ &&\left.-1624 \sin 6 \varphi _0+441 \sin 8 \varphi _0-72 \sin 10 \varphi _0+5 \sin 12 \varphi _0\right).\nonumber\\ \end{eqnarray} \end{subequations} Finally, substituting $G_i(\varphi_{0},I;x)$ into Eq.~(\ref{gclas:metric}) and writing the average over the angle variable~$\varphi$ as \begin{equation} \left< f(\varphi) \right>\!=\!\frac{1}{2 \pi}\!\int_0^{2 \pi} \! d\varphi f(\varphi)\!=\!\frac{1}{2 \pi}\! \int_0^{2 \pi} \! d\varphi_0 \! \left(\frac{\partial\varphi}{\partial\varphi_{0}}\right)_{I,x} f(\varphi_0),\nonumber \end{equation} we obtain the components of the classical metric $g_{ij}(I;x)$ correct to second order in $\lambda$: \begin{eqnarray}\label{quartic:classmetric} &&g_{11}(I;x)=\frac{I^{2}}{32m^{2}}-\frac{\lambda I^{3}}{256\sqrt{m^{5}k^{3}}}+\frac{47\lambda^{2} I^{4}}{32768m^{3}k^{3}},\nonumber\\ &&g_{12}(I;x)=\frac{I^{2}}{32mk}-\frac{7\lambda I^{3}}{768\sqrt{m^{3}k^{5}}}+\frac{347\lambda^{2} I^{4}}{98304m^{2}k^{4}},\nonumber\\ &&g_{13}(I;x)=\frac{I^{3}}{192\sqrt{m^{3}k^{3}}}-\frac{103\lambda I^{4}}{49152m^{2}k^{3}}+\frac{15\lambda^{2} I^{5}}{16384\sqrt{m^{5}k^{9}}},\nonumber\\ &&g_{22}(I;x)=\frac{I^{2}}{32k^{2}}-\frac{11\lambda I^{3}}{768\sqrt{mk^{7}}}+\frac{1919\lambda^{2}I^{4}}{294912mk^{5}},\nonumber\\ &&g_{23}(I;x)=\frac{I^{3}}{192\sqrt{mk^{5}}}-\frac{439\lambda I^{4}}{147456mk^{4}}+\frac{7\lambda^{2} I^{5}}{4608\sqrt{m^{3}k^{11}}},\nonumber\\ &&g_{33}(I;x)\!=\!\frac{65I^{4}}{73728mk^{3}}\!-\!\frac{89\lambda I^{5}}{147456\sqrt{m^{3}k^{9}}}\!+\!\frac{130621 \lambda^{2} I^{6}}{382205952m^{2}k^{6}}. \nonumber\\ \end{eqnarray} Now we are interested in contrasting Eq.~(\ref{quartic:classmetric}) with its quantum counterpart. 
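Before doing so, we record a quick zeroth-order consistency check (added here; it only uses the fact that Eq.~(\ref{gclas:metric}) is the covariance $\left<G_{i}G_{j}\right>-\left<G_{i}\right>\left<G_{j}\right>$ over the angle variable). At $\lambda=0$ one has $\varphi=\varphi_{0}$ and $G_{1}=\alpha_{10}=-I\sin2\varphi_{0}/4m$, so that
\begin{equation}
g_{11}(I;x)\big|_{\lambda=0}=\frac{I^{2}}{16m^{2}}\left<\sin^{2}2\varphi_{0}\right>=\frac{I^{2}}{32m^{2}},
\end{equation}
which is precisely the leading term of $g_{11}$ in Eq.~(\ref{quartic:classmetric}).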
For the ground state of the quantum quartic anharmonic oscillator, by using Eq.~(\ref{QIM}) and following a perturbative treatment, we find the corresponding components of the quantum metric tensor $g^{(0)}_{ij}$ with terms up to second order in $\lambda$ (see the Appendix for the details): \begin{eqnarray}\label{quartic:QIM} &&g^{(0)}_{11}(x)=\frac{1}{32m^{2}}-\frac{3\hbar\lambda}{512\sqrt{m^{5}k^{3}}}+\frac{59\hbar^{2}\lambda^{2}}{16384m^{3}k^{3}},\nonumber\\ &&g^{(0)}_{12}(x)=\frac{1}{32mk}-\frac{7\hbar\lambda}{512\sqrt{m^{3}k^{5}}}+\frac{143\hbar^{2}\lambda^{2}}{16384m^{2}k^{4}},\nonumber\\ &&g^{(0)}_{13}(x)=\frac{\hbar}{128\sqrt{m^{3}k^{3}}}-\frac{21\hbar^{2}\lambda}{4096m^{2}k^{3}}+\frac{2353\hbar^{3}\lambda^{2}}{589824\sqrt{m^{5}k^{9}}},\nonumber\\ &&g^{(0)}_{22}(x)=\frac{1}{32k^{2}}-\frac{11\hbar\lambda}{512\sqrt{mk^{7}}}+\frac{785\hbar^{2}\lambda^{2}}{49152mk^{5}},\nonumber\\ &&g^{(0)}_{23}(x)=\frac{\hbar}{128\sqrt{mk^{5}}}-\frac{89\hbar^{2}\lambda}{12288mk^{4}}+\frac{3841\hbar^{3}\lambda^{2}}{589824\sqrt{m^{3}k^{11}}},\nonumber\\ &&g^{(0)}_{33}(x)=\frac{13\hbar^{2}}{6144mk^{3}}\!-\!\frac{31\hbar^{3}\lambda}{12288\sqrt{m^{3}k^{9}}}\!+\!\frac{57227\hbar^{4}\lambda^{2}}{21233664m^{2}k^{6}}. \ \ \ \ \end{eqnarray} Note that by multiplying the components $g^{(0)}_{ij}$ in Eq.~(\ref{quartic:QIM}) by $\hbar^2$ and comparing the result with the corresponding components $g_{ij}(I;x)$ in Eq.~(\ref{quartic:classmetric}), we have that terms with the same powers of $\hbar$ and $I$ have exactly the same functional dependence on the parameters $m$, $k$, and $\lambda$. Then, to match Eqs.~(\ref{quartic:classmetric}) and~(\ref{quartic:QIM}), it is reasonable to consider $g^{(0)}_{ij}=\frac{1}{\hbar^2} g_{ij}(I;x)$. By doing this, we find the following identifications for the ground state: $I^{2}=\hbar^{2}$, $I^{3}=\frac{3}{2}\hbar^{3}$, and $I^{6}=\frac{1030086}{130621}\hbar^{6}$. (For instance, equating the $O(\lambda)$ terms of $g_{11}$ and $\hbar^{2}g^{(0)}_{11}$ gives $I^{3}/256=3\hbar^{3}/512$, that is, $I^{3}=\frac{3}{2}\hbar^{3}$.) In the case of $I^{4}$ and $I^{5}$, the identifications are not unique, although they differ from one another only slightly. Indeed, for $I^{4}$ we find $I^4\approx2.4\hbar^{4},2.43\hbar^{4},2.44\hbar^{4},2.45\hbar^{4},2.47\hbar^{4},2.51\hbar^{4}$, whereas for $I^{5}$ we find $I^5\approx4.18\hbar^{5},4.29\hbar^{5},4.35\hbar^{5}$. Therefore, $g^{(0)}_{ij}$ can only be obtained in an approximate way from $g_{ij}(I;x)$ through the relation $g^{(0)}_{ij}\approx\frac{1}{\hbar^2} g_{ij}(I;x)$ with the appropriate identifications. This result is somewhat expected because we have relied on perturbation theory to arrive at Eqs.~(\ref{quartic:classmetric}) and~(\ref{quartic:QIM}). Finally, it is worth mentioning that the determinants of the metrics defined by the components~(\ref{quartic:classmetric}) and~(\ref{quartic:QIM}), obtained by keeping terms up to second order in $\lambda$, are zero. Nonetheless, if one of the parameters is left fixed, then the corresponding (classical or quantum) metric could have a nonvanishing determinant. \section{Alternative expressions for the quantum metric tensor and Berry's connection}\label{sec:alternative} We now extend the use of the classical functions $G_i$ given by Eq.~(\ref{classical:G}) to the quantum case. To this end we start by promoting the functions $G_i(q, p;x)$, expressed in terms of the variables $(q,p)$, to quantum operators $\hat{G}_i(\hat{q},\hat{p};x)$ which we assume to be Hermitian.
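A brief remark on operator ordering (our gloss; the only requirement imposed above is Hermiticity): since $\hat{q}$ and $\hat{p}$ do not commute, a classical monomial such as $qp$ does not determine $\hat{G}_i$ uniquely. We resolve this ambiguity by the symmetric (Weyl) ordering $qp\rightarrow(\hat{q}\hat{p}+\hat{p}\hat{q})/2$, which is the minimal choice rendering $\hat{G}_i$ Hermitian and is the prescription used in Eq.~(\ref{ghoWq2:Goperator}) below.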
By analogy with the classical case where $G_i \delta x^i$ generates a displacement in the parameter space, it is reasonable to consider $\delta x^i\hat{G}_i(\hat{q}, \hat{p};x)$ as the generator of the infinitesimal displacement of states $\ket{n(x)}\rightarrow\ket{n(x')}$, namely $\ket{n(x')}=\exp(- \frac{{\rm i }}{\hbar} \delta x^i\hat{G}_i)\ket{n(x)}$. Then, we can replace the operators $\hat{P}_i$ in Eq.~(\ref{QIMOperator2}) by the operators $\hat{G}_i$, obtaining \begin{equation}\label{QG:def} {\rm i } \hbar \ket{\partial_i n(x)}=\hat{G}_i \ket{n(x)}. \end{equation} In particular, Eq.~(\ref{QG:def}) gives $\ket{\partial_i n}=-\frac{{\rm i }}{\hbar}\hat{G}_i \ket{n}$, so that $\braket{\partial_i n}{\partial_j n}=\hbar^{-2}\langle \hat{G}_i \hat{G}_j \rangle_n$ and $\braket{\partial_i n}{n}\braket{n}{\partial_j n}=\hbar^{-2}\langle \hat{G}_i \rangle_n \langle \hat{G}_j \rangle_n$. This allows us to write down the quantum metric tensor~(\ref{QIM}) [or Eq.~(\ref{QIM2})] in terms of $\hat{G}_i(\hat{q},\hat{p};x)$ as \begin{equation}\label{QG:QIM} g^{(n)}_{ij}(x) = \frac{1}{\hbar^2} {\rm Re} \left( \langle \hat{G}_i \hat{G}_j \rangle_n - \langle \hat{G}_i \rangle_n \langle \hat{G}_j\rangle_n \right). \end{equation} Note that on account of the Hermiticity of $\hat{G}_i$, the r.h.s.\ of this expression is symmetric. Similarly, we can recast Berry's connection in terms of the operators $\hat{G}_i(\hat{q},\hat{p};x)$. Indeed, using Eq.~(\ref{QG:def}) and recalling that the expectation values $\langle \hat{G}_i \rangle_n$ are real (by virtue of the Hermiticity of $\hat{G}_i$), we can rewrite Berry's connection, $A^{(n)}_i(x):= - {\rm Im} (\braket{n}{\partial_i n})$, in the following form: \begin{equation}\label{QG:A} A^{(n)}_i(x) = \frac{1}{\hbar} \langle \hat{G}_i \rangle_n. \end{equation} Finally, we note that, taking into account Eqs.~(\ref{QG:def}) and~(\ref{QG:A}), the action of the operator $\Delta \hat{G}_i:=\hat{G}_i-\langle \hat{G}_i \rangle_n$ on the state $\ket{n(x)}$ can be written as \begin{equation} \Delta \hat{G}_i\ket{n(x)}=\left({\rm i }\hbar\partial_{i}-\langle\hat{G}_i\rangle_{n}\right)|n\rangle=i\hbar\left(\partial_{i}+iA_{i}^{(n)}\right)|n\rangle, \end{equation} which resembles the structure of the covariant derivative $D_{i}^{(n)}=\partial_{i}+iA_{i}^{(n)}$ with connection $A_{i}^{(n)}$. In the following example we shall see that Eqs.~(\ref{QG:QIM}) and (\ref{QG:A}) yield the expected results. \subsection*{Example: Generalized harmonic oscillator with a linear term} For this example we consider the quantum generalized harmonic oscillator with a linear term described by the Hamiltonian operator~(\ref{ghoWq:quantumH}). The notation used here is the same as in Example B of Sec.~\ref{Examples}. The starting point is to promote the corresponding classical functions $G_i$ given by (\ref{ghoWq:Gi}) to the quantum operators: \begin{equation}\label{ghoWq2:Goperator} \hat{G}_i(\hat{q},\hat{p};x)\!=\!\frac{1}{2}f_i(x)\! \left( \hat{q}\hat{p}+\hat{p}\hat{q} \right) + g_i(x) \hat{q}^2 + h_i(x) \left( \hat{p} + \frac{Y}{Z}\hat{q} \right) , \end{equation} where $f_i(x)$, $g_i(x)$, and $h_i(x)$ are given by Eqs.~(\ref{ghoWq:f}), (\ref{ghoWq:g}), and (\ref{ghoWq:h}), respectively. Notice that, by construction, the operator~(\ref{ghoWq2:Goperator}) is Hermitian.
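Before computing the metric, it is instructive to evaluate one of these operators explicitly (a worked special case added for illustration). For $i=0$, i.e., the derivative with respect to $W$, the frequency $\omega$ does not depend on $W$, so Eqs.~(\ref{ghoWq:f})--(\ref{ghoWq:h}) give $f_{0}=0$, $g_{0}=0$, and $h_{0}=-Z/\omega^{2}$, whence
\begin{equation}
\hat{G}_{0}=-\frac{Z}{\omega^{2}}\left(\hat{p}+\frac{Y}{Z}\hat{q}\right).
\end{equation}
Since the eigenfunctions~(\ref{ghoWq:wavefunctgen}) give $\langle\hat{q}\rangle_{n}=-WZ/\omega^{2}$ and $\langle\hat{p}\rangle_{n}=WY/\omega^{2}$, we find $\langle\hat{G}_{0}\rangle_{n}=0$, consistent with $A_{0}^{(n)}(x)=0$ in Eq.~(\ref{ghoWq:Bconnetion}).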
Then, using the eigenfunctions~(\ref{ghoWq:wavefunctgen}) and the properties of the Hermite functions~(\ref{Hermite}), we compute the quantum metric tensor~(\ref{QG:QIM}) with the operators~(\ref{ghoWq2:Goperator}), obtaining \begin{eqnarray}\label{ghoWq2:QIM} &&g^{(n)}_{ij}(x) = \frac{1+n+n^2}{2}\left(f_i(x) f_j(x) + \frac{Z^2}{\omega^2} l_i(x) l_j(x) \right) \nonumber\\ && +\frac{\left(n+\frac{1}{2}\right)\omega}{\hbar Z} \left(\frac{4W^2 Z^4}{\omega^6} l_i(x) l_j(x) + m_i(x) m_j(x) \right)\!, \ \ \ \ \ \end{eqnarray} where $l_i(x):=g_i(x)-\frac{Y}{Z} f_i(x)$ and $m_i(x):=h_i(x)-\frac{W Z}{\omega^2} f_i(x)$. It can be verified, by explicit calculation, that Eq.~(\ref{ghoWq2:QIM}) leads directly to Eq.~(\ref{ghoWq:QIM}), which corroborates the validity of Eq.~(\ref{QG:QIM}). Finally, we apply Eq.~(\ref{QG:A}) to the operators (\ref{ghoWq2:Goperator}). The result is \begin{equation} A^{(n)}_i(x) = \left( g_i(x)-\frac{Y}{Z} f_i(x) \right) \left[\left(n+\frac{1}{2}\right)\frac{Z}{\omega} + \frac{W^2 Z^2}{\hbar \omega^4}\right], \end{equation} which leads to Eq.~(\ref{ghoWq:Bconnetion}) and, hence, verifies the validity of Eq.~(\ref{QG:A}). \section{Conclusion}\label{sec:Conclusions} In this paper, we have introduced the metric (\ref{gclas:metric}) for classical integrable systems and shown through examples that it corresponds to the classical counterpart of the quantum metric tensor~(\ref{QIM}). The classical metric is defined on the parameter space and provides a measure of the distance between nearby points in phase space, which is induced by the adiabatic evolution of the classical system. We have investigated the main features of this classical metric. In particular, we have shown that this metric is gauge invariant in the parameter space in the sense that it remains unchanged when we perform the canonical transformation~(\ref{gclas:gauge}), meaning that this classical metric is independent of the ``zero'' point from which we measure the angle variables. Most importantly, we have found for the considered examples that this metric agrees with the quantum metric tensor in rank and in its functional dependence on the parameters. This allowed us to establish the exact relation between both metrics for the generalized harmonic oscillator and the generalized harmonic oscillator with a linear term, provided that the Bohr-Sommerfeld quantization rule for the action variable is taken into account. For the nontrivial example of the quartic anharmonic oscillator, these metrics were calculated by using perturbation theory, and hence we found an approximate relation between them. We have used the generating function (\ref{classical:G2}) of translations in the parameter space as the fundamental object to build the aforementioned classical metric and demonstrated that Hannay's curvature can also be expressed in terms of it, thereby providing a unified treatment of both geometric structures. Finally, we have extended the use of this classical generating function to the quantum case and obtained alternative expressions for the quantum metric tensor and Berry's connection, which are verified for the case of the quantum generalized harmonic oscillator with a linear term. We would like to close by pointing out some remarks. First, it would be interesting to address the possible existence of the classical analog of the non-Abelian quantum metric tensor proposed in Ref.~\cite{YuQuan2010}, and the generalization of the metric~(\ref{gclas:metric}) for the case of classical systems with chaotic dynamics along the lines of Ref.~\cite{Robbins631}.
In particular, those authors first study the quantum case, where they resort to time dependence to find an expression for Berry's curvature. Then they introduce a semiclassical approximation and get an expression for the classical curvature, where instead of employing action-angle variables, they perform the integration restricted to a particular energy shell. This classical curvature then reduces to Hannay's curvature when the system is integrable. In this spirit, since both the quantum and classical cases remain tractable, our proposed classical metric could be generalized along the same lines. Nevertheless, for a nonintegrable Hamiltonian that slightly differs from an integrable Hamiltonian, the classical metric~(\ref{gclas:metric}) might shed some light on the chaotic behavior through the application of canonical perturbation theory. A further generalization of Eq.~(\ref{gclas:metric}) is one wherein the classical metric is invariant under a more general gauge transformation where the shift $\lambda$ in Eq.~(\ref{gclas:gauge}) also depends on the angle variables; this is motivated by the work of Ref.~\cite{Alvarez-Jimenez2016}. Another interesting and useful future consideration is how to generalize the metric~(\ref{gclas:metric}) for a classical field theory. Apart from possible generalizations, the metric~(\ref{gclas:metric}), being the classical analog of the quantum metric tensor, may help to provide more insight into the investigation of quantum phase transitions. Furthermore, the metric~(\ref{gclas:metric}) may be relevant in the context of shortcuts to adiabatic processes in classical integrable systems, which consist in the use of a control Hamiltonian $K_c(\varphi,I;x)$ that turns out to be $K_c= G_i(\varphi,I;x) \dot{x}^i$ and achieves a constant action variable $I$ with arbitrarily fast changing parameters $x$~\cite{Deng2013}. In this line of thought, it may be noted that Ref.~\cite{Bravetti2017} proposes a metric analogous to Eq.~(\ref{gclas:metric}), which emerges in the study of the thermodynamic cost of shortcuts to adiabaticity and defines a distance between the initial and final statistical states of the classical system. \acknowledgments We thank Juan Carlos Del Valle for his valuable help in computing the wave function and the spectrum for the quantum quartic anharmonic oscillator. This work was partially supported by DGAPA-PAPIIT Grants No. IN103716 and No. IN103919, CONACyT project 237503. Daniel Guti\'errez-Ruiz is supported by a CONACyT Ph.D. scholarship (No. 332577). Diego Gonzalez is supported by a DGAPA-UNAM postdoctoral fellowship.
\section{} 1. \emph{Introduction.} \smallskip Let $S$ be an algebraic space and $X$ an $S$-algebraic space. One says that $X$ is a \emph{non-degenerate} $S$-\emph{abelian fibration} if there is an $S$-abelian algebraic space $A$ such that $X$ is an $A$-torsor on $S$ for the \'{e}tale topology, in which case $A$ is the \emph{albanese} of this fibration and hence is uniquely determined. In the following \S 3 we define what it means for $X$ to be an \emph{almost non-degenerate} $S$-\emph{abelian fibration} and with each such fibration associate its \emph{albanese} over its \emph{ramification} $S$-\emph{stack}, and our goal is to support the \emph{Principle} : \emph{Non-uniruled abelian fibrations are almost non-degenerate}. This support is, however, very partial, as the purity theorem below (\cite{grothendieck_abelian} 4.5), on which our arguments crucially rely at one step, fails in positive or mixed characteristics : \smallskip \emph{If $T$ is a locally noetherian regular algebraic space of residue characteristics zero and if $U$ is an open sub-algebraic space of $T$ such that $\mathrm{codim}(T-U, T)\geq 2$, then the functor $A\mapsto A|U$, from the category of $T$-abelian algebraic spaces to the category of $U$-abelian algebraic spaces, is an equivalence.} \smallskip When the conclusion of this purity theorem holds, it largely suffices to consider non-uniruled abelian degenerations over spectra of discrete valuation rings, for which our results are in \S 4. The extension to the global situation in characteristic zero under hypotheses of purity and para-factoriality is in the last section \S 5. Besides some terminology, \S 2 contains a write-up of Gabber's theorem of \emph{purity of branch locus}. \smallskip 2. \emph{Rational curves, uniruled irreducible components, regular minimal models and purity of branch locus.} \smallskip Recall some terminology. \smallskip Let $k$ be an algebraically closed field. \smallskip A $k$-\emph{rational curve} is by definition a separated integral $k$-scheme $C$ of finite type such that $k(\eta)$ is a purely transcendental extension of $k$ of transcendence degree $1$, where $\eta$ denotes the generic point of $C$. \smallskip Let $V$ be a $k$-algebraic space. One says that $V$ \emph{does not contain $k$-rational curves} if every $k$-morphism from a $k$-rational curve $C$ to $V$ factors as $C\to \mathrm{Spec}(k)\to V$ for a certain $k$-point $\mathrm{Spec}(k)\to V$ of $V$. \smallskip If an algebraic space $V$ is quasi-separated and locally of finite type over $k$ and does not contain $k$-rational curves, then the base change of $V$ to every algebraically closed extension $k'$ of $k$ does not contain $k'$-rational curves. \smallskip For a proper $k$-algebraic space $V$, saying that $V$ does not contain $k$-rational curves amounts to saying that every $k$-morphism from the $k$-projective line to $V$ factors through a $k$-point of $V$. Thus, as the $k$-projective line is simply connected, if a proper $k$-algebraic space does not contain $k$-rational curves, neither does its quotient by any finite \'{e}tale $k$-equivalence relation. \smallskip The following result of Murre and Chow, which has its origin in one of Zariski's proofs of his Connectedness/Main Theorem, explains the significance of non-existence of rational curves. \smallskip {\bf Lemma 2.1.} --- \emph{Let $S$ be an algebraic space and $T\to S$ a morphism to $S$ from a connected locally noetherian regular algebraic space $T$.
Let $X$ be a proper $S$-algebraic space whose geometric $S$-fibers do not contain rational curves.} \smallskip \emph{Then every $S$-morphism from a non-empty open sub-algebraic space of $T$ to $X$ extends uniquely to an $S$-morphism from $T$ to $X$.} \begin{proof} This is \cite{grothendieck_rational} 2.6. \end{proof} \smallskip Let $F$ be a finitely generated field over $k$. One says that $F$ is \emph{ruled} over $k$ or that $F/k$ is \emph{ruled}, if $F$ has a $k$-sub-field $K$ such that $\mathrm{Spec}(F)\to\mathrm{Spec}(K)$ is geometrically regular, that is, the extension $F/K$ is \emph{regular} in the sense of Weil, and furthermore such that $\overline{K}\otimes_KF$ is a purely transcendental extension of $\overline{K}$, where $\overline{K}$ is an algebraic closure of $K$. One says that $F$ is \emph{uniruled} over $k$ or that $F/k$ is \emph{uniruled}, if $F$ has a finite extension $F'$ such that $F'/k$ is ruled. \smallskip Let $V$ be a quasi-separated algebraic space locally of finite type over $k$, $\eta$ a maximal point of $V$ and $Z$ the irreducible component of $V$ with reduced induced structure with generic point $\eta$. If $k(\eta)/k$ is ruled (resp. uniruled), one says that $V$ is \emph{ruled} (resp. \emph{uniruled}) at $\eta$ and that $Z$ is a \emph{ruled} (resp. \emph{uniruled}) irreducible component of $V$. \smallskip {\bf Lemma 2.2.} --- \emph{An abelian variety over an algebraically closed field $k$ does not contain $k$-rational curves and thus in particular is not uniruled. A connected smooth algebraic group over an algebraically closed field which is not ruled is an abelian variety.} \begin{proof} See \cite{neron_model} 9.2/4, 9.2/1. \end{proof} \smallskip Let $S$ be the spectrum of a discrete valuation ring and $t$ the generic point of $S$. Let $A$ be an $S$-abelian scheme and $X_t$ an $A_t$-torsor on $t$ for the \'{e}tale topology. Recall (\cite{raynaud_minimal}, p. 82, line -2) that $X_t$ with its $A_t$-action admits a unique extension to an $S$-scheme $X$ with an action by $A$ such that $X$ is projective flat over $S$, regular and such that the morphism \[A\times_SX\to X\times_SX,\ (a, x)\mapsto (a+x, x)\] is finite surjective. The geometric $S$-fibers of $X$ are irreducible and do not contain rational curves. Following N\'{e}ron--Raynaud, one says that $X$ is the \emph{regular} $S$-\emph{minimal model of} $X_t$. The formation of regular minimal models commutes with every formally smooth faithfully flat base change $S'\to S$ of spectra of discrete valuation rings. \smallskip {\bf Lemma 2.3.} --- \emph{Let $S$ be the spectrum of a discrete valuation ring, $t$ the generic point of $S$, $A$ an $S$-abelian scheme and $X_t$ an $A_t$-torsor on $t$ for the \'{e}tale topology. Assume that $X_t$ extends to an $S$-algebraic space $X$ which is proper over $S$, regular and with an irreducible closed $S$-fiber.} \smallskip \emph{Then $X$ is the regular $S$-minimal model of $X_t$.} \begin{proof} Let $s$ be the closed point of $S$ and $x$ the generic point of $X_s$. Notice that $X$ is connected and that there is an open neighborhood of $x$ in $X$ which is a scheme (\cite{raynaud_specialization} 3.3.2). Let $Y$ be the regular $S$-minimal model of $X_t$. The identity $X_t=Y_t$ extends uniquely by (2.1) to an $S$-morphism $f: X\to Y$.
Being proper and birational, $f$ maps $x$ to the generic point of $Y_s$ and induces an isomorphism between spectra of discrete valuation rings \[\mathrm{Spec}(\mathcal{O}_{X, x})\ \widetilde{\to}\ \mathrm{Spec}(\mathcal{O}_{Y, f(x)}).\] By the theorem of \emph{purity of branch locus} (2.4) below, $f$ is \'{e}tale and hence is an isomorphism. \end{proof} \smallskip The following theorem answers the question EGA IV 21.12.14 (v). \smallskip {\bf Theorem 2.4} (Gabber). --- \emph{Let $f: X\to Y$ be a morphism, essentially of finite type, from a normal scheme $X$ to a locally noetherian regular scheme $Y$ such that $f$ is essentially \'{e}tale at all points of $X$ of codimension $\leq 1$.} \smallskip \emph{Then $f$ is essentially \'{e}tale.} \begin{proof} When $f$ is moreover finite, $f$ is \'{e}tale by SGA 2 X 3.4. It follows that $f$ is essentially \'{e}tale at a point $x\in X$ if it is essentially quasi-finite at $x$. For, letting $X_{(x)}$ (resp. $Y_{(f(x))}$, resp. $f_{(x)}$) be the henselization of $X$ (resp. $Y$, resp. $f$) at $x$ (resp. $f(x)$, resp. $x$), one can apply \emph{loc.cit.} to $f_{(x)}: X_{(x)}\to Y_{(f(x))}$, as $f_{(x)}$ is finite by Zariski's Main Theorem. \smallskip So it amounts to showing that $f$ is essentially quasi-finite. It suffices to see that $f$ is essentially quasi-finite at a point $x\in X$ if it is at all generalizations of $x$. \smallskip One may assume that $X$ is local of dimension $\geq 2$ with closed point $x$, that $Y$ is local with closed point $f(x)$, that $f|(X-\{x\})$ is essentially \'{e}tale and that $x=f^{-1}(f(x))$. Notice that $f$ is essentially quasi-finite at $x$ if and only if the extension $k(x)/k(f(x))$ is finite. \smallskip --- \emph{Reduction to the case where $Y$ is excellent }: \smallskip The completion $Y'$ of $Y$ along $f(x)$ is excellent, as is $X'=X\times_YY'$, and the normalization $X''$ of the excellent scheme $X'$ is finite over $X'$. Write $y'$ for the closed point of $Y'$ and let $x'$ be the unique point of $X'$ with image $x$ in $X$ and with image $y'$ in $Y'$. The projection $X''\to X'$ restricts to an isomorphism over $X'-\{x'\}$, since $X'-\{x'\}$, being essentially \'{e}tale over $Y'$, is regular. It suffices to show that the composition $X''\to X'\to Y'$ is essentially quasi-finite at a point $x''$ of $X''$ above $x'$. For then the extension $k(x'')/k(y')$, and hence $k(x')/k(y')$ as well, is finite, so that the extension $k(x)/k(f(x))$ is finite. \smallskip --- \emph{Assume $Y$ excellent. Reduction to the case $\mathrm{dim}(X)=2$ }: \smallskip One has $\mathrm{dim}(Y)>0$. Let $h\in\Gamma(Y, \mathcal{O}_Y)$ be part of a regular system of parameters at $f(x)$, let $Y'=V(h)$ and let $X'=V(h\mathcal{O}_X)=X\times_YY'$. The normalization $X''$ of the excellent scheme $X'$ is finite over $X'$ and the projection $X''\to X'$ restricts to an isomorphism over $X'-\{x\}$, as $X'-\{x\}$, being essentially \'{e}tale over $Y'$, is regular. It suffices to show that the composition $X''\to X'\to Y'$ is essentially quasi-finite at a point $x''\in X''$ above $x\in X'$, for the extensions $k(x'')/k(f(x))$ and $k(x)/k(f(x))$ are then finite. Note that $X''$ is purely of dimension $\mathrm{dim}(X)-1$, which is $\geq 2$ if and only if $\mathrm{dim}(X)>2$. \smallskip --- \emph{Assume $Y$ excellent and $\mathrm{dim}(X)=2$. Then $f^{!}\mathcal{O}_Y=\mathcal{O}_X$ }: \smallskip Now $Y$ being regular local, $\mathcal{O}_Y$ is a dualizing object on $Y$, and so is $f^{!}\mathcal{O}_Y$ on $X$.
As $X$, being normal local of dimension $2$, is Cohen--Macaulay, $f^{!}\mathcal{O}_Y$ has only one non-zero cohomology, say $L$, which is Cohen--Macaulay, concentrated in degree, say $d$. Let $U=X-\{x\}$ and $j: U\to X$ the open immersion. Since $f$ is essentially \'{e}tale on $U$, \[(fj)^{!}\mathcal{O}_Y=j^{!}f^{!}\mathcal{O}_Y=j^*f^{!}\mathcal{O}_Y=j^*(L[-d])=(L|U)[-d]\] is canonically isomorphic to $\mathcal{O}_U$ in $D(U, \mathcal{O}_U)$. That is, $d=0$ and $L|U$ is canonically isomorphic to $\mathcal{O}_U$. Such an isomorphism uniquely extends to an isomorphism from $\mathcal{O}_X$ onto $L=f^{!}\mathcal{O}_Y$, as $\mathrm{prof}_x(\mathcal{O}_X)=\mathrm{prof}_x(L)=2$. \smallskip --- \emph{Assume $Y$ excellent and $\mathrm{dim}(X)=2$. Then $X$ is regular }: \smallskip One has $\mathrm{dim}(Y)>0$. Let $h\in\Gamma(Y, \mathcal{O}_Y)$ be part of a regular system of parameters at $f(x)$, let $Y'=V(h)$, $i: Y'\to Y$ the canonical closed immersion, $f'=f\times_YY': X'\to Y'$, let the normalization of $X'=V(h\mathcal{O}_X)=X\times_YY'$ be $p: X''\to X'$ and let $f''=f'p: X''\to X'\to Y'$ be the composition. \smallskip Notice that $f: X\to Y$ and $i: Y'\to Y$ are tor-independent over $Y$. So, from the identity $f^{!}\mathcal{O}_Y=\mathcal{O}_X$, it follows that $f^{'!}\mathcal{O}_{Y'}=\mathcal{O}_{X'}$. \smallskip Notice next that $f''$ is essentially of complete intersection, for $Y'$ is regular and $X''$, being normal of dimension $1$, is regular. As $f''$ is furthermore essentially \'{e}tale at all points of $X''$ above $X'-\{x\}$, the object $f^{''!}\mathcal{O}_{Y'}$ has a unique non-zero cohomology in degree $0$, which is an invertible $\mathcal{O}_{X''}$-module, and there is a canonical homomorphism, the fundamental class of $f''$, $c(f''): \mathcal{O}_{X''}\to f^{''!}\mathcal{O}_{Y'}$. This $c(f'')$ corresponds by duality to a morphism $p_{*}\mathcal{O}_{X''}\to f^{'!}\mathcal{O}_{Y'}$, which is injective and which, when composed with the canonical homomorphism $\mathcal{O}_{X'}\to p_{*}\mathcal{O}_{X''}$, induces the above identity $\mathcal{O}_{X'}=f^{'!}\mathcal{O}_{Y'}$, as one verifies at each point of $X'-\{x\}$ over which $p$ is an isomorphism. \smallskip So $\mathcal{O}_{X'}=p_{*}\mathcal{O}_{X''}$ and $X'$ is regular. So $X$ is regular. \smallskip --- \emph{Assume $Y$ excellent, $X$ regular and $\mathrm{dim}(X)=2$. Then $f$ is essentially \'{e}tale }: \smallskip Now $f$ is essentially of complete intersection. Its cotangent complex $L_f$ is a perfect complex in $D^{[-1,0]}(X, \mathcal{O}_X)$. With $L_f$ one associates a canonical ``theta divisor'' homomorphism \[\theta: \mathcal{O}_X\to \mathrm{det}(L_f),\] which, as $f|(X-\{x\})$ is essentially \'{e}tale, is an isomorphism outside $x$. As $\mathrm{prof}_x(X)=2$ and $\mathrm{det}(L_f)$ is an invertible $\mathcal{O}_X$-module, $\theta$ is an isomorphism also at $x$. So $f$ is essentially \'{e}tale at $x$ by the Jacobian Criterion. \end{proof} \smallskip 3. \emph{Almost non-degenerate abelian fibrations.} \smallskip {\bf Definition 3.1.} --- \emph{Let $S$ be an algebraic space.
We say that an $S$-algebraic space $X$ is an almost non-degenerate $S$-abelian fibration if there exists in the category of $S$-algebraic spaces a groupoid whose nerve $(X_., d_., s_.)$ satisfies the conditions a) and b) }: \smallskip \emph{a) $X=X_o$.} \smallskip \emph{b) $d_1: X_1\to X_o$ is the structural morphism of an $X_o$-abelian algebraic space with zero section $s_o: X_o\to X_1$.} \smallskip \emph{If, for every smooth morphism $S'\to S$, one has $\mathrm{Coker}(d_o\times_SS', d_1\times_SS')=S'$ in the category of $S'$-algebraic spaces, we say that $X$ is a geometric almost non-degenerate $S$-abelian fibration. If $\mathrm{Coker}(d_o\times_SS', d_1\times_SS')=S'$ holds in the category of $S'$-algebraic spaces for every base change $S'\to S$, we say that $X$ is a universal almost non-degenerate $S$-abelian fibration.} \smallskip \emph{We say that the $S$-stack $[X_.]$ defined by the groupoid $X_.$ is the ramification $S$-stack of $X$.} \smallskip \emph{By a morphism $X\to X'$ of almost non-degenerate $S$-abelian fibrations, we mean a morphism of $S$-groupoids $X_.\to X'_.$.} \smallskip Given any such morphism $f$ from $X$ to $X'$, $f_1: X_1\to X'_1$ is an $f_o$-homomorphism with respect to the abelian algebraic space structures $(d_1, s_o, +)$ on $X_1/X_o$ and on $X'_1/X'_o$ (``Geometric Invariant Theory'' 6.4). \smallskip The isomorphism given by reversal of arrows (``renversement des fl\`{e}ches'') $X_1\to X_1$ transports to $d_o: X_1\to X_o$ from $d_1: X_1\to X_o$ an $X_o$-abelian algebraic space structure with zero section $s_o: X_o\to X_1$. \smallskip The structural morphism $X\to S$ factors, which is the essence of (3.1), as \[X\to [X_.]\to S\] in such a tautological way that the projection $X\to [X_.]$ is a torsor for the lisse-\'{e}tale topology under an $[X_.]$-abelian algebraic stack $A$ verifying $A\times_{[X_.]}X=X_1$. We call $A/[X_.]$ the \emph{albanese} of $X/S$, cf. \cite{basic} 7.2+7.3. \smallskip Notice that $X/S$ is flat (resp. proper, resp. of finite presentation) if and only if $[X_.]/S$ is flat (resp. proper, resp. of finite presentation). \smallskip For every $S$-algebraic space $S'$, $X\times_SS'$ is an almost non-degenerate $S'$-abelian fibration with defining $S'$-groupoid $X_.\times_SS'$. \smallskip Let $E$ be an algebraic stack. We say that an $E$-algebraic stack $F$ is an \emph{almost non-degenerate} $E$-\emph{abelian fibration} if there is a smooth surjective morphism from an algebraic space $S$ to $E$ such that $X=F\times_ES$ is an almost non-degenerate $S$-abelian fibration endowed with a descent datum on its $S$-groupoid $X_.$ relative to $S\to E$. We say that $F/E$ is \emph{geometric} (resp. \emph{universal}) if $X/S$ is. \smallskip If the $S$-groupoid $X_.$ is simply connected with $\mathrm{Coker}(d_o, d_1)=S$, that is, if $[X_.]=S$, then $X/S$ is a \emph{non-degenerate abelian fibration}, namely, a torsor for the \'{e}tale topology under an $S$-abelian algebraic space. One says correspondingly that an almost non-degenerate abelian fibration $F/E$ as above is \emph{non-degenerate} when $F\times_ES/S$ is non-degenerate. \smallskip 4. \emph{Non-uniruled abelian fibrations over spectra of discrete valuation rings.} \smallskip {\bf Theorem 4.1.} --- \emph{Let $S$ be the spectrum of a discrete valuation ring, $t$ the generic point of $S$, $s$ the closed point and $\overline{s}$ the spectrum of an algebraic closure of $k(s)$. Let $A_t$ be a $t$-abelian variety and $X_t$ an $A_t$-torsor on $t$ for the \'{e}tale topology.
Assume that $X_t$ extends to a separated faithfully flat $S$-algebraic space $X$ of finite type such that not all irreducible components of $X_{\overline{s}}$ are uniruled.} \smallskip \emph{Then }: \smallskip \emph{a) There is a spectrum $t'$ of a finite separable extension of $k(t)$ such that, if $S'$ denotes the normalization of $S$ in $t'$, $A_t\times_tt'$ extends to an $S'$-abelian scheme $A'$.} \smallskip \emph{b) Exactly one irreducible component of $X_{\overline{s}}$ is not uniruled. In particular, $X_{\overline{s}}$ is irreducible if it does not have uniruled irreducible components.} \smallskip {\bf Theorem 4.2.} --- \emph{Keep the assumptions of $(4.1)$. Assume furthermore that $X_{\overline{s}}$ is connected, proper and separable.} \smallskip \emph{Then $X$ is proper over $S$ and normal; $X_{\overline{s}}$ has a unique non-uniruled irreducible component; $A_t$ extends to an $S$-abelian scheme $A$; the regular $S$-minimal model of $X_t$ is an $A$-torsor $F$ on $S$ for the \'{e}tale topology; the canonical birational map from $X$ to $F$ is an $S$-rational map $p$ whose domain of definition contains all points where $X$ is geometrically factorial of equal characteristic or regular; $p$ is \'{e}tale at precisely the points of $\mathrm{Dom}(p)$ outside the image in $X$ of the uniruled irreducible components of $X_{\overline{s}}$; and $p^{-1}$ extends to a proper $S$-birational morphism from $F$ onto $X$ if $X_{\overline{s}}$ does not contain $\overline{s}$-rational curves.} \smallskip {\bf Corollary 4.3.} --- \emph{Keep the assumptions of $(4.1)$. Assume furthermore that $X_{\overline{s}}$ is proper, separable and does not have uniruled irreducible components and that $X$ is at each of its geometric points of codimension $\geq 2$ either geometrically factorial of equal characteristic or regular.} \smallskip \emph{Then $A_t$ extends to an $S$-abelian scheme $A$ and $X$ is an $A$-torsor on $S$ for the \'{e}tale topology.} \smallskip {\bf Lemma 4.4.} --- \emph{Let $S$ be the spectrum of an excellent discrete valuation ring with generic point $t$. Let $S'\to S$ be a surjective morphism to $S$, essentially of finite type, from a local integral scheme $S'$ of dimension $1$ with generic point $t'$ such that the extension $k(t')/k(t)$ is regular of transcendence degree $d\geq 1$.} \smallskip \emph{Then there is an $S$-scheme $S_o$, which is the spectrum of a discrete valuation ring and which is quasi-finite surjective over $S$, and there exists a sequence of morphisms, $S_d\to S_{d-1}\to\cdots\to S_1\to S_o$, each of which is smooth, surjective, purely of relative dimension $1$, with geometrically connected fibers, such that the normalization $S''$ of $S'\times_SS_o$ is $S_o$-isomorphic to a localization of $S_d$. In particular, $S''$ is essentially smooth over $S_o$.} \begin{proof} This is \cite{dejong} 2.13. \end{proof} \smallskip {\bf Lemma 4.5.} --- \emph{Let $p: V\to S$ be a morphism, essentially of finite type, from a regular local scheme $V$ of dimension $1$ with closed point $v$ to a regular local scheme $S$ of dimension $>1$ with closed point $s$ such that $p$ is birational and that $p(v)=s$.} \smallskip \emph{Then $k(v)$ has a sub-$k(s)$-extension $K$ such that the extension $k(v)/K$ is purely transcendental.} \begin{proof} Recall the argument of Zariski. Let $f: X\to S$ be the blow up of $S$ along $s$ (EGA II 8.1.3). Then $X$ is regular (EGA IV 19.4.3, 19.4.4), $f$ is proper and there is only one $S$-morphism $p_1: V\to X$ by the valuative criterion of properness.
Write $s_1=p_1(v)$, $S_1=\mathrm{Spec}(\mathcal{O}_{X, s_1})$. Denote the canonical morphism $V\to S_1$ again by $p_1$. Blowing up $S_1$ along $s_1$ and localizing, one obtains similarly $p_2: V\to S_2$. Continuing this way, one finds a projective system of regular local schemes ``$\underleftarrow{\mathrm{Lim}}$'' $S_i$, indexed by $\mathbf{N}$, with $S_o=S$, with each transition morphism being birational, local and essentially of finite type. There is a unique $S$-morphism $(p_i): V\to$ ``$\underleftarrow{\mathrm{Lim}}$'' $S_i$ with $p_o=p$. Write $s_n$ for the closed point of $S_n$, $n\in\mathbf{N}$. \smallskip It suffices to show that the projective system ``$\underleftarrow{\mathrm{Lim}}$'' $S_i$ is essentially constant and has $V$ as its limit. For, if $n$ is the smallest integer such that $V=S_n$, the extension $k(v)/k(s_{n-1})$ is purely transcendental. \smallskip Let $\mathfrak{m}$ denote the ideal of $\mathcal{O}_S$ defining the closed point $s$. For every coherent $S$-ideal $I$ that is non-zero and distinct from $\mathcal{O}_S$, $I\mathcal{O}_X$ is a non-zero sub-ideal of $\mathfrak{m}\mathcal{O}_X=\mathcal{O}_X(1)$ (EGA II 8.1.7). Thus \[I\mathcal{O}_X\otimes_X\mathcal{O}_X(-1)=I_X(-1)\] is a non-zero ideal of $\mathcal{O}_X$. Write $I_1$ for the localization of $I_X(-1)$ at $s_1$. Then $I\mathcal{O}_V$ is \emph{strictly} contained in $I_1\mathcal{O}_V$. If $I_1$ is distinct from $\mathcal{O}_{S_1}$ and if $S_2$ is distinct from $S_1$, one obtains as above an ideal $I_2$ of $\mathcal{O}_{S_2}$ such that $I_1\mathcal{O}_V$ is strictly contained in $I_2\mathcal{O}_V$. As $V$ is noetherian, this process of producing ideals $I_1, I_2, \cdots$ from a given $I$ eventually stops. That is, for a certain integer $N\geq 1$, either $S_N$ is of dimension $1$ (thus is equal to $V$) or $I_N=\mathcal{O}_{S_N}$. Assume that the latter case holds for each coherent $S$-ideal $I$ that is distinct from $0$ and $\mathcal{O}_S$. For otherwise the above assertion is already proven. \smallskip Choose $a_j, b_j\in\Gamma(S, \mathcal{O}_S)$, $a_j\mathcal{O}_V=b_j\mathcal{O}_V$ non-zero, $j=1,\cdots, d$, where $d=\mathrm{deg.tr.}(k(v)/k(s))$, such that the images of the fractions \[a_1/b_1,\ \cdots,\ a_d/b_d\] in $k(v)$ form a basis of transcendence over $k(s)$. By the assumption just made applied to the ideals $a_j\mathcal{O}_S, b_j\mathcal{O}_S$, there exists an integer $N_j\geq 1$ for each $j=1,\cdots, d$ such that either $a_j/b_j$ or $b_j/a_j$ is a section of $\mathcal{O}_{S_{N_j}}$ over $S_{N_j}$. Being invertible on $V$, $a_j/b_j$ or equivalently $b_j/a_j$ is invertible on $S_{N_j}$. Hence, for an integer $N\geq 1$, $k(v)$ is algebraic over $k(s_N)$ and $V$ is essentially quasi-finite over $S_N$. By Zariski's Main Theorem, $V$ is isomorphic to $S_N$. \end{proof} \smallskip 4.6. \emph{Proof of} (4.1). \smallskip See \cite{basic} \S 10 for an application of (4.1) \emph{b}). \smallskip We reduce the proof to that of (4.2). \smallskip --- \emph{Reduction to the case where $S$ is strictly henselian }: \smallskip If $\overline{t}$ denotes the spectrum of a separable closure of $k(t)$ and if $\ell$ is a prime number prime to the characteristic of $k(s)$, the claim \emph{a}) is equivalent to the claim that the $\ell$-adic monodromy representation associated with the $t$-abelian variety $A_t$, \[\rho_{\ell, \overline{t}}: \pi_1(t, \overline{t})\to \mathrm{GL}(H^1(A_{\overline{t}}, \mathbf{Q}_{\ell})),\] when restricted to an inertia subgroup relative to $S$, has finite image (\cite{neron_model} 7.4/5). 
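Let us recall the mechanism behind this equivalence (a reminder of the criterion of N\'{e}ron--Ogg--Shafarevich, which is what \emph{loc.cit.} records; we include it for the reader's convenience) : for $\ell$ invertible on $S$, $A_t$ acquires good reduction over the normalization of $S$ in a finite separable extension $t'$ of $t$ if and only if the corresponding inertia subgroup acts trivially on $H^1(A_{\overline{t}}, \mathbf{Q}_{\ell})$; thus finiteness of the inertia image amounts to the existence of such a $t'$.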
For both assertions \emph{a}), \emph{b}), replacing $S$ by its strict henselization $S_{(\overline{s})}$ at $\overline{s}$, $X$ by $X\times_SS_{(\overline{s})}$ and $A_t$ by $A_t\times_tt^{hs}$, where $t^{hs}$ denotes the generic point of $S_{(\overline{s})}$, one may assume that $S$ is strictly henselian. \smallskip --- \emph{Assume $S$ strictly henselian. Reduction to the case where $S$ is complete }: \smallskip Let $S'$ be the completion of $S$ along $s$, $t'$ the generic point of $S'$ and $\overline{t'}$ the spectrum of a separable closure of $k(t')$ containing a separable closure $k(\overline{t})$ of $k(t)$. \smallskip The projection $t'\to t$ induces by SGA 4 X 2.2.1 an isomorphism \[\pi_1(t', \overline{t'})\ \widetilde{\to}\ \pi_1(t, \overline{t}),\] relative to which the proper base change isomorphism (SGA 4 XII 5.1) \[H^1(A_{\overline{t}}, \mathbf{Q}_{\ell})\ \widetilde{\to}\ H^1(A_{\overline{t'}}, \mathbf{Q}_{\ell})\] is equivariant. Replacing $S$ by $S'$, $X$ by $X\times_SS'$ and $A_t$ by $A_t\times_tt'$, one may assume moreover that $S$ is complete. \smallskip --- \emph{Assume $S$ strictly henselian and complete. Reduction to the case where $X$ is a proper flat $S$-scheme }: \smallskip Let the maximal points of $X_s$ be $x_1,\cdots, x_n$. There is an open neighborhood $V$ of $\{x_1,\cdots, x_n\}$ in $X$ which is a scheme (\cite{raynaud_specialization} 3.3.2). Replacing $X$ by $V$, one may assume that $X$ is a separated flat $S$-scheme of finite type. By Nagata, $X$ admits an open $S$-immersion into a proper $S$-scheme $P$. One may then replace $X$ by its closed image in $P$ and assume that $X$ is a proper flat $S$-scheme. \smallskip --- \emph{Assume $S$ strictly henselian, complete and that $X$ is a proper flat $S$-scheme. Reduction to the case where $X_s$ is separable over $k(s)$ }: \smallskip Notice that $X$ is integral. For, one has $\mathrm{Ass}(X)=\mathrm{Ass}(X_t)$, as $X$ is flat over $S$ (EGA IV 3.3.1). Notice also that $S$, being complete, is excellent. Let the maximal points of $X_s$ be $x_1,\cdots, x_n$. By applying (4.4) to the morphisms $\mathrm{Spec}(\mathcal{O}_{X, x_i})\to S$, one finds an $S$-scheme $S_o$, which is the spectrum of a discrete valuation ring and which is finite surjective over $S$, such that the normalization of $X\times_SS_o$, $X_o$, is smooth over $S_o$ at all maximal points of the closed fiber of $X_o/S_o$. Again, at least one irreducible component of the geometric closed fiber of $X_o/S_o$ is not uniruled. Observe that $X/S$ verifies the claims \emph{a})+\emph{b}) as long as $X_o/S_o$ does. Thus, replacing $S$ by $S_o$, $X$ by $X_o$ and $A_t$ by $A_t\times_tt_o$, where $t_o$ denotes the generic point of $S_o$, one may assume that $X$ is smooth over $S$ at all maximal points of $X_s$. \smallskip The fiber $X_{\overline{s}}$ is connected, as $X$ is proper over $S$. To finish, it suffices to apply (4.2). \smallskip \begin{flushright} $\square$ \end{flushright} \smallskip 4.7. \emph{Proof of} (4.2), (4.3). \smallskip See \cite{basic} 10.2 for an application of (4.3). \smallskip One may assume that $S$ is strictly henselian. \smallskip Being separated faithfully flat over $S$ of finite type with geometric fibers proper and connected, $X$ is by EGA IV 15.7.10 proper over $S$. Moreover, as $X_s$ is separable, $X$ is normal. And, the open sub-algebraic space $V$ of $X$ consisting of all points at which $X\to S$ is smooth is $S$-schematically dense in $X$.
\smallskip Fix an $S$-section $o$ of $V$ (EGA IV 17.16.3) by means of which one identifies $X_t=V_t$ with $A_t$. \smallskip Consider a smoothening of $X$ (\cite{neron_model} 3.1/1, 3.1/3), $f: X'\to X$, which is constructed as the composition of a finite sequence of blow-ups with centers lying above the complement of $V$ in $X$. Let $W'$ be the open sub-algebraic space of $X'$ consisting of all points at which $X'\to S$ is smooth. Naturally, $V$ can be identified with an open sub-algebraic space of $W'$ and one has $W'_t=V_t$. Let $d=\mathrm{dim}(A_t)$. Choose a non-zero section $\omega'\in\Gamma(W', \Omega^d_{W'/S})$ such that the support of the divisor $\mathrm{Div}_{W'}(\omega')$ is \emph{strictly} contained in $W'_s$. Such a section $\omega'$ clearly exists and is unique up to multiplication by a unit of $\Gamma(S, \mathcal{O}_S)$. \smallskip The identification $W'_t=A_t$ provided by the above section $o$ has a unique extension to an $S$-morphism \[p: W'\to A,\] where $A$ denotes the $S$-N\'{e}ron model of $A_t$. \smallskip In the language of N\'{e}ron models, $W'$ is a weak $S$-N\'{e}ron model of $W'_t$ (\cite{neron_model} 3.5/1, 3.5/2). Its open sub-algebraic space $U'=W'-\mathrm{Supp}(\mathrm{Div}_{W'}(\omega'))$ admits a canonical $S$-birational group law (\emph{loc.cit.} 4.3/5) which extends the group structure of $U'_t=A_t$ over $t$. And, the restriction of $p$ to $U'$, $p|U': U'\to A$, is an $S$-schematically dense open immersion, which solves the universal problem of extending the $S$-birational group law of $U'$ to an $S$-group law (\emph{loc.cit.} 4.3/6, 5.1/5). \smallskip Let the maximal points of $X_s$ be $x_1,\cdots, x_n$. They lie in $V$ and can thus be considered as points of $W'$. Notice that there is an open neighborhood of $\{x_1,\cdots, x_n\}$ in $W'$ which is a scheme (\cite{raynaud_specialization} 3.3.2). \smallskip Observe that if a point $x$ among $x_1,\cdots, x_n$ belongs to $V-U'$, that is, if $\mathrm{Div}_{W'}(\omega')_{x}$ is not zero, then \[p: \mathrm{Spec}(\mathcal{O}_{W', x})\to \mathrm{Spec}(\mathcal{O}_{A, p(x)})\] is not an isomorphism. This implies by (4.5) that $X_{\overline{s}}$ is uniruled at $x$. Here we have denoted again by $x$ the unique point of $X_{\overline{s}}$ that projects to $x$ in $X_s$. \smallskip By hypothesis there is at least one point of $\{x_1,\cdots, x_n\}$, say $x_1$, at which $X_{\overline{s}}$ is not uniruled. One thus finds $x_1\in V\cap U'$ and that $A_{\overline{s}}$ is not uniruled at $p(x_1)$. So $A_{\overline{s}}$ is an $\overline{s}$-abelian variety (2.2). So $A$ is an $S$-abelian scheme, $U'_{\overline{s}}$ is irreducible, $X_{\overline{s}}$ is by (4.5) uniruled at all its maximal points other than $x_1$ and $p$ is \'{e}tale at precisely the points of $U'$. \smallskip If $X_{\overline{s}}$ does not contain $\overline{s}$-rational curves, the rational map $p^{-1}$ extends by (2.1) to an $S$-morphism, hence a proper $S$-birational morphism, from $A$ onto $X$. \smallskip Finally, being a trivial $A_t$-torsor, $X_t$ has a trivial $A$-torsor $F$ as its regular $S$-minimal model. The assertion on the points where the canonical birational map from $X$ to $F$ is \'{e}tale (resp. is defined) follows by (2.4) (resp. (5.2)+(5.3) below). Recall (EGA IV 21.13.9, 21.13.11) that a noetherian normal local scheme of dimension $\geq 2$ is factorial (resp. geometrically factorial) if and only if it is para-factorial (resp.
geometric points) of codimension $\geq 2$. Under the conditions of (4.3), such a birational map is everywhere defined and is an $S$-isomorphism. \begin{flushright} $\square$ \end{flushright} \smallskip {\bf Lemma 4.8.} --- \emph{Let $n, \delta_1,\cdots, \delta_n$ be integers $\geq 1$ such that the greatest common divisor of $\delta_1,\cdots, \delta_n$ is $1$. Let $S$ be an algebraic space, $A$ an $S$-abelian algebraic space, $X$ an $A$-torsor on $S$ for the \'{e}tale topology and $S_i$ an $S$-algebraic space finite flat of finite presentation over $S$ of constant degree $\delta_i$, $i=1,\cdots, n$. Suppose that $X$ has sections over all $S_i$, $i=1,\cdots, n$.} \smallskip \emph{Then $X$ is a trivial $A$-torsor.} \begin{proof} By considering $A$ as $\mathrm{Pic}^o_{A^{*}/S}$, $A^{*}$ being the dual abelian algebraic space of $A$, one defines for each $i=1,\cdots, n$ the norm homomorphism \[N_i: \prod_{S_i/S} A_{S_i}\to A.\] The composition \[A\to \prod_{S_i/S}A_{S_i}\stackrel{N_i}{\longrightarrow} A\] is equal to $\delta_i\mathrm{Id}_A$, where \[A\to \prod_{S_i/S}A_{S_i},\ x\mapsto x_{S_i}=x\times_SS_i\] is the adjunction morphism associated with the pair of adjoint functors \[\prod_{S_i/S}-\ ,\ S_i\times_S-.\] Let $\sigma_i$ be a section of $X$ over $S_i$, $i=1,\cdots, n$, and choose integers $e_1,\cdots, e_n$ such that $e_1\delta_1+\cdots+e_n\delta_n=1$. Consider the $S$-morphism \[q: X\to A,\ x\mapsto \sum^n_{i=1}e_i.N_i(x_{S_i}-\sigma_i),\] where $x_{S_i}-\sigma_i$ denotes the unique local section $a_i$ of $A_{S_i}$ satisfying $a_i+\sigma_i=x_{S_i}$. For each local $S$-section $a$ (resp. $x$) of $A$ (resp. $X$), one has \[q(a+x)=\sum^n_{i=1}e_i(N_i(a_{S_i})+N_i(x_{S_i}-\sigma_i))=(\sum^n_{i=1}e_i\delta_i.a)+q(x)=a+q(x).\] Being thus an $A$-equivariant morphism between $A$-torsors, $q$ is an isomorphism. \end{proof} \smallskip {\bf Theorem 4.9.} --- \emph{Keep the assumptions of $(4.1)$. Assume furthermore that $X_{\overline{s}}$ is connected, proper, of total multiplicity prime to the characteristic of $k(\overline{s})$ and that $X$ is regular.} \smallskip \emph{Then there is a non-degenerate abelian fibration $F/E$ over an $S$-algebraic stack $E$ with $E\times_St=t$ which extends $X_t$ over $t$, where $E$ is finite flat over $S$, tame along $s$ and regular. The identity $X_t=F\times_EE_t$ extends to a proper $S$-morphism $p$ from $X$ to $F$ which is \'{e}tale at precisely the points outside the image of the uniruled irreducible components of $X_{\overline{s}}$. Such $(F/E, p)$ is unique up to unique $S$-isomorphisms and its formation commutes with every formally smooth faithfully flat base change $T\to S$ of spectra of discrete valuation rings.} \smallskip {\bf Theorem 4.10.} --- \emph{Keep the hypotheses of $(4.9)$ except the regularity of $X$. Assume that $X$ is at each of its geometric points either geometrically factorial of equal characteristic or regular. Assume furthermore that $X_{\overline{s}}$ does not have uniruled irreducible components.} \smallskip \emph{Then $X$ is regular and is a universal almost non-degenerate $S$-abelian fibration and $X_{\overline{s}}$ does not contain $\overline{s}$-rational curves. If $A/E$ denotes the albanese of $X/S$, the action of $A_t$ on $X_t$ extends uniquely to an action on $X$ by the $S$-N\'{e}ron model \[\prod_{E/S}A\] of $A_t$.} \smallskip 4.11. \emph{Proof of} (4.9).
\smallskip --- \emph{Case where $S$ is strictly henselian }: \smallskip Let $\overline{t}$ be the spectrum of a separable closure of $k(t)$ and let $\overline{\eta}$ be a geometric generic point of $X_{\overline{t}}$. The projection $X_{\overline{t}}\to X$ induces the specialization homomorphism (SGA 1 X 2) \[sp: \pi_1(X_{\overline{t}}, \overline{\eta})\to \pi_1(X, \overline{\eta}).\] The image of $sp$ is a normal subgroup of finite index (\cite{raynaud_specialization} 6.3.5) and its associated monodromy representation \[\pi_1(X, \overline{\eta})\to \mathrm{Coker}(sp)=G\] corresponds by Galois theory to an $X$-algebraic space $X'$ which is connected and finite \'{e}tale Galois over $X$ with Galois group $G$. \smallskip Let $S'=\mathrm{Spec}\ \Gamma(X', \mathcal{O}_{X'})$, which is the spectrum of a discrete valuation ring. Let the generic (resp. closed) point of $S'$ be $t'$ (resp. $s'$). Then $X'_{t'}=X_t\times_tt'$. By \cite{raynaud_specialization} 6.3.5+6.3.7, $X'_{s'}$ is of total multiplicity $1$, $k(s)=k(s')$ and $G$ is cyclic of order equal to the total multiplicity $\delta$ of $X_s$, as $\delta$ is prime to the characteristic of $k(s)$ and $X$ being regular has geometrically factorial local rings. \smallskip Let the maximal points of $X'_{s'}$ be $x'_1,\cdots, x'_n$ and let $Z'_i$ be the closed image of $\mathrm{Spec}(\mathcal{O}_{X'_{s'}, x'_i})\to X'_{s'}$ in $X'_{s'}$, $i=1,\cdots, n$. By \cite{raynaud_specialization} 7.1.2, for each $i=1,\cdots, n$, there is a regular closed $S'$-immersion $S'_i\hookrightarrow X'$ such that $S'_i$ is finite flat over $S'$ of rank equal to the total multiplicity $\delta'_i$ of $Z'_i/s'$ and such that $S'_i$ intersects $X'_{s'}$ at one unique point of $Z'_i\backslash\sum_{j\neq i}Z'_j$. The greatest common divisor of $\delta'_1,\cdots, \delta'_n$ is by definition the total multiplicity of $X'_{s'}$, that is, $1$. Thus by (4.8) the $(A_t\times_tt')$-torsor $X'_{t'}=X_t\times_tt'$ admits a $t'$-point; by means of one such $t'$-point, one identifies $X'_{t'}$ with $A_t\times_tt'$. \smallskip Write $W'$ for the open sub-algebraic space of $X'$ which consists of all points at which $X'\to S'$ is smooth. Each $t'$-point of $X'_{t'}$ uniquely extends to an $S'$-section of $W'$, as $X'$ is proper over $S'$ and regular. That is, $W'$ is a weak $S'$-N\'{e}ron model of $W'_{t'}=X'_{t'}$ (\cite{neron_model} 3.5/1). Let $d=\mathrm{dim}(W'_{t'})$. If one chooses a non-zero section $\omega'\in\Gamma(W', \Omega^d_{W'/S'})$ such that the divisor $\mathrm{Div}_{W'}(\omega')$ has support strictly contained in $W'_{s'}$, the open $U'=W'-\mathrm{Supp}(\mathrm{Div}_{W'}(\omega'))$ has an $S'$-birational group law which extends the group structure of $U'_{t'}=X'_{t'}=A_t\times_tt'$ over $t'$. \smallskip One argues as in (4.7) that the $S'$-N\'{e}ron model of $A_t\times_tt'$ is an $S'$-abelian scheme $A'$ and that the regular $S'$-minimal model $F'$ of $X'_{t'}$ is a trivial $A'$-torsor. The identity $X'_{t'}=F'_{t'}$ extends uniquely by (2.1) to an $S'$-morphism $p': X'\to F'$, which is equivariant with respect to the canonical action of $G$ on $X'$ and on $F'$. Moreover, $p'$ is proper surjective and is \'{e}tale at precisely the points of $U'$, and $U'$ is the complement of the image of the uniruled irreducible components of $X'_{\overline{s'}}$, where $\overline{s'}$ denotes the spectrum of an algebraic closure of $k(s')$.
The quotient of $p'$ by $G$, \[p=[p'/G]: [X'/G]=X\to [F'/G]=F,\] is consequently proper and is \'{e}tale at precisely the points outside the image of the uniruled irreducible components of $X_{\overline{s}}$. The projection \[[F'/G]=F\to [S'/G]=E\] is a non-degenerate abelian fibration with albanese $[A'/G]=A$. The algebraic stack $E$ is finite flat over $S$, regular and tame along $s$, as $S'$ is. Over $t$, one has $E\times_St=[S'_{t}/G]=[t'/G]=t$. \smallskip The formation of $(F/E, p)$ evidently commutes with every formally smooth faithfully flat base change $T\to S$ of spectra of strictly henselian discrete valuation rings. \smallskip It remains to characterize $(F/E, p)$ up to unique $S$-isomorphisms. Let $(F^{\natural}/E^{\natural}, p^{\natural})$ be an alternative with albanese $A^{\natural}$. Let $U=U'/G$ be the complement in $X$ of the image of the uniruled irreducible components of $X_{\overline{s}}$. There is a unique $U$-isomorphism of $U$-abelian algebraic spaces $A^{\natural}\times_{E^{\natural}}U=A\times_EU$ extending the identity morphism of $A_t\times_tX_t$. For, the restriction functor from the category of $U$-abelian algebraic spaces to the category of $X_t$-abelian algebraic spaces is fully faithful. Let $U_1^{\natural}$ be the open sub-$U$-algebraic space of $A^{\natural}\times_{E^{\natural}}U=A\times_EU=A_U$ which has image $U\times_{E^{\natural}}U$ by the isomorphism \[r^{\natural}=(\mu^{\natural}, p_2^{\natural}): A^{\natural}\times_{E^{\natural}}F^{\natural}\ \widetilde{\to}\ F^{\natural}\times_{E^{\natural}}F^{\natural},\] where $p_2^{\natural}$ (resp. $\mu^{\natural}$) is the second projection (resp. represents the action of $A^{\natural}$ on $F^{\natural}$). Write $U_1^{\natural'}$ for the inverse image of $U_1^{\natural}$ by the projection $A'_{U'}\to A_U$ and write $x'$ (resp. $y'$) for the generic point of $U'_{s'}$ (resp. $A'_{U'}\times_{S'}s'$). Now $U_1^{\natural'}$ contains $y'$, and $r^{\natural}$ induces an $S$-morphism \[(\mathrm{Spec}(\mathcal{O}_{U_1^{\natural'}, y'})\rightrightarrows\mathrm{Spec}(\mathcal{O}_{U', x'}))\to (U\times_{E^{\natural}}U\rightrightarrows U),\] which in turn by quotient induces an $S$-morphism $S'\to E^{\natural}$. The latter is smooth surjective, since the composition $U'\to S'\to E^{\natural}$ is. As both $S'$ and $E^{\natural}$ are finite over $S$, $S'\to E^{\natural}$ is finite \'{e}tale surjective and hence, for each integer $n\geq 0$, $\mathrm{cosq}_o(S'/E^{\natural})_n$ is the normalization of $S$ in $\mathrm{cosq}_o(t'/t)_n$. So $E^{\natural}=E$. To prove that $A^{\natural}=A$, it suffices to prove the equality $A^{\natural}\times_ES'=A\times_ES'=A'$ and that the descent data on $A'$ relative to $S'\to E$ corresponding to $A^{\natural}$ and to $A$ coincide. One has a unique $S'$-isomorphism of $S'$-abelian algebraic spaces $A^{\natural}\times_ES'=A\times_ES'$ which extends the identity morphism of $A'_{t'}$, for the restriction functor from the category of $S'$-abelian algebraic spaces to the category of $t'$-abelian algebraic spaces is fully faithful. The coincidence of the two descent data is deduced in the same way, since the restriction functor from the category of abelian algebraic spaces over $G_{S'}$ (resp. $(G\times G)_{S'}$) to the category of abelian algebraic spaces over $G_{t'}$ (resp. $(G\times G)_{t'}$) is fully faithful. The identity $F^{\natural}=F$ follows by a similar argument based on the uniqueness of regular minimal models.
Finally, $p^{\natural}=p$, as one has $(p^{\natural}\times_ES')|X'_{t'}=(p\times_ES')|X'_{t'}$. \smallskip --- \emph{General case }: \smallskip Let $S_{(\overline{s})}$ denote the strict henselization of $S$ at $\overline{s}$. Let $\pi\in\Gamma(S, \mathcal{O}_S)$ be a uniformizer and $f: X\to S$ the structural morphism. Notice that the cycle $\Delta=f^*\mathrm{Div}_S(\pi)/\delta$ is integral, where $\delta$ denotes the total multiplicity of $X_{\overline{s}}$, for $\Delta\times_SS_{(\overline{s})}$ is integral on $X\times_SS_{(\overline{s})}$. With $\Delta$ one associates a canonical $\mu_{\delta}$-torsor on $X$ for the \'{e}tale topology, $X'\to X$, which after the base change $S_{(\overline{s})}\to S$ corresponds to the specialization homomorphism of fundamental groups above. In particular, it suffices to define $E$ to be $[S'/\mu_{\delta}]$, where $S'$ is defined to be $\mathrm{Spec}\ \Gamma(X', \mathcal{O}_{X'})$. There is by quotient by $\mu_{\delta}$ an $S$-morphism \[[X'/\mu_{\delta}]=X\to [S'/\mu_{\delta}]=E.\] Let $t'$ be the generic point of $S'$. One verifies after the base change $S_{(\overline{s})}\to S$ that the $S'$-N\'{e}ron model of $A_t\times_tt'$ is an $S'$-abelian scheme $A'$ and that the regular $S'$-minimal model of $X'_{t'}=X_t\times_tt'$ is an $A'$-torsor $F'$ on $S'$ for the \'{e}tale topology. On $F'\to S'$, $\mu_{\delta}$ acts compatibly. The non-degenerate abelian fibration \[[F'/\mu_{\delta}]=F\to [S'/\mu_{\delta}]=E\] has albanese $[A'/\mu_{\delta}]=A$. The identity $X'_{t'}=F'_{t'}$ extends by (2.1) to a unique $S'$-morphism $p': X'\to F'$. Write $p=[p'/\mu_{\delta}]$, which is proper and is \'{e}tale at precisely the points outside the image of the uniruled irreducible components of $X_{\overline{s}}$. This $(F/E, p)$ is unique up to unique $S$-isomorphisms and its formation commutes with every formally smooth faithfully flat base change $T\to S$ of spectra of discrete valuation rings, as one verifies after the base change $S_{(\overline{s})}\to S$. \begin{flushright} $\square$ \end{flushright} 4.12. \emph{Proof of} (4.10). \smallskip Keep the notations of (4.11). As $X_{\overline{s}}$ by hypothesis does not have uniruled irreducible components, the morphism $p': X'\to F'$ is an isomorphism (4.2) and $X'_{\overline{s'}}$, hence $X_{\overline{s}}$ as well, does not contain rational curves. Clearly, $X=F$ is an almost non-degenerate $S$-abelian fibration with ramification stack $E$ and albanese $A$. This fibration is universal. For, as $\delta$ is prime to the residue characteristics of $S$, the formation of the quotient $S'/\mu_{\delta}$ commutes with every base change $T\to S$. \smallskip Write \[\overline{A}=\prod_{E/S}A,\] which is the kernel of the diagram \[(d_o^*, d_1^*): \prod_{S'/S}A'\rightrightarrows \prod_{S''/S}A'',\] where $S''=\mu_{\delta}\times_SS'$, $d_1, d_o: S''\to S'$ respectively denote the second projection and represent the action of $\mu_{\delta}$ on $S'$, and $A''=A\times_ES''=d_o^*A'=d_1^*A'$. In particular, $\overline{A}$ is a separated $S$-group scheme of finite type, since both \[\prod_{S'/S}A',\ \prod_{S''/S}A''\] are separated $S$-group schemes of finite type (\cite{neron_model} 7.6/4). Moreover, $\overline{A}(t^{hs})=\overline{A}(S_{(\overline{s})})$, where $t^{hs}$ denotes the generic point of $S_{(\overline{s})}$, as $A'$ is an $S'$-abelian scheme. Next, let $T$ be an affine $S$-scheme and let $T_o$ be a closed sub-$S$-scheme of $T$ defined by an ideal $I$ with $I^2=0$.
By applying the functor $H^0(\mu_{\delta}, -)$ to the exact sequence of $\mu_{\delta}$-modules \[0\to\Gamma(T\times_SS', I\otimes_S\mathrm{Lie}(A'/S'))\to A'(T\times_SS')\to A'(T_o\times_SS')\to 0\] one obtains a surjection ($\delta$ invertible on $S$) \[\overline{A}(T)=H^0(\mu_{\delta}, A'(T\times_SS'))\to \overline{A}(T_o)=H^0(\mu_{\delta}, A'(T_o\times_SS')).\] This shows that $\overline{A}$ is formally smooth over $S$. So $\overline{A}$ is the $S$-N\'{e}ron model of $\overline{A}_t=A_t$. \smallskip Write the action of $A_t$ on $X_t$ as $\mu_t: A_t\times_tX_t\to X_t$. By (2.1) $\mu_t$ uniquely extends to an $S$-morphism $\mu: \overline{A}\times_SX\to X$, as $\overline{A}\times_SX$ is regular connected and as $X_{\overline{s}}$ does not contain $\overline{s}$-rational curves. The $S$-binary law $\mu$ is associative and hence represents an action of $\overline{A}$ on $X$, as $A_t\times_tA_t\times_tX_t$ is dense in $\overline{A}\times_S\overline{A}\times_SX$ and $X$ is $S$-separated. \begin{flushright} $\square$ \end{flushright} \smallskip 5. \emph{Non-uniruled abelian fibrations in characteristic zero. Purity.} \smallskip {\bf Lemma 5.1.} --- \emph{Let $S$ be a locally noetherian algebraic space and $U$ an open sub-algebraic space of $S$ with $\mathrm{prof}_{S-U}(S)\geq 2$.} \smallskip \emph{Then the functor $A\mapsto A|U$, from the category of $S$-abelian algebraic spaces to the category of $U$-abelian algebraic spaces, is fully faithful. It is an equivalence if $S$ is normal of residue characteristics zero and pure along $S-U$} (SGA 2 X 3.1)\emph{, in particular, if $S$ is regular of residue characteristics zero.} \begin{proof} The full-faithfulness of the functor $A\mapsto A|U$ follows by \cite{lemme de Gabber} Proposition (1), 3). The assertion on the equivalence is \cite{grothendieck_abelian} 4.2+4.5. \end{proof} \smallskip {\bf Lemma 5.2.} --- \emph{Let $S$ be a noetherian local scheme with closed point $s$. Let $U=S-\{s\}$. Let $A$ be an $S$-abelian algebraic space with structural morphism $f$ and zero section $e$. Let $A^*=\mathrm{Pic}^o_{A/S}$ be the dual $S$-abelian algebraic space of $A$ with structural morphism $f^*$ and zero section $e^*$.} \smallskip \emph{Then the following two statements hold when $A$ is para-factorial} (SGA 2 XI 3.1)\emph{ along $f^{-1}(s)$ }: \smallskip 1) \emph{Each $U$-section of $f^*|U$ extends uniquely to an $S$-section of $f^*$.} \smallskip 2) \emph{Each $f|U$-fiberwise numerically trivial invertible module on $f^{-1}(U)$ rigidified along $e|U$ extends uniquely to an $f$-fiberwise numerically trivial invertible module on $A$ rigidified along $e$.} \begin{proof} Note that these two statements are the same. By the hypothesis that $A$ is para-factorial along $f^{-1}(s)$, one has $\mathrm{prof}_s(S)\geq 2$ and that each invertible module $L$ on $f^{-1}(U)$ extends up to unique isomorphisms to a unique invertible module $\overline{L}$ on $A$. It is evident that $\overline{L}$ is $f$-fiberwise numerically trivial (resp. has a unique rigidification along $e$ extending that of $L$ along $e|U$) if $L$ is $f|U$-fiberwise numerically trivial (resp. rigidified along $e|U$). \end{proof} \smallskip {\bf Lemma 5.3.} --- \emph{Let $S$ be a noetherian local scheme with closed point $s$. Let $X$ be an $S$-smooth algebraic space with structural morphism $f$.
Then $X$ is para-factorial along $f^{-1}(s)$ in the following cases }: \smallskip i) \emph{$S$ is regular of dimension $\geq 2$.} \smallskip ii) \emph{$S$ is of equal characteristic and geometrically para-factorial at $s$.} \begin{proof} Let $\overline{s}$ be the spectrum of a separable closure of $k(s)$ and $S_{(\overline{s})}$ the strict localization of $S$ at $\overline{s}$. Recall that one says that $S$ is \emph{geometrically para-factorial} at $s$ if $S_{(\overline{s})}$ is para-factorial along $\overline{s}$. Case i) is classical and ii) is \cite{boutot} III 2.14. \end{proof} \smallskip {\bf Definition 5.4.} --- \emph{Let $S$ be an algebraic space and $U$ an open sub-algebraic space of $S$. We say that $S$ is $A$-pure along $S-U$ if for every smooth morphism $S'\to S$ the functor $A\mapsto A|U'$, from the category of $S'$-abelian algebraic spaces to the category of $U'$-abelian algebraic spaces, is an equivalence, where $U'=U\times_SS'$. We say that $S$ is strictly $A$-pure along $S-U$ if furthermore for every smooth morphism $S'\to S$ with $U'=U\times_SS'$ and every $S'$-abelian algebraic space $A$, each $U'$-section of $A$ extends uniquely to an $S'$-section of $A$. We say that $S$ is $A$-pure (resp. strictly $A$-pure) at a geometric point $s$ if its strict localization $S_{(s)}$ at $s$ is $A$-pure (resp. strictly $A$-pure) along $s'$, where $s'$ is the closed point of $S_{(s)}$.} \smallskip {\bf Example 5.5.} --- By (5.1) and by SGA 4 XV 2.1, if an algebraic space $S$ is of residue characteristics zero locally noetherian normal and \emph{pure} (SGA 2 X 3.2) at a geometric point $s$, then $S$ is $A$-pure at $s$. If furthermore the strict localization $S_{(s)}$ is para-factorial along its closed point, then by (5.2)+(5.3) $S$ is strictly $A$-pure at $s$. \smallskip {\bf Lemma 5.6.} --- \emph{Let $S$ be an algebraic space and $U$ an open sub-algebraic space of $S$ such that $S$ is strictly $A$-pure along $S-U$. For $i=1, 2$, let $A_i$ be an $S$-abelian algebraic space and $X_i$ an $A_i$-torsor on $S$ for the \'{e}tale topology.} \smallskip \emph{Then each $U$-morphism from $X_1|U$ to $X_2|U$ extends uniquely to an $S$-morphism from $X_1$ to $X_2$.} \begin{proof} The question being an \'{e}tale local question on $S$, one may assume the torsors $X_i$, $i=1, 2$, to be trivial. \smallskip Each $U$-morphism $q: A_1|U\to A_2|U$ is the unique composite of a translation ($a_2\mapsto a_2+q(0)$) and a $U$-group homomorphism $p: A_1|U\to A_2|U$ (``Geometric Invariant Theory'' 6.4). As by hypothesis $S$ is strictly $A$-pure along $S-U$, the $U$-section $q(0)=\sigma$ and the $U$-group homomorphism $p$ extend uniquely to an $S$-section $\overline{\sigma}$ and an $S$-group homomorphism $\overline{p}$, hence the claim. \end{proof} \smallskip {\bf Proposition 5.7.} --- \emph{Let $S$ be an algebraic space and $U\to X$ an open immersion of $S$-algebraic spaces such that $X$ is strictly $A$-pure along $X-U$. 
Assume that on $U$ there is given an almost non-degenerate $S$-abelian fibration structure with defining $S$-groupoid $U_.$.} \smallskip \emph{Then up to unique isomorphisms there exists a unique almost non-degenerate $S$-abelian fibration structure on $X$ with $S$-groupoid $X_.$ such that $d_1: X_1\to X_o=X$ restricts to $d_1: U_1\to U_o=U$ on $U$.} \begin{proof} As $X$ is $A$-pure along $X-U$, there exist unique cartesian diagrams in the category of $S$-algebraic spaces : {\[\xymatrix{ U_1 \ar[r]^{j'} \ar[d]_{d_1} & A' \ar[d]^{f'} \\ U \ar[r]_{} & X}\] } {\[\xymatrix{ U_1 \ar[r]^{j} \ar[d]_{d_o} & A\ar[d]^{f} \\ U \ar[r]_{} & X}\] }whose vertical arrows have abelian algebraic space structures. Let $p'$ (resp. $p$) denote the projection of $A'\times_XA$ onto $A'$ (resp. $A$). The diagonal immersion \[(j', j): U_1\to A'\times_XA\] satisfies \[p'(j', j)=j',\ p(j', j)=j.\] --- \emph{There is a unique section $i'$ of the abelian algebraic space structure $p'$ such that $i'j'=(j', j)$.} \smallskip Indeed, as $j'$ is the base change of $U\hookrightarrow X$ by the smooth morphism $f'$, $A'$ is strictly $A$-pure along $A'-j'(U_1)$. So $(j', j)$, considered as a $U_1$-section of $p'$, extends uniquely to a section $i'$ of $p'$. That is, $p'i'=\mathrm{Id}_{A'}$ and $i'j'=(j', j)$. \smallskip --- \emph{The following diagrams are commutative and cartesian }: {\[\xymatrix{ U_1 \ar[r]^{j'} \ar[d]_{d_1} & A' \ar[d]^{f'} \\ U \ar[r]_{} & X}\] } {\[\xymatrix{ U_1 \ar[r]^{j'} \ar[d]_{d_o} & A' \ar[d]^{fpi'} \\ U \ar[r]_{} & X}\] }which, from now on, we rewrite as : {\[\xymatrix{ U_1 \ar[r]^{} \ar[d]_{d_1} & X_1 \ar[d]^{d_1} \\ U_o \ar[r]_{} & X_o}\] } {\[\xymatrix{ U_1 \ar[r]^{} \ar[d]_{d_o} & X_1 \ar[d]^{d_o} \\ U_o \ar[r]_{} & X_o}\] }Next, form the cartesian diagram : {\[\xymatrix{ X_2 \ar[r]^{d_o} \ar[d]_{d_2} & X_1 \ar[d]^{d_1} \\ X_1 \ar[r]_{d_o} & X_o}\] } --- \emph{There is a unique morphism $d_1: X_2\to X_1$ such that the following diagram commutes and is cartesian }: {\[\xymatrix{ X_2 \ar[r]^{d_1} \ar[d]_{d_2} & X_1 \ar[d]^{d_1} \\ X_1 \ar[r]_{d_1} & X_o}\] } Indeed, as $U_1\hookrightarrow X_1$ is the base change of $U\hookrightarrow X$ by the smooth morphism $d_1$, $X_1$ is strictly $A$-pure along $X_1-U_1$. By (5.6) the cartesian diagram (SGA 3 V 1) {\[\xymatrix{ U_2 \ar[r]^{d_1} \ar[d]_{d_2} & U_1 \ar[d]^{d_1} \\ U_1 \ar[r]_{d_1} & U_o}\] }whose vertical arrows have abelian algebraic space structures has a unique extension as above claimed, for one verifies that : \smallskip \noindent i) \emph{The base change of $d_2: X_2\to X_1$ by $U_1\hookrightarrow X_1$ is $d_2: U_2\to U_1$.} \smallskip \noindent ii) \emph{The base change of $d_1: X_1\to X_o$ by $U_1\hookrightarrow X_1\stackrel{d_1}{\longrightarrow}X_o$ is the base change of $d_1: U_1\to U_o$ by $d_1: U_1\to U_o$.} \smallskip --- \emph{The above $d_1: X_2\to X_1$ fits into the following diagram which is commutative and cartesian }: {\[\xymatrix{ X_2 \ar[r]^{d_1} \ar[d]_{d_o} & X_1 \ar[d]^{d_o} \\ X_1 \ar[r]_{d_o} & X_o}\] }For, $X_1$ is strictly $A$-pure along $X_1-U_1$ and one has similarly the cartesian diagram : {\[\xymatrix{ U_2 \ar[r]^{d_1} \ar[d]_{d_o} & U_1 \ar[d]^{d_o} \\ U_1 \ar[r]_{d_o} & U_o}\] } It is now immediate that one has obtained the desired $S$-groupoid $X_.$ (cf. SGA 3 V 1). \end{proof} \smallskip {\bf Proposition 5.8.} --- \emph{Let $S$ be a noetherian normal integral scheme, $t$ the generic point of $S$, $A_t$ a $t$-abelian variety and $X_t$ an $A_t$-torsor on $t$ for the \'{e}tale topology. 
Assume that, for each strict henselization $S'$ of $S$ at a geometric codimension $1$ point $s$, if $t'$ (resp. $s'$) denotes the generic (resp. closed) point of $S'$, $X_t\times_tt'$ extends to a separated $S'$-algebraic space $X'$ of finite type such that $X'$ is normal integral and at each of its geometric codimension $\geq 2$ points either regular or pure geometrically para-factorial of equal characteristic and that $X'_{s'}$ is non-empty, separable, proper and does not have uniruled irreducible components.} \smallskip \emph{Then, if $S$ is $A$-pure at all its geometric points of codimension $\geq 2$, there exists up to unique isomorphisms a unique $S$-abelian algebraic space $A$ extending $A_t$.} \begin{proof} Recall that the formation of N\'{e}ron models commutes with strict localization. Thus, the N\'{e}ron model of $A_t$ at every codimension $1$ point of $S$ is by (4.3) an abelian scheme. So, as $S$ is $A$-pure at all its geometric points of codimension $\geq 2$, there is up to unique isomorphisms a unique extension of $A_t$ to an $S$-abelian algebraic space $A$. \end{proof} \smallskip {\bf Proposition 5.9.} --- \emph{Keep the notations of $(5.8)$. Assume that $S$ is of residue characteristics zero pure at all its points of codimension $\geq 2$ and that there is an open sub-scheme $R$ of $S$ which consists precisely of all points of $S$ where $S$ is regular.} \smallskip \emph{Then $X_t$ extends to an $A$-torsor $X$ on $S$ for the \'{e}tale topology. Such an extension is unique up to unique isomorphisms if $S$ is geometrically para-factorial along $S-R$.} \begin{proof} Note that $S$ is by (5.5) $A$-pure at all its geometric points of codimension $\geq 2$. So (5.8) applies. \smallskip As the formation of regular minimal models commutes with strict localization, the regular minimal model of $X_t$ at each codimension $1$ point $s$ of $S$ is by (4.3) a torsor for the \'{e}tale topology under the localization of $A$ at $s$. As $R$ is strictly $A$-pure at all its points of codimension $\geq 2$, there exist, by (5.6) and a ``passage \`{a} la limite'', an open sub-scheme $V$ of $R$ with $\mathrm{codim}(R-V, R)\geq 2$ and an $A|V$-torsor $Z$ on $V$ for the \'{e}tale topology such that $Z$ extends $X_t$. \smallskip This torsor $Z$ is by \cite{raynaud_thesis} XIII 2.8 iv) of finite order. Namely, there exist an integer $n\geq 1$ and an ${}_nA|V$-torsor $P$ on $V$ for the \'{e}tale topology such that \[Z=P\stackrel{{}_nA|V}{\wedge}A|V,\] where ${}_nA=\mathrm{Ker}(n.\mathrm{Id}_A)$. \smallskip As $S$ is pure at all its points of codimension $\geq 2$, thus in particular pure along $S-V$, there is a unique finite \'{e}tale $S$-scheme $\overline{P}$ which restricts to $P$ on $V$. By the purity of $S$ along $S-V$ again, $\overline{P}$ is in a unique way an ${}_nA$-torsor on $S$ for the \'{e}tale topology and hence \[X=\overline{P}\stackrel{{}_nA}{\wedge}A\] extends $X_t$. Such an extension is by (5.6) unique up to unique isomorphisms if $S$ is geometrically para-factorial at all its points of codimension $\geq 2$, or equivalently, at all points of $S-R$. \end{proof} \smallskip {\bf Theorem 5.10.} --- \emph{Let $S$ be an integral scheme with generic point $t$ and $X$ an $S$-algebraic space with structural morphism $f$. Assume that $X$ is locally noetherian normal integral of residue characteristics zero and at all its geometric codimension $\geq 2$ points pure and geometrically para-factorial. 
Assume furthermore that $f^{-1}(t)$ is a non-degenerate $t$-abelian fibration and that, for each geometric codimension $1$ point $\overline{x}$ of $X$, $f\times_SS_{(\overline{s})}$ is separated of finite type and flat at $\overline{x}$ and the geometric fiber $f^{-1}(\overline{s})$ is proper and does not have uniruled irreducible components, where $S_{(\overline{s})}$ denotes the strict henselization of $S$ at $\overline{s}=f(\overline{x})$.} \smallskip \emph{Then there exists a unique almost non-degenerate abelian fibration structure on $f$ extending that of $f_t$.} \begin{proof} One applies (4.10), (5.5) and (5.7). \end{proof} \smallskip {\bf Proposition 5.11.} --- \emph{Keep the notations of $(5.10)$. Let $(X_., d_., s_.)$ denote the $S$-groupoid of $X/S$. Consider the following conditions }: \smallskip 1) \emph{$f$ is proper, $S$ is excellent regular.} \smallskip 2) \emph{$f$ is proper, $S$ is locally noetherian normal and at each of its points satisfies the condition $(W)$ $(\mathrm{EGA\ IV}\ 21.12.8)$.} \smallskip \emph{Then, if $1)$ (resp. $2)$) holds, $S$ is the cokernel of $(d_o, d_1)$ in the full sub-category of the category of $S$-algebraic spaces consisting of the $S$-algebraic spaces (resp. $S$-schemes) which are $S$-separated and locally of finite type over $S$.} \begin{proof} Let $Z$ be an $S$-separated algebraic space locally of finite type over $S$ and $p: X\to Z$ an $S$-morphism satisfying $pd_o=pd_1$. As the $t$-groupoid $X_{.t}$ is simply connected, $p_t: X_t\to Z_t$ factors through a unique $t$-point, say $\sigma_t$, of $Z_t$. It amounts to showing that when 1) holds (resp. when 2) holds and $Z$ is a scheme) such a $t$-point uniquely extends to an $S$-section of $Z$. \smallskip Replacing $Z$ by the closed image of $\sigma_t$ in $Z$, one may assume that $Z$ is integral and birational over $S$. In case $1)$, as $p$ is dominant, $X$ normal and $S$ excellent, one may by replacing $Z$ by its normalization assume that $Z$ is normal. \smallskip As $f$ is proper and $Z$ is $S$-separated, $p$ is proper and hence surjective. It suffices to show that in case $1)$ (resp. $2)$ where $Z$ is a scheme) $Z$ is \'{e}tale over $S$ (resp. $Z\to S$ is a local isomorphism at every point of $Z$). For, being proper birational, $Z\to S$ is then an isomorphism. \smallskip When 1) holds (resp. when 2) holds and $Z$ is a scheme), it suffices by the theorem of \emph{purity of branch locus} (2.4) (resp. the theorem of \emph{purity of branch locus} of van der Waerden, EGA IV 21.12.12) to show that $Z$ is $S$-\'{e}tale at each geometric codimension $1$ point $\overline{z}$ of $Z$ (resp. $Z\to S$ is a local isomorphism at each codimension $1$ point $z$ of $Z$). \smallskip Now each geometric maximal point $\overline{x}$ of $p^{-1}(\overline{z})$ (resp. each maximal point $x$ of $p^{-1}(z)$) is of codimension $\leq 1$ in $X$, and the image of $\overline{x}$ (resp. $x$) in $S$, which is also the image of $\overline{z}$ (resp. $z$), is of codimension $\leq 1$ in $S$ by hypothesis. The projection $Z\to S$ being proper birational is an isomorphism when localized at every codimension $\leq 1$ point of $S$ and in particular is \'{e}tale at $\overline{z}$ (resp. a local isomorphism at $z$). \end{proof} \smallskip 5.12. \emph{Question }: \smallskip In (5.11), does $\mathrm{Coker}(d_o, d_1)=S$ hold in the category of $S$-algebraic spaces? \smallskip \bibliographystyle{amsplain}
\section{Introduction} Friction between sliding surfaces is a common phenomenon which plays an important role in everyday life. Like all transport properties, however, friction is ultimately the result of microscopic interactions between particles. Recent years have witnessed a surge of interest in understanding the microscopic origin of friction, due to the increased control in surface preparation and the development of nanoscale experimental methods such as Quartz Crystal Microbalance~\cite{QCM} and Friction Force Microscopy~\cite{FFM}. A considerable amount of this effort is being directed towards reducing friction. One way to reduce friction is through incommensurability, i.e. structural incompatibility between the sliding surfaces on the atomic level. This effect, often called structural superlubricity~\cite{ShinjoHirano,fkphononconsoli,vanishingstaticfriction}, has been observed experimentally in nanoscale contacts, for example in graphite~\cite{Dienwiebel2004}. Theoretical studies of incommensurate sliding contacts often employ the Frenkel-Kontorova (FK) model~\cite{FK,ShinjoHirano,StrunzElmer,FKBraun} or its extensions (see for example \cite{FKlubrication}). Crucially, the FK model does not assume that the sliding objects are rigid. This allows it to describe deformations of the lattice, which can destroy the vanishing static friction~\cite{vanishingstaticfriction} for sufficiently soft materials. The FK model can also describe phonons and heat in the lattices, which absorb the kinetic energy of the sliding~\cite{fkphononconsoli}. Consequently, the FK model is the simplest model in which dynamic friction is emergent, while in other models some form of heuristic damping must be included. While the FK model has been studied extensively in one dimension (1D), its two-dimensional (2D) extensions have not received as much attention. In 2D, both halves of the contact have at least two independent parameters describing the lattice, which considerably complicates the concept of commensurability~(see for instance~\cite{quasiperiodicFK,astridgoldgraphite}). Two-dimensional surfaces also have 2D phonon dispersion and at least two independent elastic constants. Recently there have been a number of studies dealing with the FK model in 2D that use several different 2D extensions. Several works (see for example~\cite{vectorhexagonal,Wang1}) consider a scalar harmonic interaction, which, as we will see here, can cause serious problems with the dynamic friction. Other works use 2D springs for the interaction, but only include interactions between nearest neighbors (see for example \cite{vectorsquare}). For square lattices, this gives rise to an unphysically vanishing shear modulus, and, as we will see here, unphysical friction. Another group that has mostly investigated scalar harmonic FK models~\cite{Wang1} later also considered 2D next-nearest neighbor interactions~\cite{Wang-vector}, but, contrary to what one would expect, did not find any contribution to the dynamic friction from the dynamics inside the chain. Static friction of 2D sheets of colloids has also been studied outside the FK framework, e.g. in models of particles interacting via a Yukawa potential~\cite{mandelli}. To the best of our knowledge, the impact of the functional forms of the different FK models in 2D and the values of their parameters on the static and dynamic friction has not yet been systematically compared and evaluated. Here we investigate the impact of the second dimension on the frictional properties in the FK model.
We consider the two most straightforward extensions to 2D and determine what is needed to describe the friction in a physical way, in both the static and dynamic case. We demonstrate how the second dimension affects the static and dynamic friction, and obtain several qualitatively new effects in terms of the temperature dependence of the friction coefficient and non-trivial anisotropy. The paper is organized as follows. Section~\ref{sec:Models} introduces the basic features of the FK model, along with definitions and motivations of the 2D models and their parameters. Section~\ref{sec:Statics} describes the static frictional properties. In Sec.~\ref{sec:Equilibration} we investigate thermal equilibration within the models, in relation to sliding friction. Section~\ref{sec:Viscous} deals with the dynamic friction and the resulting effective viscous friction. The results are then interpreted in terms of the phonon dispersion relations of the lattices in Sec.~\ref{sec:Lattice vibrations}. In Sec.~\ref{sec:2D} we show several new effects that occur in 2D. Lastly, the conclusions are summarized in Sec.~\ref{sec:Conclusions}. \section{Models} \label{sec:Models} We first briefly discuss the 1D FK model~\cite{FK} which has already been studied extensively~\cite{FKBraun}. It consists of $N$ particles of mass $m$ in an ordered harmonic chain with equilibrium spacing $a_0$ and spring constant $K$. The particles also interact with an external sinusoidal substrate potential of periodicity $a_\mathrm{s}$ and amplitude $V_0/2\pi$. The basic setup is shown in Fig.~\ref{fig:FK1D} (a). Often, the effects of inertia are neglected, and this is referred to as the static FK model. Inclusion of kinetic energy instead results in what is known as the dynamic FK model. \begin{figure} (a)\hfill\strut\\[-5.8ex] \includegraphics[width=0.25\textwidth]{gfx_models_1d.eps}\hskip3.0cm\strut\\[1ex] (b)\hskip0.22\textwidth\hskip0.01\textwidth\hskip-2.0\medskipamount(c)\hfill\strut\\[-0.8ex] \noindent\includegraphics[height=0.210\textwidth,clip]{gfx_models_scalar.eps}\hskip-0mm\hskip0.01\textwidth\includegraphics[height=0.220\textwidth,clip]{gfx_models_X.eps} \caption{Schematic illustrations of the 1D FK model (a), and the internal interaction of the two 2D extensions studied in this work, the scalar model (b) and vector model (c). In (b), the interaction is only displayed for the particle which has been distorted from the shaded equilibrium position, as the shear interaction equilibrium distance is zero. } \label{fig:FK2D} \label{fig:FK1D} \end{figure} The commensurability of the system is characterized by the winding number $w$, the ratio of the mean interatomic distance and the period of the potential. A rational (irrational) value of $w$ defines a commensurate (incommensurate) structure. As discussed in Sec.~\ref{sec:Comp}, for computational reasons, we use periodic boundary conditions that fix the mean interatomic distance at the unstretched length of the spring. Therefore $w$ becomes equivalent to the ratio of the two length scales $r=a_0/a_\mathrm{s}$. We focus only on the incommensurate case, which is both more likely for arbitrary surfaces in contact and more interesting in the context of structural lubricity. The coupling parameter $\lambda = V_0 / (K a_\mathrm{s}^2)$ describes the relative strength of the two interaction types. When $\lambda$ is below a critical value $\lambda_c$ (which depends on $r$) the incommensurate 1D FK model is in a floating state with zero static friction.
For values $\lambda \geq \lambda_c$, however, the system enters a pinned state with finite static friction. This transition by breaking of analyticity is known as the Aubry transition~\cite{vanishingstaticfriction,Aubrytransition}. In real systems this parameter can vary strongly, but here we focus on a value below the transition. The experimentally observed vanishing static friction of graphene flakes on graphite~\cite{Dienwiebel2004} strongly suggests that covalently bonded materials physisorbed on substrates are in this regime. We use $r=\tau_g=(1+\sqrt{5})/2$, i.e.\ the golden mean (which is the optimally incommensurate ratio~\cite{MacKayAubrymaxima}), $\lambda=0.05 \approx \lambda_c/3$~\cite{Greengoldenmean}, and employ the reduced units $m=1$, $K=1$ and $a_\mathrm{s}=1$, which yields the units of time $\tau _0 = \sqrt{m/K}=1$ and energy $Ka_\mathrm{s}^2 =1$. In the dynamic 1D FK model, an effective friction arises as a result of the Hamiltonian dynamics~\cite{fkphononconsoli,joostfk}, primarily caused by resonances between the sliding-induced vibrations and phonon modes in the chain. For a uniform sliding of all the particles with velocity $v^+$ over the potential, a vibration with the washboard frequency $\Omega=2\pi v^+/a_\mathrm{s}$ is induced. The periodicity of the potential corresponds to a wavenumber of $q = 2\pi r$ for phonons in the chain. Resonances will therefore occur when the washboard frequency matches the phonon dispersion relation of the lattice $\omega=2|\sin(k/2)|$ for $k=q$ or its harmonics $nq$, i.e. for the velocities \begin{equation} \tilde{v}_n^+ \sim \sin (n\pi r) / (n\pi)~, \end{equation} where $n=1,2,3,...$ is the order of the resonance. When the chain slides with a velocity at or near a resonance, the washboard frequency can parametrically excite acoustic phonon modes. The energy will then dissipate from these modes also into other phonon modes~\cite{fkphononconsoli}, leading to friction and ultimately thermal equilibrium~\cite{joostfk}. At zero temperature, this will be preceded by an initial recurrence~\cite{fkphononconsoli,joostfk}, whereby a fraction of the total energy is transformed back and forth between internal degrees of freedom and the center of mass (CM) translation. At nonzero temperature, the thermal fluctuations speed up the decay, which becomes viscous from the beginning. The effective friction coefficient, however, depends on the velocity non-trivially, due to the interplay of resonances and thermal fluctuations. \subsection{Two-dimensional models} For the 2D models we consider the simplest possible lattice, namely a square symmetry for both the elastic material and the substrate potential. This gives the straightforward generalization of the Hamiltonian \begin{eqnarray} \mathcal{H} = \sum _{j=1} ^{N_x} \sum _{l=1} ^{N_y} \Big[ \frac{|\dot{\vec{r}}_{j,l}|^2}{2} + V_\mathrm{ext} + V_\mathrm{int} \Big]~, \\ V_\mathrm{ext} = \frac{\lambda}{2\pi} \Big[ \cos(2\pi x_{j,l}) + \cos(2\pi y_{j,l}) \Big]~, \end{eqnarray} where $\vec{r}_{j,l} = (x_{j,l}, y_{j,l})$ is the position of particle $(j,l)$ and $\dot{\vec{r}}_{j,l}$ the corresponding velocity. The internal potential energy of the sheet of particles is given by $V_\mathrm{int}$. We also define the CM velocity as: \begin{equation} \vec{v} = (v_x,v_y) = \frac{1}{N_xN_y} \sum _{j=1} ^{N_x} \sum _{l=1} ^{N_y} \dot{\vec{r}}_{j,l} ~.
\end{equation} The variables $\vec{r}_{j,l} = (x_{j,l}, y_{j,l})$ describe the microscopic internal degrees of freedom, while $\vec{v}= (v_x,v_y)$ is reserved for the macroscopic sliding. Several different options have been previously considered for $V_\mathrm{int}$, e.g.\ simple harmonic types~\cite{Wang1,vectorhexagonal,FKBraun} and multidimensional springs~\cite{Wang-vector,Yang-vector,vectorsquare,FKBraun}. We evaluate these alternatives by considering two representative but qualitatively different interaction types, hereafter dubbed the scalar and vector model, which are schematically illustrated in Fig.~\ref{fig:FK2D} (b) and (c). The scalar model describes the two coordinate components as independent scalars with interaction energy terms for each particle $j,l$ \begin{eqnarray} \lefteqn{V_\mathrm{scalar} =}& \nonumber \\ &\null \frac{K}{2} \Big[ (x_{j+1,l}-x_{j,l}-a_0)^2 + (y_{j,l+1}-y_{j,l}-a_0)^2 \Big]\nonumber \\ &\null + \frac{K_\mathrm{shear}}{2} \Big[ (x_{j,l+1}-x_{j,l})^2 + (y_{j+1,l}-y_{j,l})^2 \Big]~, \end{eqnarray} where $K_\mathrm{shear}$ measures the restoring force for transverse displacements within a subchain. Importantly, for the scalar model the $x$ and $y$ coordinates of the particles decouple completely. As will be seen later, this has undesirable consequences for the dynamics and can lead to unphysical behaviour. The vector model, conversely, describes the vector nature of the particle displacements via fully 2D springs. We include the interaction between next-nearest neighbors, as described by the internal interaction \begin{eqnarray} V_\mathrm{vector} = &(1-\xi) \frac{K}{2} \Big[ (| \vec{r}_{j+1,l} - \vec{r}_{j,l}| -a_0)^2 \nonumber \\ &\null+ (| \vec{r}_{j,l+1} - \vec{r}_{j,l}| -a_0)^2 \Big] \nonumber \\ &\null+ \xi \frac{K}{2} \Big[ (| \vec{r}_{j+1,l+1} - \vec{r}_{j,l}| - \sqrt{2} a_0)^2 \nonumber \\ &\null+ (| \vec{r}_{j+1,l-1} - \vec{r}_{j,l}| -\sqrt{2} a_0)^2 \Big] \end{eqnarray} where $0 \leq \xi \leq 1$ controls the relative strength of the two interactions. \subsection{Model parameters} To understand the meaning of the two new model parameters, it is instructive to consider the effective elastic constants $c_{11}$ (longitudinal stretching) and $c_{44}$ (transverse shearing) of the lattices. For the scalar case one obtains trivially $c_{11}/c_{44}=K/K_\mathrm{shear}$, whereas Taylor expansion to first order of the vector model yields $c_{11}/c_{44}=K/(\xi K)$. The 2D elastic properties of the models are therefore expected to be most alike if the ratios are equated \begin{equation} K_\mathrm{shear}= \xi K~.\label{eq:kshearxi} \end{equation} Both models preserve the elastic properties of the 1D model in their longitudinal stretching, while the shearing can be tuned independently. The latter point has particular consequences for the vector model: the case $\xi=0$ yields a zero shear modulus with strong effects on the friction as shown later, whereas $\xi=1$ results in a model of two intertwined but independent lattices. To estimate realistic values for the two parameters one can consider for example the properties of solidified rare gases often employed as the elastic material in Quartz Crystal Microbalance studies of friction~\cite{KrimReview}, e.g.\ Xenon~\cite{xenon}: $c_{11}/c_{44}=2.01$ and Krypton~\cite{krypton}: $c_{11}/c_{44}=2.08$, i.e. $c_{11}/c_{44}\approx2$. However, since such measurements are performed on three-dimensional (3D) crystals, one needs in the vector model to account for the higher number of next-nearest neighbors in 3D.
The correct expression for comparison therefore becomes $c_{11}/c_{44}=(1+\xi)K/(2\xi K)$, which gives $K_\mathrm{shear} \approx 0.5$ and $\xi \approx 0.33$ as our estimate of representative values. \subsection{Lattice Phonon Dispersion} \label{sec:dispersion} The friction of the 1D FK model, and the associated resonances in particular, depends strongly on the phonon dispersion relation of the elastic lattice. The phonon dispersion in 1D, calculated in textbook examples~\cite{Kittel}, is readily extended to the scalar model due to the decoupled equations of motion. One obtains \begin{equation} \omega = \pm 2 \left[ K\sin^2 \left( \frac{k_x a_0}{2} \right) + K_\mathrm{shear}\sin^2 \left( \frac{k_y a_0}{2} \right) \right]^{1/2} \label{eq:scalardispersion} \end{equation} for $x$ polarization, and equivalently for $y$ polarization with $x \rightleftharpoons y$. The two branches are shown as curves between important points in the first Brillouin zone (FBZ) in Fig.~\ref{fig:scalarbands} (a). The $x$ polarized branch is additionally shown for the entire FBZ in Fig.~\ref{fig:S-dispersion} (b) and (c) for $K_{\mathrm{shear}}=0.5$ and~$1$. \begin{figure}[] \hskip3.3mm(a)\hfill\strut\\[-5.8ex] \includegraphics[width=0.40\textwidth]{gfx_vibrations_scalarbands.eps} \vskip1ex \includegraphics[width=0.49\textwidth]{gfx_vibrations_S-Dispersion.eps} \vskip-4.4cm \hskip8.3mm(b)\hskip3.5cm(c)\hfill\strut\\[-3ex] \vskip4.4cm \caption{Phonon dispersion for the scalar model: for $K_{\mathrm{shear}}=0.5 K$ between the three points $\Gamma=(0,0),~ X=(\pi,0),~ M=(\pi,\pi)$ for the $x$ (blue points) and $y$ (red squares) polarized branches (a), and 2D phonon dispersion maps of the $x$ polarized branch for $K_{\mathrm{shear}}=0.5$ (b) and $1$ (c). \label{fig:S-dispersion} \label{fig:scalarbands}} \end{figure} For the vector model we apply the harmonic approximation, which together with the Ansatz function \begin{equation} \vec{z}_{j,l}(t) = \Bigg[ \begin{array}{c} \Delta x(k_x,k_y)\\ \Delta y(k_x,k_y) \end{array} \Bigg] e^{i(\omega t+k_xja_0+k_yla_0)} \end{equation} gives the equations of motion \begin{equation} - \frac{d^2}{dt^2} \vec{z}_{j,l}= \underline{D}(k_x,k_y)\vec{z}_{j,l} = \omega^2 \vec{z}_{j,l} \end{equation} with the symmetric dynamical matrix \begin{eqnarray} \underline{D} = \frac{4K}{m}(1-\xi)\underline{D}^{\alpha} + \frac{2K}{m}\xi\underline{D}^{\beta}~,\\ \underline{D}^\alpha _{1,1}\ = \sin^2(k_x a_0 /2)~,\quad \underline{D}^\alpha _{2,2}\ = \sin^2(k_y a_0 /2)~, \\ \underline{D}^\alpha _{1,2}\ = \underline{D}^\alpha _{2,1}\ = 0~, \\ \underline{D}^\beta_{1,1} = \underline{D}^\beta_{2,2}= [1-\cos (k_x a_0)\cos(k_y a_0)]~, \\ \underline{D}^\beta_{1,2} = \underline{D}^\beta_{2,1} =\sin(k_xa_0)\sin(k_ya_0) ~. \end{eqnarray} Diagonalization of the dynamical matrix yields the phonon dispersion shown in Fig.~\ref{fig:vectorbands} for $\xi=0.33$. Results for the entire FBZ are shown in Fig.~\ref{fig:dispersion} (b) and (c). For $\xi=0$, the 1D phonon dispersion is recovered. However, for $\xi=0$ the harmonic approximation yields vanishing coefficients for the transverse modes, and hence a phonon dispersion relation with $\omega=0$ for all wave vectors. These zero frequency vibrational modes do not occur in physical systems, and, as we will see later, for a reasonable description of the dynamic friction it is crucial to exclude them from the model.
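For concreteness, the diagonalization can be carried out numerically on any wave vector. The following minimal NumPy sketch (the function and variable names are ours, not part of any established package) simply transcribes the dynamical matrix above in the reduced units $K=m=a_0=1$, and is included only as an illustration of this step: \begin{verbatim}
# Minimal sketch (our notation): phonon branches of the vector
# model from the 2x2 dynamical matrix D(kx, ky); units K = m = a0 = 1.
import numpy as np

def branches(kx, ky, xi=0.33):
    d_alpha = np.diag([np.sin(kx/2)**2, np.sin(ky/2)**2])
    c = 1.0 - np.cos(kx)*np.cos(ky)
    s = np.sin(kx)*np.sin(ky)
    d_beta = np.array([[c, s], [s, c]])
    D = 4.0*(1.0 - xi)*d_alpha + 2.0*xi*d_beta
    # eigvalsh returns the eigenvalues omega^2 in ascending order;
    # the clip guards against tiny negative round-off values
    return np.sqrt(np.clip(np.linalg.eigvalsh(D), 0.0, None))

# Example: sample the Gamma-X line (ky = 0)
for kx in np.linspace(0.0, np.pi, 5):
    print(kx, branches(kx, 0.0))
\end{verbatim} Evaluating this along $\Gamma$--$X$ with $\xi=0$ reproduces the 1D dispersion in the upper branch and an identically vanishing lower branch, consistent with the discussion above.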
\begin{figure}[] \hskip3.3mm(a)\hfill\strut\\[-5.8ex] \includegraphics[width=0.40\textwidth]{gfx_vibrations_vectorbands.eps} \vskip1ex \includegraphics[width=0.49\textwidth]{gfx_vibrations_X-dispersion.eps} \vskip-4.4cm \hskip8.3mm(b)\hskip3.5cm(c)\hfill\strut\\[-3ex] \vskip4.4cm \caption{Phonon dispersion for the vector model: for $\xi=0.33$ between the three points $\Gamma=(0,0),~ X=(\pi,0),~ M=(\pi,\pi)$ for the longitudinal (L, blue points) and transverse (T, red squares) branches (a), and 2D phonon dispersion maps for $\xi=0.33$ of the (quasi-)longitudinal modes (b) and (quasi-)transverse modes (c). \label{fig:vectorbands} \label{fig:dispersion} } \end{figure} \subsection{Computational details} \label{sec:Comp} We integrate numerically the equations of motion with a fourth order Runge-Kutta algorithm. The time step of $\tau _0/150$ preserves the total energy to at least three digits over the duration of our runs. As we simulate a finite-size system, with periodic boundary conditions [i.e. $\vec{r}_{j,l}=\vec{r}_{j \pm N_x,l} \mp (N_x a_0,0) = \vec{r}_{j,l \pm N_y} \mp (0,N_y a_0)$], we must approximate the ratio of lattice parameters by a nearly incommensurate rational ratio \begin{equation} r = \frac{F_{n+1}}{F_n} \approx \tau_g~, \end{equation} where $F_n$ is the $n$th Fibonacci number. Unless mentioned explicitly we choose as the system size $N_x=N_y=89$ and thus $a_0=144/89$. We restrict ourselves to the case of lattice vectors aligned with the substrate potential. To obtain initial conditions at a specific temperature (0 for energy minimization), we first place the particles in the equidistant equilibrium configuration of the elastic square lattice. Then, we apply a Langevin thermostat with damping parameter $0.5$ for $10^5$ time steps, after which the thermostat is removed. We note that the minimization procedure does not necessarily give the true ground state (GS), i.e. the global energy minimum, of the models. However, for the sake of comparing the qualitative static properties of the 2D models, we consider the numerical approach sufficient. To study the Hamiltonian dynamics of the system we follow the procedure of Ref.~\cite{joostfk}. At $t=0$, we give every particle an equal velocity increment $v_x^+$ in the $x$ direction, so that the sheet of particles starts sliding on the substrate. After this the Hamiltonian dynamics are monitored without further interference. \section{Static friction} \label{sec:Statics} Before investigating the dynamics of the 2D extensions of the FK model, we first discuss the static friction and the GS. The GS can be described in terms of the modulation function $f$ and hull function $g$ respectively~\cite{vanishingstaticfriction} \begin{equation} f(i a_0 \bmod a_\mathrm{s}) = u_i \bmod a_\mathrm{s}~, \end{equation} \begin{equation} g(i a_0 \bmod a_\mathrm{s} ) = u_i -i a_0~, \end{equation} where the transition from zero to finite static friction (the Aubry transition) can be identified by the emergence of discontinuities in $f$ (or equivalently $g$). We calculate the modulation function numerically, using the displacements with respect to constant spacing of the ground state obtained from energy minimization. This procedure gives an approximation of the exact continuous or discontinuous function by a discrete set of points. For the scalar model, the GS configuration can be found directly from the GS of the 1D FK model. Let us denote the position of a particle $i$ in the GS of the 1D model as $q_i$.
We find by direct insertion that the configuration \begin{equation} (x_{j,l},y_{j,l})_\mathrm{GS}=(q_j,q_l)\label{eq:2dGSfrom1d} \end{equation} fulfills the GS criterion of zero force, independently of $K_\mathrm{shear}$. The particles line up within $x$ and $y$ subchains in the positions of the 1D GS, which leaves the static friction unaffected by the extensions. \begin{figure}[] (a)\hfill\strut\\[-4ex] \includegraphics[width=0.40\textwidth,trim={0cm 0cm 0cm 0cm},clip]{gfx_aubry_05.eps}\\ (b)\hfill\strut\\[-4ex] \includegraphics[width=0.40\textwidth,trim={0cm 0cm 0cm 0cm},clip]{gfx_aubry_20.eps} \caption{Numerically obtained modulation $f(x)$ and hull $g(x)$ functions of the $x$ subchains (particles with identical $l$) in the vector model (with $\xi=0.5$, large black dots) and the scalar model (small cyan dots) for (a) $\lambda=0.05$, a floating state below the Aubry transition and (b) $\lambda=0.20$, a pinned state above the Aubry transition. The results are nearly identical for the two models. Similarly close agreement between the models has also been obtained for several values $0 \leq \lambda \leq 0.25$, while a further increase of $\xi$ eventually leads to an appreciable alteration of the functions. } \label{fig:Aubry} \end{figure} For the vector model, equation~(\ref{eq:2dGSfrom1d}) only gives the exact GS in the limiting case of $\xi=0$. When $\xi>0$, the interaction between next-nearest neighbors distorts the GS. However, we find that the distortion is typically weak in the interesting parameter ranges. This can be seen in Fig.~\ref{fig:Aubry}, which shows results obtained by numerical energy minimization for the vector model with $\xi=0.5$ as compared to the scalar model \footnote{For the 2D models the functions have been calculated individually for their respective subchains. For e.g. the $x$ coordinates we consider the particles with identical equilibrium $y$ coordinates (within the lattice), i.e. $u_i \rightarrow x_{j(,l)}$ for fixed $l$. $N_y$ calculated functions are thus obtained for the equally many $x$ subchains. In the figures, the $N_y$ functions are plotted on top of each other.}. The results are nearly identical for the two models both above and below the Aubry transition. We thus conclude that in the physically relevant parameter range the different models describe the Aubry transition and the static friction in a very similar way, which is also similar (or even identical) to the 1D case. All cases considered in the following sections, i.e. $\lambda=0.05 \approx \lambda_c/3$ and $\xi \leq 0.5$, are thereby approximately equivalent to the 1D case in terms of static properties. \section{Equilibration} \label{sec:Equilibration} To investigate the dynamic friction, we add a velocity $v_x^+$ to the CM and monitor the decay to equilibrium. The dissipation of CM motion into internal energy (phonons) and subsequent decay are key features of the FK model. However, there are many ways in which the equilibration can be slowed down or inhibited, e.g.\ if the systems are too small~\cite{ShinjoHirano,fkphononconsoli}. Here we therefore first examine whether the different extensions of the FK model to 2D correctly describe the process of thermal equilibration.
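For reference, the kick-and-release protocol itself can be summarized in a few lines of code. The sketch below is only an illustration under stated assumptions and not the code used for our results: all names are ours, it integrates with a plain velocity Verlet scheme rather than the fourth order Runge-Kutta algorithm of Sec.~\ref{sec:Comp}, it omits the Langevin preparation of thermal initial conditions, and the periodic boundaries are handled by the minimum-image convention, which is valid for the small relative displacements relevant here: \begin{verbatim}
# Minimal sketch (our notation) of the sliding protocol for the
# vector model; reduced units m = K = a_s = 1, lattice a0 = 144/89.
import numpy as np

def forces(r, lam=0.05, xi=0.33, a0=144/89):
    """Substrate plus internal forces for positions r, shape (Nx, Ny, 2)."""
    Nx, Ny, _ = r.shape
    box = np.array([Nx*a0, Ny*a0])
    # substrate contribution: F = -grad V_ext
    f = lam*np.stack([np.sin(2*np.pi*r[..., 0]),
                      np.sin(2*np.pi*r[..., 1])], axis=-1)
    bonds = [((1, 0), a0, 1-xi), ((0, 1), a0, 1-xi),
             ((1, 1), np.sqrt(2)*a0, xi), ((1, -1), np.sqrt(2)*a0, xi)]
    for shift, rest, k in bonds:
        d = np.roll(r, (-shift[0], -shift[1]), axis=(0, 1)) - r
        d -= box*np.rint(d/box)              # minimum-image convention
        dist = np.linalg.norm(d, axis=-1, keepdims=True)
        fb = k*(dist - rest)*d/dist          # spring force on particle (j,l)
        f += fb - np.roll(fb, shift, axis=(0, 1))  # ...and the reaction
    return f

def slide(r, v, vplus=0.12, dt=1.0/150.0, steps=20000):
    """Kick the CM by vplus in x, then integrate (velocity Verlet, m = 1)."""
    v = v.copy()
    v[..., 0] += vplus                       # uniform velocity increment
    vcm, f = [], forces(r)
    for _ in range(steps):
        v += 0.5*dt*f
        r = r + dt*v
        f = forces(r)
        v += 0.5*dt*f
        vcm.append(v[..., 0].mean())         # record the CM velocity
    return np.array(vcm)
\end{verbatim} The recorded CM velocity trace can then be inspected for the recurrence, decay and thermal fluctuations discussed below.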
\subsection{Harmonic equipartition} \begin{figure*}[] \begin{center} \includegraphics[width=0.32\textwidth,trim={0cm 0cm 0cm 0cm},clip]{gfx_equilibration_scalar_energy.eps} \includegraphics[width=0.32\textwidth,trim={0cm 0cm 0cm 0cm},clip]{gfx_equilibration_scalar_histogramX.eps} \includegraphics[width=0.32\textwidth,trim={0cm 0cm 0cm 0cm},clip]{gfx_equilibration_scalar_histogramY.eps} \includegraphics[width=0.32\textwidth,trim={0cm 0cm 0cm 0cm},clip]{gfx_equilibration_cross_energy.eps} \includegraphics[width=0.32\textwidth,trim={0cm 0cm 0cm 0cm},clip]{gfx_equilibration_cross_histogramX.eps} \includegraphics[width=0.32\textwidth,trim={0cm 0cm 0cm 0cm},clip]{gfx_equilibration_cross_histogramY.eps} \end{center} \vskip-9.0cm (a)\hskip5.5cm(b)\hskip5.5cm(c)\hskip5.5cm\strut \vskip4.0cm (d)\hskip5.5cm(e)\hskip5.5cm(f)\hskip5.5cm\strut \vskip3.5cm \caption{Indicators of thermal equilibration, for the scalar model with $K_\mathrm{shear}=0$ and $1$ (the results are independent of the $K_\mathrm{shear}$ value) (top) and for the vector model with $\xi=0.33$ (bottom). Subfigures (a) and (d) show the time evolution of the kinetic (red upper solid line), potential (blue lower solid line) and total (black dashed line) energies, after a CM velocity increase of $v_x^+=0.12$ (a resonant value) from the GS. The energies clearly display equipartition, after the initial recurrence. The histograms of the $x$ and $y$ CM velocity components are shown for the scalar model in (b) and (c) and for the vector model in (e) and (f), with fitted Gaussian functions (red lines). The histograms were obtained by binning the velocity components over a time period of $9 \times 10^5 \tau_0$, starting after an initial equilibration time of $10^5 \tau_0$. Results for almost all other parameter values are comparable. For small $\xi$, equipartition is neither expected nor observed, and for $\xi=0$ equilibration fails due to symmetry. \label{fig:equilibration} } \end{figure*} In thermal equilibrium the systems should obey equipartition. For (approximately) harmonic interaction, this means that the energy should be divided on average equally between kinetic energy $E_k$ and relative potential energy $E_p-E_0$. To determine whether equipartition is obeyed, we monitor these energies as a function of time after the initial CM velocity increase. Typical cases are shown in Figs.~\ref{fig:equilibration}(a) and~(d). As these systems were started from zero temperature, one observes an initial recurrence, followed by an approximately exponential decay of the CM velocity, after which it starts to jiggle thermally around a zero mean. We see that the expected equipartition is reached for the scalar model, and for the vector model with sufficiently large values of $\xi$. For the vector model with small values of $\xi$ (not shown), however, the harmonic approximation is no longer valid at the energies in our simulations, and thus harmonic equipartition is neither expected nor obeyed. \subsection{Thermal distributions} As a second test, we check whether the CM velocity obeys the Maxwell-Boltzmann (MB) distribution. For the scalar model, we find that the $x$ component of the CM velocity obeys an MB distribution, but the $y$ component does not, as can be seen in Figs.~\ref{fig:equilibration}(b) and~(c). This clearly exposes the previously mentioned flaw of the model: the scalar nature of the internal interaction causes a complete decoupling in the equations of motion between the $x$ and $y$ components, resulting effectively in two independent 1D models.
There is therefore no equilibration between the different components, and the kinetic energy from a velocity increment in the $x$ direction never dissipates into vibrations in the $y$ direction. Moreover, because these simulations start from the ground state, each independent subchain has the same initial conditions. Therefore, the energy is only redistributed over $N_x$ independent degrees of freedom, not $2 N_x N_y$. As a result, the temperature of the MB distribution is approximately a factor of $2N_y$ too high. While the symmetry between the subchains can be broken by initial conditions, such as an initial temperature, the decoupling between $x$ and $y$ is inherent in the dynamics. For this reason, the scalar model is clearly not suitable as a fully 2D extension for the dynamic FK model. The vector model, conversely, is well behaved for physical parameter values ($0.01 \leq \xi \leq 0.5$). For both $x$ and $y$ components the CM velocity obeys the MB distribution corresponding to the correct temperature. Only for $\xi=0$ at zero temperature does the system fail to equilibrate properly. This is due to the symmetry of the initial conditions, which is preserved by the symmetry of the dynamics. As a result, the nonlinear terms never become relevant. At finite temperature, this symmetry is broken and equilibration is restored. In many cases for the vector model, however, we find that while the temperatures do converge, this convergence is very slow. In order to obtain enough statistics and satisfactorily confirm equilibration, we were sometimes forced to resort to computationally cheaper smaller systems and simulate for much longer times. It thus appears that all the vector model systems (with $0<\xi\leq0.5$) do equilibrate eventually, but that the process can be very slow. We also note that equilibration was obtained for the vector model with the surprisingly small size $N_x=N_y=34$ (on 21 periods of the potential). For the scalar model and 1D FK, $N=34$ is too small for dissipation: there is no decay of the velocity as the system never leaves the initial recurrence. Thus, we can also conclude that the nonlinear coupling in the vector model not only provides equilibration between $x$ and $y$, but also helps equilibration in small systems. We have not investigated the system-size dependence systematically, but note that in the vector model with our parameters, for $N_x=N_y=13$ (on 8 periods of the potential), the initial recurrence survives for an extremely long time ($\sim 200000\tau_0$) even at resonant velocities. The authors of Ref.~\cite{Wang-vector} investigate an even smaller system, $N_x=N_y=12$; we suspect that this is why they did not observe any dissipation due to internal motion. \section{Viscous friction} \label{sec:Viscous} \begin{figure}[] \includegraphics[width=0.4\textwidth]{gfx_viscous_friction_fit.eps} \caption{The averaged CM velocity decay of the vector model (blue points) with $\xi=0.33$, $v^+_x=0.12$ and initial temperature $T=0.16\lambda$. The dashed red line shows the exponential fit in the range $0\leq t \leq 1000 \tau_0$ from which the effective viscous friction coefficient $\eta$ is obtained. \label{fig:fit} } \end{figure} When a finite initial temperature is introduced in the system, the CM velocity decays more rapidly, as if subjected to an effective viscous friction. We can thus extract an effective viscous friction coefficient $\eta$ by fitting an exponential function to the decay curve in a similar way as in the 1D case~\cite{joostfk}.
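The extraction of $\eta$ amounts to a two-parameter least-squares fit. A minimal sketch of such a fit (assuming the ensemble-averaged decay is available as arrays \texttt{t} and \texttt{v}; the function and variable names are ours) could read: \begin{verbatim}
# Minimal sketch: effective viscous friction from an exponential fit,
# v(t) = v0*exp(-eta*t), restricted to the initial window where the
# internal temperature is still approximately constant.
import numpy as np
from scipy.optimize import curve_fit

def fit_eta(t, v, t_max=1000.0):
    m = t <= t_max
    popt, _ = curve_fit(lambda tt, v0, eta: v0*np.exp(-eta*tt),
                        t[m], v[m], p0=(v[m][0], 1e-3))
    return popt[1]   # the effective viscous friction coefficient eta
\end{verbatim} Both the prefactor and the exponent are free parameters, as in the procedure described next.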
In the results presented below, we fit both the coefficient in the exponent and the prefactor of the exponential function to the average decay curve of at least 100 trajectories (300 for the 1D system) obtained from different initial conditions (generated by differently seeded thermostats). Once the CM velocity has decayed enough, the internal temperature of the sheet of particles increases. We therefore fit only to an initial period, where the temperature is relatively constant, in this case the first 1000~$\tau_0$. An example of the ensemble average and the fit is shown in Fig.~\ref{fig:fit}. In the following, we consider the low temperature case $T=0.022\lambda$, which has shown clear resonance peaks in the 1D case~\cite{joostfk}. In Sec.~\ref{sec:velocity_dependence} we first confirm that the resonance peaks in the friction that appear in the 1D systems survive in the extension to 2D. In Sec.~\ref{sec:model_parameters} we then investigate how the new parameters of the 2D models affect the friction. Explanations for the observed results are presented in Sec.~\ref{sec:Lattice vibrations}.
\subsection{Velocity dependence and resonances} \label{sec:velocity_dependence}
\begin{figure}[] \includegraphics[width=0.45\textwidth]{gfx_viscous_friction_eta_scalar.eps} \includegraphics[width=0.45\textwidth]{gfx_viscous_friction_eta_vector.eps} \vskip-5.77cm \hskip-2.44cm \includegraphics[width=0.19\textwidth]{gfx_viscous_friction_eta_vector_inset.eps} \vskip3.3cm \vskip-12.0cm (a)\hfill\strut\\[-2ex] \vskip6.0cm (b)\hfill\strut\\[-2ex] \vskip6.0cm \caption{The effective viscous friction as a function of initial velocity $\eta (v^+_x)$ for the scalar (a) and vector (b) models, compared to the 1D case, for initial temperature $T=0.022\lambda$. Results for small values of $\xi$ are shown in an inset with logarithmic scale. Red lines (with upward-pointing triangles) correspond to models with the ratio $c_{11}/c_{44}=2$; other colors are not directly comparable between the figures. Resonances and their order (vertical dashed lines) apply in both cases, but have been omitted in the lower panel so as not to obscure the results. } \label{fig:eta_v} \end{figure}
In Fig.~\ref{fig:eta_v} we show how the friction coefficient depends on the velocity $v_x$ for several values of $K_\mathrm{shear}$ in the scalar model (a) and $\xi$ in the vector model (b), in comparison to the 1D results. The scalar model is found to behave much like its 1D counterpart. In the limiting case $K_\mathrm{shear}=0$ the results are identical, as the system then reduces to $N_y=89$ independent 1D chains. Results for larger $K_\mathrm{shear} \leq 1$ remain comparable. The introduction of a shear interaction with a reasonable $K_\mathrm{shear}$ value has two qualitative effects: an increase of friction for resonant velocities, and a suppression of friction for non-resonant velocities. The increase is more pronounced for smaller values of $K_\mathrm{shear}$, whereas the suppression appears with strong shear interaction. For the vector model, the velocity dependence changes drastically with the model parameter $\xi$. For small values of $\xi \lesssim 0.01$ the friction is orders of magnitude higher than for larger values, as shown in the inset of the figure on a logarithmic scale. The friction also decreases with increased initial velocity. This result is pathological and related to the unrealistic, (nearly) vanishing $c_{44}$ constant, yielding a transverse branch of phonon modes with extremely low frequency.
Phonon modes with low or zero frequency can absorb energy easily, especially in combination with strong nonlinearity, and thus for small values of $\xi$, the modes involving transverse motion rapidly absorb energy and cause extremely high friction. For $\xi=0.1$, there is a qualitative agreement with the 1D and scalar models for low velocities. For high velocities, however, the friction increases dramatically and behaves similarly to the friction for smaller $\xi$. For the more reasonable $\xi=0.33$, the agreement between the 1D and 2D cases extends to all the simulated velocities. As for the scalar model, we find in this case a larger friction at the resonances than in the 1D case. Lastly, $\xi=0.5$ results in a higher friction for low velocities and the apparent appearance of a third resonance peak. Following the procedure of Sec.~\ref{sec:phonon_modes}, we find that this peak, unlike the other two, is not due to a resonance in the $x$ subchains. Energy is instead absorbed in modes with a wave vector oriented along the lattice diagonal, as the next-nearest- and nearest-neighbor interactions are equally strong to first order for this parameter value. It is not clear to us whether there exist real materials in which such an effect could be observed.
\subsection{Model parameters} \label{sec:model_parameters}
\begin{figure}[] \includegraphics[width=0.45\textwidth]{gfx_viscous_friction_eta_shear.eps} \hfill\includegraphics[width=0.45\textwidth]{gfx_viscous_friction_eta_xi.eps} \vskip-12cm (a)\hfill\strut\\[-2ex] \vskip6cm (b)\hfill\strut\\[-2ex] \vskip6cm \caption{The viscous friction $\eta$ as a function of $K_\mathrm{shear}$ in the scalar model (a) and $\xi$ in the vector model (b) for three different initial velocities, and initial temperature $T=0.022\lambda$. The values $v_x^+=0.03,\:0.05,\:0.07$ correspond to a resonant value (0.05) and two nearby non-resonant values. The elastic behavior of the models can be compared through the relation in Eq.~(\ref{eq:kshearxi}). } \label{fig:k_shear_xi} \end{figure}
We now investigate in more detail the dependence of $\eta$ on the two new parameters $K_\mathrm{shear}$ and $\xi$ for both resonant and non-resonant sliding velocities. Figure \ref{fig:k_shear_xi} shows the results for a wide range of values of the model parameters and for three different initial velocities: one resonant ($v_x^+=0.05$) and two nearby non-resonant ($v_x^+=0.03,~0.07$). For the scalar model, shown in Fig.~\ref{fig:k_shear_xi}(a), we find a weak dependence for small values of $K_\mathrm{shear} \lesssim 0.1$, and a monotonic decrease thereafter. Only for extremely unrealistic values $K_\mathrm{shear} \gtrsim 10$ do the curves for the different velocities meet. The resonance is thus preserved throughout the range of reasonable $K_\mathrm{shear}$. The vector model, shown in Fig.~\ref{fig:k_shear_xi}(b), is qualitatively different. At low values of $\xi$, the friction coefficient is orders of magnitude higher than that of the scalar model, but also depends only weakly on $\xi$. At intermediate values of $\xi$, the friction coefficient decreases and is similar to that of the scalar model for equivalent $K_\mathrm{shear}$ parameters. Finally, for large values of $\xi \gtrsim 0.7$ up to the maximum of $\xi=1$, the friction increases, as the system begins to separate into two independent lattices, consisting of the previously next-nearest neighbors, similar to the $\xi=0$ case but with a less incommensurate lattice spacing.
It is thus only in the region $0.1 \lesssim \xi \lesssim 0.7$, i.e.\ $1.22 \lesssim c_{11}/c_{44} \lesssim 5.5$, that the predicted resonance can be found. Interestingly, 74 out of 88 (84\%) cubic single crystals found in Ref.~\cite{Rubberbible} are also in this range. We therefore expect the resonance behavior to be the rule rather than the exception also in 2D systems.
\section{Phonon mode populations\label{sec:Lattice vibrations}\label{sec:phonon_modes}}
To explain the trends observed in the last section, we now turn our attention to a direct connection between the sliding dynamics and the lattice phonon dispersion. Through the time development of the phonon mode populations we determine how energy is transferred to vibrational modes and consequently dispersed in the vibrational spectrum, which gives detailed insight into the mechanism of dissipation. A discrete Fourier transform of the coordinates is performed as defined by \cite{SciPy} \begin{eqnarray} A_{j,l}(x) = \sum _{s=0} ^{N_x-1} \sum _{t=0} ^{N_y-1} x_{s,t}e^{-i(s\Omega_j + t\Omega_l)} \\ \Omega_j = 2 \pi j /N_x \:;\: j = 0,1,...,N_x-1 \\ \Omega_l = 2 \pi l /N_y \:;\: l = 0,1,...,N_y-1 \end{eqnarray} and equivalently for the $A_{j,l}(y)$ component with $x_{s,t} \rightarrow y_{s,t}$. The power $|A_{j,l}(w)|^2$ is then a direct indicator of the population in mode $(j,l)=(k_x,k_y)$ of polarization $w \in \left\{ {x, y}\right\}$ in the scalar model. For the vector model this is not strictly the case, as the $x$-$y$ coupling in the interaction causes the modes not to be purely $x$ or $y$ polarized, except in specific symmetry directions. Nevertheless, we find that these coordinates are sufficient for a qualitative analysis.
\begin{figure*}[] \includegraphics[width=0.7\textwidth]{gfx_vibrations_scalar05_phononmaps0.eps} \includegraphics[width=0.7\textwidth]{gfx_vibrations_scalar05_phononmaps1.eps} \includegraphics[width=0.7\textwidth]{gfx_vibrations_scalar05_phononmaps10.eps} \vskip-18.1cm \hskip5mm\hskip2.75cm(a)\hskip5.8cm(b)\hfill\strut\\[-2ex] \vskip6.0cm \hskip5mm\hskip2.75cm(c)\hskip5.8cm(d)\hfill\strut\\[-2ex] \vskip6.0cm \hskip5mm\hskip2.75cm(e)\hskip5.8cm(f)\hfill\strut\\[-2ex] \vskip6.0cm \caption{The population of the phonon modes starting from initial temperature $T=0.022\lambda$ as a function of time for the scalar model with $K_\mathrm{shear}=0.5$. The squared modulus of the Fourier transformed coordinates $A_{j,l}(x)$ (a, c, e) and $A_{j,l}(y)$ (b, d, f) is plotted at different times after the velocity increment $v_x^+=0.12$. \label{fig:phonon_scalar} } \end{figure*} \begin{figure*}[] \includegraphics[width=0.7\textwidth]{gfx_vibrations_vector033_phononmaps0.eps} \includegraphics[width=0.7\textwidth]{gfx_vibrations_vector033_phononmaps1.eps} \includegraphics[width=0.7\textwidth]{gfx_vibrations_vector033_phononmaps10.eps} \vskip-18.1cm \hskip5mm\hskip2.75cm(a)\hskip5.8cm(b)\hfill\strut\\[-2ex] \vskip6.0cm \hskip5mm\hskip2.75cm(c)\hskip5.8cm(d)\hfill\strut\\[-2ex] \vskip6.0cm \hskip5mm\hskip2.75cm(e)\hskip5.8cm(f)\hfill\strut\\[-2ex] \vskip6.0cm \caption{The population of the phonon modes starting from initial temperature $T=0.022\lambda$ as a function of time for the vector model with $\xi=0.33$. The squared modulus of the Fourier transformed coordinates $A_{j,l}(x)$ (a, c, e) and $A_{j,l}(y)$ (b, d, f) is plotted at different times after the velocity increment $v_x^+=0.12$.
\label{fig:phonon_vector} } \end{figure*}
Shown in Figs.~\ref{fig:phonon_scalar} and~\ref{fig:phonon_vector} are the Fourier transformed coordinates for the scalar model with $K_\mathrm{shear}=0.5$ and the vector model with $\xi=0.33$, respectively, in the case of $T=0.022\lambda$. We choose $v_x^+=0.12$ to illustrate resonant behavior. Results are shown for the time points $t=0$ (just before the velocity increment), $t=50\tau_0$ (showing the initial conversion of sliding energy) and $t=1000\tau_0$ (illustrating a longer-term redistribution of energy). For $t=0$ in both cases [Figs.~\ref{fig:phonon_scalar}(a), \ref{fig:phonon_scalar}(b), \ref{fig:phonon_vector}(a) and~\ref{fig:phonon_vector}(b)] we find a number of peaks in longitudinal modes with wave vectors corresponding to the modulation of the external potential and its harmonics. It is through these modes with wave vectors in the $x$ direction ($j$) that energy is transferred to the phonons. After this, however, [Figs.~\ref{fig:phonon_scalar}(c), \ref{fig:phonon_scalar}(d), \ref{fig:phonon_vector}(c) and~\ref{fig:phonon_vector}(d)] the energy predominantly spreads into wave vectors with a $y$ component ($l \neq 0$). This channel of dissipation does not exist in the 1D models. The pattern in which the energy spreads out depends strongly on the elastic parameters, stretching out (compressing) in the $y$ direction for lower (higher) $K_\mathrm{shear}$ or $\xi$. For the scalar model in particular, it is clear from Eq.~(\ref{eq:scalardispersion}) that the phonon dispersion level curves transform similarly, which indicates that energy is most easily spread between modes with comparable frequencies. This explains the general decrease in friction with $K_\mathrm{shear}$ seen in Fig.~\ref{fig:k_shear_xi}(a), as the minimal gap in frequency between neighboring modes is inversely related to $K_\mathrm{shear}$. We suspect that a similar effect is at play in the vector model. For the scalar model, the second polarization ($y$) is inconsequential due to the decoupling, as can be seen from Figs.~\ref{fig:phonon_scalar}(b), (d), and~(f). The vector model (Fig.~\ref{fig:phonon_vector}), however, does have coupling between the $x$ and $y$ dynamics, which opens up yet another channel for dissipation: here energy is also appreciably transferred to the $y$-polarized phonon modes. The vector model is thus, with its coupled equations of motion, capable of dissipating energy throughout the 2D spectrum with respect to both wave vector and polarization.
\section{Two-dimensional effects on the viscous friction} \label{sec:2D}
On the basis of the above results we conclude that a suitable extension of the FK model to 2D for describing sliding friction is the vector model with $\xi=0.33$ (hereafter referred to as the 2D model). In this section we show some of the new effects which occur in 2D. All calculations follow the same procedure as in Sec.~\ref{sec:Viscous}, unless otherwise stated.
\subsection{Temperature dependence} \label{sec:2D_temperature}
We first study the dependence of the friction on the initial temperature, as it is ultimately the temperature-induced fluctuations which cause the viscous decay of the CM velocity.
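For reference, thermal initial conditions at a target temperature can also be prepared directly, instead of with a thermostat run, by drawing the particle velocities from the corresponding Maxwell-Boltzmann distribution before applying the CM velocity increment. A minimal sketch in reduced units ($k_\mathrm{B}=1$, unit particle mass; names are hypothetical):
\begin{verbatim}
import numpy as np

def thermal_velocities(n_x, n_y, temperature, seed=0):
    # Draw per-particle velocities from a Maxwell-Boltzmann
    # distribution at the given temperature (k_B = 1, m = 1),
    # then remove any net drift so the CM starts at rest.
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, np.sqrt(temperature), size=(n_x, n_y, 2))
    v -= v.mean(axis=(0, 1))          # subtract the CM velocity
    return v

# Different seeds give the independent trajectories over which
# the decay curves are ensemble averaged.
v_init = thermal_velocities(89, 89, 0.022)  # T = 0.022 (lambda = 1)
\end{verbatim}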
\begin{figure}[] \begin{center} ~~\includegraphics[width=0.45\textwidth]{gfx_extras_1D.eps} ~~\includegraphics[width=0.45\textwidth]{gfx_extras_2D.eps} \end{center} \vskip-12cm (a)\hfill\strut\\[-2ex] \vskip6cm (b)\hfill\strut\\[-2ex] \vskip6cm \caption{Viscous friction coefficient $\eta$ as a function of velocity $v_x^+$ for the 1D (a) and 2D (b) models and four different temperatures, each corresponding to equilibration from sliding at the first four resonant velocities, as in~\cite{joostfk}. Dashed vertical lines indicate the positions of the 2nd, 3rd, and 4th resonances. The temperature affects the resonance peaks differently in 2D. \label{fig:eta_temp} } \end{figure}
Shown in Fig.~\ref{fig:eta_temp} are the friction-velocity curves obtained for the 1D (a) and 2D (b) models for four different values of the initial temperature: $T=0.022\lambda,~0.063\lambda,~0.12\lambda,~0.16\lambda$. The results for the 1D model are in good agreement with previous results~\cite{joostfk}. Increased temperature here leads to a significant thermal broadening of the resonance peaks. The peak corresponding to $n=3$ sees an increase in friction as it essentially becomes a part of the larger $n=2$ peak. This second peak, however, remains distinguishable, and its maximum value is approximately constant for all temperatures. For the 2D model we find both similarities and differences compared to the 1D case. Also in this case the peaks are thermally broadened. However, the effect is smaller, i.e.\ the $n=2$ peak remains narrower than in the 1D model. Importantly, we also observe a shift of all the curves towards higher friction, i.e.\ the maximum value of the peak does not remain the same. This shift is a qualitatively new behavior which has not been seen in the 1D model. We note that the values we find for the viscous friction coefficients are significantly lower than those typically found in experiments, which would be around $1/\tau_0$. However, we observe that the temperature dependence changes dramatically with the dimensionality and that the friction increases with dimensionality overall, due to the increase in the number of degrees of freedom into which energy can be dissipated. Thus, we expect that the viscous friction in a 3D model would be substantially larger than it is in our 2D systems.
\subsection{Anisotropy of the sliding direction} \label{sec:2D_direction}
For a 2D model, just as in real systems, the sliding is not necessarily restricted to a specific direction. At the atomic scale, the 2D friction force need not be the same in all directions (see for example~\cite{gneccoanisotropy}) and it can even have components perpendicular to the sliding direction (see for example \cite{Yang-vector,Wang-vector,Wang4,onsanisotropy}). In this section we therefore show some of the effects of anisotropy which may occur when the velocity increment is not chosen along a symmetry axis.
\begin{figure}[] \begin{center} ~~\includegraphics[width=0.45\textwidth]{gfx_extras_x.eps} ~~\includegraphics[width=0.45\textwidth]{gfx_extras_y.eps} \end{center} \vskip-12cm (a)\hfill\strut\\[-2ex] \vskip6cm (b)\hfill\strut\\[-2ex] \vskip6cm \caption{Friction coefficients $\eta_x$ (a) and $\eta_y$ (b) as functions of their respective velocity increment components for angled velocity increments. Results are shown for four different angles $0 \leq \varphi \leq \pi/4$ and initial temperature $T=0.022\lambda$. Note that the $\eta_x$ coefficient is shown also in (b) for $\varphi=0$, as $\eta_y$ is not a meaningful quantity in this setup.
\label{fig:eta_xy} } \end{figure}
We use $\vec{v}^+=v^+(\cos \varphi, \sin \varphi)$, i.e.\ a velocity increment in a direction $\varphi$. The CM velocity then consists of two components $\vec{v}=(v_x,v_y)$ which may potentially decay at different rates. We perform one exponential fit to each component to extract the two friction coefficients $\eta_x$ and $\eta_y$. The fitting procedure is otherwise identical to previous sections. To reduce the number of parameters we consider once again only the low temperature case $T=0.022\lambda$. In Fig.~\ref{fig:eta_xy} we show $\eta_x$ (a) and $\eta_y$ (b) for four different angles $0\leq \varphi \leq \pi/4$, i.e.\ $v_x^+$ larger than or equal to $v_y^+$. The behavior of $\eta_x$ is similar for all four angles. It is only for the case of $\varphi=\pi/4$ (i.e.\ $v_x^+=v_y^+$) that a small increase in friction can be found for high velocities ($v_x^+ \gtrsim 0.12$). The friction can thus, in this case, be well described in terms of the single velocity increment component $v_x^+$, disregarding to good approximation any influence of $v_y^+$. As expected, $\eta_y$ is identical to $\eta_x$ in the limiting case $\varphi=\pi/4$. For the two smaller angles, however, a significant increase in friction is found for intermediate to high velocities ($v_x^+ \gtrsim 0.06$), as compared to a velocity increment with the same $v_y^+$ but $v_x^+=0$. The sliding in $x$, which has been projected out in Fig.~\ref{fig:eta_xy}(b), couples strongly to the $y$ dynamics. As this additional sliding rapidly raises the temperature, the friction becomes much higher. We emphasize that this 2D effect could never arise in the scalar model, as the velocity and friction components would then be individually determined for $x$ and $y$ from the one-component sliding described in Sec.~\ref{sec:velocity_dependence}.
\begin{figure}[] \begin{center} \includegraphics[width=0.45\textwidth]{gfx_extras_direction.eps} \caption{The sliding angle $\varphi(t)$ as a function of time after three different velocity increment magnitudes $v^+$, for initial angle $\varphi \approx 0.528$ and temperature $T=0.022\lambda$. The direction of the velocity changes because the friction force is not parallel to the velocity. As a consequence, the sliding direction changes in a way that depends on the magnitude of the velocity. \label{fig:direction} } \end{center} \end{figure}
The possibility of different decay rates in the velocity components also gives rise to an interesting effect when considering the time evolution of the sliding angle $\varphi(t)=\arctan[v_y(t)/v_x(t)]$. In Fig.~\ref{fig:direction} we show $\varphi (t)$ for three different velocity increment magnitudes and initial value $\varphi\approx0.528$. Here we see that the angle can, depending on the velocity increment, either remain nearly constant or vary with time. In 2D the lattice can thus follow a curved trajectory due to friction forces perpendicular to the sliding direction, instead of a straight line as would be expected macroscopically. For a given initial direction, the magnitude of the velocity can thus be used to control the trajectory.
\section{Conclusions} \label{sec:Conclusions}
We have examined friction in the FK model in 2D, considering the additional parameters that result from the extra dimension, as well as the more complex phonon spectrum compared to 1D.
We have systematically investigated how two common types of extensions of the 1D FK model to 2D affect the static and dynamic frictional properties, demonstrating clearly that the process of generalization is not trivial. The models behave very similarly (or even identically) to the 1D case in terms of static properties. The dynamical friction properties, however, are more sensitive to the 2D nature of the models. The wrong type of interaction or poor parameter choices can lead to unphysical dynamics and unphysical dynamic friction. The scalar model, though 2D in terms of particle positions and lattice structure, consists in practice of two decoupled 1D systems which do not interact. It therefore cannot describe thermal equilibration, which is crucial for dissipation, and does not capture the 2D behavior of dissipation in real physical systems. The vector model is in every sense a true 2D model, as the higher-order terms in the internal interaction couple the coordinate components. However, it is important that the elastic parameter $\xi$ is chosen with care. Too small values of $\xi$ give rise to phonon modes with unphysical dispersion and extremely high friction. For the physically realistic parameter values around $\xi=0.33$, we find a qualitative but not always quantitative agreement with the 1D case for the dynamic friction, and no sign of pathologies.
We have used the vector model to study some unique features that result from the true 2D nature of the system. In 2D there are extra channels for dissipation: phonon modes with (quasi-)transverse polarization as well as phonons travelling in directions which are not aligned with the sliding. Their influence is seen in the qualitatively different temperature dependence of the dynamic friction, as compared to the 1D case. The new effects would likely be even stronger in a fully 3D model, as there are then, e.g., several transverse phonon branches. In addition, the possibility of sliding in directions other than the symmetry axes of the substrate has been demonstrated to result in nontrivial anisotropic effects.
\section{Acknowledgements} \label{sec:Acknowledgements}
We thank Joost A.\ van den Ende for interesting discussions. J.N.\ acknowledges financial support for travel from Sancta Ragnhild's Gille. A.F.\ acknowledges support of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO) within the program n.129 ``Fundamental Aspects of Friction''. A.S.d.W.'s work is financially supported by an Unga Forskare grant from the Swedish Research Council (Vetenskapsr\aa{}det). This work is supported in part by COST Action MP1303.
\section{References} \label{sec:References}
\section{Introduction}
\begin{figure*}[htb] \centering \includegraphics[width=2.0\columnwidth]{block_diagram3.png} \caption{Our AMC system model with adversarial interference. The shaded blocks correspond to our time- and frequency-domain classifiers.} \label{block_diagram} \end{figure*}
\IEEEPARstart{T}{he} recent exponential growth of wireless traffic has resulted in a crowded radio spectrum, which, among other factors, has contributed to reduced efficiency in mobile networks. With the number of devices requiring wireless resources projected to continue increasing, this inefficiency is expected to present large-scale challenges in wireless communications. Automatic modulation classification (AMC), which is a part of cognitive radio technologies, aims to alleviate the inefficiency induced in shared spectrum environments by dynamically extracting meaningful information from massive streams of wireless data. Traditional AMC methods are based on maximum-likelihood (ML) approaches \cite{ml_review}, which consist of deriving statistical decision boundaries using hand-crafted features to discern various modulation constellations. More recently, deep learning (DL) has become a popular alternative to ML methods for AMC, since it does not require manual feature engineering to attain robust classification performance \cite{amc_dl1}.
Despite their robust AMC performance, however, deep learning models are highly susceptible to adversarial attacks \cite{intriguing_props}, which introduce additive wireless interference into transmitted RF signals to induce high-confidence misclassifications on well-trained deep learning models. In addition to degrading the classification performance of a particular targeted model, adversarial attacks are also transferable to other classification networks that are trained to perform the same task as the targeted classifier \cite{transfer1}. As a result, an adversary can degrade the performance of several deep learning models simultaneously, thus reducing spectrum efficiency and compromising secure communication channels.
In this work, we develop a novel AMC method that is capable of mitigating the effects of transferable adversarial attacks. Specifically, our method learns from frequency-domain features, as opposed to the in-phase and quadrature (IQ) time-domain features traditionally used for deep learning-based AMC. After quantifying the model's performance in the absence of adversarial interference, we consider a wireless channel compromised by an adversary aiming to induce an erroneous modulation constellation prediction at the receiver by injecting interference into the transmitted signal. Although the interference degrades the classification performance of the model trained on IQ features, the frequency feature-based model significantly increases the probability of correctly classifying the perturbed signal, thus mitigating the effects of transferable adversarial interference.
\textbf{Related Work:} The susceptibility of deep learning-based AMC models to adversarial attacks has been demonstrated in prior work \cite{adv_filters,amc_adv_atk1,amc_adv_atk2}. Such attacks have been found to be more efficient than traditional jamming attacks applied in communication networks \cite{jamming} and, as a result, present challenges for deep learning deployment in autonomous wireless channels \cite{hurdle1}. Yet, limited work has explored the degree to which AMC models are susceptible to such interference.
Few defenses have been proposed to mitigate the effects of wireless adversarial interference \cite{autoencoder_defense}, and to the best of our knowledge, no work has explored the extent to which adversarial attacks are transferable between domains (although various domains for classification have been investigated \cite{fft_features}). On the other hand, several defenses have been proposed for defending deep learning image classifiers from adversarial attacks, with no method generally accepted as a robust solution \cite{review1}. Nonetheless, even considering the adoption of image classification defenses for AMC is difficult due to the differing constraints placed on the adversary in both settings (e.g., transmit power budget, SNR degradation, visual perceptibility, etc.). In this work, we address this challenge by proposing a novel AMC methodology, which allows us to quantify the extent of adversarial transferability in a wireless channel with real-world communication constraints.
\textbf{Summary of Contributions:} The main contributions of this work are as follows: \begin{enumerate} \item \textbf{A novel signal receiver architecture for AMC} (Sec. II-B, II-C, and III-B): We model and develop a robust AMC module consisting of both frequency-based and IQ-based deep learning architectures. \item \textbf{Resilience to time domain adversaries} (Sec. II-D and III-C): We demonstrate that, although an adversary may be able to degrade the classification performance on a time-domain model, its attacks do not transfer well to our models trained using frequency features. \item \textbf{Best architecture to offset adversary} (Sec. III-C): Our results show that, out of several deep learning architectures, convolutional neural networks (CNNs) have the fastest training times and mitigate the classifier degradation to the greatest extent. \end{enumerate}
\section{Our AMC Methodology}
In this section, we outline the wireless channel we consider for AMC as well as the assumptions about the knowledge level of the transmitter, receiver, and adversary. We describe two ways we represent the received signal (Sec. II-A and II-B), followed by the machine learning models we employ for AMC (Sec. II-C). Finally, we describe perturbation methods performed by the adversary to induce misclassifications on the trained models (Sec. II-D). Our overall AMC system model is shown in Fig. \ref{block_diagram}.
\subsection{Signal Modeling}
We consider a wireless channel consisting of a transmitter, which is aiming to send a modulated signal, and a receiver, whose objective is to perform AMC on the obtained waveform and determine its modulation constellation. Specifically, at the transmitter, we consider an underlying data source, $\mathbf{s} = [s[0],\ldots,s[\ell - 1]]$, which is modulated using one of $C$ modulation constellations chosen from a set, $\mathcal{S}$, of possible modulation schemes, with each scheme having equal probability of selection. $s[k]$ is the (scalar) value wirelessly transmitted at time $k$. At the receiver, the collected waveform at each time instance is modeled by \begin{equation} \label{x(t)} x[k] = \sqrt{\rho}(\mathbf{s} * \mathbf{h})[k] + n[k], \end{equation} \noindent where $n[k]$ represents complex additive white Gaussian noise (AWGN) at time $k$ distributed as $\mathcal{CN}(0,1)$, $\rho$ denotes the SNR (known at the receiver), $*$ denotes convolution, and $\mathbf{h}$ captures the wireless channel's impulse response.
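As an illustration, the received waveform in \eqref{x(t)} can be synthesized as in the following sketch (the channel taps and modulated symbols here are placeholders, not the actual channel model of the dataset used later):
\begin{verbatim}
import numpy as np

def received_signal(s, h, snr_rho, rng=np.random.default_rng(0)):
    # x[k] = sqrt(rho) * (s * h)[k] + n[k], with n[k] ~ CN(0, 1):
    # circularly-symmetric complex Gaussian noise of unit variance.
    ell = len(s)
    faded = np.convolve(s, h)[:ell]   # (s * h)[k], truncated to ell
    noise = (rng.normal(size=ell)
             + 1j * rng.normal(size=ell)) / np.sqrt(2.0)
    return np.sqrt(snr_rho) * faded + noise

# Example: QPSK symbols through a toy 3-tap channel at 18 dB SNR.
s = np.exp(1j * (np.pi/4 + np.pi/2 * np.random.randint(4, size=128)))
h = np.array([1.0, 0.3, 0.1], dtype=complex)
x = received_signal(s, h, snr_rho=10 ** (18 / 10))
\end{verbatim}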
$\mathbf{h}$ also includes radio imperfections such as sample rate offset (SRO), center frequency offset (CFO), and selective fading, none of which are known to the receiver. Furthermore, we assume that the receiver has no knowledge about the channel model or the distribution of $\mathbf{h}$. This general setting motivates an AMC solution using a data-driven approach, as presented in this work, in which the true modulation constellation of the received signal is estimated from a model trained on a collection of pre-existing labeled signals.
\subsection{Domain Transform}
At the receiver, we model $\mathbf{x} = [x[0],\ldots,x[\ell - 1]]$ using its frequency components obtained from the discrete Fourier transform (DFT). Specifically, the $p^{\text{th}}$ component of the DFT of $\mathbf{x}$ is given by \begin{equation} \label{dft} X[p] = \sum_{k=0}^{\ell - 1} x[k] e^{-\frac{j2\pi}{\ell}pk}, \end{equation} \noindent where $\mathbf{X} = [X[0],\ldots,X[\ell - 1]]^T$ contains all frequency components of $\mathbf{x}$. We are interested in comparing the efficacy of AMC learning based on $\mathbf{x}$ and $\mathbf{X}$ as feature representations of the input signal. Although both signal representations are complex (i.e., $\mathbf{x}, \mathbf{X} \in \mathbb{C}^{\ell}$), we represent all signals as two-dimensional reals, using the real and imaginary components for the first and second dimension, respectively, in order to utilize all signal components during classification. Thus, we represent all time- and frequency-domain features as real-valued matrices $\mathbf{x}, \mathbf{X} \in \mathbb{R}^{\ell \times 2}$.
\subsection{Deep Learning Architectures}
In this work, we consider the effectiveness of different deep learning architectures for AMC under IQ and frequency features as model inputs. In general, we denote a trained deep learning classifier, parameterized by $\theta$, as $f(\cdot, \theta): \mathbb{R}^{\ell \times 2} \rightarrow \mathbb{R}^{C}$. This calculates the likelihood, $\hat{\mathbf{y}}$, of an input signal consisting of IQ features, $\mathbf{x}$, belonging to each of the $C$ modulation constellations. From $\hat{\mathbf{y}}$, the predicted modulation constellation is given by ${\arg\max}_{i = 1, \ldots, C} \hspace{0.5mm} \hat{y}_{i}$. Similarly, we denote a deep learning classifier trained using the DFT of the input signal, $\mathbf{X}$, parameterized by $\phi$, as $g(\cdot, \phi): \mathbb{R}^{\ell \times 2} \rightarrow \mathbb{R}^{C}$, which is trained to perform the same classification task as $f(\cdot, \theta)$ but using the frequency features of $\mathbf{x}$ as the input signal.
We analyze the classification performance using the aforementioned signal representations on four common AMC deep learning architectures: the fully connected neural network (FCNN), the convolutional neural network (CNN), the recurrent neural network (RNN) and the convolutional recurrent neural network (CRNN). Each architecture consists of a set of layers and a set of neurons per layer (referred to as units). The specific differences in layer interactions in each considered model are described below. For each model, we apply the ReLU nonlinearity in its hidden layers, given by $\sigma(a) = \max\{0, a\}$, and a $C$-unit softmax output layer given by \begin{equation} \sigma(\mathbf{a})_{i} = \frac{e^{a_{i}}}{\sum_{j=1}^{C} e^{a_{j}}}, \end{equation} where $i = 1,\ldots,C$ for input vector $\mathbf{a}$. This normalization allows a probabilistic interpretation of the model's output predictions.
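Before detailing the architectures, we note that constructing the two input representations of Sec. II-B amounts to a single FFT and a real/imaginary stacking per signal. A minimal sketch (the function and variable names are hypothetical):
\begin{verbatim}
import numpy as np

def to_features(x, frequency=False):
    # x: complex baseband signal of length ell. Returns an (ell, 2)
    # real matrix of [real, imag] components of either the raw IQ
    # samples or their DFT, cf. the domain transform of Sec. II-B.
    z = np.fft.fft(x) if frequency else x
    return np.stack([z.real, z.imag], axis=-1)

x_iq  = to_features(x)                  # input to f(., theta)
x_frq = to_features(x, frequency=True)  # input to g(., phi)
\end{verbatim}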
\textbf{FCNN}: Our FCNN consists of three hidden layers with 256, 128, and 128 units, respectively. The output of a single unit, $u$, is given by \begin{equation} {\sigma}\big{(}\sum_{i} w_{i}^{(u)}\cdot a_i + b\big{)}, \end{equation} \noindent where $\sigma(\cdot)$ is the activation function, $\mathbf{w} = [w_1, \ldots, w_n]$ is the weight vector for unit $u$ estimated from the training data, $\mathbf{a} = [a_1, \ldots, a_n]$ is the vector containing the outputs from the previous layer (or the model inputs in the first layer), and $b$ is a threshold bias. Each hidden layer applies a 20\% dropout rate during training.
\textbf{CNN}: The CNN comprises two convolutional layers consisting of 256 and 64 feature maps (each with 20\% dropout), respectively, followed by a 128-unit fully connected layer. The output of each feature map in the convolutional layer is given by \begin{equation} {\sigma}\big{(} \mathbf{v} * \mathbf{a} + b\big{)}, \end{equation} where $\mathbf{v}$ is the filter kernel whose parameters are estimated during training, and $\mathbf{a}$ is the output from the preceding layer. Our model uses a $2 \times 5$ and a $1 \times 3$ kernel for the first and second convolutional layers, respectively.
\textbf{RNN}: The RNN comprises a 75-unit long short-term memory (LSTM) \cite{lstm} layer followed by a 128-unit ReLU fully connected layer. Each LSTM unit implements three gates for learning. \emph{Input gates} prevent irrelevant features from entering the recurrent layer, while \emph{forget gates} eliminate irrelevant features altogether. \emph{Output gates} produce the LSTM layer output, which is passed to the subsequent network layer. The gates are used to recursively calculate the internal state of the cell, denoted by $\mathbf{z}_{c}^{(t)}$ for cell $c$ at recursive iteration (time instance) $t$, which is then used to calculate the cell output given by \begin{equation} \mathbf{q}^{(t)} = \text{tanh}(\mathbf{z}_{c}^{(t)})\sigma(\mathbf{p}^{(t)}), \end{equation} where $\mathbf{p}^{(t)}$ is the parameter obtained from the output gate and $\sigma(\cdot)$ is the logistic sigmoid function given by $\sigma(p_{i}^{(t)}) = 1 / (1 + e^{-p_{i}^{(t)}})$ for the $i^{\text{th}}$ element of $\mathbf{p}^{(t)}$.
\textbf{CRNN}: Lastly, we consider a CRNN comprising two convolutional layers (containing 128 and 64 feature maps with $2 \times 5$ and $1 \times 3$ kernels, respectively) followed by a 32-unit LSTM layer.
Unless otherwise noted, each model is trained using the Adam optimizer \cite{adam}, 75 epochs, a batch size of 64, and the categorical cross-entropy loss function given at the output by \begin{equation} \label{single_cost} \mathcal{L}_{n} = -\sum_{j=1}^{C} y_{j} \log(\hat{y}_{j}) \end{equation} for each sample $n$ and \begin{equation} \mathcal{L} = \frac{1}{N}\sum_{n=1}^{N} \mathcal{L}_{n}, \end{equation} \noindent over the entire training set $n = 1,\ldots, N$, where $y_j = 1$ if the ground truth label of the sample is modulation class $j$ and $y_j = 0$ otherwise.
\subsection{Adversarial Interference}
In addition to the transmitter and receiver, our considered communication network also contains an adversary, whose objective is to induce a misclassification on the trained AMC model. The adversary will perturb the received signal by injecting wireless interference, which we denote $\pmb{\delta} \in \mathbb{R}^{\ell \times 2}$, into $\mathbf{x}$ during transmission.
For a given design of $\pmb{\delta}$, the resulting signal that arrives at the receiver will be \begin{equation} \tilde{\mathbf{x}} = \mathbf{x} + \pmb{\delta}, \end{equation} where $\tilde{\mathbf{x}} = \mathbf{x}$ in the absence of an attack (i.e., when $\pmb{\delta} = 0$). We consider a limited-knowledge threat model in which the adversary knows the architecture and parameters of $f(\cdot, \theta)$ but is blind to $g(\cdot, \phi)$. This constraint mimics a real-world wireless channel, where an adversary may not have complete knowledge of the underlying system under attack, and restricts the adversary to injecting an attack in the time domain, from which traditional AMC features are constructed. The adversary's objective is to inject $\pmb{\delta}$ to change the classification of $\mathbf{x}$ using the least amount of power possible (to evade detection caused by higher-powered adversarial interference \cite{adv_det}), thus constraining the power of the perturbation to \begin{equation} ||\pmb{\delta}||_{2}^{2} \leq P_{T}, \end{equation} where $P_{T}$ is the total power budget available to the adversary for instantiating an attack. In this work, we study two particular methods to inject adversarial interference: the fast gradient sign method (FGSM) \cite{fgsm}, in which the adversary exhausts its total power budget on a single-step attack, and the basic iterative method (BIM) \cite{bim}, in which the adversary iteratively uses a fraction of its attack budget, resulting in a more powerful attack at the cost of higher computational overhead.
\textbf{FGSM}: In this case, the adversary adds an $l_{2}$-bounded perturbation to the transmitted signal in a single step, exhausting the power budget. Formally, the $n^{\text{th}}$ perturbed received signal is given by \begin{equation} \label{l2_fgsm} \tilde{\mathbf{x}}_{n} = \mathbf{x}_{n} + P_{T} \frac {\nabla_{\mathbf{x}} \mathcal{L}_{n}(\mathbf{x}_{n}, \mathbf{y}_{n}, \theta)} {||\nabla_{\mathbf{x}} \mathcal{L}_{n}(\mathbf{x}_{n}, \mathbf{y}_{n}, \theta)||_{2}}, \end{equation} \noindent where $\mathcal{L}$ refers to the cost function of $f(\cdot, \theta)$ in (\ref{single_cost}). Adding a perturbation in the direction of the cost function's gradient acts as a step of gradient ascent on the loss, thus increasing the classification error on the perturbed sample. We explore the effects of various bounds on $P_{T}$ in Section III-C.
\textbf{BIM}: The BIM is an iterative extension of the FGSM. Specifically, in each iteration, a smaller $l_{2}$-bounded perturbation, $\alpha < P_{T}$, is added to the transmission, and the optimal direction of attack (the direction of the gradient) is recalculated. Formally, the perturbation on iteration $k + 1$ for the $n^{\text{th}}$ sample is calculated as \begin{equation} \tilde{\mathbf{x}}_{n}^{(k+1)} = \tilde{\mathbf{x}}_{n}^{(k)} + \text{clip}_{P_{T}}\bigg{(}\alpha \frac {\nabla_{\mathbf{x}} \mathcal{L}_{n}(\tilde{\mathbf{x}}_{n}^{(k)}, \mathbf{y}_{n}, \theta)} {||\nabla_{\mathbf{x}} \mathcal{L}_{n}(\tilde{\mathbf{x}}_{n}^{(k)}, \mathbf{y}_{n}, \theta)||_{2}}\bigg{)}, \end{equation} \noindent where $\tilde{\mathbf{x}}_{n}^{(0)} = \mathbf{x}_{n}$ and the \texttt{clip} function ensures that the accumulated perturbation remains within the adversary's power budget.
\section{Results and Discussion}
In this section, we conduct an empirical evaluation of our method. First, we overview the dataset that we use (Sec. III-A).
Next, we present the efficacy of using frequency features for classification in the absence of any adversarial interference (Sec. III-B). Finally, we demonstrate the resilience of our trained models to transferable adversarial attacks instantiated in the time domain (Sec. III-C). \vspace{-0.2cm}
\subsection{Dataset and Evaluation Setup}
We employ the GNU RadioML2016.10B dataset \cite{dataset} for our analysis. Each signal in the dataset, $\mathbf{x}_{n}$, has an SNR of 18 dB, is normalized to unit energy, and consists of a 128-length observation window modulated according to a certain digital constellation, $\mathbf{y}_n$. We focus on the following four modulation schemes: CPFSK, GFSK, PAM4, and QPSK. Each constellation set contains 6000 examples for a total of 24000 signals. In each experiment, we employ a 70/15/15 training/validation/testing dataset split, where the training and validation data are used to estimate the parameters of $f(\cdot, \theta)$ and $g(\cdot, \phi)$, and the testing dataset is used to evaluate each trained model's susceptibility to adversarial interference and its resilience to attack transferability. In particular, the validation set is used to tune the model parameters using unseen data during the training process, whereas the testing set is used to measure the performance of the fine-tuned model. We denote the training, validation, and testing datasets, consisting of either time-domain IQ points or frequency-domain feature components, as $\mathcal{X}_{tr}^{t}$, $\mathcal{X}_{va}^{t}$, $\mathcal{X}_{te}^{t}$, $\mathcal{X}_{tr}^{\omega}$, $\mathcal{X}_{va}^{\omega}$, and $\mathcal{X}_{te}^{\omega}$, respectively.
\subsection{Model Convergence Rate and Accuracy}
\begin{figure}[h] \centering \includegraphics[width=1.0\columnwidth]{iq_frq_clf_perf_final.png} \caption{The model training performance of each considered AMC architecture on the corresponding training and validation sets. We see that the frequency feature-based classifiers $g(\cdot, \phi)$ outperform the time-domain classifiers $f(\cdot, \theta)$ in terms of training convergence and validation accuracy for each deep learning architecture. The CNN results in the fastest convergence and highest accuracy for both $f(\cdot, \theta)$ and $g(\cdot, \phi)$.} \label{iq_frq_perf} \end{figure}
\begin{table} \small \caption{The testing accuracy of each considered model on $\mathcal{X}_{te}^{(\cdot)}$. The CNN outperforms every other considered model (although the CRNN delivers equivalent accuracy, it is achieved with a longer training time in Fig. \ref{iq_frq_perf} compared to the CNN). \label{model_acc}} \centering \begin{tabular}{c c c} Model & Input Features & Accuracy \\ \hline FCNN & IQ & 92.25\%\\ FCNN & Frequency & 92.42\% \\ CNN & IQ & 98.92\% \\ CNN & Frequency & 99.19\% \\ RNN & IQ & 93.78\%\\ RNN & Frequency & 92.67\% \\ CRNN & IQ & 98.28\% \\ CRNN & Frequency & 99.03\% \\ \hline \end{tabular} \end{table}
We begin by evaluating the performance of both $f(\cdot, \theta)$ and $g(\cdot, \phi)$ in the absence of adversarial interference. In Fig. \ref{iq_frq_perf}, we plot the evolution of the classification accuracy across training epochs achieved by each deep learning architecture on the training and validation sets. We see that each model trained using our proposed frequency feature-based input outperforms its time-domain counterpart trained on IQ features.
For example, the RNN trained on frequency components achieves an accuracy of 93.4\% on its corresponding validation dataset in 75 training epochs, whereas the same architecture trained on $\mathcal{X}_{tr}^{t}$ requires 150 epochs to converge to a validation accuracy of 93.9\%. Furthermore, the CRNN also converges in fewer epochs when using frequency-based features in comparison to IQ features. We also see in Fig.~\ref{iq_frq_perf} that the CNN obtains the best performance overall. IQ features present more challenges during training on the FCNN, RNN, and CRNN compared to the CNN. Specifically, the FCNN results in slight overfitting to the training data, the RNN fails to converge on a validation accuracy greater than 94\%, and the CRNN presents instability during optimization, requiring a larger number of training epochs before convergence. The CNN, on the other hand, entails almost no degree of overfitting while converging in substantially fewer epochs compared with the RNN and CRNN models.
Each trained model's accuracy on its corresponding testing set is shown in Table \ref{model_acc}. Among all eight considered models, the CNN trained on frequency features, as proposed in this work, achieves the highest testing accuracy, as well as the fastest convergence rate in Fig. \ref{iq_frq_perf}. Specifically, this model results in nearly no overfitting between $\mathcal{X}_{tr}^{\omega}$ and $\mathcal{X}_{va}^{\omega}$, unlike either FCNN, while converging in roughly 10 epochs, unlike the CNN trained on IQ features. Although the CRNN, in both cases, results in robust classification performance, the higher number of epochs required by these models results in substantially higher computational overhead (e.g., the CNN achieves a three-fold improvement per epoch over the CRNN). \emph{Therefore, our proposed CNN trained using frequency features is the most desirable model in terms of classification performance, training time, and computational efficiency.}
\subsection{Model Resilience to Adversarial Interference}
\begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{fgsm_plots_grid_on2.png} \caption{The transferability of the FGSM attack from $f(\cdot, \theta)$ to $g(\cdot, \phi)$. The CNN mitigates the effects of the attack to the greatest extent with a performance improvement of 53.23\% when the adversary exhausts the total perturbation budget.} \label{fgsm_plots} \end{figure}
\begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{bim_plots_grid_on2.png} \caption{The transferability of the BIM attack from $f(\cdot, \theta)$ to $g(\cdot, \phi)$. Similar to the FGSM attack, the CNN displays the strongest resilience to transferability with a performance improvement of up to 52.91\%.} \label{bim_plots} \end{figure}
We now evaluate the ability of an adversarial attack instantiated in the time domain to affect our frequency domain-based AMC methodology. We begin by considering the FGSM attack, in which we restrict $P_{T} \leq 0.0200$ (corresponding to $2\%$ additive power, which effectively degrades time-domain model performance). Fig. \ref{fgsm_plots} depicts the robustness of each considered model for various levels of injected interference. We see that $g(\cdot, \phi)$ improves the classification accuracy of each model in the presence of an attack on time-domain feature-based classifiers. In particular, the average accuracy improvement for the FCNN, CNN, RNN, and CRNN is 10.77\%, 38.32\%, 20.61\%, and 13.26\%, respectively, across the range of $P_{T}$.
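For concreteness, the $l_2$-normalized single-step attack of \eqref{l2_fgsm} can be sketched as follows, assuming a differentiable Keras model \texttt{model} standing in for $f(\cdot, \theta)$ (a sketch under these assumptions, not the exact attack code used in our experiments):
\begin{verbatim}
import tensorflow as tf

def fgsm_l2(model, x, y, power_budget):
    # Single-step l2-bounded FGSM: perturb the input along the
    # normalized loss gradient, exhausting the budget P_T.
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)
    y = tf.convert_to_tensor(y[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.categorical_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    delta = power_budget * grad / tf.norm(grad)
    return (x + delta)[0]   # perturbed signal, fed to both f and g
\end{verbatim}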
The ability of the CNN and RNN to withstand attacks to the highest degree indicates their increased resilience to transferable adversarial interference. The effect of the BIM adversarial attack is consistent with the response to the FGSM attack. For the BIM attacks, we used ten iterations of different $\alpha$-bounds with $P_{T} = 0.0200$. As shown in Fig. \ref{bim_plots}, the attack instantiated on time-domain features is significantly mitigated on each considered model when the frequency domain is used for classification. The FCNN, CNN, RNN, and CRNN experience average improvements of 12.99\%, 42.16\%, 27.31\%, and 27.33\%, respectively, for $\alpha \in [0.000, 0.002]$. \emph{Thus, as shown by the instantiation of both considered attacks, the transferability of adversarial interference is mitigated to the greatest extent when using the CNN as the underlying classification model.}
We analyze the performance of the CNN model more closely, both in the presence and absence of interference, in Figs. \ref{baseline_conf_matx}-\ref{bim_conf_matx}. The labels \{0, 1, 2, 3\} correspond to the constellations \{CPFSK, GFSK, PAM4, QPSK\}. As shown in Fig. \ref{baseline_conf_matx}, both time and frequency features deliver robust AMC performance in the absence of adversarial interference with classification rates of 98.92\% and 99.19\%, respectively. However, the classification rate in the time domain drops to a mere 23.58\% and 20.56\% when the FGSM and BIM perturbations are employed, respectively (where the total perturbation budget is exhausted in both cases). As shown in Figs. \ref{fgsm_conf_matx} and \ref{bim_conf_matx}, the adversarial interference pushes the majority of signals into the classification decision boundaries of the PAM4 constellation. This is largely due to the nature of the untargeted attack, in which the adversary's sole objective is to induce misclassification without targeting a specific misclassified prediction. The CNNs trained on frequency features, however, show significant improvements in classifying both FGSM- and BIM-perturbed signals, with accuracies of 76.81\% and 73.47\%, corresponding to classification accuracy improvements of 53.23\% and 52.91\%, respectively. The frequency domain-based models correctly classify a majority of CPFSK and GFSK modulation schemes, with the largest confusion being between PAM4 and QPSK.
\begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{baseline_conf_matx.png} \caption{The confusion matrices of the CNN's predictions with no interference, using IQ features (left) and frequency features (right). The performance for both feature representations is equivalent.} \label{baseline_conf_matx} \end{figure}
\begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{fgsm_conf_matx.png} \caption{The confusion matrices of the CNN classifier with the FGSM perturbation, using IQ features (left) and frequency features (right). The frequency feature-based model is able to significantly mitigate the effects of the interference induced on the IQ features.} \label{fgsm_conf_matx} \end{figure}
\begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{bim_conf_matx.png} \caption{The confusion matrices of the CNN's predictions with the BIM attack, using IQ features (left) and frequency features (right).
Similar to the FGSM attack, the CNN trained using frequency features significantly mitigates the effects of time-domain feature-based perturbations.} \label{bim_conf_matx} \end{figure}
\section{Conclusion and Future Work}
Deep learning has recently been proposed as a robust method to perform automatic modulation classification (AMC). Yet, deep learning AMC models are vulnerable to adversarial interference, which can alter a trained model's predicted modulation constellation with very little input power. Furthermore, such attacks are transferable, which allows the interference to degrade the performance of several classifiers simultaneously. In this work, we developed a novel wireless transmission receiver architecture, consisting of a frequency domain feature-based classification model, which is capable of mitigating the transferability of adversarial interference. Specifically, we showed that our proposed frequency feature-based deep learning classifiers are resilient to transferable adversarial interference instantiated on traditional time-domain in-phase and quadrature (IQ) feature-based models. The convolutional neural network (CNN), in particular, demonstrated the most robust classification performance in the absence of an attack, along with the highest resilience to additive adversarial perturbations. Future work may consider the effects of adversarial transferability in more invasive AMC environments, where the adversary's knowledge level may be unknown or unpredictable.
\section{Introduction} \label{sect:intro}
AdS/CFT correspondence \cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj} is a powerful tool for solving strongly coupled physics from a weakly coupled gravity theory in one higher dimension in a suitable limit, an approach also dubbed `holography'. In recent years, applied holography has attained great success in areas such as condensed matter theory (CMT) and hydrodynamics \cite{Hartnoll:2008vx, Lee:2008xf, Kovtun:2004de}. The holographic Josephson junction was first studied in \cite{Horowitz:2011dz}, and was later extended to various models \cite{Wang:2011rva}. Other models of holographic Josephson junctions were proposed from designer multi-gravity in \cite{elias} and from D-branes in \cite{hoyos}. The gravity backgrounds in the above Josephson junctions are mostly based on Einstein's general relativity (GR).
Due to the requirement of diffeomorphism invariance in GR, the corresponding graviton should be a massless spin-2 boson; see for example~\cite{tony}. It has therefore long been a direct and interesting task to ask whether there are other gravitational theories in which the graviton is massive~\cite{Fierz:1939ix}. However, the generalization is not easy, since massive gravity usually suffers from the instability problem of the Boulware-Deser ghost~\cite{Boulware:1973my}. Recently, a nonlinear massive gravity theory has been proposed (the so-called dRGT theory by de Rham, Gabadadze, and Tolley)~\cite{deRham:2010ik, deRham:2010kj, Hinterbichler:2011tt}, and it was later found to be ghost-free~\cite{Hassan:2011hr,Hassan:2011tf}. For more details about massive gravity, one can refer to the reviews~\cite{Hinterbichler:2011tt,deRham:2014zqa}. There have been many investigations of this type of massive gravity; for instance, black hole solutions and their thermodynamics were studied in \cite{Vegh:2013sk,Adams:2014vza,Cai:2014znn,Hu:2015xva,Xu:2015rfa,Hendi:2015hoa}. The counterterm of this massive gravity has been obtained in \cite{Cao:2015cza}, and the theory has been proved to be ghost-free for a special degenerate reference metric in~\cite{Zhang:2015nwy}. Moreover, due to the breaking of diffeomorphism invariance in dRGT massive gravity, the stress energy tensor of matter is no longer conserved. According to the AdS/CFT correspondence, the non-conservation of the stress energy tensor is dual to the dissipation of momentum in the boundary field theory, such that a finite DC conductivity was obtained in~\cite{Vegh:2013sk,Davison:2013jba,Blake:2013bqa,Blake:2013owa}. Some other holographic results related to the effects of the graviton mass in massive gravity have also been investigated in~\cite{Davison:2013txa,Adams:2014vza,Amoretti:2014zha,Zeng:2014uoa,Baggioli:2015zoa}.
In this paper, we investigate the effect of the graviton mass (or, via AdS/CFT, the breaking of translational symmetry in the boundary field theory \cite{Blake:2013bqa}) on the holographic Josephson junction. In particular, we study the Superconductor-Normal metal-Superconductor (SNS) Josephson junction in the dRGT massive gravity. First, for a homogeneous superconductor, we find that the graviton mass reduces the critical temperature of the transition from a normal metal to a superconductor, which indicates that it is harder to have a phase transition from normal metal to superconductor when the graviton mass is larger.
Meanwhile, from the AdS/CFT perspective this also means that in the boundary field theory, the larger the momentum dissipation, the harder the phase transition. This is reminiscent of the phase diagram of cuprates, where greater doping makes the phase transition from normal metal to superconductor more difficult. Although doping is not the same as momentum dissipation, we can still argue that they have similar effects on the phase transition from the point of view of holography.
For the holographic SNS Josephson junction, we find the usual sinusoidal relation between the tunneling current and the phase difference across the junction. One can obtain the maximal current by fitting the sinusoidal relation. It is found that the maximal current decreases exponentially with the width of the junction, from which we can extract the coherence length of the normal metal in the middle of the junction \cite{tinkham}. Moreover, we find that the coherence length decreases with respect to the graviton mass, and so does the maximal current. Physically, this indicates that momentum dissipation (or the breaking of translational symmetry) reduces the coherence length as well as the maximal current.
The paper is organized as follows. In Sec.~\ref{sect:MG} we briefly introduce the dRGT massive gravity; we build up the holographic model of the Josephson junction in Sec.~\ref{sect:setup}; numerical results are shown in Sec.~\ref{sect:result} and conclusions are drawn in Sec.~\ref{sect:con}. In particular, the dependence of the constraint equation on the gauge field equations is proven generically in Appendix~\ref{sect:appendix}.
\section{A Brief Review: dRGT Massive Gravity and Its General Black Hole Solutions} \label{sect:MG}
In this paper, we focus on the ghost-free dRGT massive gravity, whose action in an $(n+2)$-dimensional spacetime reads~\cite{Vegh:2013sk,Cai:2014znn} \begin{equation} \label{actionmassive} S =\frac{1}{16\pi G}\int d^{n+2}x \sqrt{-g} \left[ R +\frac{n(n+1)}{L^2} +m^2 \sum^4_{i=1} c_i {\cal U}_i (g,\mathfrak{f})\right], \end{equation} where $\mathfrak{f}$ is a fixed symmetric tensor usually called the reference metric, $L$ is the radius of the AdS$_{n+2}$ spacetime, $c_i$ are constants, $m$ stands for the graviton mass,\footnote{Precisely, $m$ is the graviton mass near the UV boundary. From \cite{Blake:2013bqa} we know that the effective graviton mass depends on the radial direction. Taking the radial coordinate to the UV boundary, one sees that $m$ here is proportional to the graviton mass near the UV boundary up to a constant. For simplicity, we simply call $m$ the graviton mass; its exact meaning is clear from the above explanation.} and ${\cal U}_i$ are symmetric polynomials of the eigenvalues of the $(n+2)\times (n+2)$ matrix ${\cal K}^{\mu}_{\ \nu} \equiv \sqrt {g^{\mu\alpha}\mathfrak{f}_{\alpha\nu}}$: \begin{eqnarray} \label{eq2} && {\cal U}_1= [{\cal K}], \nonumber \\ && {\cal U}_2= [{\cal K}]^2 -[{\cal K}^2], \nonumber \\ && {\cal U}_3= [{\cal K}]^3 - 3[{\cal K}][{\cal K}^2]+ 2[{\cal K}^3], \nonumber \\ && {\cal U}_4= [{\cal K}]^4- 6[{\cal K}^2][{\cal K}]^2 + 8[{\cal K}^3][{\cal K}]+3[{\cal K}^2]^2 -6[{\cal K}^4]. \end{eqnarray} The square root in ${\cal K}$ means $(\sqrt{A})^{\mu}_{\ \nu}(\sqrt{A})^{\nu}_{\ \lambda}=A^{\mu}_{\ \lambda}$, and $[{\cal K}]={\cal K}^{\mu}_{\ \mu}$ denotes the trace.
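As a cross-check of these definitions, the symmetric polynomials in \eqref{eq2} can be evaluated numerically from traces of powers of ${\cal K}$; the following sketch is purely illustrative:
\begin{verbatim}
import numpy as np

def sym_polys(K):
    # U_i as defined in the text, built from traces [K^n] = Tr(K^n).
    t1 = np.trace(K)
    t2 = np.trace(K @ K)
    t3 = np.trace(K @ K @ K)
    t4 = np.trace(K @ K @ K @ K)
    U1 = t1
    U2 = t1**2 - t2
    U3 = t1**3 - 3*t1*t2 + 2*t3
    U4 = t1**4 - 6*t2*t1**2 + 8*t3*t1 + 3*t2**2 - 6*t4
    return U1, U2, U3, U4

# For the 4d black hole below with reference metric diag(0,0,h_ij),
# K = diag(0, 0, 1/r, 1/r), so U1 = 2/r, U2 = 2/r**2, U3 = U4 = 0,
# cf. the c_1 and c_2 terms appearing in f(r).
r = 2.0
K = np.diag([0.0, 0.0, 1.0 / r, 1.0 / r])
print(sym_polys(K))   # (1.0, 0.5, 0.0, 0.0) for r = 2
\end{verbatim}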
After varying the action with respect to the metric, the equations of motion (EoM) turn out to be \begin{eqnarray} R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}-\frac{n(n+1)}{2L^2} g_{\mu\nu}+m^2 \chi_{\mu\nu}&=&8\pi G T_{\mu \nu },~~ \end{eqnarray} where \begin{eqnarray} && \chi_{\mu\nu}=-\frac{c_1}{2}({\cal U}_1g_{\mu\nu}-{\cal K}_{\mu\nu})-\frac{c_2}{2}({\cal U}_2g_{\mu\nu}-2{\cal U}_1{\cal K}_{\mu\nu}+2{\cal K}^2_{\mu\nu}) -\frac{c_3}{2}({\cal U}_3g_{\mu\nu}-3{\cal U}_2{\cal K}_{\mu\nu}\nonumber \\ &&~~~~~~~~~ +6{\cal U}_1{\cal K}^2_{\mu\nu}-6{\cal K}^3_{\mu\nu}) -\frac{c_4}{2}({\cal U}_4g_{\mu\nu}-4{\cal U}_3{\cal K}_{\mu\nu}+12{\cal U}_2{\cal K}^2_{\mu\nu}-24{\cal U}_1{\cal K}^3_{\mu\nu}+24{\cal K}^4_{\mu\nu}).~~ \end{eqnarray} Since the background we are going to use is $(3+1)$-dimensional, a general black hole solution can be written as~\cite{Cai:2014znn} \begin{eqnarray}\label{metric} ds^2&=&-r^{2}f(r)dt^2+\frac{dr^2}{r^2 f(r)}+r^2h_{ij}dx^idx^j,\\ \label{fr} f(r)&=&\frac{k}{r^2}+\frac{1}{L^2}-\frac{m_0}{r^3}+\frac{q^2}{4r^4}+\frac{c_1m^2}{2r}+\frac{c_2m^2}{r^2}, \end{eqnarray} where $h_{ij}dx^idx^j$ is the line element of the 2-dimensional spherical, flat or hyperbolic space with $k=1,~0$ or $-1$, respectively. $m_0$ is related to the mass of the black hole, while $q$ is its charge. The reference metric can now be given the special choice \begin{equation} \mathfrak{f}_{\mu\nu}=\text{diag}~\{0,0,h_{ij}\}. \end{equation} The Hawking temperature of this black hole solution is easily found to be \begin{eqnarray} T_{BH}=\frac{\left(r^2f(r)\right)'}{4\pi}\bigg|_{r=r_+}=\frac{1}{4\pi r_+}\left(k+\frac{3r_+^2}{L^2}-\frac{q^2}{4r_+^2}+c_1m^2r_++c_2m^2\right), \end{eqnarray} where $r_+$ is the horizon radius of the black hole.
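For later reference in the numerics, the temperature above is straightforward to evaluate; a minimal sketch (our own convenience function, with the parameter values $c_1=1$, $c_2=-1/2$, $k=q=0$ and $L=r_+=1$ that we adopt later used as defaults):
\begin{verbatim}
import numpy as np

def hawking_temperature(r_p=1.0, m=0.0, k=0.0, q=0.0,
                        c1=1.0, c2=-0.5, L=1.0):
    # Hawking temperature of the black hole solution above
    return (k + 3*r_p**2/L**2 - q**2/(4*r_p**2)
            + c1*m**2*r_p + c2*m**2) / (4*np.pi*r_p)

print(hawking_temperature())   # 3/(4*pi) ~ 0.2387 at m = 0
\end{verbatim}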
\section{Holographic Setup} \label{sect:setup} For simplicity, we will consider the black hole solution in (\ref{metric}) with $k=0$ and $q=0$; therefore, $h_{ij}=\text{diag}(h_{xx},h_{yy})=\text{diag}(1,1)$. In the probe limit, we adopt the Maxwell and complex scalar field action \begin{equation}\label{action2} S=\int d^{4} x\sqrt{-g}\left(-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-|\nabla\psi-iA\psi|^2-m_\psi^2|\psi|^2\right), \end{equation} in which $A_\mu$ is the $U(1)$ gauge field while $F_{\mu\nu}$ is the corresponding field strength with $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$; $m_\psi$ is the mass of the complex scalar field $\psi$. The EoMs can be obtained readily from the above action as \begin{eqnarray} \label{eomA} 0&=&(\nabla_\mu-iA_\mu)(\nabla^\mu-iA^\mu)\psi-m_\psi^2\psi, \\ \label{eompsi}\nabla_\nu F^{\nu\mu}&=&i(\psi^*(\nabla^\mu-iA^\mu)\psi-\psi(\nabla^\mu+iA^\mu)\psi^*). \end{eqnarray} In order to work with gauge-invariant fields, a natural ansatz for the fields is \begin{equation}\label{ansatz} \psi=|\psi|e^{i\varphi},\quad A_\mu=(A_t, A_r, A_x, 0), \end{equation} where $|\psi|,\varphi,A_t,A_r,A_{x}$ are all real functions of $r$ and $x$. The corresponding gauge-invariant quantity can be defined as $M_{\mu}\equiv A_{\mu}-\partial_{\mu}\varphi$. Therefore, from \eqref{metric}, \eqref{eomA}, \eqref{eompsi} and \eqref{ansatz}, we obtain the following coupled partial differential equations (PDEs): \begin{subequations} \begin{eqnarray} \partial_r^2|\psi|+\frac{1}{r^4 f}\partial_x^2|\psi|+\left(\frac{4}{r}+\frac{f'}{f}\right)\partial_r|\psi|+\frac{1}{r^2 f}\left(\frac{M_t^2}{r^{2}f}-r^2 f M_r^2-\frac{M_x^2}{r^2}-L^2 m_\psi^2\right)|\psi|&=&0, \quad\quad\quad\label{eom1}\\ \partial_r M_r+\frac{1}{r^4 f}\partial_x M_x+\frac{2}{|\psi|}\left(M_r\partial_r|\psi|+\frac{M_x}{r^4 f}\partial_x|\psi|\right)+\left(\frac{4}{r}+\frac{f'}{f}\right)M_r&=&0, \label{eom2}\\ \partial_r^2 M_t+\frac{1}{r^4 f}\partial_x^2 M_t+\frac{2}{r}\partial_r M_t-\frac{2L^2|\psi|^2}{r^2 f}M_t&=&0, \label{eom3}\\ \partial_x^2 M_r-\partial_x\partial_r M_x-2L^2 r^2 |\psi|^2 M_r&=&0, \label{eom4}\\ \partial_r^2 M_x-\partial_x\partial_r M_r+\left(\frac{f'}{f}+\frac{2}{r}\right)\left(\partial_r M_x-\partial_x M_r\right)-\frac{2|\psi|^2}{L^2 r^2 f}M_x&=&0, \label{eom5} \end{eqnarray}\label{pdes} \end{subequations} where $'$ denotes the derivative with respect to $r$. One can see that only gauge-invariant quantities are left in the above PDEs, the phase $\varphi$ having been absorbed into the gauge-invariant quantity $M_\mu$. Moreover, only four of the above five EoMs are independent, since the second equation \eqref{eom2} is a constraint that results from an algebraic combination of \eqref{eom4} and \eqref{eom5},\footnote{In Appendix \ref{sect:appendix}, we show how the constraint equation can be obtained from the Maxwell equation in a generic case. } \begin{eqnarray} \text{Eq.}\eqref{eom2}=-\frac{1}{2r^2|\psi|^2}\left(\partial_r[\text{Eq.}\eqref{eom4}] +\left(\frac{f'}{f}+\frac2r\right)\times\text{Eq.}\eqref{eom4} +\partial_r[\text{Eq.}\eqref{eom5}]\right). \end{eqnarray} Therefore, we can consistently work with four independent EoMs for the four fields $|\psi|, M_t, M_r$ and $M_x$. In order to solve the above coupled EoMs, we first need to impose boundary conditions on these fields. At the horizon, the field $M_t$ should vanish since $g^{tt}$ diverges there, while the other fields remain finite. Near the boundary $r\to\infty$, the fields can be expanded as \begin{equation} \label{uv} \begin{split} |\psi|=&\frac{\psi^{(1)}(x)}{r^{(3-\sqrt{9+4m_\psi^2})/2}}+\frac{\psi^{(2)}(x)}{r^{(3+\sqrt{9+4m_\psi^2})/2}} +\mathcal{O}(\frac{1}{r^{(5+\sqrt{9+4m_\psi^2})/2}}), \\ M_t=&\mu(x)-\frac{\rho(x)}{r}+\mathcal{O}(\frac{1}{r^{2}}), \\ M_r=&\frac{M_r^{(1)}(x)}{r^{2}}+\mathcal{O}(\frac{1}{r^{3}}), \\ M_x=&\nu(x)+\frac{J(x)}{r}+\mathcal{O}(\frac{1}{r^{2}}). \end{split} \end{equation} According to the AdS/CFT correspondence, the operator dual to the scalar field has conformal dimension $\Delta_\pm=(3\pm\sqrt{9+4m_\psi^2})/2$. In the following, we will impose $\psi^{(1)}\equiv0$, which means there is no source term for the scalar operator on the boundary. According to the AdS/CFT dictionary, the coefficients $\psi^{(2)}$, $\mu$, $\rho$, $\nu$ and $J$ correspond to the condensate of the dual scalar operator $\langle \mathcal{O}\rangle$, the chemical potential, the charge density, the superfluid velocity and the current in the boundary field theory, respectively. The gauge-invariant phase difference $\gamma=\Delta \varphi-\int A_x$ across the weak link can be defined as \cite{Horowitz:2011dz} \begin{eqnarray}\label{gamma} \gamma=-\int^{+\infty}_{-\infty}dx[\nu(x)-\nu(\pm\infty)],
\end{eqnarray} where $\nu(\pm\infty)$ is the superfluid velocity at spatial infinity $x=\pm\infty$. In order to model an SNS Josephson junction, we choose the chemical potential $\mu(x)$ as \begin{equation}\label{mu} \mu(x)=\mu_\infty\left\{1-\frac{1-\epsilon}{2\tanh(\frac{\ell}{2\sigma})}\left[\tanh(\frac{x+\frac{\ell}{2}}{\sigma})-\tanh(\frac{x-\frac{\ell}{2}}{\sigma})\right]\right\}, \end{equation} where $\mu_\infty=\mu(+\infty)=\mu(-\infty)$ is the chemical potential at $x=\pm\infty$, while $\ell$, $\sigma$ and $\epsilon$ are the width, steepness and depth of the junction, respectively. Following Ref.~\cite{Horowitz:2011dz}, we define the critical temperature of the Josephson junction $T_c$ as the critical temperature of a homogeneous superconductor, {\it i.e.}, one with a flat chemical potential. Therefore, $T_c$ is proportional to $\mu_\infty=\mu(\pm\infty)$: \begin{equation}\label{tem1} T_c=\frac{T_{BH}}{\mu_c}\mu(\infty), \end{equation} where $\mu_c$ is the critical chemical potential of a homogeneous superconductor without any current at temperature $T_{BH}$. Inside the junction, $x\in(-\frac{\ell}{2},\frac{\ell}{2})$, the effective critical temperature can be defined as \begin{equation}\label{tem2} T_0=\frac{T_{BH}}{\mu_c}\mu(0). \end{equation} Therefore, if one sets the profile of the chemical potential such that $\mu(0)<\mu_c<\mu(\infty)$, then from the relations \eqref{tem1} and \eqref{tem2} we know that $T_0<T_{BH}<T_c$. Hence, the region inside the junction is in the normal metallic phase, while the region outside the junction is in the superconducting phase. In the following section, we will model the SNS Josephson junction in this way. \section{Numerical Results} \label{sect:result} There is a scaling symmetry in the PDEs~\eqref{pdes}: \begin{equation}\label{scal} t\rightarrow\lambda t,\quad x\rightarrow\lambda x, \quad y\rightarrow\lambda y, \quad r\rightarrow\frac{1}{\lambda}r,\quad M_t\rightarrow\frac{1}{\lambda}M_t,\quad M_x\rightarrow \frac{1}{\lambda} M_x,\quad M_r\rightarrow \lambda M_r, \end{equation} where $\lambda$ is an arbitrary constant. We adopt the above scaling symmetry \eqref{scal} to set the horizon $r_+=1$ in the numerics. For convenience, we work with the transformed coordinates $u=1/r$ and $y=\tanh(\frac{x}{4\sigma})$, together with the field redefinitions \begin{eqnarray}\label{uv2} \begin{split} |\psi|\rightarrow \frac{|\psi|}{r^{\left(3-\sqrt{9+4m_\psi^2}\right)/2}},\quad\quad M_r\rightarrow\frac{M_r}{r^{2}}. \end{split} \end{eqnarray} Without loss of generality, we set the AdS radius $L=1$. We choose $m_\psi^2=-2$ in the numerics, and the range of the graviton mass is $0\leq m\leq 1.2$, since we find below that at $m\sim1.2$ the maximal current is extremely close to zero when the width of the junction is large. For the convenience of the numerics, we set $c_1=1, c_2=-1/2$ in \eqref{fr} in order to fix the horizon at $r_+=1$ while varying the graviton mass $m$.\footnote{According to \cite{Cai:2014znn}, the background in $(3+1)$ dimensions with $k=0$ is thermodynamically stable for $c_2\leq0$.} We solve the EoMs~\eqref{eom1}-\eqref{eom5} numerically by means of Chebyshev spectral methods \cite{tinkham}. \subsection{Critical Temperature} \begin{figure}[htbp] \centering \includegraphics[trim=0cm 1.cm 0cm 0.5cm, clip=true,scale=0.245]{mucmplot.pdf} \includegraphics[trim=0cm 1.5cm 0cm 1.3cm, clip=true,scale=0.26]{Tcm.pdf} \caption{\label{tcmass} Phase diagrams of the superconductor and the normal metal.
(Left) Chemical potential versus graviton mass $m$; (Right) temperature versus graviton mass $m$. Orange dots are from numerical data. } \end{figure} In this subsection, we study the phase diagram of the boundary theory with a homogeneous chemical potential. The critical chemical potential $\mu_c$ of the transition from the normal metal state to the superconducting state ranges from $\mu_c\approx4.0638$ at $m=0$ to $\mu_c\approx 5.3306$ at $m=1.2$, as shown in the left panel of Fig.\ref{tcmass}. The phase diagram corresponding to the critical temperatures is plotted in the right panel of Fig.\ref{tcmass}. The dark region is the normal metal phase while the white region is the superconducting phase. On one hand, for a fixed graviton mass $m$, lowering the temperature (increasing the chemical potential) changes the normal metal state into a superconducting state; on the other hand, for a fixed temperature (chemical potential), increasing the graviton mass $m$ destroys the superconducting phase in favor of the normal metal phase. The phase diagram in Fig.\ref{tcmass} is reminiscent of the famous phase diagram of the cuprates with doping, such as Fig.~1 in \cite{varma}. Between the superconducting and Fermi liquid phases, greater doping changes a superconductor into a Fermi liquid or normal metal at fixed temperature. Therefore, from this point of view there is a subtle relationship between the graviton mass and the doping. We cannot draw any definite conclusion about this relationship currently; however, at least the two have a more or less similar effect on the phase transition from superconductivity to normal metal. A more elaborate study of this phase transition has been carried out in \cite{Baggioli:2015zoa}, with an action and metric different from ours. In order to model an SNS Josephson junction, from the above discussion we need to set $\mu(0)<\mu_c<\mu(\infty)$ for the various $m$. After some trials, we find that a unified chemical potential $\mu(x)$ with the parameters $\mu_\infty=6, \sigma=0.7$ and $\epsilon=0.6$ satisfies the requirements of an SNS junction. We also choose $2\leq \ell\leq5$ in order to study the coherence length $\xi$ of the junction.
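As a quick sanity check of this choice, note that Eq.~\eqref{mu} gives $\mu(0)=\epsilon\,\mu_\infty=3.6$ exactly, which lies below all the critical values $\mu_c\in[4.06,\,5.33]$ found above, while $\mu(\pm\infty)=6$ lies above them. A minimal sketch of this check (our own illustration):
\begin{verbatim}
import numpy as np

def mu_profile(x, mu_inf=6.0, ell=3.0, sigma=0.7, eps=0.6):
    # chemical potential profile of Eq. (mu)
    bump = np.tanh((x + ell/2)/sigma) - np.tanh((x - ell/2)/sigma)
    return mu_inf*(1 - (1 - eps)/(2*np.tanh(ell/(2*sigma)))*bump)

x = np.linspace(-10, 10, 401)
assert mu_profile(0.0) < 4.0638 and mu_profile(x).max() < 6.0
\end{verbatim}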
\subsection{Tunneling Current} \begin{figure}[htbp] \centering \includegraphics[trim=0cm 2cm 0cm 2cm, clip=true,scale=0.3]{JgammaL2.pdf} \caption{\label{Jgamma} The sinusoidal relation between the tunneling current $J/T_c^2$ and the phase difference $\gamma$ for various $m$. The width of the junction is $\ell=2$. Dots are numerical data while the solid lines are fits to the data. } \end{figure} Now we are going to set the extra boundary conditions for the Josephson junction. Close to the spatial boundary $x=\pm\infty$, we demand that all the fields be homogeneous, {\it i.e.}, $\partial_x (\text{fields})=0$. There is another symmetry of the fields under the flip $x\to -x$, \begin{eqnarray} |\psi|\to|\psi|,\quad M_t\to M_t,\quad M_r\to-M_r,\quad M_x\to M_x. \end{eqnarray} Therefore, $M_r$ is an odd function of $x$ while the others are even. It is thus natural to set $M_r(x=0)=0$, while the other fields have vanishing first derivative with respect to $x$ at $x=0$. In the numerics, we set $J$ to a constant and treat it as an input parameter. The velocity $\nu$ and phase difference $\gamma$ can then be obtained numerically. Moreover, we find it convenient to work with dimensionless quantities, for instance $J/T_c^2$. When the junction width is $\ell=2$, we show the relation between the tunneling current $J/T_c^2$ and the phase difference $\gamma$ of the junction in Fig.\ref{Jgamma}. By using the sinusoidal relation $J\approx J_{max} \sin(\gamma)$ to fit the data, we can extract the maximal current $J_{max}$ for each $m$ and $\ell$. In Fig.\ref{Jmax}, we plot the relation between the maximal current $J_{max}$ and the graviton mass $m$ for various junction widths $\ell$. We find that for a fixed $\ell$, the maximal current decreases as $m$ increases; meanwhile, for a fixed $m$, the maximal current also decreases as $\ell$ increases. Physically, this means that increasing the graviton mass (or, equivalently, increasing the momentum dissipation in the boundary) suppresses the tunneling between the two superconductors on both sides of the junction. \begin{figure}[htbp] \centering \includegraphics[trim=0cm 1.5cm 0cm 1.3cm, clip=true,scale=0.27]{Jmaxmass.pdf} \caption{\label{Jmax} Maximal current versus the graviton mass for various junction widths $\ell$. } \end{figure} \subsection{Coherence Length} From condensed matter physics \cite{tinkham}, the maximal current $J_{max}$ is related to the coherence length $\xi$ by \begin{eqnarray} \label{xi} J_{max}/T_c^2&\approx& A e^{-\frac{\ell}{\xi}}, \end{eqnarray} where $A$ is a constant. This relation holds when $\ell\gg\xi$, where $\xi$ is the coherence length of the normal metal. We show the numerics and the fits of the relation \eqref{xi} in Fig.\ref{Ox0}. From the left panel of Fig.\ref{Ox0}, we can see that for a fixed value of $m$, the maximal current decays exponentially with respect to the width $\ell$. The fitted values of $\xi$ are shown in the right panel of Fig.\ref{Ox0}. We can see that the coherence length decreases as $m$ increases, which means that stronger momentum dissipation (breaking of translational symmetry) in the boundary field theory reduces the coherence length $\xi$. \begin{figure}[htbp] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,scale=0.27]{JmaxL.pdf} \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true,scale=0.23]{ximJmax.pdf} \caption{\label{Ox0}(Left) Maximal current versus the width of the junction $\ell$ for various $m$; (Right) coherence length $\xi$ versus the graviton mass $m$ from the fit \eqref{xi}. } \end{figure}
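For reference, the two fitting steps just described can be sketched as follows (our own illustration; the arrays are hypothetical placeholders, not the data behind the figures):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Step 1: fit J ~ Jmax*sin(gamma) on (gamma, J) data for one (m, ell).
gamma = np.linspace(-np.pi, np.pi, 21)
J = 0.30*np.sin(gamma)                       # placeholder current data
(jmax,), _ = curve_fit(lambda g, a: a*np.sin(g), gamma, J)

# Step 2: fit ln(Jmax/Tc^2) = ln A - ell/xi over the junction widths.
ell  = np.array([2.0, 3.0, 4.0, 5.0])
Jmax = np.array([0.30, 0.11, 0.041, 0.015])  # placeholder maxima
slope, logA = np.polyfit(ell, np.log(Jmax), 1)
xi, A = -1.0/slope, np.exp(logA)
\end{verbatim}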
\section{Conclusions and Discussions} \label{sect:con} In this paper, we studied the SNS Josephson junction in the background of dRGT massive gravity. For a homogeneous chemical potential, we found that the greater the graviton mass, the harder the normal metal-superconductor phase transition is to take place. From the holographic point of view, we argued that the phase transition is more difficult to happen when the momentum dissipation, or the breaking of translational symmetry, is stronger in the boundary field theory. For the holographic SNS Josephson junction model, we obtained the familiar sinusoidal relation between the tunneling current and the phase difference across the junction. The maximal current decreases as the width of the junction increases, similarly to previous studies. More interestingly, the maximal current also decreases as the graviton mass in the bulk increases. This indicates that stronger momentum dissipation makes the quantum tunneling in the Josephson junction harder to take place. By virtue of the relation between the maximal current and the coherence length, we found that the coherence length also decreases with respect to the graviton mass. Therefore, momentum dissipation also reduces the coherence length in the Josephson junction. We expect that this kind of relation between the maximal current, the coherence length and the graviton mass (momentum dissipation in the boundary field theory) can be observed in condensed matter experiments. It would also be interesting to find the analytic relation between the coherence length and the graviton mass. \acknowledgments We are grateful for the KITPC's hospitality and partial support during the completion of this work. This work was partly supported by the National Natural Science Foundation of China (NSFC) under grants No. 11105004, 11205097, 11575083, 11565017; the Program for the Innovative Talents of Higher Learning Institutions of Shanxi; the Natural Science Foundation for Young Scientists of Shanxi Province, China (Grant No.2012021003-4); the Fundamental Research Funds for the Central Universities under grant No. NS2015073; the Shanghai Key Laboratory of Particle Physics and Cosmology under grant No. 11DZ2260700; and the Open Project Program of the State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (No. Y5KF161CJ1).
\section{Approach} In this section, we introduce the three shift primitives which we find sufficient for producing fast CNN models. We then fuse these shift primitives together with pointwise group convolution to create two efficient modules, and from these we create novel deep neural networks. \subsection{Channel Shift} \label{s:channel-shift} Group convolution can reduce the computational complexity of ordinary convolution. For example, AlexNet~\cite{alexnet} used group convolutions to divide the feature map across two GPUs. More generally, a group convolution with group number $G$ reduces the FLOPs and parameter size by a factor of $G$. However, stacking group convolution layers together can block information from flowing among groups and reduce accuracy. To mitigate this, the channel shuffle operation in ShuffleNet~\cite{shufflenet} is adopted to fuse features among different groups. As illustrated in the left panel of Figure~\ref{f:channel-shift}, channel shuffle is time-consuming since it requires moving feature maps to another memory space. Note that moving data is much more expensive in terms of latency and energy consumption than floating point operations~\cite{eie}. In contrast, shifting the pointer, \ie\ the physical address from which data is loaded, is free. Therefore, we propose the channel shift primitive, which utilizes pointer shifts and minimizes actual data movement to reduce time and energy. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{imgs/soft_shuffle.pdf} \caption{The computation diagrams of channel shuffle and channel shift, both with two stacked convolution layers. a): channel shuffle layer with the same number of groups. Input channels are fully mixed when GConv2 takes data from different groups after GConv1, and each colored arrow denotes one data copy. b): channel shift layer, where the channels are shifted circularly along a predefined direction; the process spends at most two units of data copying, and thus 8$\times$ less memory movement (in this example) than channel shuffle}\label{f:channel-shift} \end{figure} The channel shift primitive blends the information in adjacent channels by shifting all the channels along a certain direction. Taking four-group convolution as an example, as shown in Fig.~\ref{f:channel-shift} (b), a predefined storage space (\ie\ the two yellow grids) is allocated at the end of the feature map, and half of the first group is copied there. Then, the pointer is shifted by two grids (half of a group). Apart from this small copy, the primitive does not involve any further data movement. When the second group convolution ($GConv2$) fetches data from the shifted address, each group in $GConv2$ gets data from two groups of the input feature map. By moving data circularly, a single channel shift layer fuses information between adjacent groups, and stacking multiple layers fuses more. This is not equivalent to the channel shuffle method, but our experiments show that channel shift leads to similar accuracy. Furthermore, compared to channel shuffle, whose mapping is more complex and requires much more actual data movement, channel shift needs 8$\times$ less data movement in this case, because it only copies 2 units of data while channel shuffle copies 16 units, as illustrated in Figure~\ref{f:channel-shift}.
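A minimal NumPy sketch of the channel shift primitive (our own illustration; in an optimized implementation the feature map would already live in the padded buffer, and the final slicing would be a pure pointer shift):
\begin{verbatim}
import numpy as np

def channel_shift(x, groups=4):
    c = x.shape[0]
    half = c // groups // 2            # half of one group
    padded = np.empty((c + half,) + x.shape[1:], dtype=x.dtype)
    padded[:c] = x                     # in practice x already lives here
    padded[c:] = x[:half]              # the only actual data copy
    return padded[half:]               # pointer shift: a view, no copy

x = np.arange(16).reshape(16, 1)       # 16 channels, 4 groups
print(channel_shift(x).ravel())        # circular shift by half a group
\end{verbatim}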
\subsection{Address Shift} Modern convolutional neural networks usually consist of convolution layers with different kernel sizes and channels. Since 90\% or more of the time is spent in convolutions, improving the convolution process is attractive. Fortunately, feature map shift can provide the equivalent function of spatial convolutions with zero FLOPs and no parameters. Despite this, the feature map shift in ShiftNet~\cite{shiftnet} still consumes considerable inference time. To solve this problem, we propose the address shift primitive, which is detailed in the following. \begin{figure}[!t] \centering \includegraphics[width=0.98\linewidth]{imgs/address_shift.pdf} \caption{The implementation of address shift in four directions, where A stands for values from the adjacent feature map, and the black arrow denotes the address of the feature map pointer}\label{f:address-shift} \end{figure} Figure~\ref{f:address-shift} presents the address shift primitive in four different directions. Taking the right shift as an example, as illustrated in Figure \ref{f:address-shift} (a), a shift pointer is offset by 1 unit ahead of the starting address of the feature map, pointing to address A. Fetching the tensor contiguously from the memory space starting at this address is then equivalent to shifting the entire tensor to the right by one grid. Similarly, we can define the shift operations for the other three directions (left, up and down): moving the pointer one unit forward is equivalent to a left shift, and skipping (backtracking) a row is equivalent to a shift down (up), as illustrated in the remaining parts of Figure~\ref{f:address-shift}. Formally, the above process can be abstracted as the following formula: \begin{align} \bm x_r &=tensor(p_{\bm x}-s_d) \end{align} where the function $tensor(p)$ denotes a read operation that fetches a tensor at pointer $p$, and $s_d$ denotes the offset for shift direction $d$. More specifically, $s_{right} = 1$, $s_{left} = -1$, $s_{up} = -stride$, $s_{down} = stride$.
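A minimal NumPy sketch of this formula on a flat, row-major feature map (our own illustration; the gather with an index array stands in for what is a pure pointer offset in an optimized C implementation, and boundary reads are clamped here, whereas the real operation reads the adjacent values A discussed below):
\begin{verbatim}
import numpy as np

def address_shift(flat, H, W, direction):
    s = {'right': 1, 'left': -1, 'up': -W, 'down': W}[direction]
    idx = np.arange(H*W) - s            # x_r = tensor(p_x - s_d)
    return flat[np.clip(idx, 0, H*W - 1)].reshape(H, W)

fmap = np.arange(12, dtype=np.float32)  # a 3x4 map stored contiguously
print(address_shift(fmap, 3, 4, 'right'))
# values leak across row ends: the non-zero boundary effect
\end{verbatim}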
The proposed operation is functionally similar to the shift mechanism in ShiftNet~\cite{shiftnet}, which is implemented by a predefined spatial convolution whose kernel contains only one non-zero value indicating the shift direction, as shown in Figure~\ref{f:shiftnet}. The feature map shift can be seen as a special case of depthwise separable convolution. Comparing Figure~\ref{f:shiftnet} and Figure~\ref{f:address-shift}(a), one difference lies at the boundary: the feature map shift leads to 0-padded boundaries, while address shift has non-zero boundaries. In our experiments, however, we found that this nuance does not have any noticeable effect on the network's accuracy. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{imgs/shiftnet.pdf} \caption{Implementation of feature map right shift by depthwise convolution, proposed by \cite{shiftnet}, where the result is similar to the address shift operation in Figure~\ref{f:address-shift}(a)}\label{f:shiftnet} \end{figure} Based on the four basic address shift operations (up, down, left, right), we can compose arbitrary shift patterns (such as up-left). In practice, however, this is still expensive at inference time, since the number of possible shift directions grows quadratically with the kernel size (3$\times$3 kernel: 9 possible directions; 5$\times$5 kernel: 25 possible directions), as explained in \cite{shiftnet}. Inspired by the convolution decomposition in \cite{rethinking}, where a 3$\times$3 convolution is divided into 1$\times$3 and 3$\times$1 convolutions, a top-left shift can similarly be decomposed into a top shift and a left shift. As the number of channels increases, CNNs equipped with the address shift operation can fuse information from every direction. Thus we can use only the \textbf{four fundamental shift directions} to represent the other directions and simplify the network architecture. \subsection{Shortcut Shift} With the success of ResNet, shortcut connections have become common in deeper network architectures. Both addition and concatenation are effective implementation choices, and concatenation performs better according to the analysis in SparseNet~\cite{sparsenet}. Shortcut connections integrate the lower-level, detailed information with the higher-level representation and offer a shorter path for back-propagation. Moreover, channel concatenation does not require any computation and can therefore lead to faster inference than addition. We propose a further optimization for channel concatenation: a fixed-size space is allocated in advance, which places the output of the current layer right after the output of the previous layer. In other words, our approach places the outputs of the two layers in a pre-allocated contiguous storage space, so that no copy or computation time is spent on channel concatenation. Since the starting pointer of the current output is shifted to the end of the last output, we name this ``shortcut shift'', in accordance with our naming style. This optimization can be leveraged even better in DenseNet~\cite{densenet}, which relies heavily on channel concatenation.
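A minimal NumPy sketch of the shortcut shift idea (our own illustration; in practice the buffer layout would be managed by the inference framework):
\begin{verbatim}
import numpy as np

# Outputs of two layers are written into one pre-allocated buffer, so
# "concatenation" is just a view over contiguous memory: no copy at all.
C1, C2, H, W = 8, 8, 4, 4
buf = np.empty((C1 + C2, H, W), dtype=np.float32)

out_prev = buf[:C1]       # layer L writes its output here
out_curr = buf[C1:]       # layer L+1 writes right after it
out_prev[...] = 1.0       # stand-ins for the actual layer computations
out_curr[...] = 2.0
concat = buf              # concatenated features, obtained for free
\end{verbatim}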
\subsection{Address-based module} \label{s:architeture} Based on the address shift and channel shift operations described above, we build an Address-based module\ as a collection of layers in the manner of the bottleneck module in ResNet~\cite{resnet}. As shown in Figure~\ref{f:architeture}(a), we use a pointwise group convolution layer at the beginning. Then the channel shift layer exchanges information among channel groups. Next, the address shift layer mixes spatial information. In the address shift module, we divide the channels into 3 groups, and within each group, the address shift operation moves the data towards the four directions \{up, down, left, right\}. Finally, we perform another pointwise group convolution to fuse information and match the output channel dimension. Following ShuffleNet~\cite{shufflenet}, we use an additive residual connection if the feature map size is unchanged; otherwise, we use average pooling and concatenation. The first group convolution is followed by batch normalization and a non-linear activation function (ReLU), while the second is followed only by batch normalization. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{imgs/architecture.pdf} \caption{Address-based module\ and Address-enhanced module. a): the unit with pointwise group convolution (GConv), channel shift and address shift, where only the GConv consumes parameters. b): similar to the left one, but differing in the second GConv operation, into which we embed the channel shift and address shift. This is expected to significantly reduce computation. }\label{f:architeture} \end{figure} We find that both the channel shift and the address shift operation can be embedded into the second pointwise group convolution to speed it up. Thus we design an \textit{enhanced} group convolution that realizes channel shift, address shift and group convolution together. For simplicity, we set the number of groups to four, with each group corresponding to one direction of the address shift operation. Also, taking the address offset into consideration, as depicted in Figure~\ref{f:architeture} (b), we perform the left and up shifts (forward offset) in the first two groups, and the right and down shifts (backward offset) in the last two groups. In practice, this design can be implemented without memory overflow or extra memory overhead. We call the Address-based module\ equipped with the above design the Address-enhanced module.
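A schematic PyTorch-style sketch of the Address-based module\ (our own illustration: the shifts are emulated with \texttt{torch.roll} rather than raw pointer arithmetic, the grouping is simplified to one direction per quarter of the channels, and the residual branch assumes matching shapes):
\begin{verbatim}
import torch
import torch.nn as nn

class AddressModule(nn.Module):
    def __init__(self, c, c_mid, groups=4):
        super().__init__()
        self.gconv1 = nn.Conv2d(c, c_mid, 1, groups=groups, bias=False)
        self.bn1 = nn.BatchNorm2d(c_mid)
        self.gconv2 = nn.Conv2d(c_mid, c, 1, groups=groups, bias=False)
        self.bn2 = nn.BatchNorm2d(c)

    def forward(self, x):
        h = torch.relu(self.bn1(self.gconv1(x)))
        h = torch.roll(h, h.shape[1] // 8, dims=1)      # channel shift
        u, d, l, r = torch.chunk(h, 4, dim=1)           # address shift:
        h = torch.cat([torch.roll(u, -1, 2), torch.roll(d, 1, 2),
                       torch.roll(l, -1, 3), torch.roll(r, 1, 3)], 1)
        return self.bn2(self.gconv2(h)) + x             # identity shortcut

y = AddressModule(32, 96)(torch.randn(1, 32, 16, 16))
\end{verbatim}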
\section{Conclusions} The practical value of using deep neural networks in latency-sensitive, power- and energy-constrained embedded systems naturally motivates the search for techniques to create fast, energy-efficient deep neural-net models. This paper adds a collection of shift-based operations, namely channel shift, shortcut shift and address shift, to the set of useful techniques for designing such models. In particular, these shift-based techniques utilize address offsets to realize spatial convolutions with no parameters and no FLOPs, and thereby no extra inference time. Based on these shift operations, we proposed two inference-efficient CNN models named AddressNet\ and Enhanced AddressNet, which outperform ShiftNet significantly. We have also used our operations to improve the state of the art in neural net architecture: on the CIFAR and ImageNet datasets, we demonstrate that deep neural nets designed with our shift operations can achieve better accuracy. In the future, we plan to develop a library to optimize group convolution, which will make it more adaptable to our shift operations. \section{Experiments} As described above, we present a collection of shift operations, including channel shift, address shift and shortcut shift, to reduce parameters, FLOPs and, more importantly, actual inference time, while retaining accuracy. In this section, we first build variants of our compact networks, called AddressNet\ and Enhanced AddressNet, based on the above two basic modules. We then investigate their basic functions by comparing them with other primitive operations on the CIFAR100 dataset, and compare the two architectures with ShiftResNet~\cite{shiftnet}. Finally, we use the proposed approaches to modify MobileNet and evaluate it, as well as our own models, on the ImageNet dataset. In AddressNet\ and Enhanced AddressNet, a 3$\times$3 convolution is first applied, with 36 and 16 filters respectively. Then three blocks are stacked with Address-based module\ and Address-enhanced module\ on the feature map sizes \{32, 16, 8\}, respectively. The subsampling is performed in the second pointwise group convolution with a stride of 2. An identity map is used when adjacent units have the same feature map size; otherwise, average pooling is used to match the shape and the output channels are doubled. The output channel numbers of the blocks are \{48, 60, 96\} and \{48, 96, 192\}, respectively. The network ends with a global average pooling followed by a 100-way fully-connected layer and a softmax classifier. Each block contains 3 modules in AddressNet-20 and Enhanced AddressNet-20, 5 modules in AddressNet-32 and Enhanced AddressNet-32, and 7 modules in AddressNet-44 and Enhanced AddressNet-44. \subsection{Implementation details} To improve our experimental results, hyperparameters are fine-tuned with a coarse grid search. Following ShiftResNet, the ``expansion rate'' ($\varepsilon$) is used to scale the number of channels in the intermediate layers. For simplicity, we set the expansion rate to 3 for most experiments on the CIFAR100 dataset; the $\varepsilon$ for ImageNet is shown in Table~\ref{table:addressnetArch}. On the CIFAR100 dataset, the input image is 32$\times$32. We use a stochastic gradient-descent (SGD) optimizer with mini-batch size 128 on 4 GPUs (\ie\ 32 per GPU). The weight decay is 0.0005 with momentum 0.9, and we train the network for 300 epochs. The learning rate starts at 0.1 and decreases by a factor of 10 after 32k and 48k iterations, and by a factor of 2 every 16k iterations from 64k to 128k. We adopt the weight initialization of \cite{msra}. On the ImageNet dataset, the input image is randomly cropped from a 256$\times$256 image to 224$\times$224. We start from a learning rate of 0.1, divided by 10 every 30 epochs, with a weight decay of 0.0001. We evaluate the error on the single 224$\times$224 center crop from an image whose shorter side is 256. During testing, we remove all batch normalization (BN) layers because they can be fused into a convolution layer in advance. We implement our networks in the Caffe framework and test inference time on a single GeForce GTX 1080 GPU for two epochs with a batch size of 1, reporting the average performance.
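As a reference for the BN removal just mentioned, folding a BN layer into the preceding convolution at test time is standard; a minimal sketch (our own illustration, not the actual Caffe code):
\begin{verbatim}
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    # y = gamma*(conv(x)-mean)/sqrt(var+eps)+beta == conv'(x) with:
    scale = gamma / np.sqrt(var + eps)          # one factor per channel
    W_fused = W * scale[:, None, None, None]    # W: (out, in, kh, kw)
    b_fused = (b - mean) * scale + beta
    return W_fused, b_fused
\end{verbatim}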
\subsection{Shift-based Primitives vs. Baselines} In the following experiments, we first study the equivalence between our proposed operations and their baselines on the CIFAR100 dataset. \subsubsection{Channel shift vs. Channel shuffle} Channel fusion is especially critical to the performance of small networks. Channel shift and channel shuffle promote information fusion to varying degrees. To compare the performance of channel shift and channel shuffle, we replace all channel shifts in AddressNet-32 with channel shuffle; the results are shown in Table~\ref{table:channel-perf}. The two models achieve the same accuracy, but AddressNet-32 equipped with channel shift is much faster than with channel shuffle, being 1.4$\times$ faster in total time and 12.7$\times$ faster in operation time (operation time denotes the average time that the operation consumes). Interestingly, the acceleration is larger than the theoretical estimate described in Section~\ref{s:channel-shift}. This shows the advantage of embedding the shift operations: as introduced in Section~\ref{s:architeture}, embedding the channel shift into group convolutions saves inference time significantly. \begin{table*}[t] \begin{center} \begin{tabular}{cccccc} \hline\noalign{\smallskip} Model & Params & FLOPs & Top1 Accuracy & Total Time & Operation Time \\ \noalign{\smallskip} \hline \noalign{\smallskip} AddressNet-32-shift & 0.14M & 37.3M & 71.96\% & \textbf{4.8} & \textbf{0.15}\\ AddressNet-32-shuffle & 0.14M & 37.3M & 71.97\% & 6.5 & 1.9\\ \hline Speed Up & - & - & - & 1.4$\times$ & 12.7$\times$ \\ \hline \end{tabular} \end{center} \caption{Comparison between the channel shift and channel shuffle operations, conditioned on the same size of parameters and FLOPs. The two models conform to the architecture of AddressNet-32 and differ only in the channel transformation. The total time (ms) is the average runtime of the model, and the operation time (ms) denotes the average time that the operation consumes} \label{table:channel-perf} \end{table*} \subsubsection{Address Shift vs. Feature map Shift} Based on the analysis in the previous section, we argue that some shift directions are redundant given the four basic directions. To validate this, we first replace the nine shift directions (kernel size 3) on the feature map with the four fundamental shift directions in the ShiftResNet architecture~\cite{shiftnet}. Second, to compare the performance of the different shift operations, we replace the feature map shift with our address shift in the same network architectures to build AddressResNet. The results are displayed in Table~\ref{table:headings}. First, we notice that both combinations of directions achieve nearly the same performance, which demonstrates that the shift operation based on the four basic directions is adequate for shifting the feature map. Second, we find that address shift and feature map shift also achieve nearly the same performance. Third, regarding inference time, the model that uses address shift achieves a nontrivial speedup. This experiment thus validates our expectation, and in the following experiments all models use address shift with four directions. \begin{table*}[t] \begin{center} \begin{tabular}{cccccc} \hline\noalign{\smallskip} Model & Params & FLOPs & Top1 Accuracy & Total time & Operation Time \\ \noalign{\smallskip} \hline \noalign{\smallskip} ShiftResNet-four & 145k & 22.9M & 71.80\% & 3.3 & 0.4 \\ ShiftResNet-nine & 145k & 22.9M & 71.86\% & - & -\\ AddressResNet-four & 145k & 22.9M & \textbf{71.94}\% & 3 & 0.3 \\ AddressResNet-nine & 145k & 22.9M & 71.91\% & - & -\\ \hline Speed Up & - & - & - & 1.1$\times$ & 1.3$\times$ \\ \hline \end{tabular} \end{center} \caption{Performance of the address shift and feature map shift operations, following the above comparison method. The four models conform to the architecture of ShiftResNet~\cite{shiftnet} and differ only in the feature map transformation and the number of directions} \label{table:headings} \end{table*} \subsubsection{Shortcut shift vs. Identity map} The purpose of the shortcut shift operation is to realize shortcut connections for free. We replace all identity map layers in Enhanced AddressNet-20 with the shortcut shift operation to build Enhanced AddressNet-20-concat. To prevent over-accumulation of channels in deep layers, we reduce the output channels of the current layer to match the output channels of the previous residual function and set the expansion rate to four in all layers. This gives similar FLOPs and parameters. The results are shown in Table~\ref{table:shortcut-compare}. We notice that the two achieve almost equivalent performance with similar parameters and FLOPs. However, our shortcut shift obviously does not spend any inference time (so we omit the comparison of inference time). \begin{table}[t] \begin{center} \begin{tabular}{cccc} \hline\noalign{\smallskip} Model & Params & FLOPs & Top-1\\ \noalign{\smallskip} \hline \noalign{\smallskip} Enhanced AddressNet-20 & 0.18M & 27M & 69.45\% \\ \begin{tabular}[c]{@{}c@{}}Enhanced AddressNet-20\\ (concat)\end{tabular} & 0.19M & 29M & 69.71\% \\ \hline \end{tabular} \end{center} \caption{Performance of the two shortcut connection implementations, following the above comparison method} \label{table:shortcut-compare} \end{table} \subsubsection{Address shift vs. Depthwise separable convolution} When the outputs are used in a spatial aggregation context, there is strong correlation between adjacent units, which results in much less loss of information during dimension reduction. Thus, depthwise separable convolutions are critical for the extraction of spatial features.
Address shift naturally derives spatial features in different views, so we integrate address shift into a CNN-style network to test its utility and expressive ability. To evaluate the performance fairly, we use depthwise separable convolution to design MobileNet-32, whose number of layers is the same as that of Enhanced AddressNet-32. We follow most of the hyper-parameters of Enhanced AddressNet-32 with one exception: the output channels of the first convolution and the three blocks are 48 and \{56, 112, 224\}, respectively, in MobileNet-32. Table~\ref{table:shift-compare} shows that Enhanced AddressNet-32 behaves better at a similar scale of parameters and FLOPs. This indicates that our address shift is equal or superior to depthwise separable convolution in its expressive ability and capacity to extract image features. \begin{table}[t] \begin{center} \begin{tabular}{cccc} \hline\noalign{\smallskip} Model & Params & FLOPs & Top-1 \\ \noalign{\smallskip} \hline \noalign{\smallskip} Enhanced AddressNet-32 & 328k & 48M & \textbf{73.55}\% \\ MobileNet-32 & 337k & 50M & 71.62\% \\ \hline \end{tabular} \end{center} \caption{Comparison in expressive ability (accuracy) between address shift and depthwise separable convolution, conditioned on the same size of parameters and FLOPs. The two models conform to the architecture of Enhanced AddressNet-32 and differ only in the feature map transformation} \label{table:shift-compare} \end{table} \subsection{Performance on CIFAR100} We evaluate AddressNet\ and Enhanced AddressNet\ with different depths on the CIFAR100 classification task, and compare them with the ShiftResNet architecture. It is reported in \cite{shiftnet} that ShiftResNet achieves better accuracy than ResNet with similar architectures, while requiring fewer FLOPs and parameters. We elaborate three variants of AddressNet\ and Enhanced AddressNet\ at three different parameter scales and quote the results of ShiftResNet from \cite{shiftnet}. The results are shown in Table~\ref{table:perf-shift-address} and visualized in Fig~\ref{fig:compare-shift-address}. Compared to the best accuracy of ShiftResNet, our model (AddressNet-44) achieves better performance with 3$\times$ fewer FLOPs and 6$\times$ fewer parameters. Furthermore, the curves in Fig~\ref{fig:compare-shift-address}(a) and Fig~\ref{fig:compare-shift-address}(b) show that our network architectures consistently obtain better accuracy than ShiftResNet across different parameter and FLOP budgets. Fig~\ref{fig:compare-shift-address}(c) shows that our models also reduce inference time significantly. Also note that Enhanced AddressNet\ is always better than AddressNet~due to its larger groups and well-designed mechanism for the shift operation, as described in Section~\ref{s:architeture}.
\begin{table*}[t] \begin{center} \begin{tabular}{cccccc} \hline\noalign{\smallskip} Model & Parameters & FLOPs & Top1 Accuracy & GPU Time (ms)& CPU Time (ms)\\ \noalign{\smallskip} \hline \noalign{\smallskip} ShiftResNet-20 & 0.19M & 46.0M & 68.64\% & 2.5$\pm$0.01 & 45.86$\pm$0.26\\ ShiftResNet-56 & 0.58M & 102M & 72.13\% & 7.0$\pm$0.04 & 115.45$\pm$1.50\\ ShiftResNet-110 & 1.18M & 187M & 72.56\% & 14.2$\pm$0.02 & 239.83$\pm$4.83\\ \hline AddressNet-20 & 0.08M & 21.8M & 68.68\% & 2.9$\pm$0.01 & 10.73$\pm$0.01\\ AddressNet-32 & 0.14M & 37.3M & 71.96\% & 4.8$\pm$0.04 & 17.44$\pm$0.00\\ AddressNet-44 & {0.20M} & {52.8M} & {73.31\%} & {6.2}$\pm$0.08 &24.08$\pm$0.05\\ \hline Enhanced AddressNet-20 & 0.18M & 26.7M & 69.45\% & 2.9$\pm$0.03 &16.86$\pm$0.66\\ Enhanced AddressNet-32 & 0.33M & 48.0M & 73.55\% & 4.9$\pm$0.01 &25.90$\pm$0.01\\ Enhanced AddressNet-44 & 0.47M & 69.2M & {74.6\%} & {6.4}$\pm$0.01 &29.14$\pm$0.06\\ \hline \end{tabular} \end{center} \caption{The performance of ShiftResNet and AddressNet\ on CIFAR100. The results of ShiftResNet are quoted from \cite{shiftnet}} \label{table:perf-shift-address} \end{table*} \begin{figure*}[t] \subfigure{ \begin{minipage}{5cm} \centering \includegraphics[scale=0.22]{imgs/cifar_parames_acc.pdf} \end{minipage} } \quad \quad \subfigure{ \begin{minipage}{5cm} \centering \includegraphics[scale=0.22]{imgs/cifar_flops_acc.pdf} \end{minipage} } \quad \quad \subfigure{ \begin{minipage}{5cm} \centering \includegraphics[scale=0.22]{imgs/cifar_inference_acc.pdf} \end{minipage} } \caption{A clearer comparison between AddressNet\ and ShiftResNet, summarized from Table~\ref{table:perf-shift-address}. These figures show that our network architectures are better than the ShiftResNet family members, with fewer parameters (a), fewer FLOPs (b) and lower latency (c)} \label{fig:compare-shift-address} \end{figure*} \begin{table}[t] \begin{center} \begin{tabu} to \hsize {c|c|c|c|c} \hline Type & Output size & Stride & $\varepsilon$ & Repeat \\ \hline \hline Input & 224$\times$224, 3 & - & - & - \\ \hline Conv1 & 112$\times$112, 32 & 2 & - & 1\\ \hline Stage1 & 56$\times$56, 96 & 2 & 4 & 1 \\ & 56$\times$56, 96 & 1 & 3 & 3 \\ \hline Stage2 & 28$\times$28, 192 & 2 & 3 & 1\\ & 28$\times$28, 192 & 1 & 2 & 4 \\ \hline Stage3 & 14$\times$14, 384 & 2 & 2 & 1 \\ & 14$\times$14, 384 & 1 & 2 & 5 \\ \hline Stage4 & 7$\times$7, 768 & 2 & 2 & 1 \\ & 7$\times$7, 768 & 1 & 2 & 3 \\ \hline Pool & 1$\times$1, 768 & - & - & 1 \\ \hline FC & 1000 & - & - & 1 \\ \hline \end{tabu} \end{center} \caption{Enhanced AddressNet\ Architecture} \label{table:addressnetArch} \end{table} \begin{figure*}[t] \subfigure{ \begin{minipage}{5cm} \centering \includegraphics[scale=0.22]{imgs/low_imagenet.pdf} \end{minipage} } \quad \quad \subfigure{ \begin{minipage}{5cm} \centering \includegraphics[scale=0.22]{imgs/mid2_imagenet.pdf} \end{minipage} } \quad \quad \subfigure{ \begin{minipage}{5cm} \centering \includegraphics[scale=0.22]{imgs/mid1_imagenet.pdf} \end{minipage} } \caption{The performance of different models at three levels of accuracy, in the reference frame of inference time and accuracy. In this coordinate system, the closer to the left (time) and the top (accuracy), the better the model} \label{fig:imagenet} \end{figure*} \subsection{ImageNet} Based on the above experiments, we have confirmed that our networks outperform ShiftNet~\cite{shiftnet} on CIFAR100 and that the three shift operations can reduce parameters while retaining accuracy.
To further assess the scalability and flexibility of our operations, we use our address shift operation to improve MobileNet and show its performance on the ImageNet dataset. In our experiments on small models, we modify MobileNet to create Address-MobileNet\ by doubling the output channels of the first convolution, removing the last $2\sim4$ layers, and replacing depthwise separable convolutions with address shift. Then, similarly to MobileNet, we scale the input size to build Address-MobileNet-192 and Address-MobileNet-160. To adapt to ImageNet, we increase the depth and expansion rate $\varepsilon$ of Enhanced AddressNet\ to form Enhanced AddressNet-A and Enhanced AddressNet-B. The detailed architecture of Enhanced AddressNet-A is listed in Table~\ref{table:addressnetArch}, while Enhanced AddressNet-B is a little shallower to fit the low-accuracy level; we do not show its details for simplicity. The results for the different levels of accuracy are shown in Table~\ref{table:perf-imagenet}, and the corresponding scatter diagram is shown in Figure~\ref{fig:imagenet}. In the low-accuracy scenario, MobileNet improved by our address shift operation excels significantly in both accuracy and inference time (shown in Figure~\ref{fig:imagenet}(a)). This validates the scalability of the address shift operation on a large dataset. In the modest-accuracy scenarios, our models achieve comparable performance to other state-of-the-art models with small and compact network architectures. In the NVIDIA CUDA Deep Neural Network library (cuDNN\footnote{https://developer.nvidia.com/cudnn}), there is little optimization for group convolution; this therefore favors models that are not based on group convolution, which is why our models only achieve comparable rather than the best results. Generally speaking, our experiments on CIFAR100 and ImageNet demonstrate the value of using address shift to accelerate the inference process, and also demonstrate that our three shift operations have the scalability and flexibility needed for designing compact architectures. \begin{table}[t] \begin{center} \begin{tabular}{ccc} \hline\noalign{\smallskip} Model & \begin{tabular}[c]{@{}c@{}}Top-1\\ Acc.\end{tabular} & \begin{tabular}[c]{@{}c@{}}Latency\\ (ms)\end{tabular}\\ \noalign{\smallskip} \hline \noalign{\smallskip} ShuffleNet-1x & \textbf{67.4}\% & 6.0 \\ \textbf{Enhanced AddressNet-A (Ours)} & 67.0\% & 5.6 \\ 0.5MobileNet-224 & 63.7\%& \textbf{2.0}\\ ShiftNet-B & 61.2\% & 6.0\\ SqueezeNet-Simple & 60.4\% & 3.0\\ \hline \textbf{Enhanced AddressNet-B (Ours)} & \textbf{59.7}\% & 4.3\\ ShiftNet-C & 58.8\% & \textbf{2.7}\\ SqueezeNet-Complex & 58.8\% & 3.6\\ SqueezeNet & 57.5\% & 2.8\\ ShuffleNet-0.5x & 56.8\% & 5.2 \\ \hline \textbf{Address-MobileNet-224 (Ours)} & \textbf{46.0}\% & 1.7 \\ ShuffleNet-0.25x & 45.0\% & 5.0 \\ \textbf{Address-MobileNet-160 (Ours)} & 43.6\% & \textbf{1.5} \\ 0.25MobileNet-128 & 41.3\%& 1.8\\ \hline \end{tabular} \end{center} \caption{The performance at different levels of accuracy on ImageNet. We modify MobileNet with the address shift operation to build the Address-MobileNet\ family, and improve Enhanced AddressNet\ to build Enhanced AddressNet-A and Enhanced AddressNet-B.
Enhanced AddressNet-B is a little shallower to fit the low-accuracy level} \label{table:perf-imagenet} \end{table} \section{Introduction} Convolutional neural networks (CNNs) have been firmly established as the prevalent methods in image understanding problems such as image classification, image captioning, and object detection~\cite{imagenet,resnet,rcnn,fasterrcnn,fcn}. The high accuracy comes at the cost of increased computation time and memory usage. Real-time processing is vital in applications such as self-driving cars and speech recognition, where low latency, small storage, and adequate accuracy are required~\cite{mobilenet,shufflenet,ma2018vehicle}. Thus, producing fast and energy-efficient CNNs is very well motivated. There are a number of recent efforts aimed at reducing CNN model size and computational requirements while retaining accuracy. For example, MobileNets~\cite{mobilenet} propose a family of lightweight neural networks based on depthwise separable convolution. ShuffleNet~\cite{shufflenet} utilizes pointwise group convolution and channel shuffle to reduce parameters and FLOPs. To further decrease parameters, ShiftNet~\cite{shiftnet} adopts a shift operation on the feature map as an alternative to spatial convolution. Unfortunately, a smaller parameter size or number of FLOPs does not always lead to a direct reduction of actual inference time, since many core operations introduced by these state-of-the-art compact architectures are not efficiently implemented on GPU-based machines. For instance, in MobileNet depthwise separable convolutions \textbf{only consume 3\% of the total FLOPs and 1\% of the parameters, but they constitute 20\% of the total inference time}. Channel shuffle and shortcut connections in ShuffleNet do not require any FLOPs or parameters; however, these operations still constitute 30\% of the total inference time. Similarly, in ShiftNet the feature map shift is \textbf{parameter-free and FLOP-free, but it occupies 25\% of the total inference time}. Although MobileNet and ShuffleNet have roughly the same FLOPs, the latter requires twice the inference time. More details are shown in Table~\ref{tab:statistics}. Therefore, in practice, \textbf{neither reducing parameters nor reducing FLOPs ensures a reduction in inference time}. \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{imgs/fig1.pdf} \caption{Three efficient shift primitives for neural network architecture design}\label{f:three_operations} \end{figure} Based on the above concerns, we propose a collection of three shift primitives (Fig.~\ref{f:three_operations}) for CNN-based compact architectures to reduce parameters, FLOPs and inference time simultaneously on GPU-based machines: 1) the channel shift, which acts as a faster alternative to the channel shuffle operation; 2) the address shift, which efficiently collects spatial information at no cost in actual inference time; 3) the shortcut shift, which provides fast channel concatenation to realize residual connections by allocating contiguous memory space in advance, consuming no inference time. This collection of primitives mainly involves moving pointers in contiguous memory space, which minimizes actual memory copying and completely avoids floating-point operations, leading to actual speedup. We combine this collection of shift primitives with pointwise group convolution to build two compact architectures, named AddressNet\ and Enhanced AddressNet\ respectively.
Experiments on the CIFAR100 and ImageNet datasets demonstrate that our models can achieve equal or superior accuracy with less inference time. \section{Related Work} \label{s:related} Deep convolutional neural networks provide the best results on many computer vision tasks~\cite{imagecaption,fcn}. However, deep neural networks are not always computationally efficient, and there appears to be redundant computation in most models. The desire to deploy accurate deep neural networks in low-latency applications naturally motivates the search for methods to decrease model size and operations (parameters, FLOPs) and, more generally, overall inference time. Recently, \cite{smallnn} surveyed current approaches to designing small, energy-efficient deep neural nets. Here we take another approach, and note that existing approaches can be categorized into either compressing pre-trained models or training small models directly. \subsection{Compressing Neural Networks} We outline four types of widely used methods for model compression. First, pruning reduces redundant parameters that are insensitive to the accuracy~\cite{channelpruning,amc,thinet,networktrimming,deepcompression}. Second, low-rank factorization, which estimates the informative parameters by matrix decomposition, has been used in~\cite{lowrank,linearstructure,lowrankregularization,channeldecomposition}. Third, quantization and binarization~\cite{vectorquantization,quantizedconvolution,xnornet} can reduce the number of bits used to represent each weight. Lastly, knowledge distillation~\cite{bayesian,knowledgedistill} trains a small neural network by distilling the knowledge of a large model. We adopt a different approach from these, designing compact networks directly instead of compressing a pretrained network. \subsection{Designing compact layers and networks} There has been increasing interest in building efficient and small models, \eg\ \cite{flattened,quantizedconvolution,xnornet,lightweight}. In ResNet~\cite{resnet}, the bottleneck structure was proposed to decrease the channel numbers before and after a 3$\times$3 convolution. ResNeXt~\cite{resnext} introduces a multi-branch and homogeneous architecture to decrease FLOPs and improve accuracy. The fire module was introduced in SqueezeNet~\cite{squeezenet}, where a fraction of the 3$\times$3 convolutions is replaced by 1$\times$1 convolutions to reduce parameters. GoogLeNet~\cite{googlenet} is a well-designed network with a complex structure to reduce computation and increase accuracy. More generally, in order to reduce the number of parameters and FLOPs, the following operations may be used: \textbf{Depthwise Separable Convolution.} The initial work on depthwise separable convolution is reported in Sifre's thesis, which was inspired by prior research by Sifre and Mallat on transformation-invariant scattering~\cite{scattering}. Later, Inception V1 and Inception V2 used it as the first layer. After that, it was adopted by MobileNet to design network architectures for mobile devices. The depthwise separable convolution applies a single filter to each input channel. While this operation can reduce computation in theory, in practice it is hard to implement a depthwise separable convolution layer efficiently because it requires fragmented memory access. Even though such a layer contains few FLOPs, it is still very expensive in inference time, as Table~\ref{tab:statistics} shows; this drawback is also mentioned in~\cite{xception,shufflenet}.
\textbf{Feature Map Shift.} ShiftNet~\cite{shiftnet} presents a parameter-free, FLOP-free shift operation as an alternative to spatial convolution. It can be viewed as a special case of depthwise separable convolution that results from assigning one of the values in each $n \times n$ kernel to be 1 and the rest to be 0. It cannot be implemented efficiently via depthwise separable convolution either. The address shift operation, based on pointer shifting, clearly differs from this. To avoid confusion, in this paper we use the term \textit{feature map shift} to refer specifically to the method proposed by ShiftNet. \textbf{Pointwise Group Convolution and Channel Shuffle.} ShuffleNet~\cite{shufflenet} adopts pointwise group convolutions to reduce parameters and FLOPs, but this brings the side effect of blocking information flow between group convolutions. Channel shuffle is proposed to address this problem. It is also difficult to implement efficiently, since a channel shuffle moves the entire set of channels into another memory space. \textbf{Residual Connection.} Residual connections were introduced by He et al.~\cite{resnet} to enable very smooth forward/backward propagation. There are two categories of shortcut connection operations: identity mapping and channel concatenation. Both of them are used in ResNet and ShuffleNet. DenseNet~\cite{densenet} concatenates all previous layer outputs before the activation function. \begin{table*}[t] \begin{center} \begin{tabular}{ccccccc} \hline Network & FLOPs & Params & Operation & Parameter (\%) & FLOPs (\%) & Time (\%) \\ \hline ShuffleNet & 524M & 5.6M & identity map & 0 & 0 &6\%\\ ShuffleNet & 524M & 5.6M & channel shuffle & 0 & 0 &24\%\\ MobileNet & 569M & 4.2M & depthwise conv & 3\% & 1\% &20\%\\ ShiftNet & 1.4G & 4.1M & feature map shift & 0 & 0 &25\% \\ \hline \end{tabular} \end{center} \caption{Comparison of the number of FLOPs, number of parameters and inference time for different operations and models} \label{tab:statistics} \end{table*}
\section{Introduction} \label{sec:org4eee532} Reconstructing surfaces from scanned 3D points has been an important research area for several decades. With the wide proliferation of 3D scanners, the problem of surface reconstruction has received significant attention in the graphics and vision communities. In recent years, neural implicit representations have gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, existing works in this area have concentrated on the representation and reconstruction of watertight meshes only. In computer vision and graphics, watertight meshes usually describe meshes consisting of one closed surface; in this sense, watertight meshes do not contain holes and have a clearly defined inside \cite{enwiki:1024744141}. In reality, however, most objects are not watertight, and a mesh reconstructed from a point cloud sampled from such an object should reflect this non-watertightness. Yet reconstructing non-watertight meshes has remained an unexplored area in this domain. In this project, we therefore tackle this problem by extending the learning-based 3D watertight mesh reconstruction pipeline presented in the paper ``Shape as Points'' (SAP) \cite{peng2021shape}. The existing pipeline in SAP can only reconstruct a watertight mesh from an unoriented point cloud, even when the point cloud exhibits non-watertightness. To achieve our goal, we thus need to detect and extract only the relevant part of the watertight mesh defined by the point cloud. The core of our approach is to cast this as a semantic segmentation problem: we take the output representation from the SAP pipeline and apply semantic segmentation to it to identify the region in the 3D volume where the mesh surface lies, and then extract the non-watertight mesh by applying the Marching Cubes algorithm. The advantage of our approach is its simplicity and robustness. Compared to the hand-engineered filtering techniques that we use as baselines, our method achieves compelling results both qualitatively and quantitatively. In summary, the main contributions of this work are: \begin{itemize} \item We present a novel machine learning-based approach for generating high-quality non-watertight meshes. \item We show that our approach achieves significantly better results than the hand-engineered filtering based baseline methods, both qualitatively and quantitatively. \end{itemize} We organize the report as follows. We first provide an overview of the learning-based pipeline in SAP and its limitations in Section \ref{org2f64bb1}. We then introduce the details of our methodology in Section \ref{orga7e8a2d}, followed by a description of the baseline methods and the effectiveness of our proposed model compared to these baselines, both quantitatively and qualitatively, in Section \ref{orge45565e}. \section{Reviewing SAP \label{org2f64bb1}} \label{sec:orgf519865} In this section, we briefly review the learning-based watertight surface reconstruction pipeline in SAP \cite{peng2021shape} and analyze the limitation of the PSR indicator grid for non-watertight mesh reconstruction. \subsection{Learning-based Watertight Surface Reconstruction} \label{sec:orgbf95f47} The learning-based watertight surface reconstruction setting in SAP takes a noisy, unoriented point cloud as input and outputs a watertight mesh.
More specifically, given the noisy, unoriented point cloud as input, the network predicts a clean, oriented point cloud, which is then fed into the Differentiable Poisson Solver (Section \ref{org7ec7f79}) to produce an occupancy indicator grid; the watertight mesh is then extracted by running the Marching Cubes algorithm on this occupancy indicator grid. The key idea of this work was to introduce differentiability into the classic Poisson Surface Reconstruction algorithm. The model was trained with watertight meshes as ground truth and consequently was supervised directly with the ground truth occupancy grid obtained from these meshes. Figure \ref{sap_pipeline} illustrates the pipeline of the learning-based surface reconstruction task. \begin{figure} \centering \includegraphics[width=\textwidth]{./img/SAP_pipeline.pdf} \caption{Pipeline for learning-based watertight surface reconstruction in SAP} \label{sap_pipeline} \end{figure} \subsubsection{Differentiable Poisson Solver \label{org7ec7f79}} \label{sec:orgea8fa9f} The Differentiable Poisson Solver \cite{peng2021shape} is a differentiable version of the classic Poisson Surface Reconstruction (PSR) algorithm \cite{Kazhdan2006}. The PSR algorithm constructs the characteristic function \(\chi\) of the solid defined by the oriented point cloud (the function whose value is one inside the solid and zero outside of it) and then extracts the appropriate iso-surface. The characteristic function, when realized on a voxel grid, produces the PSR indicator grid. Let \(\mathcal{P}=\left\lbrace (\textbf c_{i}, \textbf n_{i}) \right\rbrace\) be a set of oriented points sampled from the surface of a solid \(M\), where \(\textbf c_{i}\in\mathbb{R}^{3}\) denotes a spatial coordinate on the surface of the solid and \(\textbf n_{i}\in\mathbb{R}^{3}\) is its corresponding surface normal. Let \(\chi:\mathbb{R}^{3}\to\mathbb{R}\) be the characteristic function. Then, the divergence theorem applied to \(\nabla\chi\) states that \[\iiint_{M} \Delta\chi \, dV = \iint_{\partial M}(\nabla\chi\cdot\textbf n)\, dS.\] Approximating the right-hand side of the above equation with the given samples gives rise to the Poisson equation \[\Delta \chi = \nabla \cdot \textbf v, \qquad \textbf v(\textbf x) = \sum_{(\textbf c_{i}, \textbf n_{i})\in\mathcal{P}}\delta(\textbf x - \textbf c_{i})\,\textbf n_{i},\] where \(\delta\) is the Dirac delta. Solving this linear partial differential equation (PDE) differentiably involves discretizing the point-normal field \(\textbf v\) by rasterizing the point normals onto a uniformly sampled voxel grid. The differentiability of the point rasterization step comes from inverse trilinear interpolation. With spectral methods, the original signal can be decomposed into a linear sum of sine/cosine basis functions whose derivatives can be computed analytically.
Therefore, employing this method, one can first solve for the unnormalized characteristic function \(\chi'\) without the boundary conditions: \[\chi'=\operatorname{IFFT}(\tilde \chi), \quad \tilde \chi=\tilde g_{\sigma,r}(\textbf u)\odot \frac{i\textbf u\cdot\tilde{\textbf v}}{-2\pi\left\| \textbf u \right\|^{2}}, \quad \tilde g_{\sigma,r}(\textbf u)=\exp\left( -\frac{2\sigma^{2}\left\| \textbf u \right\|^{2}}{r^{2}} \right),\] where \(\tilde{\textbf v}=\operatorname{FFT}(\textbf v)\) denotes the Fast Fourier Transform of \(\textbf v\); \(\textbf u:=(u,v,w)\in\mathbb{R}^{n\times d}\) denotes the spectral frequencies corresponding to the \((x,y,z)\) spatial dimensions; and \(\operatorname{IFFT}(\tilde \chi)\) represents the Inverse Fast Fourier Transform of \(\tilde\chi\). \(\tilde g_{\sigma, r}(\textbf u)\) is a Gaussian smoothing kernel of bandwidth \(\sigma\) at grid resolution \(r\) in the spectral domain, used to mitigate the ringing effects (Gibbs phenomenon) that result from rasterizing the point normals. The normalized differentiable characteristic function is then given by \[\chi=\frac m{\operatorname{abs}(\chi'|_{\textbf x=0})}\left( \chi'-\frac1{\left| \left\lbrace \textbf c_{i} \right\rbrace \right|}\sum_{\textbf c\in\left\lbrace \textbf c_{i} \right\rbrace}\chi'|_{\textbf x=\textbf c} \right).\] \subsubsection{Architecture} \label{sec:orgc5979d4} The first component of the learning-based pipeline is the convolutional point encoder network proposed in \cite{Peng2020}. This network encodes the noisy, unoriented point cloud coordinates \(\left\lbrace \textbf c_{i} \right\rbrace\) into a feature volume \(\phi_{\theta}\) encapsulating both local and global information about the input point cloud. Here, \(\theta\) refers to the network parameters. Let \(\phi_{\theta}(\textbf c)\) denote the feature at a particular point \(\textbf c\), obtained from the feature volume \(\phi_{\theta}\) using trilinear interpolation. Given the feature \(\phi_{\theta}(\textbf c)\), a shallow Multi-Layer Perceptron (MLP) \(\textbf f_{\theta}\) predicts \(k\) offsets for \(\textbf c\): \[\Delta\textbf c=\textbf f_{\theta}(\textbf c,\phi_{\theta}(\textbf c)).\] We obtain the updated point positions \(\hat{\textbf c}\) by adding the offsets \(\Delta\textbf c\) to the input point positions \(\textbf c\). These additional offsets densify the point cloud, leading to enhanced reconstruction quality. Following the authors, we conducted all our subsequent experiments with \(k=7\). Given the updated points \(\hat{\textbf c}\), a second MLP \(\textbf g_{\theta}\) is trained to predict the corresponding normals: \[\hat{\textbf n}=\textbf g_{\theta}(\hat{\textbf c}, \phi_{\theta}(\hat{\textbf c})).\] The same decoder architecture as in \cite{Peng2020} is used for both \(\textbf f_{\theta}\) and \(\textbf g_{\theta}\). The network comprises 5 ResNet blocks with a hidden dimension of 32. \subsubsection{Training and Inference \label{org0b5cf8a}} \label{sec:org45cebfb} The authors used watertight, noise-free meshes for supervision and acquired the ground truth indicator grid by running the PSR algorithm \cite{Kazhdan2006} on densely sampled point clouds of the ground truth meshes with the corresponding ground truth normals. The Mean Squared Error (MSE) loss between the predicted \((\hat\chi)\) and ground truth \((\chi)\) indicator grids, \[\mathcal{L}_{\text{DPSR}}=\left\| \hat\chi-\chi \right\|^{2},\] is used to train the model with a learning rate of 5e-4.
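Before turning to inference, we note that the unnormalized spectral solve of Section~\ref{org7ec7f79} amounts to only a few lines of code. The following PyTorch sketch is our own illustration, not the authors' implementation; it assumes a rasterized point-normal field of shape \((3, r, r, r)\) and an integer frequency convention, and it omits the normalization step:

\begin{verbatim}
import math
import torch

def dpsr_unnormalized(v, sigma=2.0):
    # v: rasterized point-normal field of shape (3, r, r, r)
    r = v.shape[-1]
    v_tilde = torch.fft.fftn(v, dim=(1, 2, 3))   # FFT per component
    freq = torch.fft.fftfreq(r) * r              # integer frequencies
    u = torch.stack(torch.meshgrid(freq, freq, freq, indexing="ij"))
    u_sq = (u ** 2).sum(dim=0)
    u_sq[0, 0, 0] = 1.0                          # avoid div-by-zero at DC
    g = torch.exp(-2.0 * sigma ** 2 * u_sq / r ** 2)  # Gaussian smoothing
    chi_tilde = g * (1j * (u * v_tilde).sum(dim=0)) / (-2.0 * math.pi * u_sq)
    chi_tilde[0, 0, 0] = 0.0                     # fix the free constant
    return torch.fft.ifftn(chi_tilde).real       # unnormalized chi'
\end{verbatim}

Because every step is differentiable (FFT, elementwise products, IFFT), gradients flow from the indicator grid back to the rasterized normals, which is what makes the end-to-end training described above possible.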
During inference, the trained model predicts the normals and offsets, DPSR then solves for the PSR indicator grid, and finally Marching Cubes \cite{Lorensen1987} extracts the mesh from the PSR indicator grid, as demonstrated in Figure \ref{sap_pipeline}. \subsection{Limitation of the Learning-based Pipeline in SAP} \label{sec:org48f633f} The limitation of the learning-based pipeline for non-watertight mesh reconstruction comes from the PSR algorithm itself. By definition, the characteristic function relies on the assumption that a solid object has an interior and an exterior separated by an enclosed boundary. In practice, deriving the functional form of the boundary of a solid is very difficult, and using it to solve for the characteristic function is impractical. Thus, one resorts to using a point cloud sampled from the surface of the solid and solves the Poisson equation to get the PSR indicator grid. But with a point cloud, we no longer have an enclosed boundary but many tiny holes where no samples are present. In the regions without samples, the indicator values diffuse gradually from 1 to 0 rather than changing sharply. This problem can be partially mitigated in SAP by densifying the point cloud through the predicted offsets. The problem is more pronounced in the case of non-watertight mesh reconstruction, since the regions without samples are much larger. Running the PSR algorithm on a point cloud sampled from a non-watertight mesh therefore results in a PSR indicator grid with a sharp transition from 1 to 0 where there are samples (the sharpness depends on the local point cloud density), while in regions without samples the indicator values diffuse gradually from 1 to 0. Consequently, running the Marching Cubes algorithm on this PSR grid produces a watertight mesh whose reconstructed surfaces match the boundaries defined by the samples well, but which also contains excess surfaces of arbitrary topology where there are no samples. In this project, we mitigate this problem by identifying the regions with a sharp transition from 1 to 0 in the PSR grid. \section{Our Approach \label{orga7e8a2d}} \label{sec:org5c98b0c} Given a noisy, unoriented point cloud in \(\mathbb{R}^{3}\), our goal is to reconstruct a surface (watertight or non-watertight) that fits the point cloud. We do this by extending the existing learning-based watertight surface reconstruction pipeline in SAP \cite{peng2021shape} with a Surface Mask Prediction Network. \subsection{Method} \label{sec:orgbce455b} To begin with, we define a ``surface mask'' of an object surface to be a volume in 3D space that encapsulates the object surface. Given a PSR indicator grid, our objective is to predict a surface mask such that it encapsulates only the actual object surface and no non-surface regions. Thus, the surface mask has to be thick enough to capture the actual surface entirely, yet thin enough not to capture any non-surface region. The reason behind this approach is that once we can generate an appropriate surface mask, we can run the Marching Cubes algorithm restricted to the masked region and thereby extract only the relevant object surface. To achieve this, we observe that the PSR indicator grid constructed from a point cloud sampled from a non-watertight mesh shows a sharp change in the gradient where there are point samples, whereas the gradient changes slowly where there are no samples.
However, the sharpness of the gradient change depends on the density of the point cloud in that region. Therefore, simple hand-engineered gradient-based filtering is not sufficient to detect these regions (see the comparisons in Section \ref{orgd9ef9db}), and we resort to a machine learning based approach. \subsubsection*{Surface Mask Prediction Network} \label{sec:org5cfc8ad} We can regard the aforementioned problem as a semantic segmentation problem in 3D: we want to label each voxel according to whether it belongs to the surface mask region, which yields a dense 3D surface mask. The surface mask is a local feature of the PSR indicator grid, and labeling a voxel requires only a small receptive field to identify whether a sharp gradient change occurs in its vicinity. Therefore a standard 3D U-Net, with one input and one output channel, is sufficient to predict the surface mask. Following the work on the 3D U-Net \cite{Cicek2016}, in the analysis part we use at each stage a \texttt{DoubleConv} module consisting of two \(3\times 3\times 3\) convolutions, each followed by batch normalization and a rectified linear unit (ReLU), and then a \(2\times 2\times 2\) max pooling with a stride of two in each dimension. In the synthesis part, we use 3D transposed convolutions \cite{Zeiler2010}, each followed by the \texttt{DoubleConv} module. We also use skip connections to fuse high-resolution features from the analysis path into the synthesis path. We call this network the ``Surface Mask Prediction Network'' (SMPN). An illustration of the network is given in Figure \ref{architecture}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{./img/UNET.pdf} \caption{Model architecture of the Surface Mask Prediction Network: a standard 3D U-Net with feature channels 8, 16, 32, 64, and 128.} \label{architecture} \end{figure} Now, with the SMPN, we can extend the learning-based watertight surface reconstruction pipeline as illustrated in Figure \ref{pipeline} to facilitate non-watertight surface reconstruction. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{./img/extended_pipeline.pdf} \caption{Schematic view of the extended SAP pipeline. The highlighted portion indicates the extension.} \label{pipeline} \end{figure} \subsection{Implementation Details} \label{sec:org59c633b} \subsubsection{Dataset} \label{sec:orgb5be1b9} We generate our training data using the watertight meshes from the ShapeNet \cite{Chang2015a} dataset. For all our experiments we use the meshes of the following classes: Box, Car, Chair, Ship, and Sofa. \subsubsection{Data Preprocessing Pipeline \label{org82a1b3e}} \label{sec:org47b3a92} Given a watertight mesh from the ShapeNet dataset \cite{Chang2015a}, we first apply a random rotation to the mesh and then translate and scale it to fit into the \([0,1]^{3}\) unit cube. We use this transformed mesh to densely sample oriented points and feed them to the PSR algorithm \cite{Kazhdan2006} to generate the ground truth PSR indicator grid. From this point cloud, we then remove points that fall within some randomly selected regions, and use the result as the ground truth point cloud. Finally, we scale this ground truth point cloud, discretize it to create a voxel grid, and apply a \(7\times7\times7\) 3D dilation kernel to this voxel grid to generate the ground truth surface mask (see the sketch below).
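For concreteness, a minimal PyTorch sketch of this mask-generation step reads as follows (our illustration; \texttt{ground\_truth\_mask} is a hypothetical helper, and the \([0,1]^{3}\) coordinate convention and grid resolution are assumptions). Note that a binary dilation with a \(7\times7\times7\) structuring element is equivalent to max pooling with stride 1:

\begin{verbatim}
import torch
import torch.nn.functional as F

def ground_truth_mask(points, res=128, width=7):
    # points: (N, 3) coordinates inside the unit cube
    idx = (points.clamp(0.0, 1.0 - 1e-6) * res).long()  # voxel indices
    occ = torch.zeros(res, res, res)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0          # occupied voxels
    # binary dilation == max pooling with stride 1 and same padding
    mask = F.max_pool3d(occ[None, None], width, stride=1,
                        padding=width // 2)
    return mask[0, 0]
\end{verbatim}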
\subsubsection{Training and Inference} \label{sec:org0f7ac82} During the training phase, we train the learning-based watertight surface reconstruction pipeline and the Surface Mask Prediction Network jointly. We first obtain the predicted PSR indicator grid \(\hat\chi\) by feeding the unoriented point cloud \(\textbf c\) as input to the learning-based pipeline in SAP, and then use the predicted PSR indicator grid \(\hat \chi\) as input to our Surface Mask Prediction Network to generate the surface mask \(\hat M\). The corresponding ground truth mask \(M\) is obtained as described in Section \ref{org82a1b3e}. To compute the loss between the predicted and the ground truth mask, we use the dice loss \cite{Milletari2016} \[\mathcal{L}_{\text{Dice}} = 1 - \frac{2\sum_{i,j,k}M_{ijk}\hat M_{ijk}+1}{\sum_{i,j,k}M_{ijk}^{2}+\sum_{i,j,k}\hat M_{ijk}^{2}+1},\] which accounts for the loss information both locally and globally; this is critical for predicting a precise surface mask. To train the SAP pipeline we use the same \(\mathcal{L}_{\text{DPSR}}\) loss as discussed in Section \ref{org0b5cf8a}. Thus, to train both networks jointly, we use the total loss \[\mathcal{L}=\mathcal{L}_{\text{DPSR}}+\mathcal{L}_{\text{Dice}}.\] We implement all models in PyTorch \cite{NEURIPS2019_9015} and train them with the Adam optimizer \cite{Kingma2014}, a learning rate of 5e-4, and a batch size of 16. We trained the models on an NVIDIA GeForce RTX 3090. During inference, given an input point cloud, we first use the learning-based watertight surface reconstruction pipeline in SAP to predict the PSR indicator grid, then use the predicted indicator grid as input to our Surface Mask Prediction Network to predict the surface mask, and finally run Marching Cubes \cite{Lorensen1987a} on the indicator grid restricted to the surface mask region to extract the non-watertight mesh. Figure \ref{sapeinference} illustrates the inference mechanism. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{./img/prediction.pdf} \caption{Schematic view of the inference mechanism} \label{sapeinference} \end{figure} \section{Experimental Results \label{orge45565e}} \label{sec:org293b2dc} \subsection{Evaluation Metrics \label{orge630a53}} \label{sec:org848116b} We consider the Chamfer Distance and the Hausdorff Distance as our evaluation metrics. Given two point clouds \(S_{1}\) and \(S_{2}\), the Chamfer Distance (CD) and the Hausdorff Distance (HD) are defined as follows: \[\operatorname{CD}(S_{1},S_{2})=\frac1{|S_{1}|}\sum_{x\in S_{1}}\min_{y\in S_{2}}\left\| x-y \right\|^{2}_{2} + \frac1{|S_{2}|}\sum_{x\in S_{2}}\min_{y\in S_{1}}\left\| x-y \right\|^{2}_{2},\] \[\operatorname{HD}(S_{1}, S_{2}) = \max\left\lbrace \sup_{x\in S_{1}}\inf_{y\in S_{2}}\left\| x-y \right\|, \sup_{y\in S_{2}}\inf_{x\in S_{1}}\left\| x-y \right\| \right\rbrace.\] These metrics may not be the best indicators of similarity between two non-watertight meshes: if one of the meshes has some tiny holes and the other does not, this discrepancy may not be reflected in the metric score, since we compute the metrics only on a finite number of points sampled from the mesh surfaces. Therefore, we also use visual inspection for our assessment.
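Both metrics are straightforward to compute from sampled point sets; a minimal PyTorch sketch (our illustration, transcribing the definitions above directly) is:

\begin{verbatim}
import torch

def chamfer_distance(s1, s2):
    # s1: (N, 3), s2: (M, 3) points sampled from the two surfaces
    d = torch.cdist(s1, s2)  # (N, M) pairwise Euclidean distances
    return (d.min(dim=1).values ** 2).mean() \
         + (d.min(dim=0).values ** 2).mean()

def hausdorff_distance(s1, s2):
    d = torch.cdist(s1, s2)
    return torch.max(d.min(dim=1).values.max(),
                     d.min(dim=0).values.max())
\end{verbatim}

For large point sets the \((N, M)\) distance matrix would be computed in chunks, but the logic is unchanged.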
\subsection{Non-watertight Surface Reconstruction} \label{sec:orgc450a96} In this part, we investigate whether our extension to the SAP pipeline can be used for non-watertight surface reconstruction from unoriented point clouds. We evaluate our method on the single object reconstruction task, using noise- and outlier-augmented point clouds from ShapeNet as input. We investigate the performance for three different noise levels: (a) Gaussian noise with zero mean and standard deviation 0.005, (b) Gaussian noise with zero mean and standard deviation 0.025, (c) 50\% of the points carry the same noise as in (a) and the other 50\% are outliers uniformly sampled inside the unit cube. Table \ref{table1} shows that our method achieves better results than the base SAP pipeline. We can also observe that training the entire extended pipeline performs better than training only the SMPN with the backbone frozen. Table \ref{table2} shows qualitatively that our method works well across the various noise levels applied to the input point cloud. \begin{center} \begin{longtable}{|m{0.31\textwidth}| m{0.31\textwidth} | m{0.31\textwidth}|} \hline {\bfseries Pipeline} & \raggedleft{\bfseries Chamfer Distance ($\downarrow$)} & \raggedleft\arraybslash{\bfseries Hausdorff Distance ($\downarrow$)}\\\hline\hline SAP & \raggedleft 0.009798 & \raggedleft\arraybslash 0.254690\\\hline SAP (frozen) + SMPN & \raggedleft 0.000615 & \raggedleft\arraybslash 0.108691\\\hline \cellcolor{SpringGreen!50}SAP + SMPN & \cellcolor{SpringGreen!50} \raggedleft{\bfseries 0.000239} & \cellcolor{SpringGreen!50} \raggedleft\arraybslash{\bfseries 0.071070}\\\hline \caption{Quantitative comparison of the performance of the extended pipeline (2nd and 3rd rows) with the base pipeline (1st row) on the ShapeNet dataset (mean over 5 classes).} \label{table1} \end{longtable} \end{center} \begin{center} \begin{longtable}{m{0.05\textwidth}|m{0.2125\textwidth}|m{0.2125\textwidth}|m{0.2375\textwidth}|m{0.2125\textwidth}} ~ & \centering Input & \centering SAP & \centering SAP Extended & \centering\arraybslash GT Mesh\\\hline &&&&\\ ~ & \includegraphics[width=\linewidth]{img/table1/sn1.png} & \includegraphics[width=\linewidth]{img/table1/sn1sap.png} & \includegraphics[width=\linewidth]{img/table1/sn1p.png} & \includegraphics[width=\linewidth]{img/table1/1g.png}\\&&&&\\ \centering \rotatebox[origin=c]{90}{Low Noise} & \includegraphics[width=\linewidth]{img/table1/sn2.png} & \includegraphics[width=\linewidth]{img/table1/sn2sap.png} & \includegraphics[width=\linewidth]{img/table1/sn2p.png} & \includegraphics[width=\linewidth]{img/table1/2g.png}\\&&&&\\ ~ & \includegraphics[width=\linewidth]{img/table1/sn3.png} & \includegraphics[width=\linewidth]{img/table1/sn3sap.png} & \includegraphics[width=\linewidth]{img/table1/sn3p.png} & \includegraphics[width=\linewidth]{img/table1/3g.png}\\\hline &&&&\\ ~ & \includegraphics[width=\linewidth]{img/table1/ln1.png} & \includegraphics[width=\linewidth]{img/table1/ln1sap.png} & \includegraphics[width=\linewidth]{img/table1/ln1p.png} & \includegraphics[width=\linewidth]{img/table1/1g.png}\\&&&&\\ \centering \rotatebox[origin=c]{90}{High Noise} & \includegraphics[width=\linewidth]{img/table1/ln2.png} & \includegraphics[width=\linewidth]{img/table1/ln2sap.png} & \includegraphics[width=\linewidth]{img/table1/ln2p.png} & \includegraphics[width=\linewidth]{img/table1/2g.png}\\&&&&\\ ~ & \includegraphics[width=\linewidth]{img/table1/ln3.png} & \includegraphics[width=\linewidth]{img/table1/ln3sap.png} & \includegraphics[width=\linewidth]{img/table1/ln3p.png} & \includegraphics[width=\linewidth]{img/table1/3g.png}\\\hline&&&&\\ ~ & \includegraphics[width=\linewidth]{img/table1/o1.png} &
\includegraphics[width=\linewidth]{img/table1/o1sap.png} & \includegraphics[width=\linewidth]{img/table1/o1p.png} & \includegraphics[width=\linewidth]{img/table1/1g.png}\\&&&&\\ \centering \rotatebox[origin=c]{90}{Outliers} & \includegraphics[width=\linewidth]{img/table1/o2.png} & \includegraphics[width=\linewidth]{img/table1/o2sap.png} & \includegraphics[width=\linewidth]{img/table1/o2p.png} & \includegraphics[width=\linewidth]{img/table1/2g.png}\\&&&&\\ ~ & \includegraphics[width=\linewidth]{img/table1/o3.png} & \includegraphics[width=\linewidth]{img/table1/o3sap.png} & \includegraphics[width=\linewidth]{img/table1/o3p.png} & \includegraphics[width=\linewidth]{img/table1/3g.png}\\&&&&\\ \caption{Qualitative comparison of the performance of the extended pipeline with the base pipeline on three different types of input.} \label{table2} \end{longtable} \end{center} \subsection{Comparisons to the Baseline Methods \label{orgd9ef9db}} \label{sec:orgf667a1a} For our baselines, we use the classic Laplacian filter on the PSR indicator grid with various thresholds for mask generation. We first discuss the baseline with 2D Laplacian filtering. Given a 3D PSR indicator grid, we take each 2D slice and convolve it with the 2D \(3\times 3\) Laplace kernel \[\begin{bmatrix} 0&-1&0\\-1&4&-1\\0&-1&0 \end{bmatrix}\] to get a silhouette of the edges defined by the indicator grid in that slice. We then compute the absolute values of the convolved grid and use different thresholds to classify whether a pixel is an actual point on the surface of the mesh. Next, we apply a \(7\times 7\) dilation kernel to generate the surface mask and use it together with the PSR indicator grid to extract the non-watertight surface using the Marching Cubes algorithm. The baseline with the 3D Laplacian filter is similar to the 2D one. In this case, we use the 3D \(3\times 3\times 3\) Laplace kernel \(K\) given by \[K_1 = \begin{bmatrix} 0&0&0\\0&1&0\\0&0&0 \end{bmatrix},\quad K_2 = \begin{bmatrix} 0&1&0\\1&-6&1\\0&1&0 \end{bmatrix},\quad K_3 = \begin{bmatrix} 0&0&0\\0&1&0\\0&0&0 \end{bmatrix},\] where \(K_{i}\) denotes the \(i\)-th plane, and convolve the entire grid with it. We then compute the absolute values of the convolved grid, apply thresholding as above, and apply a \(7\times 7\times 7\) dilation kernel to generate the surface mask (see the sketch below). Table \ref{table3} and Table \ref{table4} show the quantitative and qualitative comparisons, respectively. We can observe that the baseline methods do not perform well. Moreover, obtaining a reasonable output depends strongly on the threshold, and the best threshold varies from input to input. Compared to the baseline methods, our method achieves superior performance.
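The 3D variant of this baseline is easy to reproduce. The following PyTorch sketch (our illustration; \texttt{laplacian\_baseline\_mask} is a hypothetical helper) applies the \(3\times3\times3\) Laplace kernel above, thresholds the absolute response, and dilates the result:

\begin{verbatim}
import torch
import torch.nn.functional as F

def laplacian_baseline_mask(psr, thresh, width=7):
    # psr: (r, r, r) PSR indicator grid
    k = torch.zeros(1, 1, 3, 3, 3)
    k[0, 0, 1, 1, 1] = -6.0                     # center voxel
    k[0, 0, 0, 1, 1] = k[0, 0, 2, 1, 1] = 1.0   # six face
    k[0, 0, 1, 0, 1] = k[0, 0, 1, 2, 1] = 1.0   # neighbours
    k[0, 0, 1, 1, 0] = k[0, 0, 1, 1, 2] = 1.0
    lap = F.conv3d(psr[None, None], k, padding=1).abs()
    edges = (lap > thresh).float()
    mask = F.max_pool3d(edges, width, stride=1,  # dilation
                        padding=width // 2)
    return mask[0, 0]
\end{verbatim}

As Table~\ref{table3} shows, no single threshold works well across inputs, which is precisely the failure mode that motivates the learned mask.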
\begin{table}[H] \begin{longtable}{|m{0.20\textwidth}|m{0.14\textwidth}|m{0.29\textwidth}|m{0.29\textwidth}|} \hline {\bfseries Method} & \raggedleft{\bfseries Threshold} & \raggedleft{\bfseries Chamfer Distance ($\downarrow$)}& \raggedleft\arraybslash{\bfseries Hausdorff Distance ($\downarrow$)}\\\hline\hline 2D Laplacian & \raggedleft 0.00 & \raggedleft 0.009798 & \raggedleft\arraybslash 0.254690\\\hhline{~---} ~ & \raggedleft {\bfseries 0.05} & \raggedleft {\bfseries 0.008534} & \raggedleft\arraybslash {\bfseries 0.254473}\\\hhline{~---} ~ & \raggedleft 0.10 & \raggedleft 0.009470 & \raggedleft\arraybslash 0.254429\\\hhline{~---} ~ & \raggedleft 0.20 & \raggedleft 0.016016 & \raggedleft\arraybslash 0.260365\\\hhline{~---} ~ & \raggedleft 0.40 & \raggedleft 0.059645 & \raggedleft\arraybslash 0.340960\\\hhline{----} 3D Laplacian & \raggedleft 0.00 & \raggedleft 0.009799 & \raggedleft\arraybslash 0.254690\\\hhline{~---} ~ & \raggedleft {\bfseries 0.05} & \raggedleft {\bfseries 0.008783} & \raggedleft\arraybslash {\bfseries 0.254579}\\\hhline{~---} ~ & \raggedleft 0.10 & \raggedleft 0.010009 & \raggedleft\arraybslash 0.254576\\\hhline{~---} ~ & \raggedleft 0.20 & \raggedleft 0.023703 & \raggedleft\arraybslash 0.255712\\\hhline{~---} ~ & \raggedleft 0.40 & \raggedleft 0.088625 & \raggedleft\arraybslash 0.400718\\\hhline{----}\hline \cellcolor{SpringGreen!50}Ours & \cellcolor{SpringGreen!50} \raggedleft {}- & \cellcolor{SpringGreen!50} \raggedleft{\bfseries 0.000239} & \cellcolor{SpringGreen!50} \raggedleft\arraybslash{\bfseries 0.071070}\\\hline \caption{Quantitative comparison of the performances of the extended pipeline with the baseline methods on the ShapeNet dataset (mean over 5 classes). The table shows performances of the kernel methods (2D/3D Laplacian) for mask generation at various thresholds.} \label{table3} \end{longtable} \end{table} \begin{table}[p] \begin{longtable}{m{0.18\linewidth}|m{0.18\linewidth}|m{0.18\linewidth}|m{0.18\linewidth}|m{0.18\linewidth}} \centering Input & {\centering 2D Laplacian\par} \centering (with best threshold) & {\centering 3D Laplacian\par} \centering (with best threshold) & \centering SAP Extended & \centering\arraybslash Ground Truth\\\hline\hline &&&&\\ \centering \includegraphics[width=\linewidth]{img/table2/1.png} & \centering \includegraphics[width=\linewidth]{img/table2/12d.png} & \centering \includegraphics[width=\linewidth]{img/table2/13d.png} & \centering \includegraphics[width=\linewidth]{img/table2/1p.png} & \centering\arraybslash \includegraphics[width=\linewidth]{img/table2/1g.png}\\&&&&\\ \centering \includegraphics[width=\linewidth]{img/table2/2.png} & \centering \includegraphics[width=\linewidth]{img/table2/22d.png} & \centering \includegraphics[width=\linewidth]{img/table2/23d.png} & \centering \includegraphics[width=\linewidth]{img/table2/2p.png} & \centering\arraybslash \includegraphics[width=\linewidth]{img/table2/2g.png}\\&&&&\\ \centering \includegraphics[width=\linewidth]{img/table2/3.png} & \centering \includegraphics[width=\linewidth]{img/table2/32d.png} & \centering \includegraphics[width=\linewidth]{img/table2/33d.png} & \centering \includegraphics[width=\linewidth]{img/table2/3p.png} & \centering\arraybslash \includegraphics[width=\linewidth]{img/table2/3g.png}\\&&&&\\ \centering \includegraphics[width=\linewidth]{img/table2/4.png} & \centering \includegraphics[width=\linewidth]{img/table2/42d.png} & \centering \includegraphics[width=\linewidth]{img/table2/43d.png} & \centering 
\includegraphics[width=\linewidth]{img/table2/4p.png} & \centering\arraybslash \includegraphics[width=\linewidth]{img/table2/4g.png}\\&&&&\\ \centering \includegraphics[width=\linewidth]{img/table2/5.png} & \centering \includegraphics[width=\linewidth]{img/table2/52d.png} & \centering \includegraphics[width=\linewidth]{img/table2/53d.png} & \centering \includegraphics[width=\linewidth]{img/table2/5p.png} & \centering\arraybslash \includegraphics[width=\linewidth]{img/table2/5g.png}\\ \caption{Qualitative comparison of the performance of the extended pipeline with the baseline methods (with the best threshold for mask generation) on the ShapeNet dataset.} \label{table4} \end{longtable} \end{table} \subsection{Ablation Studies} \label{sec:org694248a} In this section, we investigate the influence of different dilation kernels on the surface mask. We conduct our ablation experiments on ShapeNet with the first setup, i.e., with small induced noise (otherwise the metric computation would be inconclusive). We generate training data with dilation kernels of various widths, keeping the Surface Mask Prediction Network and the SAP pipeline fixed in their optimal configurations. We then train the model on the training data generated with a particular dilation kernel and measure its performance. Table \ref{table5} reports the results. \begin{table}[H] \begin{longtable}{|l|r|r|r|r|} \hline \diagbox{Metric}{Kernel Width} & \raggedleft 3 & \raggedleft 5 & \raggedleft 7 & \raggedleft\arraybslash 9\\\hline Chamfer Distance & 0.000172 & \textbf{0.000174} & 0.000221 & 0.000263 \\\hline Hausdorff Distance & 0.047733 & \textbf{0.042084} & 0.060259 & 0.058211 \\\hline \caption{Ablation study over the dilation kernel width} \label{table5} \end{longtable} \end{table} As pointed out in Section \ref{orge630a53}, even though the metrics show better performance for kernel width 3, this is not an accurate assessment: the predicted mesh contains tiny holes at undesired places because the mask is very thin, and the predicted surface mask cannot accurately capture the regions where the surface lies. On the other hand, training data generated with kernel width 5 gives the best prediction performance both qualitatively and quantitatively. \section{Conclusion} \label{sec:org93ea696} In this project, we have presented a novel method for non-watertight mesh reconstruction that performs semantic segmentation on the PSR indicator grid. We demonstrated its effectiveness in reconstructing non-watertight meshes compared to the filtering-based baseline methods, both quantitatively and qualitatively. \textbf{Limitations.} The main limitation of our approach is the use of dilation kernels to generate the ground truth surface mask. It would therefore be interesting to study whether the surface mask can be predicted without such supervision. Secondly, our experiments were restricted to reconstructing a single object surface; we believe, however, that our method can be extended to reconstruct large scenes by combining small non-watertight surface patches reconstructed in a sliding-window manner. Finally, our current approach is not end-to-end; taking supervision directly from the non-watertight mesh via a Chamfer loss could be an interesting end-to-end direction for future studies.
\section{Acknowledgements} \label{sec:orge30ce62} I would like to thank Songyou Peng and Chiyu `Max' Jiang for supervising the project and providing the necessary guidance and valuable support throughout. I also thank Prof. Andreas Geiger for the helpful discussions, and Songyou Peng and Madhav Iyengar for proofreading the report.